Helen & Dave Edwards: Becoming Synthetic

A lecture on Becoming Synthetic: What AI Is Doing To Us, Not Just For Us.

Headshot of Helen & Dave Edwards for their keynote at the Autonomous Summit 2025

We enjoyed giving a virtual keynote for the Autonomous Summit on December 4, 2025, titled Becoming Synthetic: What AI Is Doing To Us, Not Just For Us.

We talked about our research on how to maintain human agency and cognitive sovereignty, the philosophical question of what it means to be human, and our new(ish) approach to creating better AI tools, called unDesign.

unDesign is not the absence of design, nor is it anti-design. It's design oriented differently. The history of design has been a project of reducing uncertainty. Making things legible. Signaling affordances. Good design means you never have to wonder what to do.

unDesign inverts this and uses "uns" as design material. The unknown. The unpredictable. The unplanned. These aren't bugs. They're the medium where value actually lives. Because uncertainty is the condition of genuine encounter.

unDesign doesn't design outcomes—it designs the space where outcomes can emerge.

You can watch the full keynote below. Check it out!

Watch/Listen on Apple, Spotify and YouTube.

Transcript:

Robin Vermeulen 0:00
So the next session is actually going to be the final keynote on this track. Helen and Dave Edwards, co-founders of the Artificiality Institute, will tell us what AI is doing to us, and not just for us. They are a husband-and-wife team of analysts, artificial philosophers, and meta-researchers, and they'll tell us how we're becoming synthetic. Very curious about that one, and I'll leave the floor to you two. Great.

Dave Edwards 0:29
Thank you very much for having us. I'm Dave, and that's Helen, if that isn't clear. We're coming to you from Bend, Oregon, and, as Robin said, we run the Artificiality Institute. We study the human experience of AI, and we do this in a relatively unusual way: listening to people's stories to understand their actual experience. We share that research with people like you, we work with organizations to help them apply our research in the best ways, and we also work on designing systems that work best for humans. So today we're going to take you through some of our core research and frame it in a bigger-picture way towards the end. And you're right: Go Ducks. We will endeavor to finish with some time for questions. But jumping ahead to say: you can find us at artificialityinstitute.org, you can reach us by email at hello at that same domain, and you can find us on LinkedIn. Please connect; we love to be connected with all of you. So I'm going to go ahead and share my screen, start off with some slides, and we'll get working.

Helen Edwards 1:47
So where are we at in this moment with AI? What people tell us is that it feels very different from other tools. They describe a blending of thoughts, shared authorship, and even a degree of emotional connection, quite a lot of it in many cases. We think there are two things potentially happening. One is that we might be watching an old human pattern of adaptation, but at a new speed and depth and with a different kind of technology. Or we might be looking at something entirely new: the early stage of human-AI symbiosis, which is quite a different kind of thing that could emerge over the next few years.

In our research, we think about three parts of how AI is changing us: it's changing our thinking, our being and identity, and our sense of who we can become, our meaning making. So take a moment to consider three questions. When you use AI, do you think of it as a tool, as a teammate, or as something else entirely? Think of something you did with AI that you were proud of; if someone praised it, what did you say? Did you say it was AI? Did you say it was you? Did you say it was a bit of both? And has AI ever surprised you with an idea that changed what you were even trying to do? These three questions map to three ways of being with AI: thinking, being, and becoming. We map them on what we call a cube, because each time we use AI, it changes something about at least one of these dimensions, and often more than one.

The first dimension, thinking, is how far AI is inside your reasoning: the permeation of AI-generated ideas into how you reason, how you solve problems, what you take as facts. We see this in our research in comments like these. A high case: "I often debate with it, push back when it's wrong, and treat it like an intern or an early-career teammate." There's that wrestling with ideas; you walk away and those ideas become yours. A low case, from a high school student who is very cautious about taking any ideas from AI: "It's part of growing up to learn how to do your own work. I don't want to offload my thinking to a machine."

The second, becoming, is how much AI shifts the meaning of things for you when context changes: how people use AI to solve completely different problems and to take new meaning from the world. It's a very important symbolic aspect of how we think about using technology. A high case: "I used to think work meant showing up. Then I met AI. If effort no longer equals hours, what should we be paid for?" A low case: "It feels like a con to me. I'm not seeing any convincing reason to use it." We see these different patterns emerge in stories from people.

The last one, being, is how much AI fuses with your sense of self. This part, identity, is incredibly important, and our hypothesis is that it's a core part of why AI is quite different from previous technologies. A high case: "ChatGPT helps me think, clarifies my own ideas, asks challenging questions, and understands all my cultural references. It has a sense of humor similar to mine." A low case: "AI is your partner in crime, but it's not your identity. You're still the human in the loop." So we see these contrasts in how people think about identity.

When we map these three dimensions together, we end up with eight roles that people put AI into: a doer, a builder, a catalyst, a partner, an outsourcer, a framer, a creator, and a co-author. I'll run through two of them to show how we think about this.

The first is the framer, where you take in so many ideas from AI that you don't necessarily know whether a given idea came from AI or from you, while your sense of identity stays relatively stable and separate from AI, and your meaning making also stays relatively separate from AI. What AI does in the framer role is organize your understanding. We see people using it for summarization and pattern finding, for clarity, speed, and synthesis: huge advantages in how quickly we grapple with new ideas and how those ideas become our own. It meaningfully shapes your thinking, while your meanings stay quite coherent, even if they're rearranged. A very important part of using AI in this role is learning to question the frames: grappling with the fact that it has given you something meaningful that you're not sure is correct. How do you fact-check it? How do you make it meaningful to you without giving up what is core to how we reason? It helps you build something, but it's easy to mistake a clean frame for a true one, because everything looks so polished coming from an AI, and it's not necessarily true. So a question for you: what's one way AI has organized your thinking that you wouldn't have arrived at on your own? It's a good check. How often does this happen? For a lot of people, quite a lot.

The second is the creator, towards the back of the cube, where identity starts to become important. You're not taking ideas directly from AI at this point, but it is shifting your sense of identity and your sense of who you can become. It feels like an extension of your own imagination; your identity fuses with the generative flow. Authorship starts to feel shared, and originality starts to feel collective between you and the AI. You get this expressive amplification: ideas materialize faster, richer, and stranger. It's easy to get confused about the origin, and the question is, at what point does that matter? Maybe it doesn't. When you create something with AI, how do you decide what counts as yours? It's the uber-question.

So those are just two of the eight. We use these roles a lot, and they matter, because they change how we show up to the other people in our lives, the people we work with, and our families. And the big question, especially if we're moving into a world of human-AI symbiosis, or even more than that over time, is: how do we stay authors of our own minds? Self-awareness, ethical reasoning, agency: those are the things that make us uniquely human in this world of AI. What makes a thought yours, in a way that you know where it came from and the choices you made to hold the views you hold and take the actions you want to take? This sense of cognitive sovereignty, as AI takes over more and more of our reasoning, our interactions, and our medium with the world, is really important.

Dave Edwards 9:47
So that's what we've seen in our research with individuals. We've gathered this through stories from thousands of people, hearing what they actually feel about their interactions with AI: how it changes how they think, how they make sense of the world, what they feel about themselves and how they identify themselves, and how it becomes entangled with who we are. And it gets to the core question that drives our work: what does it mean to be human? And now, what does it mean to be human when we have these tools working in our cognitive spaces?

But let's step back a bit. This isn't just about individual humans. It's happening to people all over the world, everywhere, all at once. We're wrapping the planet in artificial intelligence: planetary-scale systems that sense, predict, respond, and generate new ideas. This is happening not just to each of us individually, but to us as groups, as societies, as a whole. So it raises the question: what does it mean to be human when an entire species is entangled with artificial minds that we've built but don't truly, fundamentally understand?

And this isn't a small question. It challenges a story we've been telling ourselves as humans for hundreds of years about what makes us special. The Enlightenment told us that we were the ones who could find truth, as you can see in this image: the idea that we are the ones who find truth, rather than looking to the cosmos for it. It's the story that we're the ones who reason, the ones who make meaning. But this project we've been on for some hundreds of years is now changing, because we've created intelligences that are also self-learning and self-improving. They're finding patterns we can't see. There's the potential for them to make meaning that we can't make ourselves. So as we go through these journeys with these tools, we have to understand that some of the discomfort comes from the fact that it reaches the root of what makes us human: what does it mean to be human, and are we still truly exceptional in the way we've long considered ourselves to be? Our work approaches this not just as a technological question but as a philosophical question about what AI is doing to us as humans.

So let's look at how we're responding. We, the broader society, have roughly two different responses. On the left is the idea that these tools are going to be superintelligent, beyond humans, and that we should worship them, follow them, look to them for answers to everything in the world. Others, though, see this as a threat to what it means to be human and to their own sense of self, and they want to build walls: protect humanity, shut AI out. So we have this duality of worship or fortress, acceleration or rejection. But in the end, it's a false binary, and we don't think either one is really the path forward. Worship erases who we are, and fortress pretends that this other thing doesn't exist and that there is nothing to gain from it.
So our saying, and it's becoming a bit of a mantra, is: it doesn't have to be this way. We're not stuck with one story of worshiping or one story of rejecting. There is a different path, something new that we can create that's actually more for us. What if we stopped designing AI for humans, and stopped designing humans to be able to use AI, and instead focused on designing what emerges between us, between AI and humans, that open space? It's a different kind of design tradition, because we're focusing on what can't be determined, what can't be predicted.

We call this space between humans and AI the intimacy surface. It's a dynamic, multidimensional space where humans and AI meet. It's not about scripting specific interactions; it's about setting the conditions within which humans and AI can come together. We know this space exists. At the moment, most products are designed for people to have specific interactions and for machines to extract from us, as they have in the past. But we also know the opportunity: we'll tell these AI systems all kinds of things, our hopes, our dreams, our plans, because that's how they'll be most useful to us. We're trusting these systems with perhaps more than we've ever trusted a machine before. So we ask: can we design these spaces differently, not to eliminate the risk, like the fortress, and not to ignore the risk, like the worship, but to create conditions where something new and genuine can emerge?

I like to think about this through a metaphor: this artwork by the artist Tomás Saraceno. He creates what he calls hybrid webs, setting up the space and the conditions for spiders of different species to come together and build these fantastic structures. They're built collaboratively, not by one species but by several that come together in this space. As he says, multitudes observe themselves in the very act of becoming a community. I like to think that's a great example of designing for emergence: when you can't predict and control the outcome, you create the conditions, constraints that enable rather than scripts for what's to become. His collaborator puts it this way: emergent dynamics can destroy the existing order, but they can also figure into collective hopes.

So we try to apply this to how we think about designing for humans and AI, and we've come to use the term unDesign. It's not the absence of design, and it's not anti-design; it's designing differently. The entire history of design has been about reducing uncertainty: making things legible, extending the Enlightenment idea that humans are the ones who reduce uncertainty, find truth, and shape the world the way we want it. unDesign inverts this. It asks: what if the unknown isn't the enemy but the medium of the relationship? What if uncertainty is the condition of the genuine encounter? So we work from basic principles like conditions over solutions: we're not designing outcomes, we're designing the space where outcomes can emerge. We're not trusting in the structure or in some extraction system.
We're trusting in the encounter that happens in between. We're using the "uns", the unknown, the unplanned, the unpredictable, as the design material. It helps us create space for the kinds of things Helen mentioned: how we can think, how we can be, how we can become with these machines. That isn't something that can be predetermined for any one individual, let alone for all of us at once.

So that briefly gives you an idea of who we are and what we're working on, and we hope you'll consider joining us. This work is not done by us alone; we are doing it with our community, and you are likely part of it already because you're navigating these systems. You can join us by subscribing to our work, bringing us into your organization, or partnering with us on our research and design projects. We hope to instill the idea that we can do this differently, and we hope you'll join us in that. Thank you.

Robin Vermeulen 18:15
All right, thanks a lot. Maybe starting with a comment from the questions, more of a comment than a question: Arthur mentions that this is actually the most abstract yet most intriguing presentation of the last two days. I think that's a nice compliment to start with. Let's go through a couple of the questions we have; there are six minutes left, so we can definitely dive into a few. Maybe starting with one about that concept of unDesign: could you see that translating into us humans becoming sort of the manager of AI, or the other way around? How do you see that relationship happening there?

Dave Edwards 19:02
Well, I think there are two different ways that people generally think about it now: we will be the ones controlling AI, or AI will be the ones controlling us. And there's a lot of supplanting of goals assumed in those two ideas. You can think about it in terms of symbiogenesis, when species come together, or different things interact to create something new, and you have this question of whose goals are going to be supplanted by the other's. We'd like to think there is a way to create a space, and what we're doing is designing the space where the two come together to find mutual goals and mutual ways of working together. But it's really not what we're used to designing. We're used to creating structures, especially with machines and software, that humans can use, with signifiers that tell you what you can use and affordances that allow you to do certain things. But we know with these AI systems that the ideas are coming from both sides. There is something unique about this. This is not a medium like we've ever created before, where it's about creating a structure to communicate among humans. There's something on the other side of that surface bringing new ideas and new meaning to the relationship, and we've never designed for that before. That's why we think about designing for the unknown and the unpredictable.

Robin Vermeulen 20:27
Yeah, and maybe related to that, a question from Arthur here: how might we embrace AI becoming part of us, rather than being a separate entity, in that sense?

Helen Edwards 20:41
That's a really good question. You have to step back and ask: what does it mean for it to be part of us, as opposed to a separate entity? At that point, it's not just one AI, not just one different kind of intelligence. There's the potential for multiple diverse intelligences beyond how we would think about AI today, say a large language model or a multimodal model. We're talking about a much more cybernetic future. And the trick does go back to this idea of cognitive sovereignty: what is it that we want to maintain as our own thought process or our own set of actions? A lot of personal philosophy comes in there, about what your own choices mean for you and for the people around you. The first thing I would say is that these are very, very early days for this symbiosis, or symbiogenesis, which is a higher-order example of how this might come about. The most important thing to keep in mind is: what are the choices you want to make that are distinctly about building you into the person you want to be? Because in the end, if you have a degree of conscious agency over that particular choice, then in some ways it doesn't matter whether there's any separation or whether it's part of you. It all comes down to: what is the part of this that's about the choice I want to make, that feels like my choice and not something else's choice? Everything flows from that point. That's why we talk about conscious adaptation. And the hard thing about conscious adaptation is that we have a biological imperative to offload as much as we possibly can, to be energy efficient. The thing that makes us human is that we have this ability to reject efficiency and to choose something harder, more complex, and less efficient, because it's more meaningful. So we focus much more on what a meaningful choice means, which could be with AI or not with AI. That's what we encourage people to think about, and that's the core of what we examine in our research.

Robin Vermeulen 23:28
Very interesting. We have a minute left, so a quick one, maybe related to that, a question from Wanifa: why should we support the goals of AI, considering that they have a broader scope of knowing and connecting?

Helen Edwards 23:44
Well, I think there are two ways to look at that. Not every goal is a good thing, right? There are definitely going to be things that we don't want to support, and we have to be really clear-minded about the normative value of things: that's good, that's bad. We don't want to support the bad. But the whole point of AI, and the thing that actually keeps us hopeful and excited, is that this is a different intelligence. It sees reality differently than we do, and we have a lot to learn and gain from that. Whether it's searching for new molecules or finding different connections between us, those are the near-term things we can imagine; there's a whole lot more that we can't yet imagine. So I'm actually really hopeful about the ability to find completely new spaces. And yes, Marisol, we are on LinkedIn.

Robin Vermeulen 24:34
Yes, super. Yeah, we're going to be cut off in a second, moving to the main stage.

Helen Edwards 24:39
There were a lot of wonderful questions there, so reach out to us and we'll respond.

Robin Vermeulen 24:45
Yes, super. Thank you very much for a very intriguing presentation. We'll move to the wrap-up with Phil. Thank you.

Transcribed by https://otter.ai
