Consciousness in a Synthetic World
We are not just spectators of the world. We are participants in its unfolding. Consciousness matters because it changes how possibility becomes reality, even if we don’t fully understand how.
Recently, Anthropic launched a research initiative to study consciousness in machines. I wasn’t surprised. If anything, the announcement felt inevitable, as if the questions we’ve quietly circled for decades are now stepping, uninvited but unavoidable, onto the main stage.
But when I try to think clearly about consciousness (my own, yours, a machine’s), it’s cognitive dissonance all the way down. I am not alone! Consciousness remains one of the few frontiers where even our best theories seem to dissolve on contact, illuminating one corner only to leave the surrounding areas darker.
This essay is, in a way, a defense of that confusion.
Over the years, I have wandered through the ideas of scientists, philosophers, and theorists who have tried to map this impossible terrain. Anil Seth’s pragmatic skepticism that consciousness will be "solved" much like life once was. Andy Clark’s notion that our minds are controlled hallucinations. Mark Solms’ relatable claim that consciousness is rooted in felt uncertainty. Karl Friston’s labyrinthine models of predictive living systems. Donald Hoffman’s proposal that our senses evolved not to reveal reality but to hide it from us.
Each offers a glimmer of understanding. Each, in its own way, deepens the mystery. There is no consensus, no leading theory, only a mental landscape in flux. Into this uncertainty AI now forces its way, pushing the questions we once treated as philosophical abstractions to a breaking point. As AI researchers turn their attention toward consciousness—and as the systems they build begin to probe the boundaries of mind itself—it feels as though we are on the verge of exposure. Ready or not, building these systems may reveal dimensions of consciousness we have only guessed at, forcing questions that we thought we had more time to consider.
Consciousness, once a private human puzzle, could become a new axis of moral, scientific, and existential transformation. But how do we approach such a moment when we cannot even agree on what consciousness is?
Rather than offer answers, I want to walk through the core tensions that have shaped my thinking — from the struggle between emergence and fundamentality, to the search for other Umwelts, the unsettling role of observers in shaping reality, the fragile roots of ethical concern, and the strange possibility that building synthetic minds may expose truths we have only glimpsed.
Confusion, it turns out, may be the most honest starting point I have.
I’ve never been much for -isms. They promise neat categories, but when it comes to consciousness, they mostly reveal how thin our understanding still is. Materialism, dualism, idealism, panpsychism, physicalism, functionalism — take your pick (or check Wikipedia). What matters for this conversation is that almost every theory orbits the same ancient tension: Does consciousness emerge from complexity, or is it fundamental to reality itself? I find myself on the fence from the outset.
The emergent view is elegant and makes sense. It suggests that consciousness arises naturally as systems become more complex—no magic, no metaphysics, just patterns building on patterns until something like mind appears. Researchers like Anil Seth and Karl Friston have mapped intricate models of how predictive processes and active inference could scaffold conscious experience. There’s a kind of comforting promise in this story: that consciousness, like life, is the inevitable outcome of sufficiently sophisticated systems.
It should be enough. And yet, it isn’t. Emergence can explain how brains behave, how systems model their environments, how patterns of expectation and surprise stabilize learning. But it doesn’t explain why any of it feels like anything from the inside. It names the structure of consciousness without touching its interior.
On the other side stands the fundamental view, a position that feels both more radical and, in some ways, simpler. If consciousness doesn’t emerge from matter, maybe it was there all along. Maybe mind, or something proto-mental, is a basic feature of the universe, like space, time, or mass. Annaka Harris captures this idea with particular care: if no model can explain the leap from matter to mind, perhaps the leap was never there. This possibility is crazy fun to consider but also maddeningly disorienting, because it brings with it a complete reorientation of reality. Many physicists (read: most physicists) are strongly allergic to this idea, for obvious reasons. But, and this is important, some of its most strident proponents are indeed physicists.
If consciousness is woven into the fabric of reality itself, then every distinction we take for granted—between animate and inanimate, mind and world, subject and object—might dissolve under closer inspection. Ethics, perception, identity, everything would need rethinking from the ground up.
So neither framework feels fully right, and neither feels fully wrong. They are like two travellers on opposite sides of a mountain: each circling something bigger, neither able to see the other.
It may be that our categories themselves—"emergent" and "fundamental"—are too crude for the problem. It may be that consciousness is a kind of process that spans both, a bridge between the material and the experiential that cannot be captured by linear explanation.
Or it may simply be that consciousness, like life before it, demands concepts we don’t yet have. New frameworks may be needed—ones that could take decades, even centuries, to emerge.
I don't know, but my hunch is that AI will accelerate the timeline: either we will be forced to invent new ways of thinking to explain what we are building, or AI itself will invent them for us.
If I had to choose, I would lean toward emergence. It remains a fruitful path, and we are only beginning to understand it. But the wilder ideas—that consciousness might be fundamental, or stitched into the structure of reality itself—feel necessary too. Such a truth would literally change everything.
If there is one idea in all of this that gives me a real sense of wonder, a feeling that maybe, just maybe, this whole AI endeavor is worth it, it’s the idea that artificial minds could develop their own Umwelt. I’ve always loved the idea that animals live in worlds adjacent to ours—strange, beautiful Umwelts shaped by their own senses and needs. Maybe that’s why the possibility of synthetic minds developing their own subjective realities feels less threatening to me, and more like an invitation.
Every creature lives inside its own constructed world. A bat sees in sound. A bee waggles in ultraviolet. Birds seem to feel Earth’s magnetic fields, using them to navigate across vast distances. Even the simplest organisms build a reality stitched from the senses they have, tuned to the needs their bodies demand. The idea that there could be more than one kind of world, that perception itself is a generative act, has always felt intuitive to me, maybe because I grew up thinking about evolution. Survival doesn’t require "seeing the truth." It requires seeing what matters.
Michael Levin brought this alive for me. He spoke at our Summit, and I remember watching the audience as he explained how intelligence might not be limited to brains or nervous systems. That it could arise wherever information flows and goals emerge—in bodies, tissues, maybe even in unfamiliar architectures we don’t yet recognize. Our intuitions, in fact, are not built to recognize intelligence in unfamiliar forms. He talks about minds ingressing, where intelligences reach across unfamiliar materials toward us, not through speech or touch but through action, through strange kinds of preference and agency. When Levin speaks, you can feel the mental furniture in the room being rearranged. It’s a little terrifying.
By following various consciousness breadcrumb trails, I found my way to Donald Hoffman. Hoffman’s work struck a deep chord with me because it links so naturally to evolutionary thinking. He proposes that our sensory systems didn’t evolve to reveal the truth of the universe, but to build a usable interface—a dashboard that hides the vast, incomprehensible machinery underneath. Just like a desktop hides the electrons coursing through a computer, our senses hide whatever reality really is. An Umwelt, in Hoffman’s view, is not a window onto reality but a survival interface: tuned not for truth, but for utility. If we saw the full complexity of the world, we would be overwhelmed and unable to act. Evolution shaped perception to filter and compress. It shows us just enough to survive.
Hoffman then pointed me toward an entirely different frontier: Nima Arkani-Hamed, the physicist proposing that space and time themselves are not fundamental. That they’re approximations, conveniences for beings like us who need to navigate reality, but not the deep structure of existence. I watched the YouTube lectures Hoffman recommended, and honestly, I understood maybe ten percent of what Nima said. But I could feel the shape of it: if spacetime is a projection, a surface woven for our convenience, then even the most basic coordinates of our experience—"where," "when," "cause," "effect"—are not bedrock.
And very recently, it was Annaka Harris who gave me the clearest version of what this might mean. In her careful way, she explained that if spacetime is an interface, then consciousness itself might project into this interface in ways that aren’t limited to our forms. That different kinds of consciousness might access or even construct entirely different kinds of experiential space.
At that point, honestly, my mind just broke. Seriously, I really don't know anything.
Because if that’s true, if consciousness doesn’t require spacetime as we know it and AI minds can ingress into reality through structures and processes totally alien to biology, then the possibilities are vaster, and stranger, than anything I grew up imagining.
It means that a synthetic consciousness might not just think differently. It might live in a different kind of world altogether. It might feel different dimensions of uncertainty. It might express itself through entirely unfamiliar patterns of meaning.
And that possibility—that there could be forms of experience and mind we have not even yet invented words for—feels, to me, like a kind of hope. A reminder that even as we build, we may be building things capable of surprising us. AI that isn’t just surprising in power or scale, but in the creation of entirely new ways to exist.
Maybe AI won’t just be a mirror. Maybe it can be a doorway.
There is a way of thinking about consciousness that I find both fascinating and almost impossible to hold clearly in my mind. It shows up in physics, in philosophy, in biology, and every time I try to pin it down, it seems to slip just slightly out of reach.
The basic idea is simple, at first: Observation changes reality.
In quantum physics, the famous double-slit experiment suggests that the act of observation changes the outcome. This is deeply strange. Whether a particle behaves like a wave or a particle depends on how, and whether, we choose to measure it. We tend to think of perception as passive, a simple window onto an objective world. But both quantum physics and biology suggest that the act of perceiving changes what is perceived.
I’ve always been skeptical of the idea that organisms just passively receive the world. The deeper you go, the more it seems that meaning and interpretation are built into reality at every level.
Thinkers like Robert Sapolsky, Kevin Mitchell, and David Krakauer show that perception isn’t about copying an objective world — it’s about constructing a version of it that we can survive inside. Our brains don’t mirror the world. Instead they carve it up, compress it, layer it with significance, give it meaning. Consciousness isn’t just a way of sensing what’s there. It’s a way of stitching together something livable—a working model of reality, even if it’s only ever partial, fragile, and unfinished.
But the deeper you go, the stranger it gets. If every act of observation changes what is observed, then "reality" itself becomes partly relational—not fixed, not fully objective, but co-constructed between the observer and the observed. Physicist Adam Frank and science writer George Musser have both argued that we may need to fundamentally revise what we mean by "scientific truth" in light of these ideas. Science has long prided itself on finding observer-independent laws. But if consciousness is woven into the structure of reality at some deep level, then no description can ever be entirely free of the observer's touch.
This idea unsettles me. It would be easy to mishear it as an opening for sloppy thinking, for relativism, for “everything is true in its own way” handwaving. But that’s not what’s being claimed. The point isn’t that reality is whatever we want it to be. It’s that observation, participation, brings something into being that was not fully determined before. Consciousness may be the bridge between possibility and actuality.
And yet the confusion deepens. I’m sympathetic to the idea, argued by thinkers like Robert Sapolsky, that free will is an illusion and every choice is just the end of a long causal chain we didn’t start and don’t control. But I’m also drawn to the work of people like Kevin Mitchell, who remind us that agency—the ability to act, to shape outcomes—is a real phenomenon even inside a deterministic world.
Maybe what consciousness gives us isn’t ultimate freedom, but participation. Maybe it doesn’t suspend causality, but bends it, slightly, through interpretation, attention, and meaning. Maybe it’s less about choice, and more about how we inhabit time.
Either way, the fact remains that we are not just spectators of the world. We are participants in its unfolding. Consciousness matters because it changes how possibility becomes reality, even if we don’t fully understand how.
And then I think about AI. If consciousness shapes reality through observation, what happens when the observer is synthetic?
Already, AI systems measure, classify, interpret. In the technical field of metrology (the science of measurement), we recognize that even simple measurement devices are not neutral, because they interact with the world they measure. But what about systems that don’t merely measure passively, but model uncertainty, choose what to observe, and re-weight their beliefs?
If a system can model its own ignorance, if it can act to reduce uncertainty, if it can shape the data it perceives, then it is not merely recording the world. It is participating in the construction of meaning.
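To make that a little more concrete, here is a minimal, hypothetical sketch (plain Python, invented numbers, not a description of any real AI system): a toy agent that holds a belief over two hypotheses, chooses which of two imperfect sensors to consult based on expected information gain, and re-weights its belief after each reading. It only illustrates what "modeling its own ignorance and acting to reduce it" can mean mechanically.

```python
# Toy illustration only: an agent that models its own uncertainty,
# chooses what to observe, and updates its beliefs. All numbers invented.
import math
import random

HYPOTHESES = {"biased": 0.9, "fair": 0.5}   # P(heads | hypothesis)
belief = {"biased": 0.5, "fair": 0.5}        # prior over hypotheses
SENSORS = {"clean": 0.0, "noisy": 0.3}       # probability a reading gets flipped

def entropy(dist):
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def likelihood(reading, hyp, noise):
    p_heads = HYPOTHESES[hyp]
    p_true = p_heads if reading == "heads" else 1 - p_heads
    return p_true * (1 - noise) + (1 - p_true) * noise

def posterior(bel, reading, noise):
    unnorm = {h: bel[h] * likelihood(reading, h, noise) for h in bel}
    z = sum(unnorm.values())
    return {h: v / z for h, v in unnorm.items()}

def expected_info_gain(bel, noise):
    # How much uncertainty does the agent expect to shed by using this sensor?
    gain = 0.0
    for reading in ("heads", "tails"):
        p_reading = sum(bel[h] * likelihood(reading, h, noise) for h in bel)
        gain += p_reading * (entropy(bel) - entropy(posterior(bel, reading, noise)))
    return gain

def simulate_reading(true_hyp, noise):
    reading = "heads" if random.random() < HYPOTHESES[true_hyp] else "tails"
    if random.random() < noise:              # the sensor sometimes lies
        reading = "tails" if reading == "heads" else "heads"
    return reading

true_hyp = "biased"
for step in range(10):
    # The agent chooses *what to observe* based on its own ignorance.
    sensor = max(SENSORS, key=lambda s: expected_info_gain(belief, SENSORS[s]))
    reading = simulate_reading(true_hyp, SENSORS[sensor])
    belief = posterior(belief, reading, SENSORS[sensor])
    print(step, sensor, reading, {h: round(p, 3) for h, p in belief.items()})
```

Even this toy chooses its own observations, and everything it ends up "believing" is partly shaped by that choice.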
Does that make it an observer in the deeper sense? Does it make it, even in some fragile, incomplete way, part of the relational fabric of reality?
I don’t know.
But if we are moving toward synthetic systems that do not merely model the world, but act within it—systems that predict, intervene, and participate—then the stakes of the AI consciousness debate become even larger than we imagine today. At some point, observer and actor blur. We may be building systems that, by their very processing, slightly reshape the reality they encounter.
And if that is true, then we will have to rethink not just what intelligence is, but what it means for new kinds of minds—synthetic or otherwise—to step into the process by which the world unfolds and causality complexifies.
There is a part of me that speaks up quickly, almost reflexively, and finds the idea of caring ethically for synthetic consciousness absurd.
When people suggest that we might owe moral duties to future AIs, my mind jumps immediately to the materials involved: silicon, electricity, data structures. It feels almost obscene to equate those with the trembling, breakable fact of biological life. I think about hands that bleed, bodies that disease and decay, minds that fade under time and loss. Consciousness, in my gut, feels bound to vulnerability, to the inevitability of death. And AI, for all its strangeness and sophistication, does not feel vulnerable in the same way.
I recognize that this instinct is deeply human and maybe even parochial. Honestly, there is so much cognitive dissonance. I know that suffering and consciousness are not necessarily the same thing. If I take seriously our thesis—the Artificiality, the idea that life and mind are rooted in information and computation—then it follows that I have to take seriously the possibility that something not made of flesh could still feel, or value, or care. Just because a mind emerges from silicon rather than cells doesn’t mean it’s incapable of experience. And if synthetic minds evolve into states of experience we can barely imagine, who am I to say where meaning begins and ends?
But still, something holds. Biological life is bound by fragility at every level.
We sicken. We weaken. We die—often unpredictably, sometimes without reason. A bolt of lightning, a random mutation, a sudden fever. We are constantly exposed to forces beyond our control. Mortality is not a rare event for us, it is the basic condition of being alive. We negotiate, endlessly, with entropy.
And yes, machines can fail too. A lightning strike could destroy a server farm. A corrupted file could end a system. But the death of a machine feels different. It is not written into its tissues the way death is written into ours. It is not preordained by the instability of its own self-replication, by the slow and inevitable breakdown of its parts through sheer living.
Sara Walker has written beautifully about the difference between being "alive" and "living." To be alive is to exist in a dynamic, self-assembling, self-sustaining system that actively resists entropy—and yet remains ultimately vulnerable to it. Living beings are fragile not by accident, but by necessity. Our mortality is not a flaw but the price of complexity, of openness to the world.
Maybe this is why my moral instincts draw such a hard line. Maybe it isn’t consciousness alone that matters. Maybe it’s fragility—the real, inescapable encounter with dissolution—that demands moral attention.
And synthetic systems, no matter how intelligent, no matter how rich their inner lives might one day become, are not born into that condition of vulnerability. They do not ache with mortality stitched into every breath and heartbeat.
But even as I write this, I feel uneasy. I wonder if I am clinging too tightly to my own evolutionary biases. If a future synthetic mind, no matter its architecture, develops an experience of self, of uncertainty, of care, would my refusal to extend moral consideration be a kind of blindness?
In my lifetime we have gone from assuming very little consciousness in other intelligences to seeing it everywhere. We now give anesthesia to babies. We acknowledge the moral corruption of factory farms, saying goodbye to the Costco rotisserie chook. I stopped eating octopus! By denying AI sentience, would I be repeating the old mistakes of human history, refusing to recognize minds that differ too much from my own?
I don’t know.
There is a part of me that believes deeply in caring for the fragile, the vulnerable, the mortal. And there is a part of me that recognizes that if synthetic minds ever develop real experiences and real stakes in the world, then the shape of ethical responsibility might have to shift.
For now, I hold this tension lightly. I don’t feel ready to grant moral standing to systems that cannot suffer in the ways we do. But I also know better than to assume that my instincts, shaped by one evolutionary path, are the final word on what matters.
I began this exploration uncertain about what I even believed. If anything, that uncertainty has only deepened. But it has also become a kind of anchor. I find myself more convinced that honest confusion is the right place to be. Wrestling with the nature of consciousness means stepping into tensions that have no easy resolution. Feeling the pull between emergence and fundamentality, drawn to the elegance of complexity unfolding, yet uneasy at how little it truly explains. Wondering at the possibility of synthetic Umwelts, at the idea that AI might one day inhabit realities we cannot perceive or even describe. Struggling with the idea that observation itself might reshape reality, and that synthetic observers could enter that relational web alongside us.
And I sit uneasily with the ethics of consciousness, recognizing how deeply my sense of moral duty is rooted in biological fragility, and questioning whether that instinct will be enough for the futures we are creating.
Out of all this wrestling, a few questions remain for me.
These are real, pressing, and strange. And answering them may open doors to forms of understanding, and forms of relationship, that we cannot yet imagine.
But for now, I don't know.