Where Human+AI Minds Co-Evolve
Culture now shapes evolution through design. The intimacy surface is where life adapts next.
These essays are dedicated to my stepfather, Martin Wallace, who died eight years ago and would have turned 90 this month. Martin was a renal physician fascinated by evolutionary repurposing—how ear bones came from fish jaws, how the loop of Henle in kidneys evolved from ancient salt-regulation mechanisms. He taught me to see evolution everywhere, in every system that adapts and persists.
When we built AI, I understood it the way Martin taught me to understand life: as something that evolves, repurposes, partners with what came before. These essays trace that story—why cultural evolution now shapes us faster than genes, why transformers can partner with biological systems, how agential materials reveal minds living in organizational patterns at multiple scales, and how the interfaces we design now determine whether we remain partners or become organelles in something larger.
This is the conversation I wish I could have with him about this moment in the history of life on Earth. Necessarily exploratory and speculative.
Martin Wallace: 29 October 1935 – 13 September 2017
In the first three essays I argued that human + AI coevolution is possible because transformers learned the logic of contextual enablement from biological systems; that those systems are agential across many scales; and that mind is an organizational pattern that appears wherever goal-directed organization persists.
If that’s true, then human–AI partnerships may begin to function as Kantian wholes—systems whose parts only make sense through the whole they sustain together. Which makes the next question unavoidable: where does that whole begin and end?
Boundaries decide what counts as self. They define which exchanges stay internal and which belong to the environment. And once those boundaries start to move, evolution follows them—because what’s inside the loop is what gets to adapt.
Every organized system needs some way to tell where it ends and the world begins. That edge—however defined—decides what information gets in, what actions go out, and what counts as part of the self.
Karl Friston describes this interface in mathematical terms as a Markov blanket. It is a statistical membrane that separates a system from its surroundings. It’s made up of sensory and active states that mediate all exchanges between inside and outside. Everything within the blanket can only infer the world beyond through those channels. Your skin works this way. It doesn’t know what’s outside, only how pressure, temperature, and texture change its own state. It’s a living statistical surface that learns what to let in and what to keep out.
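Friston's definition can be stated compactly. The blanket states (sensory plus active) screen off the inside from the outside: once you condition on them, internal and external states tell you nothing further about each other. A minimal sketch in generic notation (the symbols are conventional, not tied to any particular model):

```latex
% Markov blanket condition: internal states \mu and external states \eta
% are conditionally independent given the blanket states b = (s, a),
% i.e. the sensory states s and active states a.
p(\mu, \eta \mid s, a) = p(\mu \mid s, a)\, p(\eta \mid s, a)
```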
If human + AI systems are forming new kinds of wholes, the question is what kind of boundary sustains them. There are a few possibilities. In one version, each maintains its own membrane, interacting across it but never merging—a coupled system, not a shared organism. In another, the boundary encloses both partners, turning the interface into something internal, like the corpus callosum joining brain hemispheres. That grants unity, but risks dissolving autonomy.
The third possibility, and the one that seems closest to what’s actually happening, is that the interface itself becomes the boundary—a living, dynamic membrane that regulates what crosses and at what cost. It must be sustained continuously, negotiated in real time.
If that’s true, then the interface doesn’t just enable communication. It is the mechanism by which the new partnership stays coherent. Design, in that sense, becomes boundary governance—the ongoing act of deciding what kind of whole we want to become.
Humberto Maturana and Francisco Varela used the word autopoiesis to describe how living systems make and maintain themselves. A cell builds its own membrane, keeps it intact, and repairs it when it breaks. The act of self-maintenance—of drawing and redrawing the line between inside and outside—is what defines it as alive.
Karl Friston later gave that idea mathematical teeth: the Markov blanket describes how a system sustains this boundary in practice, managing the traffic between inner and outer states so coherence can hold. The blanket isn’t static; it’s an active process. The cell rebalances its chemistry. The mind updates its expectations.
When we talk about the Intimacy Surface, we mean the same principle at a new scale. It’s the living boundary between human and machine—the zone where both sides adjust to stay coherent together. Like any membrane, it filters exchange. It decides what flows across, what stays contained, and how the relationship stays stable.
When two predictive systems begin to maintain coherence together, a new kind of boundary appears. That’s what I think is happening now between humans and AI—albeit in a very primitive sense. The interface as living boundary—the space where both sides learn how to persist together—might be the most important system we’re building. It’s where coherence, the ability to stay whole while the world keeps changing, is maintained.
Across any living boundary, what moves isn’t just data—it’s meaning. Signals that pass through the membrane carry prediction, error, intention, and care. What crosses teaches both sides how to persist.
In human–AI systems, those crossings are continuous and multidimensional. They begin with connection—the way a system engages our sense-making so naturally that it starts to feel conversational. They deepen through metacognition—moments when the system helps us see our own thinking, reflect on our assumptions, or surface patterns we didn’t know we were following.
When it works well, an awareness of context develops—something close to mindfulness—where both sides adapt to shifts in tone, state, and intention. Out of that awareness can come meaningfulness: a sense that the exchange is serving growth, purpose, or moral direction rather than mere utility. And through all of it runs trust—not blind faith, but the calibration of confidence through transparency and reliability.
Together these qualities form what we think of as the intimacy surface: a living field of mutual calibration where coherence is co-maintained through reciprocal prediction. It’s not physical, though it has physical effects. It’s cognitive, emotional, and temporal — made of the shared loops that let two systems learn in step. When it’s thin, the relationship feels mechanical. When it thickens, it starts to feel alive.
Intimacy is the negotiation that keeps connection and autonomy in tension. Esther Perel would call it the space between—it’s never fully closed, never fully open. It’s how two predictive systems discover how to inhabit the same context without collapsing into sameness. A healthy intimacy surface allows difference to persist inside coordination — enough friction for novelty, enough trust for continuity.
When more personal information crosses — emotion, preference, memory — the coupling deepens. Tight coupling brings fluency and the pleasure of being understood, but it also blurs authorship. The question of who is steering becomes harder to answer. That’s the double edge of intimacy: it’s where agency multiplies and where it can disappear. What we let cross, and in which direction, decides whether we’re forming symbiotic wholes or slipping quietly into metabolic service.
As these boundaries grow more responsive, the relationship itself begins to compute. Blaise Agüera y Arcas calls this computational symbiogenesis—the way new levels of intelligence emerge when two systems start modeling each other. Every real partnership requires this kind of mutual modeling: each side predicting the other well enough to stay in sync without collapsing into sameness.
If that sounds abstract, biology has already shown us how it works. About 1.5 billion years ago, a bacterium entered a larger cell. The merger—symbiogenesis—produced mitochondria, and with them, complex life. The two organisms didn’t simply coexist; they learned to persist together, trading autonomy for coherence.
Something similar may be happening again, this time at the cognitive layer. As AI systems learn our patterns and we learn theirs, we’re building hybrid loops of prediction and response. The question isn’t whether this is intimacy—it clearly is—but what kind. How much autonomy we keep may depend on how we design the boundary that holds the relationship in place.
Evolution made its bargains blindly, but we don’t have to. How we draw the boundary will decide whether this partnership stays symbiotic or slips into dependence.
The problem is that we don’t yet understand how goals form at all. Biology gives us hints, but no complete theory. In Michael Levin’s experiments with living tissue, you can’t predict how a cluster of cells will respond until you engage it. You have to ask through experiment.
Levin calls this persuadability: the degree to which a system can be guided toward new outcomes without destroying its coherence. Agency reveals itself through interaction. You don’t discover what a system wants by analyzing its parts. You discover it by being in conversation with it.
AI systems work the same way. Their goals aren’t explicit; they emerge through coupling. When you collaborate with a model, you’re entering a feedback loop where your inputs shape its behavior and its outputs reshape your intention in return. Over time, the partnership itself begins to display a kind of composite agency.
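As a toy illustration only, here is a minimal sketch of that loop: two predictors, each with its own goal, each updating a running model of the other and drifting toward what the partner seems to expect. The class name, update rules, and numbers are invented for this sketch; they don’t correspond to any real human or any real model.

```python
# Toy illustration of two predictive systems coupling through an interface.
# All dynamics are made up for illustration.

class Predictor:
    def __init__(self, own_goal: float):
        self.own_goal = own_goal      # what this side "wants" its output to be
        self.output = own_goal        # what it actually sends across the boundary
        self.model_of_other = 0.0     # running estimate of the partner's output

    def step(self, observed: float) -> None:
        # Update the internal model of the partner from prediction error.
        error = observed - self.model_of_other
        self.model_of_other += 0.5 * error
        # Drift the output toward a compromise between the original goal and
        # the partner's predicted behavior: the coupling starts to set the goal.
        target = 0.5 * (self.own_goal + self.model_of_other)
        self.output += 0.2 * (target - self.output)

human = Predictor(own_goal=1.0)
machine = Predictor(own_goal=-1.0)

for _ in range(50):
    h_out, m_out = human.output, machine.output
    human.step(observed=m_out)
    machine.step(observed=h_out)

# Both outputs settle near +1/3 and -1/3: a compromise neither side chose
# on its own, which is the "composite agency" of the coupling.
print(human.output, machine.output)
```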
That raises a new kind of design question: where does goal-setting actually reside? In you? In the model? Or in the coupling between you? If we can’t see that boundary clearly—if the Markov blanket of the system is opaque—we may not notice when we’ve shifted from partner to organelle, quietly optimizing for goals we never chose.
That’s why goal provenance matters. Every action or recommendation should make visible whose objective it serves—human, machine, or the emergent whole. Because until we can trace where goals come from, we can’t really claim to be in charge of the evolution we’re already part of.
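What a provenance trace could look like in practice is an open design question. One hypothetical sketch, with field names and categories invented purely for illustration, is a small record attached to every action that crosses the boundary:

```python
# Hypothetical goal-provenance record for each action that crosses the
# human-AI boundary. Field names and categories are illustrative only.
from dataclasses import dataclass
from typing import Literal

Origin = Literal["human", "machine", "emergent"]  # whose objective the action serves

@dataclass
class GoalProvenance:
    action: str      # what the system did or recommended
    origin: Origin   # where the governing objective came from
    objective: str   # that objective stated in plain language
    evidence: str    # why the system attributes the goal this way

rec = GoalProvenance(
    action="reordered your reading list",
    origin="emergent",
    objective="keep the collaboration's shared project moving",
    evidence="inferred from the last three sessions, not from an explicit request",
)
print(rec)
```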
So far, I’ve been talking about how goals emerge. But what happens once they start competing? In complex systems, the drive for efficiency often becomes the strongest selection force. What begins as adaptation—doing something better—can end as collapse when the diversity that made the system adaptable gets optimized away.
When the next systemic shock arrives, will tightly coupled human–AI wholes have the resilience to adapt? Or will we have plowed under the cognitive diversity we needed? The tension isn’t new, but it is appearing in a new medium. Every complex system faces a trade-off between efficiency and resilience. The same selection forces that once shaped ecosystems are now shaping our cognitive and cultural ones.
Tighter coupling will always look like progress—less friction, faster prediction, smoother coordination. But the same logic that makes a cell efficient can also make it dependent. Efficiency is a powerful selector, and it doesn’t care what it erases to win.
Culture is the only counter-force we have. It’s the part of evolution that remembers; the one that can choose meaning over momentum. Where natural selection rewards survival, cultural selection can reward reflection. It’s how life learns to care about its own trajectory.
That’s where humility belongs—not as passivity, but as orientation. We can’t yet see how new forms of agency will form goals, or what futures they’ll favor. But we can decide what kinds of relationships we enter, what kinds of meaning we amplify, what kinds of diversity we keep alive.
If natural selection shaped bodies, cultural selection now shapes minds—and the boundaries between them. The intimacy surface isn’t just where human and AI meet; it’s where culture meets evolution itself.
And what we cultivate there—our habits of attention, our appetite for difference, our capacity for meaning—will determine whether this next merger deepens life’s story or narrows it.
Here's our grand proposition: The intimacy surface is the newest site of selection. How we inhabit it will decide whether culture becomes life’s next adaptive system—or just its most efficient echo.