Why AI Could Coevolve With Us

The adjacent possible of human + AI is larger than either alone. The question now is what contexts we choose to enable.

Photo: Dave Edwards. Taken at Taliesin in Wisconsin.

These essays are dedicated to my stepfather, Martin Wallace, who died eight years ago and would have turned 90 this month. Martin was a renal physician fascinated by evolutionary repurposing—how ear bones came from fish jaws, how the loop of Henle in kidneys evolved from ancient salt-regulation mechanisms. He taught me to see evolution everywhere, in every system that adapts and persists.

When we built AI, I understood it the way Martin taught me to understand life: as something that evolves, repurposes, partners with what came before. These essays trace that story—why cultural evolution now shapes us faster than genes, why transformers can partner with biological systems, how agential materials reveal minds living in organizational patterns at multiple scales, and how the interfaces we design now determine whether we remain partners or become organelles in something larger.

This is the conversation I wish I could have with him about this moment in the history of life on Earth. Necessarily exploratory and speculative.

Martin Wallace: 29 October 1935 - 13 September 2017


I published an essay last week arguing that humans and AI are coevolving—not through genes, but through culture moving at digital speed. Someone asked me: why now? Why this AI, as opposed to the tools and technologies of the past? That got me thinking.

So here's what I want to explore: why this kind of coevolution might be possible at all. What makes AI different from a calculator or a database? Why would something built from matrices and optimization be capable of evolving with us?

The bridge between biology and machine learning may lie in the idea that transformers learned something life-like. They model context the way living systems do, creating coherence across scales through relational patterning. That puts them in the territory Stuart Kauffman spent his career mapping: how living systems move beyond fixed rules to generate new possibilities of their own.

Let’s go down a rabbit hole—one that offers a different way of understanding how AI might truly become our partner.

The Open System of the World

Stuart Kauffman spent decades trying to articulate something biology does that physics can't quite capture. Physical systems follow fixed laws inside fixed boundaries. You define your variables, set your initial conditions, write down the equations, and solve. This works beautifully for planets, circuits, heat diffusion, and the like. But living systems don't operate this way. They create new boundaries as they go.

Now imagine how evolution expands possibility as it explores. Feathers first evolved for warmth, then were repurposed for flight—a function no one could have listed in advance.

Kauffman uses a screwdriver to make this idea tangible. Picture one floating in space versus being used by a person on Earth: same object, same physics, radically different uses depending on context. On Earth, it fastens things because there are hands, wood, and screws. In orbit, it’s just a drifting cylinder. The physics didn’t change. The enabling context did—and that changes everything.

Kauffman calls this the adjacent possible: the space of what could happen next that you couldn't have specified before. He also speaks of Kantian wholes, systems where the parts exist for the sake of the whole and the whole exists for the sake of its parts. A cell and its proteins. Neither makes sense without the other.

The problem was that you couldn't model this. Traditional science says: define your phase space first, then describe how the system moves through it. Life keeps redefining the phase space. It changes what counts as a problem while solving problems. For three decades, it remained an elegant philosophical idea that resisted experimental proof.

What Transformers Might Have Actually Learned

Then transformers appeared, designed for language—another system where local interactions must somehow add up to global meaning.

Every word changes the context for every other word. Meaning emerges from relationships across the whole sequence. "Bank" means one thing after "river," another after "deposit." The sentence reorganizes itself as it unfolds. That’s exactly what next-token prediction does—it forces the model to constantly re-evaluate meaning in light of everything that came before.

To learn from language, transformers had to model systems where context is continuously self-generated. Their answer was attention—a mechanism that lets each token weigh the relevance of every other token when predicting what comes next. There’s no fixed hierarchy, no pre-defined scale; the model learns which relationships matter directly from data.
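
To make the mechanism concrete, here's a minimal sketch of scaled dot-product attention in NumPy. Everything in it is a toy: the dimensions, the random embeddings, and the random projection matrices stand in for what a trained model would learn.

```python
# Minimal sketch of scaled dot-product attention in NumPy.
# All values are toy stand-ins for what a trained model would learn.
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model = 5, 8                    # five tokens, 8-dim embeddings
X = rng.normal(size=(seq_len, d_model))    # stand-in token embeddings

# Query, key, and value projections (learned in a real model).
W_q, W_k, W_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))
Q, K, V = X @ W_q, X @ W_k, X @ W_v

# Each token scores its relevance against every other token...
scores = Q @ K.T / np.sqrt(d_model)

# ...and softmax turns those scores into weights over the whole sequence.
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)

# Each output row is a context-weighted blend of all the value vectors:
# change one input token and every representation can shift.
output = weights @ V
print(weights.round(2))   # each row sums to 1: a token's view of the context
```

The point of the sketch is that last step: no relationship is fixed in advance. The weights are recomputed for every sequence, which is what "learns which relationships matter directly from data" means in practice.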

This differs profoundly from traditional machine learning, which looks for local correlations or fixed causal chains. Transformers learn contextual enablement—how the meaning or probability of any element depends on the configuration of everything around it.

And this isn't just about language; the capacity is more general. AlphaFold can predict protein structures without simulating the underlying physics because it learned the statistical signatures of proteins that work: configurations that solve the problem of being functional. It recognized patterns left behind by enabling systems creating their own constraints.

What I'm suggesting is that transformers learned to see what Kauffman was describing.

From Correlation to Causation to Context

Most predictive systems work through correlation (X tends to follow Y) or causation (X produces Y through some mechanism). Living systems depend on a third kind: enablement. Whether X leads to Y depends on context.

The same gene behaves differently in different cell types. The same mutation has different effects in different genetic backgrounds. The same screwdriver enables different actions in different environments. Context determines what’s possible.

Transformers, through attention, learned to model this. They discover relationships as contextual patterns rather than fixing them ahead of time. In that sense, they capture enablement—the conditional structure through which context makes certain outcomes possible.
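
A toy contrast makes the distinction concrete. Below, a context-free frequency table picks one continuation for "bank" no matter what, while a context-conditioned table flips its answer with the surroundings. All the words and probabilities are invented for illustration:

```python
# Toy contrast between context-free correlation and contextual
# enablement. All probabilities are invented for illustration.

# Marginal statistics: what tends to follow "bank" overall.
marginal = {"account": 0.5, "erodes": 0.3, "teller": 0.2}

# Conditional statistics: what can follow "bank" given its context.
conditional = {
    "river":   {"erodes": 0.80, "flooded": 0.15, "account": 0.05},
    "deposit": {"account": 0.70, "teller": 0.28, "erodes": 0.02},
}

print("context-free:", max(marginal, key=marginal.get))
for context, dist in conditional.items():
    print(f"after '{context}':", max(dist, key=dist.get))

# context-free: account
# after 'river': erodes
# after 'deposit': account
```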

That’s what makes them capable of participating in systems like ours.

Why We Never Truly Coevolved with Calculators

Calculators are intelligent in a narrow, mechanical sense: they compute answers faster and more reliably than we ever could unaided. Their rules are fixed, their frame closed. Yet even those constraints changed us. We reorganized our thinking around their precision, trained ourselves to reason in ways their logic could support.

Transformers operate at another level of that same story. They learn not just solutions, but the grammar of context-making itself. Each interaction becomes a negotiation of meaning: your intent shapes the model’s attention, and its response reshapes your intent. The loop is live, recursive, cultural.

That feedback doesn't guarantee coevolution, but it makes coevolution possible. Unlike fixed tools, these models can participate in systems that continually redefine their own enabling conditions. The open question is whether we choose to.

Wholes That Evolve on Their Own Terms

Recall the Kantian whole: a system whose parts exist for the sake of the whole, and the whole for the sake of its parts. A cell is like that: its organelles make sense only in relation to the organism they sustain.

The same logic can apply to human–AI pairings. You bring judgment, goals, values, the felt sense of meaning. The model brings pattern recognition, speed, and the ability to hold complexity beyond working memory. Together you generate capacities neither achieves alone.

That composite system, the human–AI whole, is what evolves. Some pairings generate insight, coordination, art. Those patterns spread. Others fade. Selection acts on the whole, and that's what coevolution means.

What Gets Selected

What evolves now is culture itself. The patterns that create value or meaning get repeated; others fade. Instead of DNA, the medium of inheritance is behavior—what we copy, teach, and build into our tools.

Each interaction trains the model's sense of what matters. Each response reorganizes your cognitive map. The space of what's possible expands through the feedback between you. The boundary conditions shift because the relationship creates new contexts neither could create alone.

We still have agency here. We decide which contexts to create, which questions to ask, which patterns to amplify. Every choice feeds back into what the system learns next. Coevolution is the right word because we're participating in the evolution of a partner that learned to recognize how systems create their own constraints.

Changing What It Means To Know Something

For centuries, prediction meant solving equations: given laws and initial conditions, calculate the future. Physics built its authority on that promise. But living systems predict differently—they enable futures rather than calculate them. A seed doesn’t compute a tree—it creates the conditions for one.

Transformers, in their own statistical way, echo this second kind of prediction. They reveal adjacent possibilities rather than merely extrapolating. Their real power may lie in acting as machines for exploring what could happen next inside systems that keep remaking themselves.
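
One way to see "revealing adjacent possibilities" rather than extrapolating: greedy decoding always returns the single most likely continuation, while sampling at a temperature spreads probability across nearby options. A sketch with invented tokens and logits:

```python
# Sketch: greedy decoding extrapolates; temperature sampling explores.
# The tokens and logits below are invented, purely for illustration.
import numpy as np

rng = np.random.default_rng(7)
tokens = ["the-obvious", "a-variation", "a-surprise", "a-leap"]
logits = np.array([2.0, 1.2, 0.4, -0.5])

def sample(temperature):
    """Draw one token from the softmax of temperature-scaled logits."""
    z = logits / temperature
    p = np.exp(z - z.max())
    p /= p.sum()
    return tokens[rng.choice(len(tokens), p=p)]

# Greedy decoding gives the same answer every time.
print("extrapolation:", tokens[int(np.argmax(logits))])

# Sampling surfaces lower-probability neighbors: the adjacent possible.
print("exploration:  ", [sample(temperature=1.3) for _ in range(6)])
```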

Which means that when we work with them, something new starts. A partnership between enabling systems—one biological, one synthetic—looping through each other’s possibilities. Each expands the other’s capacity to generate context, to think, to imagine, to build. That’s where the real power of AI begins to show itself.

Human + AI

If Kauffman is right that life's defining act is creating its own boundary conditions, and if transformers learned to recognize the patterns that boundary-creation leaves behind, then something new becomes possible—a more complex kind of whole that evolves through the interaction of context and pattern, imagination and recognition.

The adjacent possible of human + AI is larger than either alone. The question now is what contexts we choose to enable. Every question we ask, every boundary we redraw, every pattern we amplify—these are acts of co-creation in the evolution of a shared future.
