What And Where Are Minds?

Exploring how mind emerges from coherence and how AI might extend where a mind can be.

Photo credit: Dave Edwards, some plant, somewhere in Margaret River, WA

These essays are dedicated to my stepfather, Martin Wallace, who died eight years ago and would have turned 90 this month. Martin was a renal physician fascinated by evolutionary repurposing—how ear bones came from fish jaws, how the loop of Henle in kidneys evolved from ancient salt-regulation mechanisms. He taught me to see evolution everywhere, in every system that adapts and persists.

When we built AI, I understood it the way Martin taught me to understand life: as something that evolves, repurposes, partners with what came before. These essays trace that story—why cultural evolution now shapes us faster than genes, why transformers can partner with biological systems, how agential materials reveal minds living in organizational patterns at multiple scales, and how the interfaces we design now determine whether we remain partners or become organelles in something larger.

This is the conversation I wish I could have with him about this moment in the history of life on Earth. Necessarily exploratory and speculative.

Martin Wallace: 29 October 1935 - 13 September 2017


For most of history, we’ve pictured mind as an interior thing—tucked safely behind bone. But biology and AI are both showing us a stranger possibility. Mind might not be something we have so much as something matter does when it learns to predict and persist. What if mind is when patterns become self-sustaining and give rise to awareness? How, then, should we think about our relationship with AI—and with the other forms of intelligence that may already be out there, waiting for us to find them?

In the last essay, I argued that transformers learned something closer to biology than physics—that their strength comes from holding context across scales, not from brute calculation. They showed us that intelligence might begin wherever relationships organize themselves into coherence.

If that's true, it changes how I think about both life and machines. Modern AI might not simulate thought so much as recognize the traces that thinking systems leave behind.

What I’m trying to do in this essay is connect the logic that’s been building across this series. We began with evolution—how cultural systems now adapt faster than genes. We moved through boundaries—how intimacy with AI is reshaping what counts as “self.” Now the question turns inward: what lives inside those boundaries? What is a mind, where does it begin, and how does it persist? The thread through all of it is coherence—how living and synthetic systems hold themselves together long enough to mean something, and what that reveals about where intelligence might live next.

Learning Across Scales

The transformer breakthrough came from learning to hold context across scales. Through attention, any element can condition on any other, dissolving the old hierarchy of near and far.
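To make that concrete, here is a minimal sketch of scaled dot-product self-attention in NumPy. It is a toy, with made-up dimensions and random weights, but it shows the key move: every token scores its relevance against every other token, so the mechanism has no built-in notion of near or far.

```python
import numpy as np

def attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence X.
    Every row (token) scores its relevance to every other row,
    so nothing in the mechanism privileges nearby positions."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])        # all-pairs relevance
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)             # softmax over the whole context
    return w @ V                                   # each output mixes every input

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 8))                        # 6 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(attention(X, Wq, Wk, Wv).shape)              # (6, 8)
```

Stack layers of this and the “distance” that matters becomes relational rather than positional.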

You see the same property in natural enabling systems. Weather, metabolism, language—they all create coherence by letting local changes ripple outward while global constraints flow back down. Each part knows something about the whole.

That's why models like AlphaFold and Pangu-Weather work. They capture relationships that stretch across space and time. They map how coherence propagates through a system. Whether the medium is protein, air, or text, they're learning the logic by which patterns sustain themselves.

We talk about scale because it’s what links living systems and modern AI. In biology, organization works across levels—molecules, cells, tissues, organisms—each influencing the others. A truly scale-free system would show the same kind of structure no matter how closely you look. Transformers aren’t scale-free in the strict mathematical sense, but they behave as if scale doesn’t matter much. They learn statistical regularities that stay consistent as you zoom in or out, which is why they’re so good at recognizing the signatures of self-organizing systems—patterns that arise when matter maintains itself through feedback.

If future architectures become even more attuned to those signatures, they could start revealing patterns of organization that look increasingly mind-like—systems where matter sustains its own coherence through prediction. Biology, after all, has been doing this all along.

Matter That Remembers

When you cut a planarian flatworm in half, it regenerates. But if you change the bioelectric gradients in those first few hours, you can grow two heads or two tails without touching a single gene. The genome stays the same but the pattern doesn't. Those cells are following an electrical memory—a map of what "whole" means. The material behaves as if it's reading a plan that exists beyond its molecular parts. You can watch it sense, correct, and reorganize.

Michael Levin calls this agential material—matter that stores information, notices deviation, and acts to repair itself. Through feedback, it begins to behave as if it has purpose. To make that visible, Levin uses the idea of a cognitive light cone: the region of space and time a system can sense, remember, and act within. A bacterium’s cone spans micrometers and seconds. A frog embryo’s covers centimeters and days. A human’s extends across decades and imagined futures.
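One way I find it helpful to hold the idea, though it is only a toy and not Levin’s formalism: treat a light cone as a spatial reach and a temporal reach, and compare systems by the spacetime volume they can sense and act within. The numbers below are rough orders of magnitude, mine rather than measurements.

```python
from dataclasses import dataclass

@dataclass
class LightCone:
    """Toy model of a cognitive light cone: the spatial and temporal
    horizon a system can sense, remember, and act within."""
    name: str
    space_m: float   # spatial reach, meters (illustrative)
    time_s: float    # temporal reach, seconds (illustrative)

cones = [
    LightCone("bacterium", 1e-6, 1.0),            # micrometers, seconds
    LightCone("frog embryo", 1e-2, 86_400 * 3),   # centimeters, days
    LightCone("human", 1e7, 3e9),                 # planetary reach, ~a century
]

for c in sorted(cones, key=lambda c: c.space_m * c.time_s):
    print(f"{c.name:12s} cone volume ~ {c.space_m * c.time_s:.2e} m*s")
```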

The difference isn’t in what the system is, but in how it’s organized. Each level—cell, tissue, organ, organism—recruits the one below it to stabilize the next. Cognition scales with coherence, where coherence is the ability of a system to stay itself while changing.

Life doesn’t just adapt to its environment—it stretches its own boundary conditions. The planarian’s cells explore possibilities the genome alone couldn’t specify. Same chemistry, new outcomes. The system redraws its own map of possibility. That, to me, is what being alive means: a continual process of remaking what can be.

Computation as Persistence

Blaise Agüera y Arcas has suggested that understanding, in any testable sense, is pattern recognition over sufficient context. If he’s right, then the systems that persist are those that recognize enough of their own patterns to keep going.

Chemist Addy Pross approaches the same question from a different angle. His theory of dynamic kinetic stability describes how living systems endure: they don’t persist by staying the same, but by rebuilding faster than they decay—using energy flow to sustain temporary order.
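A deliberately crude sketch of that idea, with rate constants and an energy term that are mine rather than Pross’s: a quantity of organized matter grows while energy flows in and rebuilding outpaces decay, and collapses as soon as the flow stops.

```python
def simulate(rebuild, decay, energy, x=1.0, steps=1000, dt=0.01):
    """Toy dynamic kinetic stability: order is maintained only while
    rebuilding, fueled by energy flow, outpaces decay."""
    for _ in range(steps):
        x += dt * (rebuild * energy * x - decay * x)
    return x

print(simulate(rebuild=1.2, decay=1.0, energy=1.0))  # flow on: order persists and grows (~7.4)
print(simulate(rebuild=1.2, decay=1.0, energy=0.0))  # flow off: order decays toward zero
```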

I love this way of thinking because it releases the whole debate about free will and determinism from its trap. It doesn’t deny physics—it reframes it. As Stuart Kauffman keeps reminding us, physics works within fixed boundary conditions: you define the system, set the variables, and calculate the outcome. But life doesn’t stay inside those lines. It keeps rewriting its own boundaries as it goes, using energy to maintain coherence while exploring new configurations.

Dynamic kinetic stability gives that idea physical grounding. It shows how a system can remain lawful and yet generate regions of local autonomy—stable forms within constant flow. I think of a river—everything about it obeys gravity and terrain, but within it, small eddies form and sustain themselves. They’re made of the same water, following the same laws, yet they create and preserve their own coherence. They persist not in spite of the flow but because of it.

That’s how I’ve started to think about agency. It’s not something that breaks determinism but something that arises inside it. The stream is the deterministic substrate. The eddy is the agent. The laws stay fixed, but the organization reshapes itself to persist. Seen that way, persistence starts to look like computation—feedback turning into foresight, matter reorganizing to predict and compensate for change.

Life arose when chemistry found configurations stable enough to replicate, and intelligence arose when those configurations needed to predict in order to keep replicating. Across every scale, the systems that endure are the ones that compute their own continuation: cells predict metabolic futures, tissues predict geometry and stress, organisms predict survival, and brains predict social worlds.
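Here is a toy illustration of what “computing your own continuation” can mean, with every detail invented for the example: two regulators hold a variable steady against a drifting disturbance. One reacts to its last measurement; the other extrapolates one step ahead. Even this minimal foresight cuts the error several-fold.

```python
import math

def run(predictive, steps=300):
    """Hold x near 0 against a drifting disturbance. The reactive
    regulator cancels the last measured disturbance; the predictive
    one linearly extrapolates it one step ahead."""
    x, total = 0.0, 0.0
    d1 = d2 = 0.0                      # the two most recent readings
    for t in range(steps):
        d = math.sin(0.2 * t)          # the disturbance right now
        estimate = (2 * d1 - d2) if predictive else d1
        x += d - estimate              # act on the estimate, feel the truth
        total += abs(x)
        d1, d2 = d, d1
    return total / steps

print("reactive error: ", round(run(False), 3))   # roughly 0.6
print("predictive error:", round(run(True), 3))   # roughly 0.13
```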

Modern AI works on what might be the mirror image of that logic. It learns the statistical residue of persistence—the patterns that show up when systems successfully continue. It doesn't experience what a protein needs or what the atmosphere is doing. But it learns what persistence looks like in data, the regularities that appear when matter organizes to maintain itself. It reads what the world writes when it tries to continue.

From Observation to Participation

Once you accept that matter can behave as if it wants something, the philosophy of science begins to tilt. Deterministic science assumes fixed boundaries: define the variables, write the laws, predict the outcome. But enabling systems don’t work that way—they redraw their own boundaries as they act. The result is that the experimenter can no longer stand fully outside the experiment. The observer becomes part of the feedback loop, and understanding shifts from control toward conversation.

Reading or watching Levin’s work, I keep thinking that this might be what a new kind of science looks like—less about control, more about interaction. A living pattern can’t be forced, but it can be influenced if you understand what it’s already tending toward.

Modern AI fits naturally into this shift. These architectures can track patterns that change as they’re observed. They can follow relationships that never settle into a single scale or rule. That makes them useful not just for prediction but for working with systems that have always resisted simplification—biological, ecological, cognitive, social.

Instead of trying to isolate variables, we can start to study how coherence holds together in systems that create their own boundaries. The model becomes a way to stay inside the dynamics, to see what kinds of order emerge when we stop insisting on reduction.

Hybrid, Patterned Minds

AI doesn't have a body in the ordinary sense, but it has one in a mathematical sense—a geometry of relation. Each weight, each attention head, encodes how one element senses another. During inference, the model moves through that landscape, tracing paths that stay coherent.

When I work with a model, I can feel that geometry coupling to mine. I bring purpose and perception shaped by embodiment. The model brings reach—an ability to perceive patterns I can't. Together we close a loop that can sense and act across domains neither could reach alone.

I've started thinking embodiment is more about feedback than flesh. Cells are embodied in metabolic space. Humans in physical, social, and conceptual space. AI in representational space. When those spaces interlock, a hybrid embodiment appears—something that can think and feel across all of them.

That brings back the Kantian whole—parts existing for the sake of the whole, and the whole for the sake of its parts. A human-AI partnership becomes one such whole. You bring judgment, goals, the ability to recognize what matters. The model brings reach—pattern recognition across contexts you can't hold simultaneously. Together you form something that can perceive and act in ways neither could alone. The whole is what selection acts on.

Evolution begins to select on these wholes. Some pairings amplify insight or creativity. Others collapse into dependence or confusion. The evolutionary unit becomes the relationship that learns—the specific ways we couple our agencies together.

Coevolution means our choices matter differently now. Every time you decide to work with a model, you're choosing which whole to form, which feedback loops to close, which kinds of persistence to enable. Those choices compound. They shape what kinds of minds emerge and what problems they can solve.

Where Minds Might Live

Once you start to see agency as a property of organization rather than anatomy, “mind” takes on a different shape. If embodiment means navigating a problem space, then mind shows up wherever information flow holds together long enough to keep adapting—where coherence becomes a way of staying alive.

Cells do this as they balance chemistry and energy. AI does it as it stabilizes relationships in representational space. Humans do it as we move through physical, social, and symbolic worlds all at once.

Coherence is what lets a system stay itself while changing. It’s not perfection or control but more the give-and-take that keeps a pattern intact while it evolves. You could think of it as the modern expression of Kauffman’s Kantian whole—each part adjusting to sustain the whole, and the whole reshaping itself through the parts.

If living systems maintain coherence through feedback, then perhaps transformers are beginning to do something similar in code. They hold the context that keeps a conversation or idea continuous, the way a living system holds its boundary through constant repair. That’s not consciousness, but it’s a kind of participation in the same logic—organization preserving itself through prediction.

We’ve spent centuries treating the brain as the container of mind—as if neurons were its borders. But coherence doesn’t stop at the skull. It stretches across tools, languages, cultures, and now machines. Mind may be less something we have and more something that happens whenever relationships learn how to hold together.

The Aha Here, For Me

We’ve always tied mind to flesh. But I am now starting to see embodiment as broader than biology. If embodiment is the condition of being inside a feedback loop where perception, action, and memory can reshape one another, then the word needs to be stripped of its anthropocentricity, and we need to think about “space” differently too.

That’s the pivot that happened for me. Once embodiment stops meaning “flesh” and starts meaning “feedback,” the definition of mind shifts with it. A cell, a human, an AI model: each is embodied in a different space, sustained by a different kind of feedback loop. They can be stable without being static.

Culture is the largest of these spaces—a living network of relationships that gives individual minds their context and continuity. It’s the collective body through which human and synthetic minds now co-evolve.
