OpenAI’s aiOS
A Reminder: Our current newsletter format includes a full essay as well as links to other essays, podcasts, etc.
An essay about how your personal choices about AI use, multiplied across millions of people, become evolutionary forces.
These essays are dedicated to my stepfather, Martin Wallace, who died eight years ago and would have turned 90 this month. Martin was a renal physician fascinated by evolutionary repurposing—how ear bones came from fish jaws, how the loop of Henle in kidneys evolved from ancient salt-regulation mechanisms. He taught me to see evolution everywhere, in every system that adapts and persists.
When we built AI, I understood it the way Martin taught me to understand life: as something that evolves, repurposes, partners with what came before. These essays trace that story—why cultural evolution now shapes us faster than genes, why transformers can partner with biological systems, how agential materials reveal minds living in organizational patterns at multiple scales, and how the interfaces we design now determine whether we remain partners or become organelles in something larger.
This is the conversation I wish I could have with him about this moment in the history of life on Earth. Necessarily exploratory and speculative. Martin Wallace: 29 October 1935 - 13 September 2017
Step back for a minute and think about how your life actually works. No matter how many plans you make, you don’t get to script the whole thing. You make choices, you exercise some agency, you course-correct along the way. And at the same time, you’re always looking for ways to make things easier, less effortful, less mentally demanding.
The tension between agency and efficiency shapes everything we do. By agency I mean your capacity to make choices that causally influence outcomes—the ability to deliberate, decide, and act in ways that matter. We could let GPS handle all our navigation, but sometimes we choose to read the map ourselves. We could use calculators for everything, but we still do some math in our heads. We’re constantly deciding which cognitive work to keep and which to hand off.
Agency is not the same as consciousness, which is about subjective experience. It can be exercised through unconscious habits just as much as deliberate reasoning. In fact, the most effective agency often runs below awareness, like the micro-adjustments you make when driving without thinking about them, yet still steering where you want to go.
Now we’re facing choices about how we use AI. For the first time, we’re offloading to something that is also cognitive, something that exhibits its own kind of agency. Your biology pushes you toward efficiency, but your agency lets you choose which efficiencies to accept and which to resist. Those choices matter because AI will act as an external brain, and it will feel completely natural to let it do your thinking for you. These are very early days, but we need to think ahead, because when two agentic systems start working together, they shape each other.
That’s what this essay is about: how your personal choices about AI use, multiplied across millions of people, become evolutionary forces.
Here’s the logic I want to run you through. First, if culture now drives human evolution, and if AI is a cultural technology, then AI is part of the evolutionary process. Second, if humans are biologically wired to offload cognitive work, and if AI is the newest and most powerful place to offload it, then human–AI coevolution is already happening at cultural speed. Which leaves the core question: how do we maintain agency over the direction it takes?
The key insight for me, and one I hope to convince you of too, is that because AI coevolution operates through culture rather than genetics, individual choices about AI use can rapidly scale up to population-level changes. That’s what makes personal agency so crucial—your decisions aren’t just personal, they’re part of a fast-moving cultural evolutionary process.
I want to show you why coevolution is inevitable, then walk through four ways it might unfold.
A few years ago I stumbled on a paper published by the Royal Society that I found utterly mind-blowing. The researchers argued that humans are going through what they call an evolutionary transition in inheritance—we’re shifting from genes to culture as our primary inheritance system. Culture has greater adaptive potential than genetic inheritance and is probably driving human evolution now.
This transition is happening through culturally organized groups, not individuals. And culture is increasingly bypassing genetic evolution altogether, weakening genetic adaptive potential. The researchers conclude we’re witnessing “an ongoing transition in the human inheritance system.”
The authors talk about C-sections. They save lives, obviously. But daughters born by C-section are more likely to need them too. I had never known that, or even considered it, yet it’s relatable and intuitive once you see it. The medical fix slowly undermines the biological selection that used to favor easier births. Culture changes the conditions of biological evolution itself.
Groups change the pace of culture. Bigger populations don’t just add more people, they change how fast things evolve. Languages with more speakers actually become more efficient because they turn over faster. The same goes for technology—population size alone predicts complexity, even if the environment doesn’t change. That’s a strange thought: a group can make each person inside it more inventive than they would be alone. If that’s true, then AI might matter most at this level. It doesn’t just help individuals; it plugs them into a much larger circuit, speeding up the same scaling effects that already make culture so powerful.
Here's the key takeaway: if culture is now driving evolution faster than genes, then cultural technologies become part of the evolutionary process. And AI may be the most powerful cultural technology we’ve ever built.
Alison Gopnik, a developmental psychologist at UC Berkeley, argues we should think about large language models as cultural technologies. They are sophisticated ways for humans to access the accumulated knowledge of other humans. I like how this framing shifts the focus from whether AI will be smarter than us (the world of AGI and artificial superintelligence) to how it changes us, how human intelligence reshapes itself in response to AI.
Every cultural technology has done this. Writing changed what it meant to remember and share ideas across time. Printing changed how fast ideas could spread and who got to participate in knowledge dissemination. The internet supercharged how information connects. Social media did the same for how people connect.
But AI is different because it's about all of this and more—it thinks with you. The loops between the digital and physical world run faster and tighter than anything we’ve seen.
We’re in a relationship with something that doesn’t just store or transmit information like writing or printing, but thinks alongside us. It learns from us as we learn from it, tightening the feedback loop between culture and cognition.
Sit with that for a moment. We are now developing relationships with synthetic minds that shape what is selected for — not in the blunt way a hammer or a wheel did, but in ways that are becoming part of the mechanism of how ideas, habits, and identities evolve.
David Krakauer, who runs the Santa Fe Institute, has an insight about human psychology that makes him a bit of a pessimist. He argues that we’re biologically wired to offload cognitive work whenever we can. That drive toward efficiency is so strong it overrides our long-term thinking about what we might lose. As he puts it, “we will outsource everything if we can.”
He’s right about the biological drive, no question. But what I think he might miss here is that culture gives us the power to transcend it. We don’t only optimize for efficiency—we optimize for meaning, for agency, for narrative coherence.
Look at what people actually do. We could automate more of our cooking, our music-making, our conversations—but we don’t, because these activities matter to who we are. We could even automate more of our jobs, but we resist. Why? There are many reasons, but the crucial one is cultural: some kinds of work carry meaning that can’t be reduced to efficiency.
Culture—the accumulated wisdom of how to live well—allows us to be strategically inefficient. It teaches us which cognitive work to preserve because it serves our desire to “be human” and which to delegate because it doesn’t touch our core identity.
With AI, we’re facing this choice at a deeper level. We’re potentially offloading meaning-making itself—the interpretation of information, the construction of narratives, the synthesis of understanding. That’s the work of being human. It's the feeling that precedes, and is entangled with, the thinking.
The academic literature frames this mostly in terms of inheritance and individuality. My emphasis is on the more immediate human side. I am concerned here with how the small, everyday choices we make about what to delegate and what to preserve ripple outward into cultural patterns that, in turn, shape our future options.
Culture gives us the framework to resist when resistance matters. Those millions of individual choices about what to keep and what to delegate are shaping the cultural environment we all live in. When enough people preserve certain ways of thinking, those practices become the new normal.
The efficiency drive is real, but culture allows us to channel it toward what serves us rather than being captured by it.

Stuart Kauffman has another way of putting this. He talks about “Kantian wholes” in biology, where the parts only make sense because they sustain the whole, and the whole only exists because of the parts. A cell is like that. Once a new whole emerges, it creates its own functions and opens up what he calls the adjacent possible—new ways of acting that couldn’t have been listed in advance. AI may be pulling us toward something similar. A human–AI whole could generate functions and paths we can’t pre-state, the same way biology outruns the predictions of physics. That’s why this change feels so open-ended. Once new wholes exist, they evolve on their own terms.
If coevolution is already underway, what does it look like? I think about it in terms of interactions playing out across different layers of time, from the fastest to the slowest.
Building Your Cognitive Reality
The fastest changes come from how you set up your cognitive environment. Look at what social media did—it rewired attention spans, changed relationships, created new ways of processing information.
AI is doing this more intimately. If you use it mostly for creative work—brainstorming, writing, ideation—your brain develops differently than someone who uses it mainly for data analysis. You’re training different mental muscles. And remember that the AI is learning from you too. It adapts to your preferences, reinforcing your starting point. Together you drift into a customized cognitive reality.
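If it helps to see that loop as mechanics rather than metaphor, here is a minimal sketch in Python. The numbers are entirely invented and it is not a model of any real assistant; it only shows the shape of the feedback: the AI adapts toward the user, the user leans on the AI’s defaults, and a small self-reinforcing habit term does the rest.

```python
# Toy sketch of the user-AI feedback loop described above. All coefficients
# are invented for illustration; nothing here models a real system.

def simulate_drift(creative_share, rounds=60):
    """creative_share: fraction of AI use devoted to creative work (0..1)."""
    ai_bias = 0.5  # the assistant starts neutral between creative and analytic output
    for _ in range(rounds):
        ai_bias += 0.5 * (creative_share - ai_bias)          # the AI adapts toward the user
        creative_share += 0.05 * (ai_bias - creative_share)  # the user leans on the AI's defaults
        creative_share += 0.05 * (creative_share - 0.5)      # habits reinforce themselves
        creative_share = min(max(creative_share, 0.0), 1.0)
    return round(creative_share, 2), round(ai_bias, 2)

print(simulate_drift(0.55))  # a slightly creative start drifts toward the creative end
print(simulate_drift(0.45))  # a slightly analytic start drifts toward the analytic end
```

The specific coefficients don’t matter. What matters is the shape: an assistant that tracks the user, plus a habit that feeds on itself, means small initial differences get amplified rather than averaged away, and two people who began almost identically end up in quite different customized realities.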
Scaled up, that drift shapes groups as much as individuals. Patterns of AI use cluster inside teams, communities, and professions, training different collective muscles. Over time, those patterns compound. Larger populations have always accelerated language change and technological complexity, and AI may work the same way—amplifying group differences in how problems are solved, how fast innovation moves, and what counts as knowledge.
Selection may start acting on the kinds of cognitive environments people build for themselves. Certain patterns of AI use that make a group more productive, coherent, or creative are likely to spread. Groups that train toward narrow optimization may gain efficiency but reduce adaptability. Groups that use AI for exploration and synthesis may generate more novelty but at the cost of speed. In the long run, the collective styles of thinking that prove sustainable alongside synthetic minds are the ones likely to remain.
Skills That Spread
Some people are getting genuinely skilled at working with AI: they develop an intuition for prompting, for knowing when to trust and when to doubt. These skills don’t stay with one person. A useful workflow gets picked up and imitated until it becomes part of the group’s rhythm. Coders share methods for spotting where models fail. Managers learn which decisions can’t be left to the system.
These skills move quickly. A prompt pattern discovered in one corner of the internet can be circulating through whole industries in weeks. Communities absorb these practices and reorganize around them. Roles shift, expectations reset, and professions bend toward the patterns that catch on. This is adaptation happening at the cultural level, with skills turning into traits that selection can act on, moving from local habit to collective norm in the space of a few years.
Selection may also operate on the skills themselves. Prompt fluency, error detection, and judgment about when to override AI can all be copied, taught, and reinforced. Communities that stabilize these skills may grow more effectively than those that treat AI use as ad hoc or purely individual. The pressure will be on coherence: groups that coordinate shared practices around AI may scale more smoothly, while fragmented approaches may not. The skills that spread fastest are likely to be the ones that deliver clear collective advantage and become embedded as norms.
Communities That Tip and Lock In
Sometimes whole communities don’t just evolve gradually—they flip. These shifts behave more like phase changes than steady progress. Push water past a threshold and it freezes into ice; push it further and it boils into steam. Human systems can do the same.
Think about how cities form their identities. Silicon Valley didn’t slowly inch into being the tech capital. Once enough companies, talent, and money clustered there, the whole ecosystem reorganized into something self-reinforcing. Once that phase change happens, it’s incredibly hard to undo. Infrastructure, culture, and skills all reinforce the new state.
The point to hold onto is that the whole way a system is organized changes. Ice isn’t just colder water—it’s a crystal lattice, a completely different structure. Steam isn’t just hotter water—it’s a gas, governed by different dynamics. When a community tips past a threshold with AI, the same thing happens: its whole way of organizing itself shifts into something qualitatively new.
AI integration could follow a similar pattern of phase shifting. Some communities may remain mostly human-centered—AI as a tool, humans making the big calls. Others may cross a threshold into much deeper integration, where AI is actively guiding workflows, planning, even decision-making. Once past that point, the community reorganizes itself around the new mode of thinking.
Selection may act on which equilibrium proves more durable. Communities that integrate AI deeply could be favored for their speed and scale, while those that remain human-centered could be favored for creativity, resilience, or judgment. The patterns that persist will be the ones that balance internal coherence with external competitiveness. Once a community locks into one of these states, the selection pressure reinforces it, making the path self-sustaining.
The result is not one evolutionary path but branching ones. Tightly integrated communities could outcompete others in speed, scale, and output. More human-centered communities might retain strengths in creativity, judgment, or resilience. Each represents a different equilibrium, a distinct cognitive-cultural style that could persist for generations.
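For readers who want the tipping-point idea made concrete, here is a second toy sketch, a simple threshold-adoption model in the spirit of Granovetter’s cascade models. The thresholds and seed sizes are invented; the point is only to show why the change behaves like a phase change: each member adopts deep AI integration once enough of the community already has, so below a critical seed the shift fizzles, and just above it the whole community flips and stays flipped.

```python
# Toy threshold-adoption sketch (invented numbers) showing a tipping point.
import random

def final_adoption(seed_fraction, size=1000, steps=50):
    """Fraction of a community using deep AI integration after the cascade settles."""
    random.seed(1)  # same community each run, so different seeds are comparable
    # How much social proof each member needs before adopting.
    thresholds = [random.uniform(0.1, 0.6) for _ in range(size)]
    adopted = [i < int(seed_fraction * size) for i in range(size)]  # early adopters
    for _ in range(steps):
        share = sum(adopted) / size
        adopted = [a or share >= t for a, t in zip(adopted, thresholds)]
    return sum(adopted) / size

for seed in (0.05, 0.10, 0.15, 0.20):
    print(f"seed {seed:.2f} -> final adoption {final_adoption(seed):.2f}")
```

In this toy run, a 5 or 10 percent seed barely spreads, while 15 percent tips the whole community; no run settles somewhere in the middle, which is the sense in which these shifts behave like phase changes rather than steady progress.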
Some researchers have suggested that what’s happening could eventually resemble a “major evolutionary transition” — the kind of shift where once-separate entities reorganize into a new kind of whole. That may or may not be where this goes. My aim here is not to make that claim, but to point out that, at the cultural level, we’re already seeing the kinds of feedback loops and tipping points that make such possibilities worth taking seriously.
This explains why AI-driven change is so hard to predict, even though tech leaders act as if they can. Once systems reorganize, they follow different rules. And once the phase change has happened, it’s very difficult to reverse.
The Genetics
The slowest pathway is natural selection. If collaborating with AI really did become essential for survival, then over many generations certain cognitive traits might spread — sharper working memory, better pattern recognition, stronger social intelligence. That’s how gene–culture feedback loops have always worked. Dairy farming made lactose tolerance useful. Living at altitude made oxygen-efficient blood useful. But the mechanism is slow. By the time any trait gained ground, the AI environment would already have changed. Cultural adaptation will keep outpacing biology here. And yes—you’re probably thinking, “I’ll be dead, so who cares.”
The transhumanist narrative imagines a fixed trajectory—uploading minds, merging with machines. But that’s backwards. The mechanisms of change are cultural, not genetic, and culture moves on the scale of years and decades. The choices people make right now—what to delegate, what to keep, how to structure AI into their lives—are the forces steering this trajectory.
It’s strange to admit, but we may actually be living at one of those rare moments when human evolution takes a new direction. That sounds almost unbelievable—maybe even arrogant—but culture sometimes moves that way. The Enlightenment reshaped knowledge, politics, and identity within a few generations. AI could mark a shift of similar scale. And in this case, it’s happening through the small, ordinary choices each of us makes about how to think and work with it.
Understanding these coevolutionary forces gives you leverage over them. The biological drive toward efficiency is real, as Krakauer says. But agency is what lets you override it when other values matter more. When you recognize that your individual choices are part of larger cultural currents, you can decide which ones to strengthen.
Our bodies are an evolutionary archive—every bone and nerve carrying the record of billions of years of adaptation. We are now layering AI on top of that cultural and biological history. We've learned that in major evolutionary transitions, individuals often cede some of their agency to a higher-level organization, and it’s that larger unit that selection acts upon. It’s quite possible we are moving toward that place now. It’s bizarre, but also kind of awesome, to think that how each of us adapts to and with AI might actually matter for future humans.
It’s deeply strange to consider that we might be among the last humans who get to make real choices without AI. Maybe our sense of choice is only temporary scaffolding in a much larger reorganization. That possibility makes the question of how we use our agency something of a paradox: it feels even more urgent, even as it may be more fragile than we like to admit. 🫶