On Unpredictability and the Work of Being Human

We act in the parts of reality AI can’t predict because they don’t yet exist.

Chanterelle. Image by Lucy Waghorn, Eugene OR

Back in 2016, when the first deep learning revolution had everyone worried about robots taking our jobs, we did our own study. Oxford and McKinsey had done theirs, and we wanted to understand what made humans distinct. The answer was a surprise to us—unpredictability. People don’t particularly enjoy uncertainty—most of us would rather have things nailed down—but we’re built for it in ways that AI systems, even sophisticated ones, fundamentally aren’t.

That study came up with other findings too. Humans were the creative ones, bringing freshness to problems. We predicted that workers would shift into more meta roles—for instance, writers becoming editors—doing curation and oversight rather than direct production. And yes, humans would handle the emotional, interpersonal dimensions of work. 

But now that whole definition of creativity has split apart. Generative AI is astonishing at combinatorial creativity, mixing and combining patterns in genuinely inventive ways. What it isn’t doing, at least with current architectures, is transformational creativity. It doesn’t drop the existing constraints entirely and thereby open up a new design space. Whether that form of creativity is even possible for these systems is still unclear.

And AI has become a surprisingly competent mirror for the emotional side of things. People already use it for confidence, advice, reassurance, and all the soft-skill terrain we once assumed was exclusively human. So we can’t fall back on the old claim that humans are simply the creative or emotional ones, radiating empathy while the machines handle the logic. That distinction doesn’t hold anymore.

The core insight about unpredictability wasn’t just about humans being flexible or adaptive in some vague sense. It was more specific. Humans operate in parts of reality where the relevant information hasn’t arrived yet. Where the world is still becoming, where patterns are still forming, where what matters hasn’t been determined.

Unpredictability shows up in several ways. There’s the complexity of the world itself, constantly changing in ways we can’t fully predict. AI systems lag behind this because they’re anchored to their training data. There’s also unpredictability in how we value things. In spaces of genuine ambiguity or hard choices, things aren’t easily quantifiable. Value systems differ and can’t be nailed down with any finality. I still subscribe to what the philosopher C. Thi Nguyen wrote about this: values are fundamentally ephemeral. The minute they can be codified and put into an algorithm, they stop being values and become code. There’s always a frontier, and that frontier keeps moving as we codify more and more.

Then there’s what Stuart Kauffman calls the adjacent possible. This is something genuinely beyond training data—truly different patterns in the environment that an intelligence can spot and use to create new possibility spaces. The adjacent possible isn’t just “things we haven’t thought of yet.” It’s the space of what could be combined or discovered given what exists now. It’s the set of genuinely new configurations that emerge from the current state of the world.

Agents remake the environment. The information about what will matter next isn’t available in advance because it depends on choices that haven’t been made, interactions that haven’t happened, discoveries that emerge from the process itself.

Human intelligence developed in exactly these conditions. Non-stationary environments where other agents are constantly changing the game. Where what worked yesterday might not work tomorrow. Where the relevant information for the next decision doesn’t exist yet because the situation is still forming.

AI’s False Promise

Organizations, as Daniel Kahneman said, are factories for making decisions. Decisions are about judgments and actions. Judgment is essentially balancing things that can’t be fully known. The AI narrative smuggles in a false promise for decision makers. The promise is that there’s always a right answer. That if we just put more things into data, turn everything into vectors and numbers, quantify all the relationships, we’ll somehow create answers for things that are genuinely unanswerable.

This simply is not true. Some things are inherently ambiguous. You can’t fully compare one option against another when you don’t even know which is better. These core trade-offs are things that AI can help us model, but ultimately it can’t weigh up all those values for us. Because in the end, it matters how people feel. Feelings come first, as Antonio Damasio argues: we weigh everything up and then make a value judgment.

We are driven by the feeling that we might be able to control outcomes. This is empowerment in the technical sense. Curiosity comes directly out of that feeling. When people believe their actions matter, they explore. When that belief drops, curiosity drops with it. So when AI systems take over decision-making—when they remove people’s sense that they can affect or shape what happens—you kill curiosity, the basic drive to engage with the world and test what’s possible. Curiosity is how we push into the adjacent possible, how we find things that weren’t in anyone’s training data.

There’s another dimension of human intelligence we’re only beginning to understand scientifically: care. Stuart Russell once noted that we don’t pay caregivers very well—teachers, nurses, elder care workers, early childhood educators—because we don’t really understand what it means to build a good human. I don’t think that’s the whole story, but it’s not unreasonable to think that as we codify more caregiving algorithms, we might start to see and value care with greater fidelity. Something similar happened with ethics in AI. Once we began analyzing large datasets, the patterns of human bias became visible in a way they hadn’t been before.

There’s now research on what care actually requires—what kind of intelligence it takes. In economics or AI terms, power is when agent A’s goals override agent B’s. Care is the opposite—agent A sets their own goals aside to support another agent’s. This shows up in developmental studies where we see children taking different kinds of risks when a parent is present than when they’re not. For any parent this is obvious, but once those findings are formalized and quantified, the value of care becomes harder to dismiss. People pay attention in a different way. There’s something about seeing a human phenomenon expressed in data that makes it feel more legitimate—an odd cultural habit, but completely aligned with how we’ve come to talk about AI. Why care seems more “real” once it’s in numbers is its own question, but it’s clearly part of how this moment is unfolding.

Another way to think about unique human value is to appreciate the role of diversity and dissent. Human perspectives and reasoning are myriad. Disagreement spurs investigation. Diverse combinations of disciplines produce the greatest advances and the most robust facts. This matters economically because the same principles apply to all value creation. When findings that are surprising or anomalous within one framework are unsurprising within another, that’s how we discover new possibility spaces—which is how markets and products and value get created. When an AI output looks wrong to you but right to someone else, that divergence is information. It reveals the edges of models, where training data runs out, where new opportunities might exist.

Real human interaction isn’t like two metronomes in perfect unison—it’s turbulent, with moments of coming together and pulling apart. The misalignments, the places where people fall out of sync, are often where the most transformative understanding happens. I think this is the meta-insight—as we begin to mathematize these relationships, we may finally start to value what’s been there all along. And we may also see how little we actually understand. Human sociality, human intelligence—there’s as much frontier there as in AI itself.

Tech leaders use frontier language to describe AI capabilities—the next breakthrough, the unprecedented, the transformative potential. We can use frontier language too. Not to compete with AI, but to describe what we’re discovering about human intelligence that we never understood before. The frontier of care as a form of intelligence. The frontier of productive disagreement and how it creates knowledge. The frontier of empowerment and curiosity as drivers of innovation. The valuable frontier is in the parts of human collaboration that resist quantification.

Remember the promise of AI: the tasks that are easy to verify are the first to be automated. One right answer. By contrast, the unverifiable and the hard-to-verify resist automation. There’s no single right answer for care, for empowerment, for how to resolve disagreement or cooperate—this is the frontier of how we move the human project forward.

The Systems We’re Actually Building

When you hear how today’s tech leaders talk about AI and work, you hear two different stories. They move between the mode of the frontier, where they exercise agency, and transition mode, where all the consequences fall on everyone else.

The frontier is the realm they understand themselves to be responsible for. They are building the models, pushing scientific boundaries, increasing productivity. Here they speak with conviction. They own the acceleration. The frontier is exciting because it is framed as progress—the next invention, the next breakthrough. On the frontier, agency is concentrated. A small number of actors matter. Their decisions shape the trajectory.

The transition is something else entirely. The transition is what happens when the frontier collides with the rest of society. This is the world of job displacement, income shocks, institutional strain, cultural upheaval. The transition is where inequality widens before it narrows, where some people lose footing for years or generations. Here, leaders speak very differently—abstractly, vaguely, with hand-waving about resilience. Suddenly the subject shifts from “I” or “we” to “society,” an undefined collective that’s expected to absorb whatever follows.

This split—frontier as theirs, transition as everyone’s—explains the tone of the current moment: confident when they talk about what they control, diffuse when they talk about the consequences. It’s how the same person can describe AI as an unprecedented economic reordering and, in the next breath, reassure us that everything will “sort itself out.” They’re switching modes. On the frontier, they feel accountable. In the transition, accountability disperses into abstraction. 

Once you see these two modes, it’s less confusing. The frontier story is about capability—what AI can do. The transition story is about distribution—who absorbs the costs, who captures the gains, how quickly institutions can reorganize. One story is about invention. The other is about consequence.

And I think this disconnect would be less problematic if the technology were being built and deployed in a moment that wasn’t capital-favoring. The problem is not the technology itself; it’s the culture in which it’s being placed. In our current cultural moment—efficiency-obsessed, hyper-capitalist, culturally fractured, and post-truth—AI is being deployed for replacement because replacement is a lot easier to scale than amplification.

Amplification is fundamentally harder. It means giving a human context-specific, reliable information in real time. That requires systems that understand your situation, not a generic one, that don’t hand you probabilistic outputs you can’t trust, and that track the pace and shape of your work.

But that’s not what we’re building at scale. We’re building general models trained on everything, optimized for breadth, not for context, reliability, or real-time fit. The investment logic pushes in the other direction anyway because replacement scales while amplification doesn’t.

So we find ourselves in a situation where we are told one giant model can replace a workforce. That’s the invention. Building systems that know you—your domain, your constraints, your evolving problem—doesn’t have the same return profile. This is how we end up with tools designed to minimize labour, not to strengthen human capability. That’s the consequence. 

Then there’s the automation story's timeline, and this is where the frontier/transition split becomes really clear. Five years to AGI, some say. AGI is the uber answer. The one right answer writ large.

The AGI framing makes it sound like a force of nature. But the AGI story is really just an automation story. Daron Acemoglu has a quick test I like: take any sentence with “AGI” in it and just replace it with “total automation.” The whole thing reads differently. You see the decisions underneath it—capital, labor, incentives—not some inevitable frontier moment.

The assumption seems to be that AGI somehow solves all dimensions of intelligence in one unified system. But look at the technical realities. There are massive open problems such as real-time continuous learning. It's likely these aren’t going to be solved by one model. What we’re building—what we’ll continue building—are systems of software. Multiple models, multiple tools, multiple approaches that get combined and orchestrated.

This sounds a lot more like an ecosystem than a single invention. And ecosystems require navigation. Judgment about which tool to use when. Understanding how different models might give you different answers and what that divergence means. If we’re heading toward systems of specialized AI software rather than one general intelligence, then human judgment doesn’t become obsolete—it becomes the essential integration layer. We’re not facing replacement by a singular superior intelligence. We’re facing an explosion of specialized capabilities that humans will need to navigate, combine, evaluate, and make accountable decisions about.

Why This Generates Unpredictability

“May we scale smoothly, exponentially and uneventfully through superintelligence.” That’s the hope Sam Altman expresses in The Gentle Singularity. He assumes that we can move exponentially on the frontier while the transition remains smooth and uneventful. Transitions don’t work that way. The problem is sequencing.

Speeds are asymmetric. Job displacement happens in quarters. New job creation happens in years. Social safety nets happen in decades. Building political consensus for something like UBI, fighting entrenched interests, creating infrastructure, implementing at scale—this is generational work. You cannot compress these timescales without causing chaos in the gap between them.

Costs don’t reduce evenly—and some reductions require societal ruptures. Sure, education costs could drop if everyone uses ChatGPT instead of going to college. Sam Altman recently said the value of a bachelor’s degree should go to zero. Okay, then what? Universities collapse. They’re massive employers and anchors of local economies—especially in smaller cities where the university IS the economy.

Healthcare costs could drop if everyone used ChatGPT for medical advice. Maybe AI eventually prescribes drugs. But who resists that? The AMA, insurance companies, pharmaceutical distributors, device manufacturers, hospital systems—entire industries built around the current structure. That’s millions of jobs, trillions in capital, decades of regulation. You can’t “make things cheaper” without dismantling the institutions that organize economic activity. And those institutions don’t go quietly—they lobby, litigate, stall, or fail in ways that make everything less predictable.

The UBI argument only works if you believe two things at once: that productivity will explode fast enough to make everything dramatically cheaper, and that this will happen before the social and economic damage shows up. I don’t see either happening. Productivity doesn’t reprice whole sectors overnight. And the only thing that would make costs collapse at that speed is a level of institutional failure so severe that the economy would have to be rebuilt anyway. When people lose income faster than new systems form, consumer demand falls immediately, not eventually. Sequencing is the problem—the losses happen in quarters, the political will for UBI takes years, and the administrative capacity to deliver it takes even longer. By the time UBI arrives—if it does—you’ve already hollowed out the demand the economy relies on. It’s not the idea itself that fails. It’s the timing.

And this is the part worth noting. The kind of social and economic chaos produced when exponential change hits rigid institutions creates exactly the conditions where AI systems perform worst. We saw this during the pandemic, when supply-chain models built on stable patterns suddenly stopped being useful because the world shifted in ways outside their training data. Systems trained on established patterns can’t navigate the turbulence they help create.

So here’s what I think: the “smooth, exponential” disruption creates the opposite—conditions that shift too fast for models, where human judgment and improvisation become essential again.

What Humans Are For

The transition doesn’t just reveal what humans are good at—it requires it. Judgment under incomplete information. Moving between competing frameworks. Reading emerging constraints. Supporting others’ development when outcomes are uncertain. Disagreeing productively and spotting anomalies. Acting when the relevant information hasn’t arrived because the situation is still forming.

And the faster the frontier moves, the more turbulent the transition becomes, and the more we rely on these human capacities to navigate non-stationary environments. This isn’t a temporary gap while AI catches up. The structure is recursive, so capability growth creates transition complexity, and that complexity creates the conditions where human intelligence—built for unpredictable, agent-driven environments—becomes indispensable.

Once everyone automates the same tasks with the same models, efficiency stops being an advantage. It becomes baseline. Then you’re back to needing actual novelty—new products, markets, and knowledge. Back to the human frontier. And that depends on the capabilities we’ve already been talking about: judgment, diversity, curiosity, and care that develops talent. This isn’t a soft argument. It’s structural. Capital still needs growth, and growth comes from new value, not from squeezing more efficiency out of the same stack. Demand gets created when humans figure out what’s worth building next.

So what do we need to do? Part of the answer is looking at what we need to avoid. And that’s harder than it sounds, because one of the futures we should be wary of is also one that many people actively want. A world where you generate your own movies, design your own worlds, and live inside experiences tailored so precisely to you that nothing unpredictable gets through. AI gives you whatever you ask for—efficient, frictionless, always aligned.

This isn't a dystopia imposed upon us. It’s the natural endpoint of a culture that already chooses convenience, speed, and personalization whenever the option is there. Broken realities don’t get fixed. They get replaced. And you end up with millions of parallel worlds, each optimized for the individual, but none capable of supporting shared life. This represents the collapse of the economic basis for cooperation itself. 

Remember empowerment and curiosity? In a personalized AI world, you can have perfect control—but only over a closed system. Real empowerment requires acting where outcomes aren’t guaranteed.

Remember synchrony and misalignment? Human connection depends on encountering people who don’t match us, who interrupt our expectations. You lose that when experience collapses into customization.

Remember care? Care only works when two people’s goals aren’t aligned by default. It requires navigating difference, not eliminating it.

Remember knowledge creation? New knowledge comes from encounters that don’t fit your preferences or your model of the world. A system designed to satisfy your prior preferences can’t produce that.

Remember values? Values emerge at the edge of what can’t be neatly compared. If AI collapses everything into preference optimization, values lose their force.

Sometimes I look at work being done and think, yeah, an AI could probably do that. Then I catch myself—I’m viewing work as isolated verifiable tasks rather than the messy thing of navigating institutions that can’t move at software speed, creating shared meaning in turbulent conditions, judging what matters when the relevant information doesn’t exist yet.

We haven’t solved most problems. Customer service is horrible when everyone wants it at the same time. Insurance approvals are nightmares. These aren’t problems waiting for better AI. They’re problems that exist at the intersection of institutions, incentives, values, a complex world, and human coordination—exactly where unpredictability concentrates.

Back to where we started—humans handle unpredictability. That was true in 2016, and it’s still true now. If anything, more so. There really is no one right answer.

We’ve traced two stories here. The frontier story—what AI can do, how capability scales, what gets automated when tasks are verifiable. And the transition story—what happens when that capability collides with institutions that move on social time. The mismatch between these timescales generates the conditions where human intelligence becomes most essential. 

And we can discover new things about human intelligence just as AI is advancing. We will turn AI’s microscope back on ourselves and discover that human intelligence is stranger, more distributed, more dependent on our bodies and our social bonds than we thought. 

These days I worry more about coherence than capability. The frontier runs on software time while the transition runs on human time. Building the structures that let us navigate that gap without losing the shared world we act from—that’s the work. 
