Mind for our Minds: Judgment, Meaning, and the Future of Work
AI excels at generating opportunities but struggles with deciding what matters. New research reveals why human judgment—not technical skill—becomes the scarce premium as AI reshapes expertise and organizational decision-making.
For years we’ve spoken of the potential of AI as a mind for our minds — a way of imagining what it might mean to live alongside a personal intelligence, one that could help us navigate, decide, and live more meaningfully. Long before today’s wave of tools, this phrase captured the sense that AI could also be a cognitive companion, a presence that would sit beside us as we thought.
Steve Jobs called the computer a bicycle for the mind: a machine that extended our thinking the way a bicycle lets the body travel farther with the same legs. It was a metaphor of speed and reach. But AI goes beyond leverage. It feels less like an extension of our legs and more like another rider—a second mind shaping what we notice, nudging how we decide.
Recently, three economists we’ve long admired—Ajay Agrawal, Joshua Gans, and Avi Goldfarb—have sharpened what this shift really means. Their earlier books, Prediction Machines and Power and Prediction, reframed AI as a force reorganizing decision-making itself. In their latest work, they return to Jobs’ bicycle metaphor and give it economic form, showing how AI changes not just what we can do, but how expertise itself is valued. They have given the “bicycle for the mind” metaphor a skeleton: a clear model of what cognitive tools actually do to work and expertise.
Their previous work argued that AI makes implementation cheap and judgment scarce. That was already an important insight. But their new work shows that judgment itself divides into two kinds—opportunity and payoff—and that AI interacts with each in very different ways. This refinement matters because it explains why expertise is in flux. Machines are flooding the world with opportunities, but the human burden of deciding what matters has only grown heavier.
When we read their latest work beside what our own research has uncovered in lived experience, we think the picture gets even richer. Together, these perspectives show why expertise feels so unsettled, why teams and organizations are in cultural flux, and why the real future premium will rest on the human ability to decide what matters.
Agrawal, Gans, and Goldfarb break work into three parts.
First, implementation: the doing—coding, drafting, calculating.
Second, opportunity judgment: seeing where improvement or innovation is possible.
Third, payoff judgment: deciding what is worth pursuing once options are on the table.
Computers and AI substitute for implementation. But they generally complement opportunity judgment, since more openings become valuable when the cost of acting on them falls, though the economists note this depends on specific conditions. They sometimes complement payoff judgment as well.
This structure explains why, in the early stages of AI adoption, productivity gaps inside organizations often shrink. When the doing is automated, novices suddenly look closer to experts. But the economists show that under certain conditions, the curve can bend back: as tools improve further, those with superior judgment may pull ahead again, creating a U-shaped pattern in inequality. Whether this "middle ground erosion" occurs depends on the specific distribution of skills in each context.
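To make this dynamic concrete, here is a toy sketch in Python. Every functional form and number in it is our illustrative assumption, not the economists' calibration: the tool levels implementation speed across workers, while the abundance of AI-generated options lengthens the chain of filtering decisions, so differences in judgment compound.

```python
# Toy model of the compression-then-divergence ("U-shaped") dynamic.
# All functional forms and parameters are illustrative assumptions,
# not the economists' calibration.

def output(impl_skill, judgment, tool):
    """Output per period for one worker at a given tool strength.

    Implementation: the tool substitutes for implementation skill,
    so doing-speed converges across workers as tools improve.
    Judgment: more AI-generated options mean longer chains of
    filtering decisions, so judgment quality compounds.
    """
    doing_speed = impl_skill + tool        # the tool levels the doing
    decision_depth = 1 + tool ** 2         # abundance -> more judgment calls per task
    return doing_speed * judgment ** decision_depth

expert = dict(impl_skill=0.9, judgment=0.9)
novice = dict(impl_skill=0.3, judgment=0.3)

for tool in [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]:
    gap = output(tool=tool, **expert) / output(tool=tool, **novice)
    print(f"tool={tool:.1f}  expert/novice output ratio = {gap:.1f}")
# The ratio falls from 9.0 to about 6.6, then climbs past 13:
# compression first, then those with superior judgment pull ahead.
```

In this toy, the early narrowing comes entirely from automated doing, and the later divergence comes entirely from compounding judgment, which is the intuition behind the economists' conditional U-shape.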
The model treats opportunity and payoff judgment as parallel categories, but the world does not. AI is extraordinary at generating opportunities but less good at deciding which of them matter.
Take AlphaFold. The system predicts millions of possible protein structures. That is opportunity judgment at a superhuman scale. But the bottleneck isn't prediction. It is deciding which of those millions should be synthesized and tested in a lab. That requires payoff judgment—weighing resources, contexts, and the messy realities of experimentation.
The same pattern holds in drug discovery, where AI proposes thousands of candidate molecules in hours. The constraint is choosing which few are worth pursuing given limited budgets, clinical trials, and regulatory pathways.
Beyond scientific applications, the same pattern is everywhere. A design team may be flooded with hundreds of AI-generated options, but the real value of that abundance lies in the human judgment of which direction carries meaning—for the project, the client, and the community.
AI overproduces opportunity. Humans are left with the burden of payoff. The future premium rests not on spotting what could be done, but on deciding what should be done.
The economists model these as economic categories, but in our research on lived experience, we see them as psychological orientations people inhabit when they work with AI.
Cognitive Permeability (CP) is the orientation that shapes opportunity judgment. High CP professionals let AI’s suggestions seep into their reasoning, treating the system as a partner in spotting openings. Low CP professionals filter tightly, drawing on their own instincts. Both are ways of exercising opportunity judgment, but they feel very different in practice.
Identity Coupling (IC) is most visible in payoff judgment. Deciding whether to put your name on an AI-drafted contract, or to base a clinical recommendation on a machine’s analysis, is not only a technical call. It is a question of ownership: can I stand behind this decision as mine? Payoff judgment is always tethered to accountability.
Symbolic Plasticity (SP) is what makes payoff judgment meaningful. Choosing between hundreds of design drafts is not only about selecting the most polished option. It is about reframing outputs into significance: which of these carries value for this client, this community, this moment? SP moderates everything else. It turns CP’s openness into discernment, and IC’s accountability into cultural relevance.
Seen this way, the economists' prediction that AI generally complements judgment plays out unevenly in lived experience. Our research shows why: CP, IC, and SP are the orientations that govern how judgment actually shows up in people's lives, whether porous or bounded, owned or outsourced, meaningful or empty.
If implementation is cheap and opportunities are overproduced, then expertise can no longer rest on execution or even on spotting openings. The scarce premium becomes payoff judgment—and within it, the human capacity to carry meaning and responsibility.
But wait, there's more. Doesn't AI now participate in that too? Payoff judgment is no longer purely human: AI is beginning to test ideas as well as generate them. AlphaFold ranks proteins by likelihood of stability. Drug discovery systems simulate molecular binding properties. Even in everyday work, AI runs micro-experiments and tells us which ones perform.
This shifts payoff judgment into a hybrid space. Machines can suggest what might work and measure what does work, but only humans can decide what matters. Accountability and meaning can't be automated. This is where Symbolic Plasticity takes center stage: SP is what allows us to reinterpret AI's rankings, embed them in context, and take responsibility for choosing.
This is why SP becomes decisive. AI floods us with options; the act of reframing them, of interpreting in context, is where expertise now lives. We observe that CP appears to influence how much of the flood people can see, IC shapes whether they can stand behind a choice, and SP seems to turn both into discernment.
Expertise, then, is in flux. It is moving away from mastery of craft toward mastery of judgment. The new expert is not the fastest implementer, but the one who can say: this is worth pursuing, and I will take responsibility for it.
AI shifts the balance between doing and judging at the level of individuals. But it also reshapes how organizations need to structure decision-making. The economists show that when implementation is cheap, the gains from AI depend on where judgment sits. If judgment is concentrated too narrowly, the organization drowns in abundance. If it is distributed more effectively, the value of the tool is unlocked—but only if trust, communication, and accountability are in place.
Take a product team using AI to generate hundreds of design concepts. In many firms, every option has to flow upward to senior executives because only they are authorized to decide. AI produces abundance, yet causes paralysis, and the organization gains little. The communication costs of channeling everything through a narrow apex of judgment overwhelm the benefits.
So the real issue is not only where decisions sit, but what kind of judgment is required, and who is equipped to exercise it. AI already excels at generating opportunities, so the premium skill is payoff judgment: deciding which concepts are worth pursuing. Senior leaders may lack the context to exercise it well. Their CP tends to be lower, filtering heavily, and their SP is often limited by distance from customers and the design floor. They see volume, but struggle to discern meaning.
Shift the decision rights closer to the team, however, and the picture changes. Mid-level product managers, embedded in context, can use higher CP to treat AI as a generative partner. Strong SP allows them to reframe outputs into viable directions, interpreting what matters for this client or market. And with firm IC, they can stand behind the decision and be accountable for moving the best options forward. In this configuration, AI’s abundance becomes raw material for situated judgment rather than a source of noise.
The deeper point is that the productivity frontier has moved. It no longer lies in faster doing, but in how organizations design for judgment—who gets to act, under what conditions, and with what responsibility. AI shifts the economics of tasks, but culture decides whether those gains are realized.
AI adoption is about how decisions get made, not just what tools are used. Who is authorized to act? How is accountability carried? Which opportunities are allowed to matter? These are the cultural architectures of judgment.
The economists also reveal why full automation remains elusive despite AI's advancing capabilities. Their model shows that automation requires pre-specifying all judgment parameters—essentially encoding human wisdom into fixed rules before the work begins. But this creates what they call "multiplicative penalties": the loss of flexibility in opportunity recognition, the inability to adapt judgment to new contexts, and the compounding effects of both over time. Even small limitations multiply together, creating a formidable barrier that automation must overcome through either superior implementation or dramatic cost savings.
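A back-of-the-envelope sketch of how such penalties compound. The three factors and their sizes below are our illustrative numbers, not figures from the economists' model:

```python
# Illustration of "multiplicative penalties" in full automation.
# The penalty factors and their sizes are illustrative assumptions.

flexibility_kept = 0.90   # sees only the opportunities it was pre-programmed to see
adaptation_kept  = 0.85   # judgment frozen at design time while contexts drift
compounding_kept = 0.80   # early misses propagate through downstream decisions

retained = flexibility_kept * adaptation_kept * compounding_kept
print(f"Automation retains about {retained:.0%} of the human-AI benchmark")
# -> about 61%: three modest-sounding penalties multiply into a gap
# that cheaper or faster implementation must overcome.
```

The arithmetic is the point: penalties that look tolerable in isolation multiply together, which is why the barrier stays formidable even when each individual limitation is small.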
So here we have a paradox. As AI tools become more powerful at augmenting human judgment, full automation may become less attractive rather than more so. The flexibility advantage of human-AI partnerships grows more valuable precisely because the tools amplify what humans can do when they spot opportunities and exercise contextual judgment.
The future may belong not to fully automated systems, but to increasingly sophisticated forms of human-AI collaboration where judgment remains irreducibly human. The bottleneck may shift to an even more fundamental level. The economists' model assumes humans can specify what constitutes "payoff," but AI's growing analytical power reveals how much of what we call judgment is actually about deciding what the decision should optimize for in the first place.
When AI can both predict protein structures and analyze which might make promising drug targets, the human question becomes: should we prioritize rare diseases or common ones? How much risk are we willing to tolerate? How do we weigh speed against safety? These aren't payoff judgments in the economists' sense; they're questions about values, priorities, and meaning that precede any technical analysis.
Perhaps the real premium in an AI-abundant world lies not in weighing options, but in the deeper human capacity to decide what should matter at all. This is where judgment becomes inseparable from wisdom, and where the partnership between human and artificial intelligence finds its most essential boundary.