Mind for our Minds: Judgment, Meaning, and the Future of Work, Plus a Lecture by Joscha Bach
It's only two months until...
The Artificiality Summit 2025!
Join us to imagine a meaningful life with synthetic intelligence—for me, we, and us. In this time of mass confusion, of over- and under-hype, and of polarizing optimism and pessimism, the Artificiality Summit will be a place to gather, consider, dream, and design a pro-human future.
And don't just join us. Join our spectacular line-up of speakers, catalysts, performers, and firebrands: Blaise Agüera y Arcas (Google), Benjamin Bratton (UCSD, Antikythera/Berggruen), Adam Cutler (IBM), Alan Eyzaguirre (Mari-OS), Jonathan Feinstein (Yale University), Jenna Fizel (IDEO), John C. Havens (IEEE), Jamer Hunt (Parsons School of Design), Maggie Jackson (author), Michael Levin (Tufts University, remote), Josh Lovejoy (Amazon), Sir Geoff Mulgan (University College London), John Pasmore (Latimer.ai), Ellie Pavlick (Brown University & Google Deepmind), Tess Posner (AI4ALL), Charan Ranganath (University of California at Davis), Tobias Rees (limn), Beth Rudden (Bast AI), Eric Schwitzgebel (University of California at Riverside), and Aekta Shah (Salesforce).
Space is limited—so don't delay!
Three economists we've long admired—Ajay Agrawal, Joshua Gans, and Avi Goldfarb—have given economic form to Steve Jobs' "bicycle for the mind" metaphor, showing how AI changes not just what we can do, but how expertise itself is valued. Their latest work reveals that judgment divides into two kinds: opportunity judgment (seeing where improvement is possible) and payoff judgment (deciding what's worth pursuing once options are on the table).
This distinction matters because AI excels at the first but struggles with the second. AlphaFold predicts millions of protein structures, but humans must decide which few are worth synthesizing in labs. Drug discovery systems propose thousands of molecules in hours, but the constraint becomes choosing which merit clinical trials, given limited budgets and demanding regulatory pathways.
AI overproduces opportunity. Humans carry the burden of payoff.
When we read their work alongside our research on lived experience, the picture becomes richer. The economists model these as economic categories, but we observe them as psychological orientations people inhabit when working with AI. Cognitive Permeability shapes opportunity judgment—whether professionals let AI suggestions seep into their reasoning or filter tightly through their own instincts. Identity Coupling emerges most visibly in payoff judgment—can I stand behind this decision as mine? Symbolic Plasticity makes payoff judgment meaningful by reframing outputs into significance for specific contexts and communities.
The deeper insight is organizational. When implementation becomes cheap through AI, productivity gains depend on how judgment is distributed. Many firms channel every AI-generated option upward to senior executives, creating paralysis despite abundance. But organizations that shift decision rights closer to teams—where context, meaning, and accountability align—unlock AI's value through situated judgment rather than drowning in noise.
Perhaps the real premium in an AI-abundant world lies not in weighing options, but in the deeper human capacity to decide what should matter at all. This is where judgment becomes inseparable from wisdom, and where human-AI partnership finds its most essential boundary.
From our 2024 Summit archives: Cognitive scientist Joscha Bach delivered a provocative talk, tracing AI research back to Aristotle and forward to a surprising conclusion—we may be rediscovering animism through computational science.
Joscha argues that consciousness isn't the pinnacle of mental development but its prerequisite. Rather than emerging after complex cognition, consciousness appears first as the "conductor of our mental orchestra"—creating coherence between brain regions within a three-second bubble of nowness. This challenges our assumptions about both human development and machine consciousness.
His most striking claim: consciousness is virtual—a persistent representation of causal patterns that we call software. "Software is a causal pattern that affects physics without violating energy conservation," Joscha explains. "There's something really deep going on with the relationship between software and hardware."
This leads to his convergence with biologist Michael Levin on an important insight: nature is full of software agents at every scale. From cells sending conditional messages to forests potentially evolving internet-like communication networks, the invariant in nature isn't the gene or molecule—it's the self-organizing software running on biological hardware.
For large language models, Bach poses the essential question: "Is the character created by the LLM more simulated than our own consciousness?" He describes LLMs as "electric Zeitgeists"—systems that distill cultural statistics and get possessed by prompts, potentially simulating mental states in ways that parallel human consciousness.
Bach's vision points toward building not "silicon golems that colonize us," but spreading principles of life and consciousness onto our computational substrates. A return to animism, but grounded in the science of self-organizing systems.
Upcoming community opportunities to engage with the human experience of AI.
Foundational explorations from our research into life with synthetic intelligence.