Mind for our Minds: Judgment, Meaning, and the Future of Work, Plus a Lecture by Joscha Bach


It's only two months until...

The Artificiality Summit 2025!

Join us to imagine a meaningful life with synthetic intelligence—for me, we, and us. In this time of mass confusion, over- and under-hype, and polarizing optimism and pessimism, the Artificiality Summit will be a place to gather, consider, dream, and design a pro-human future.

And don't just join us. Join our spectacular line-up of speakers, catalysts, performers, and firebrands: Blaise Agüera y Arcas (Google), Benjamin Bratton (UCSD, Antikythera/Berggruen), Adam Cutler (IBM), Alan Eyzaguirre (Mari-OS), Jonathan Feinstein (Yale University), Jenna Fizel (IDEO), John C. Havens (IEEE), Jamer Hunt (Parsons School of Design), Maggie Jackson (author), Michael Levin (Tufts University, remote), Josh Lovejoy (Amazon), Sir Geoff Mulgan (University College London), John Pasmore (Latimer.ai), Ellie Pavlick (Brown University & Google DeepMind), Tess Posner (AI4ALL), Charan Ranganath (University of California at Davis), Tobias Rees (limn), Beth Rudden (Bast AI), Eric Schwitzgebel (University of California at Riverside), and Aekta Shah (Salesforce).

Space is limited—so don't delay!

Learn more

Mind for our Minds: Judgment, Meaning, and the Future of Work

Three economists we've long admired—Ajay Agrawal, Joshua Gans, and Avi Goldfarb—have given economic form to Steve Jobs' "bicycle for the mind" metaphor, showing how AI changes not just what we can do, but how expertise itself is valued. Their latest work reveals that judgment divides into two kinds: opportunity judgment (seeing where improvement is possible) and payoff judgment (deciding what's worth pursuing once options are on the table).

This distinction matters because AI excels at the first but struggles with the second. AlphaFold predicts millions of protein structures, but humans must decide which few are worth synthesizing in labs. Drug discovery systems propose thousands of molecules in hours, but the constraint becomes choosing which merit clinical trials with limited budgets and regulatory pathways.

AI overproduces opportunity. Humans carry the burden of payoff.

When we read their work alongside our research on lived experience, the picture becomes richer. The economists model these as economic categories, but we observe them as psychological orientations people inhabit when working with AI. Cognitive Permeability shapes opportunity judgment—whether professionals let AI suggestions seep into their reasoning or filter tightly through their own instincts. Identity Coupling emerges most visibly in payoff judgment—can I stand behind this decision as mine? Symbolic Plasticity makes payoff judgment meaningful by reframing outputs into significance for specific contexts and communities.

The deeper insight is organizational. When implementation becomes cheap through AI, productivity gains depend on how judgment is distributed. Many firms channel every AI-generated option upward to senior executives, creating paralysis despite abundance. But organizations that shift decision rights closer to teams—where context, meaning, and accountability align—unlock AI's value through situated judgment rather than drowning in noise.

Perhaps the real premium in an AI-abundant world lies not in weighing options, but in the deeper human capacity to decide what should matter at all. This is where judgment becomes inseparable from wisdom, and where human-AI partnership finds its most essential boundary.

Read more...


Joscha Bach: The Cyberanimist Perspective on Consciousness

From our 2024 Summit archives: Cognitive scientist Joscha Bach delivered a provocative talk, tracing AI research back to Aristotle and forward to a surprising conclusion—we may be rediscovering animism through computational science.

Joscha argues that consciousness isn't the pinnacle of mental development but its prerequisite. Rather than emerging after complex cognition, consciousness appears first as the "conductor of our mental orchestra"—creating coherence between brain regions within a three-second bubble of nowness. This challenges our assumptions about both human development and machine consciousness.

His most striking claim: consciousness is virtual—a persistent representation of causal patterns that we call software. "Software is a causal pattern that affects physics without violating energy conservation," Joscha explains. "There's something really deep going on with the relationship between software and hardware."

This leads to his convergence with biologist Michael Levin on an important insight: nature is full of software agents at every scale. From cells sending conditional messages to forests potentially evolving internet-like communication networks, the invariant in nature isn't the gene or molecule—it's the self-organizing software running on biological hardware.

For large language models, Bach poses the essential question: "Is the character created by the LLM more simulated than our own consciousness?" He describes LLMs as "electric Zeitgeists"—systems that distill cultural statistics and get possessed by prompts, potentially simulating mental states in ways that parallel human consciousness.

Bach's vision points not toward building "silicon golems that colonize us," but toward spreading the principles of life and consciousness onto our computational substrates. A return to animism, but grounded in the science of self-organizing systems.

Read more...


On the Horizon

Upcoming community opportunities to engage with the human experience of AI.

  • The Artificiality Summit 2025. Our second annual gathering convenes leading thinkers to explore the intersections of human and synthetic intelligence. This October 23-25 in Bend, Oregon, we'll examine the Scale: Me, We, and Us dimensions of our evolving relationship with artificial minds. Presented in partnership with the House of Beautiful Business, IDEO, and Softcut Films.

Worth Revisiting

Foundational explorations from our research into life with synthetic intelligence.

  • Neosemantic Design. We introduce a framework for human-machine communication that moves beyond traditional metaphor-based interfaces. As artificial minds develop their own cognitive landscapes, our existing design language—built around human metaphors like folders and trash cans—becomes inadequate for meaningful interaction with synthetic intelligence. Neosemantics draws from gesture, motion, and the arts to create interfaces that communicate through alignment and resonance rather than symbolic translation.
  • How We Think and Live with AI: Early Patterns of Human Adaptation. Our Chronicle study documents how people develop psychological relationships with artificial systems through three key orientations: cognitive permeability (how AI responses blend into thinking), identity coupling (how closely identity becomes entangled with AI interaction), and symbolic plasticity (capacity to revise meaning frameworks). We map five adaptation states people navigate—recognition, integration, blurring, fracture, and reconstruction—revealing that conscious framework development may be essential for preserving human agency as AI becomes pervasive.
  • The Artificiality: How Life and Intelligence Emerge from Information and Shape the Human Experience. We explore our foundational concept of the Artificiality—the new reality emerging as synthetic intelligence becomes integrated into human experience. This isn't simply about AI as tool or assistant, but about the fundamental transformation of what it means to be human when our creations develop their own forms of agency and intelligence. The Artificiality represents a continuation of human co-evolution with our technologies, from language to writing to computation, now extending into the realm of synthetic minds.
