Announcing Benjamin Bratton at the Summit, What We're Learning About Intelligence, and a Conversation with Beth Rudden

An abstract image of intricate biological textures

Announcing Benjamin Bratton at the Artificiality Summit!

We're thrilled to announce that Benjamin Bratton will be joining us at the Artificiality Summit. Benjamin is Professor of Philosophy of Technology and Speculative Design at the University of California, San Diego, and Director of Antikythera, a think tank at the Berggruen Institute researching the future of planetary computation. He also works with another Artificiality Summit speaker, Blaise Agüera y Arcas, as a Visiting Faculty Researcher in Google's Paradigms of Intelligence group, which conducts fundamental research on the artificialization of intelligence, from neuromorphic chip and algorithm design to modeling societies of billions of virtual agents.

In June, we published a conversation with Benjamin that is well worth listening to if you haven't already. Needless to say, we're excited to continue the conversation in October!

Learn more about the Summit and purchase your ticket here.


What We’re Learning (and Finally Saying Out Loud) About Intelligence

Three groundbreaking research papers are finally naming what many of us have been sensing: large language models can be wildly impressive and fundamentally limited at the same time. This distinction matters more than the endless cycle of bigger models and better benchmarks.

The mathematical reality is sobering. Physicists Peter Coveney and Sauro Succi show that scaling laws come with brutal constraints: a tenfold improvement in accuracy can require 10^10 times more compute. Meanwhile, researchers at Arizona State demonstrate that Chain-of-Thought reasoning crumbles under pressure, exposing what looks like reasoning as clever but brittle pattern matching.
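To see why the numbers get so brutal, here is a minimal sketch of that kind of power-law constraint. It assumes error falls as compute to a small negative exponent (alpha near 0.1, our illustrative stand-in, not Coveney and Succi's exact formulation), under which a tenfold error reduction demands a 10^10-fold increase in compute.

```python
# Illustrative power-law scaling: assume error ∝ compute^(-alpha).
# The exponent alpha = 0.1 is an assumption chosen to reproduce the
# "10x accuracy needs 10^10x compute" figure, not a measured value.

def compute_multiplier(accuracy_gain: float, alpha: float = 0.1) -> float:
    """Factor by which compute must grow to cut error by `accuracy_gain`x,
    given error ∝ compute^(-alpha)."""
    return accuracy_gain ** (1.0 / alpha)

print(compute_multiplier(10))  # 10x accuracy -> 1e10x compute
print(compute_multiplier(2))   # 2x accuracy  -> ~1024x compute
```

The takeaway is the exponent: with alpha this small, even modest accuracy gains translate into astronomical compute bills, which is why scaling alone hits a wall.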

The Santa Fe Institute's David Krakauer, John Krakauer, and Melanie Mitchell offer crucial vocabulary by separating emergent capabilities from emergent intelligence. The first is "more with more"—scale up and suddenly the model summarizes poetry. The second is "more with less"—finding deep principles that transfer far beyond their origin, like a child learning that pushing moves objects everywhere, not just toys.

Humans excel at this second type. We generalize from few examples because we've built symbolic structures over lifetimes. We recognize novelty through analogy, not retrieval. We fluidly mix reasoning methods—logic, pattern, intuition, metaphor—knowing which thinking suits which problem.

Yale's Luciano Floridi adds a fundamental constraint: we can't have both broad scope and perfect certainty. As AI systems tackle complex, open-ended tasks, they necessarily sacrifice error-free performance. It's the curse of dimensionality made formal.
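The curse of dimensionality behind that trade-off can be made concrete with a toy calculation (our gloss, not Floridi's formal argument): to sample each input dimension at a fixed resolution of k points, you need k^d samples in total, so widening scope by adding dimensions makes exhaustive certainty exponentially infeasible.

```python
# Toy illustration of the curse of dimensionality: grid points needed
# to cover a d-dimensional input space at a fixed per-dimension
# resolution. The resolution of 10 points per dimension is an
# arbitrary assumption for illustration.

def samples_for_coverage(dims: int, points_per_dim: int = 10) -> int:
    """Grid points needed to sample a dims-dimensional space exhaustively."""
    return points_per_dim ** dims

for d in (1, 3, 10):
    print(f"{d} dimensions: {samples_for_coverage(d):,} samples")
```

At 1 dimension the task is trivial; by 10 dimensions it already takes ten billion samples, which is the sense in which broad scope and error-free coverage cannot both be had.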

This clarity about intelligence types opens more interesting possibilities than replacement narratives. Instead of racing toward artificial general intelligence, we can design systems that excel at what they're genuinely good at while humans contribute what we're uniquely suited for. The research suggests human intelligence—our ability to find principles, make analogies, reason about novel situations—remains essential rather than redundant.

What we've discovered is extraordinary: language is fundamentally statistical rather than syntactic. This reshapes our understanding of meaning itself. But recognizing the boundaries of this power while appreciating its genuine impact opens space for AI systems that actually make human intelligence more powerful—a far more interesting problem to solve.

Read more...


Beth Rudden: AI, Trust, and Bast AI

In this thought-provoking conversation with Beth Rudden, former IBM Distinguished Engineer and CEO of Bast AI, we explore how archaeological thinking offers essential insights for building trustworthy AI systems that amplify rather than replace human expertise.

Beth describes her work as creating "the trust layer for civilization," arguing that current AI systems reflect what Hannah Arendt called the "banality of evil"—not malicious intent, but thoughtlessness embedded at scale. As she puts it, "AI is an excavation tool, not a villain," surfacing patterns and biases that humanity has already normalized in our data and language.

Rather than relying on statistical pattern matching divorced from meaning, Beth's approach uses ontological scaffolding—formal knowledge graphs that give AI systems the context needed to understand what they're processing. At Bast AI, this manifests as explainable healthcare AI where patients maintain data sovereignty and can trace every decision back to its source.

Beth challenges the dominant narrative that AI will simply replace human workers, proposing instead economic models that compete to amplify human expertise. Certain forms of knowledge—surgical skill, caregiving, craftsmanship—are irreducibly embodied. "You can fake reading. You cannot fake swimming," she notes, emphasizing that some human capabilities remain foundational to how knowledge actually works.

The conversation reveals how archaeological thinking—with its attention to context, layers of meaning, and long-term patterns—provides a framework for AI development that serves human flourishing rather than optimizing for efficiency at the expense of meaning.

Join Beth at the Artificiality Summit! Learn more about the Summit and purchase your ticket here.

Watch/Listen...


On the Horizon

Upcoming community opportunities to engage with the human experience of AI.

  • The Artificiality Summit 2025. Our second annual gathering convenes leading thinkers to explore the intersections of human and synthetic intelligence. This October 23-25 in Bend, Oregon, we'll examine the Scale: Me, We, and Us dimensions of our evolving relationship with artificial minds. Presented in partnership with the House of Beautiful Business, IDEO, and Softcut Films.

Worth Revisiting

Foundational explorations from our research into life with synthetic intelligence.

  • Neosemantic Design. We introduce a framework for human-machine communication that moves beyond traditional metaphor-based interfaces. As artificial minds develop their own cognitive landscapes, our existing design language—built around human metaphors like folders and trash cans—becomes inadequate for meaningful interaction with synthetic intelligence. Neosemantics draws from gesture, motion, and the arts to create interfaces that communicate through alignment and resonance rather than symbolic translation.
  • How We Think and Live with AI: Early Patterns of Human Adaptation. Our Chronicle study documents how people develop psychological relationships with artificial systems through three key orientations: cognitive permeability (how AI responses blend into thinking), identity coupling (how closely identity becomes entangled with AI interaction), and symbolic plasticity (capacity to revise meaning frameworks). We map five adaptation states people navigate—recognition, integration, blurring, fracture, and reconstruction—revealing that conscious framework development may be essential for preserving human agency as AI becomes pervasive.
  • The Artificiality: How Life and Intelligence Emerge from Information and Shape the Human Experience. We explore our foundational concept of the Artificiality—the new reality emerging as synthetic intelligence becomes integrated into human experience. This isn't simply about AI as tool or assistant, but about the fundamental transformation of what it means to be human when our creations develop their own forms of agency and intelligence. The Artificiality represents a continuation of human co-evolution with our technologies, from language to writing to computation, now extending into the realm of synthetic minds.