Announcing Benjamin Bratton at the Summit, What We're Learning About Intelligence, and a Conversation with Beth Rudden
Announcing Benjamin Bratton at the Artificiality Summit!
We're thrilled to announce that Benjamin Bratton will be joining us at the Artificiality Summit. Benjamin is Professor of Philosophy of Technology and Speculative Design at the University of California, San Diego, and Director of Antikythera, a think tank based at the Berggruen Institute that researches the future of planetary computation. He also works with another Artificiality Summit speaker, Blaise Agüera y Arcas, as a Visiting Faculty Researcher in Google's Paradigms of Intelligence group, which conducts fundamental research on the artificialization of intelligence, from neuromorphic chip and algorithm design to modeling societies of billions of virtual agents.
In June, we published a conversation with Benjamin that is well worth listening to if you haven't already. Needless to say, we're excited to continue the conversation in October!
Learn more about the Summit and purchase your ticket here.
Three groundbreaking research papers are finally naming what many of us have been sensing: large language models can be wildly impressive and fundamentally limited at the same time. This distinction matters more than the endless cycle of bigger models and better benchmarks.
The mathematical reality is sobering. Physicists Peter Coveney and Sauro Succi reveal that scaling laws come with brutal constraints: ten times more accuracy can require roughly 10^10 times more compute. Meanwhile, researchers at Arizona State demonstrate that Chain-of-Thought reasoning crumbles under pressure, exposing what looks like reasoning as clever but brittle pattern matching.
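To see where that eye-watering number comes from, here is a back-of-the-envelope derivation. It assumes test error shrinks as a power law in compute with a small exponent around 0.1, the shape commonly reported in LLM scaling fits; the exact constants are our assumptions for illustration, not figures taken from the paper.

```latex
% Assumption: error follows a power law in compute, with a small
% exponent (alpha ~ 0.1), as in commonly reported LLM scaling fits.
\[
  \epsilon(C) = k\,C^{-\alpha}, \qquad \alpha \approx 0.1
\]
% Ask for ten times better accuracy, i.e. one tenth the error,
% and solve for the compute C' that delivers it:
\[
  \frac{\epsilon(C')}{\epsilon(C)} = \left(\frac{C'}{C}\right)^{-\alpha} = \frac{1}{10}
  \quad\Longrightarrow\quad
  C' = 10^{1/\alpha}\,C \approx 10^{10}\,C
\]
```

The punchline is the exponent: with alpha near 0.1, every order of magnitude of accuracy costs ten orders of magnitude of compute, which is why brute-force scaling hits a wall.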
The Santa Fe Institute's David Krakauer, John Krakauer, and Melanie Mitchell offer crucial vocabulary by separating emergent capabilities from emergent intelligence. The first is "more with more"—scale up and suddenly the model summarizes poetry. The second is "more with less"—finding deep principles that transfer far beyond their origin, like a child learning that pushing moves objects everywhere, not just toys.
Humans excel at this second type. We generalize from few examples because we've built symbolic structures over lifetimes. We recognize novelty through analogy, not retrieval. We fluidly mix reasoning methods—logic, pattern, intuition, metaphor—knowing which thinking suits which problem.
Yale's Luciano Floridi adds a fundamental constraint: we can't have both broad scope and perfect certainty. As AI systems tackle complex, open-ended tasks, they necessarily sacrifice error-free performance. It's the curse of dimensionality made formal.
This clarity about intelligence types opens more interesting possibilities than replacement narratives. Instead of racing toward artificial general intelligence, we can design systems that excel at what they're genuinely good at while humans contribute what we're uniquely suited for. The research suggests human intelligence—our ability to find principles, make analogies, reason about novel situations—remains essential rather than redundant.
What we've discovered is extraordinary: language is fundamentally statistical rather than syntactic. This reshapes our understanding of meaning itself. But recognizing the boundaries of this power while appreciating its genuine impact opens space for AI systems that actually make human intelligence more powerful—a far more interesting problem to solve.
In this thought-provoking conversation with Beth Rudden, former IBM Distinguished Engineer and CEO of Bast AI, we explore how archaeological thinking offers essential insights for building trustworthy AI systems that amplify rather than replace human expertise.
Beth describes her work as creating "the trust layer for civilization," arguing that current AI systems reflect what Hannah Arendt called the "banality of evil"—not malicious intent, but thoughtlessness embedded at scale. As she puts it, "AI is an excavation tool, not a villain," surfacing patterns and biases that humanity has already normalized in our data and language.
Rather than relying on statistical pattern matching divorced from meaning, Beth's approach uses ontological scaffolding—formal knowledge graphs that give AI systems the context needed to understand what they're processing. At Bast AI, this manifests as explainable healthcare AI where patients maintain data sovereignty and can trace every decision back to its source.
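As a concrete illustration of what ontological scaffolding with traceable provenance might look like, here is a minimal Python sketch. This is our toy example under stated assumptions, not Bast AI's implementation: the triple-store structure, the names, and the sample facts are all invented for illustration.

```python
# Hypothetical sketch: a toy knowledge graph where every assertion
# carries its source, so any answer can be traced back to provenance.
# Not Bast AI's actual system; all names and facts are invented.
from dataclasses import dataclass

@dataclass(frozen=True)
class Assertion:
    subject: str
    predicate: str
    obj: str
    source: str  # provenance: where this fact came from

class KnowledgeGraph:
    def __init__(self):
        self.assertions: list[Assertion] = []

    def add(self, subject: str, predicate: str, obj: str, source: str):
        self.assertions.append(Assertion(subject, predicate, obj, source))

    def query(self, subject=None, predicate=None, obj=None):
        """Return matching assertions together with their sources."""
        return [
            a for a in self.assertions
            if (subject is None or a.subject == subject)
            and (predicate is None or a.predicate == predicate)
            and (obj is None or a.obj == obj)
        ]

kg = KnowledgeGraph()
kg.add("metformin", "treats", "type 2 diabetes", "clinical guideline (example)")
kg.add("metformin", "contraindicated_with", "severe renal impairment",
       "drug label (example)")

# Every retrieved fact arrives with the source that justifies it.
for a in kg.query(subject="metformin"):
    print(f"{a.subject} {a.predicate} {a.obj}  [source: {a.source}]")
```

The design point is the `source` field: because context and provenance travel with each assertion rather than being dissolved into model weights, a decision built on these facts can be audited back to its origin.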
Beth challenges the dominant narrative that AI will simply replace human workers, proposing instead economic models that compete to amplify human expertise. Certain forms of knowledge—surgical skill, caregiving, craftsmanship—are irreducibly embodied. "You can fake reading. You cannot fake swimming," she notes, emphasizing that some human capabilities remain foundational to how knowledge actually works.
The conversation reveals how archaeological thinking—with its attention to context, layers of meaning, and long-term patterns—provides a framework for AI development that serves human flourishing rather than optimizing for efficiency at the expense of meaning.
Join Beth at the Artificiality Summit! Learn more about the Summit and purchase your ticket here.
Upcoming community opportunities to engage with the human experience of AI.
Foundational explorations from our research into life with synthetic intelligence.