Artificiality Book Awards 2025 plus a working library of past must-reads

Congratulations to the winners of the Artificiality Book Awards 2025!

Book covers of the eight books receiving the Artificiality Book Award in 2025 (listed below)

This year, the best books I’ve read about AI have been about the same thing, approached from different angles: the convergence of different kinds of intelligence, the growing body of scholarship examining these interactions, and what it means to be human now that intelligence exists in very real forms beyond us.

Three books in particular have defined 2025 for me on this front: Blaise Agüera y Arcas’ What Is Intelligence?, N. Katherine Hayles’ Bacteria to AI, and Chris Summerfield’s These Strange New Minds. I encourage you to read all three. Each offers a distinct account of the nature of intelligence, but taken together they form a strong and useful triangulation of what I believe is a genuine shift in how we understand intelligence and, therefore, ourselves.

It does not feel coincidental that these books were published so close together; they reflect a shared sense of urgency about how AI is already changing how we see ourselves.

What Is Intelligence? — Blaise Agüera y Arcas

Book cover of What is Intelligence? by Blaise Agüera y Arcas

I’ve heard people describe Blaise’s latest book as the most important work on intelligence since Gödel. What they mean is that it reorganizes a fragmented intellectual landscape into a single, coherent frame. It’s a once-in-a-generation book that brings together ideas from computer science, machine learning, biology, and neuroscience in a way that creates a new structure of meaning rather than just adding another argument to the pile.

Blaise tells the story of intelligence as something that spans molecules, organisms, societies, and modern AI, and in doing so makes it difficult to maintain our old boundaries between computation and lived experience. After reading it, it becomes clear that some AI systems already exhibit genuinely new forms of intelligence, which means we need to rethink where intelligence begins and ends—and where we sit within it. For those who came to our Summit, this line of thinking will feel familiar.

If you haven’t yet been Blaise-pilled—if you haven’t gone through the shift of seeing life and intelligence as grounded in computation, and therefore ourselves as well—this should be your next read.

Bacteria to AI — N. Katherine Hayles

Book cover of Bacteria to AI by N. Katherine Hayles

This book was music to my ears. A literary scholar with a long history of engaging deeply with philosophy, Hayles is writing directly about the paradigmatic shifts in meaning that follow from seeing cognition as something that operates across many scales, creatures, and systems. If you’re drawn to the ideas Artificiality is built on—that we are just one form of meaning-making intelligence among many—this book will feel immediately at home.

As with all of her work, this is a deep book. That depth matters, because the challenge Hayles puts forward—thinking seriously about the ecological and relational dimensions of cognition, and how those dimensions should inform the design of technologies now participating in meaning-making—is ahead of much of the current field.

These Strange New Minds — Christopher Summerfield

Book cover of These Strange New Minds by Christopher Summerfield

We love Chris. He’s a joy to interview (new episode coming in January), and his book carries the same what-you-see-is-what-you-get quality. It’s written in a voice that feels genuinely human—you have the sense that he’s sitting with you, thinking it through as you read.

The book is grounded in a single, deceptively simple idea: that the fact AI has learned as much as it has from data, transformers, and compute—that is, from statistical structure alone—is one of the most important discoveries of the twenty-first century. Chris lays out clearly how we got here and what it means for human cognition as it becomes increasingly entangled with AI, both in the form of large language models and whatever comes next. There are no “stochastic parrots” here. It’s a generous, clarifying read whether you’re new to AI or have been working with it for years.

When Everyone Knows That Everyone Knows — Steven Pinker

Book cover of When Everyone Knows That Everyone Knows by Steven Pinker

Pinker can be a polarizing figure, and many people strongly disagree with his broader worldview. Even so, I’ve consistently found deep insight and plain-spoken clarity in his work, and this book is no exception.

Here, Pinker dissects the logic of “common knowledge” and shows how shared awareness shapes norms, coordination, and behaviour across human systems. Much of how we communicate works through signalling what’s really going on without stating it directly. That matters more now that machines generate language constantly while remaining outside those real-world signalling dynamics. I don’t yet know exactly how this will play out, but Pinker gives us useful tools for thinking about how information cascades through networks—an essential lens for anyone building or studying AI-mediated systems.

Deep Change — Kees Dorst

Book cover of Deep Change by Kees Dorst

Kees Dorst is one of the few design theorists who can work seriously with complexity without oversimplifying it. We’ve followed his work for years and interviewed him about his earlier book Frame Innovation for that reason.

It’s a challenge to think in non-linear, open, complex ways. So some of his ideas—while easy to grasp and intuit—are genuinely hard to put into practice (we know this firsthand; we ran complex problem-solving courses and workshops based on his work for several years). Deep Change is an honest attempt to take the core ideas from Frame Innovation and make them easier to understand and apply.

It’s an honest book. Change is hard, and change that endures is harder still. If you haven’t encountered systems thinking or complexity-based approaches to design before, but want to learn through concrete case studies grounded in theory, this is a strong place to start. Kees is a great writer, and the book flows as it walks you through the challenge of designing for the world we actually live in—not an idealized one—where problems are open, complex, and networked, and solutions are inherently harder to find.

Also blurbed by our great friend and supporter, Don Norman.

How Progress Ends — Carl Benedikt Frey

Book cover of How Progress Ends by Carl Benedikt Frey

I’m a Frey fan. We went toe to toe with his work in 2016 on job automation, and I found his earlier book, The Technology Trap, especially useful for the way it unpacked how technology reshapes work over time. This book is related but different: a long historical sweep showing how economic and technological progress often gives way to stagnation or collapse, and why “progress” is not inevitable. By tracing the tension between decentralized innovation and bureaucratic scale, Frey draws out institutional lessons that matter for a world now betting heavily on AI for growth. It’s a reminder not to assume that everything trends smoothly upward. There are no guarantees of progress, and no guarantee that technology will save us.

Artificial Humanities — Nina Beguš

Book cover of Artificial Humanities by Nina Beguš

This book is a dissertation, and it reads like one—but it’s a good one. Beguš explores how literature, history, and art illuminate the cultural and ethical dimensions of AI, tracing lines from Eliza Doolittle to the ELIZA chatbot and on to today’s large language models. She shows how gendered virtual assistants, science fiction, and social robotics shape our expectations long before any technology is deployed.

The argument runs deeper than most discussions—even many academic ones—about the myths that continue to be reproduced through our stories about AI, whether it’s Her, Ex Machina, or similar cultural touchstones. This is a call to take fiction seriously as a real force in technological development. One implication, at least for me, is that writers are already participating in AI design by shaping the symbolic worlds these systems inherit. If we want better AI, we may need better stories.

The Cost of Conviction — Steven Sloman

Book cover of The Cost of Conviction by Steven Sloman

The Cost of Conviction feels like a natural continuation of The Knowledge Illusion, which is what first pulled us into the science of how people actually think. That earlier book helped establish a simple but unsettling idea: our intelligence is largely social, our understanding is thinner than it feels, and much of what we call knowing is borrowed from the world around us. That frame has shaped a lot of how we think about people, institutions, and now AI.

Steve focuses on how decisions are made when values are involved, distinguishing between consequentialist reasoning and decisions anchored in sacred beliefs. He shows how deeply held values structure judgment, group belonging, and polarization, often outside our awareness. For me, this is especially relevant for AI: we are introducing systems that generate language at scale into precisely these social and value-laden contexts. The implication I took from the book is that we need to understand the kinds of reasoning people are already using—and how easily conviction, rather than understanding, can be amplified when language becomes cheap and ubiquitous.

Working Library

I list these books because they’re some of the ones that shaped how I think, long before we called it the Artificiality. They share a way of approaching intelligence, behaviour, and technology that is evolutionary, relational, and based on how humans actually decide, coordinate, and make meaning. None of them offers a single theory that explains everything, and that’s part of the point. Together, they form a working library for thinking about the convergence of humans and machines.

Behave — Robert Sapolsky

This is the book I pick up when I want to remember just how many layers sit underneath any single human action, and how little sense it makes to talk about behaviour without biology, history, and context.

The Book of Minds — Philip Ball

This was the first book that made it obvious to me that the way we’ve been talking about “mind” and “intelligence” was already outdated, long before AI forced the issue.

The Extended Mind — Annie Murphy Paul

A calm, generous account of something we already do every day: think with tools, environments, and other people—and a useful reminder that outsourcing cognition is not a failure mode but a feature.

Ways of Being — James Bridle

I don’t have a clean explanation for why this book worked for me. It may have simply been validation that we are all part of something a lot bigger—a planetary intelligence worth knowing.

Metazoa — Peter Godfrey-Smith

Reading this permanently changed my sense of where consciousness begins and ends, and made it much harder to talk casually about “human uniqueness” without feeling sloppy.

The Ascent of Information — Caleb Scharf

This was the book that made me see humans less as users of data and more as part of an information metabolism—feeding, shaping, and sustaining the systems we’re now building.

The Idea of the Brain — Matthew Cobb

A historical account of how neuroscientists have tried to understand the brain, and how much those efforts have depended on the metaphors and machines of their time.

The Social Brain — Camilleri, Rockey & Dunbar

When Dunbar turns his attention from group size to collective intelligence, it becomes impossible to ignore how much thinking actually happens between people rather than inside them.

The Technology Trap — Carl Benedikt Frey

This is the history book I recommend when people want reassurance about AI, because it doesn’t do reassurance. Rather, it replaces it with much better questions about power, work, and choice.

The Alignment Problem — Brian Christian

Still one of the clearest accounts of why aligning AI with “human values” is not a technical puzzle but a deeply human one, full of ambiguity and trade-offs we’d rather avoid.

Power and Progress — Daron Acemoglu & Simon Johnson

A useful counterweight to techno-optimism, grounding the AI conversation in political economy and reminding us that technology amplifies existing power structures unless actively redirected.

Being You — Anil Seth

A good entry point into consciousness that takes subjective experience seriously without drifting into mysticism, even if the story it tells may not be the final one.

Palo Alto — Malcolm Harris

This book stripped away any remaining romance I had about Silicon Valley by placing it squarely inside a much longer history of capitalism, extraction, and empire.

A Brief History of Intelligence — Max Bennett

An accessible walk through the layers of intelligence that gets much right and some things wrong, which is exactly why it’s useful to read alongside Hayles rather than instead of her.

Great Philosophical Objections to AI — Eric Dietrich, Chris Fields, John P. Sullins, Bram Van Heuveln, Robin Zebrowski

Not light reading, but essential if you want to understand why the idea of AGI has always been contested on conceptual grounds, not just practical ones.

The Nature of Technology — W. Brian Arthur

The only older book on this list, and the canonical account of how technologies actually evolve and reshape the world.
