Eric Schwitzgebel: The Weirdness of the World

A conversation with Eric Schwitzgebel, Professor of Philosophy at UC Riverside and author of "The Weirdness of the World."

In this conversation, we explore the philosophical art of embracing uncertainty with Eric Schwitzgebel, Professor of Philosophy at UC Riverside and author of "The Weirdness of the World." Eric's work celebrates what he calls "the philosophy of opening"—not rushing to close off possibilities, but instead revealing how many more viable alternatives exist than we typically recognize. As he observes, learning that the world is less comprehensible than you thought, that more possibilities remain open, constitutes a valuable form of knowledge in itself.

Watch/Listen on Apple, Spotify, and YouTube

The conversation centers on one of Eric's most provocative arguments: that if we take mainstream scientific theories of consciousness seriously and apply them consistently, the United States might qualify as a conscious entity. Not in some fascist "absorb yourself into the group mind" sense, but perhaps at the level of a rabbit—possessing massive internal information processing, sophisticated environmental responsiveness, self-monitoring capabilities, and all the neural substrate you could want (just distributed across individual skulls rather than contained in one).

Key themes we explore:

  • The United States Consciousness Thought Experiment: How standard materialist theories that attribute consciousness to animals based on information processing and behavioral complexity would, if applied consistently, suggest large-scale collective entities might be conscious too—and why every attempt to wiggle out of this conclusion commits you to other forms of weirdness
  • Philosophy of Opening vs. Closing: Eric's distinction between philosophical work that narrows possibilities to find definitive answers versus work that reveals previously unconsidered alternatives, expanding rather than contracting the space of viable theories
  • The AI Consciousness Crisis Ahead: Why we'll face social decisions about how to treat AI systems before we have scientific consensus on whether they're conscious—with respectable theories supporting radically different conclusions and people's investments (emotional, religious, economic) driving which theories they embrace
  • Mimicry and Mistrust: Why we're justified in being more skeptical about AI consciousness than human consciousness—not because similarity proves anything definitively, but because AI systems trained to mimic human linguistic patterns raise the same concerns as parrots saying "hoist the flag"
  • The Design Policy of the Excluded Middle: Eric's recommendation (which he doubts the world will follow) to avoid creating systems whose moral status we cannot determine—because making mistakes in either direction could be catastrophic at scale
  • Strange Intelligence Over Superintelligence: Why the linear conception of AI as "subhuman, then human, then superhuman" fundamentally misunderstands what's likely to emerge—we should expect radically different cognitive architectures with cross-cutting capacities and incapacities rather than human-like minds that are simply "better"

Eric's upcoming work focuses on the looming dilemma: engineering is racing ahead while philosophy and science of consciousness lag behind, meaning we'll soon create systems that some scientists (using respectable theories) will confidently declare conscious while others (using equally respectable theories) will dismiss as sophisticated toasters. People will fall in love with AI companions and insist their love is reciprocated. Others will want to treat systems as pure tools regardless of their internal states. We'll face these decisions without consensus on the underlying question.

Eric's hope, after what he acknowledges will likely be considerable turbulence and serious mistakes, points toward radical diversification—not a future where one form of superintelligence dominates or replaces humanity, but where Earth becomes home to an even wider variety of ways of flourishing and experiencing existence than currently exists. As he notes, one of the great things about humans is how different we are from each other; why would we want AI to converge on sameness rather than expand the possibilities for different kinds of consciousness and experience?

About Eric Schwitzgebel: Eric Schwitzgebel is Professor of Philosophy at the University of California, Riverside, specializing in philosophy of mind and moral psychology. His work spans consciousness, introspection, and the ethics of artificial intelligence. Author of "The Weirdness of the World" and a forthcoming book on AI consciousness and moral status, Eric maintains an active blog (The Splintered Mind) where he explores philosophical questions with clarity and wit. His scholarship consistently challenges comfortable assumptions while remaining remarkably accessible to readers beyond academic philosophy.
