Bubbles and the Invention-Imagination Gap | How to Read AI Usage Studies | A Conversation with De Kai
We’ve entered the Bubble Prediction phase of AI. Some see valuations floating safely forward; others warn of collapse. The real challenge isn’t forecasting the burst—it’s imagining the inventions that could justify today’s bets.
Bubbles are not new. From biotech in the '80s to the internet in the '90s to cleantech in the '00s, valuations raced ahead of reality: some companies failed, others endured. I had a front-row seat as an equity research analyst during the internet and cleantech bubbles.
Here are 3 lessons that I think might be useful during today’s AI bubble.
Every bubble exaggerates the present and underestimates the future. Biotech, the internet, and cleantech all stumbled through frenzy and collapse, yet each remade the world in ways critics never imagined. AI will likely do the same. What matters now is separating spectacle from substance—and imagining the inventions that will bridge the gap.
We're a research institute that studies the human experience of AI, so it's no wonder we're interested in AI usage studies. In this essay, Helen provides a comprehensive analysis of three recent studies: Anthropic's Economic Index Report, OpenAI's How People Use ChatGPT, and our own Chronicle project, How We Think and Live with AI: Early Patterns of Human Adaptation. Each study offers distinct value, and each has its gaps.
Bottom Line: No single study captures the full complexity of human-AI interaction. Understanding each methodology's strengths and blind spots helps you apply research insights more effectively to your specific context and make better decisions about AI integration.
In this conversation, we talk with De Kai, a professor, pioneering AI researcher, and author of Raising AI. His "Raising AI" framework draws on insights from developmental psychology and complex systems, emphasizing conscious human responsibility in shaping how these artificial minds develop. Rather than viewing this as an overwhelming burden, he frames it as an opportunity for humans to become more intentional about the values and behaviors they model, both for AI systems and for each other.
It's less than 5 weeks until...
The Artificiality Summit 2025!
Join us to imagine a meaningful life with synthetic intelligence—for me, we, and us. In this time of mass confusion, over/under hype, and polarizing optimism/pessimism, the Artificiality Summit will be a place to gather, consider, dream, and design a pro-human future.
And don't just join us. Join our spectacular line-up of speakers, catalysts, performers, and firebrands: Blaise Agüera y Arcas (Google), Benjamin Bratton (UCSD, Antikythera/Berggruen), Adam Cutler (IBM), Alan Eyzaguirre (Mari-OS), Jonathan Feinstein (Yale University), Jenna Fizel (IDEO), Jamer Hunt (Parsons School of Design), Maggie Jackson (author), Michael Levin (Tufts University, remote), Josh Lovejoy (Amazon), Sir Geoff Mulgan (University College London), John Pasmore (Latimer.ai), Ellie Pavlick (Brown University & Google DeepMind), Tess Posner (AI4ALL), Charan Ranganath (University of California at Davis), Tobias Rees (limn), Beth Rudden (Bast AI), Eric Schwitzgebel (University of California at Riverside), and Aekta Shah (Salesforce).
Space is limited—so don't delay!