Bubbles | On Unpredictability and the Work of being Human | Something Entertaining
In our Artificiality Pro update for December, we covered several key industry updates in AI and introduced mechanistic interpretability and memory vs. margins.
In this episode, we share updates from our Artificiality Pro presentation, including key developments in mechanistic interpretability for understanding AI models and considerations around the costs of large language models: what we call memory vs. margins.
Developing expertise now requires both fluency in a core discipline and skill in leveraging AI for insights, an uneasy paradox.
In this episode, we speak with cognitive neuroscientist Stephen Fleming about theories of consciousness and how they relate to artificial intelligence.
Our obsession with intelligence: AI that promotes collective intelligence, not collective stupidity.
In this episode, we dive into the exciting announcement of Google's new foundation model for AI, Gemini, exploring three key aspects of this important new technology.
An interview with Steven Sloman, professor of cognitive, linguistic, and psychological sciences at Brown University, about LLMs and deliberative reasoning.
Before AI can reason and plan, it needs some "system 2" thinking.
New research shows that large language models can generate superior prompts for themselves through automated techniques, reducing the need for specialized fine-tuning.
Our obsession with the parallels between human and machine intelligence.
The Biden Administration's Executive Order has stirred up discussion about regulatory capture in AI. For good reason.