Bubbles | On Unpredictability and the Work of Being Human | Something Entertaining
In This Issue:
* Bubbles. Are We? Aren't We? Read more below for my current take...
* On Unpredictability and the Work of Being Human
* Something Entertaining
Our obsession with intelligence: AI that promotes collective intelligence, not collective stupidity.
Of all the interesting parts of Google’s Gemini announcement, one is keeping me up at night wondering about the possibilities for the future: dynamic coding.
In this episode, we dive into Google's exciting announcement of its new AI foundation model, Gemini, exploring three key aspects of this important new technology.
An interview with Steven Sloman, professor of cognitive, linguistic, and psychological sciences at Brown University, about LLMs and deliberative reasoning.
Before AI can reason and plan, it needs some "System 2" thinking.
After four years, we are relaunching Artificiality with a new site, new focus, and new business model. Woohoo!
New research reveals that large language models can generate superior prompts for themselves through automated techniques, reducing reliance on specialized fine-tuning.
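The underlying idea is a propose-score-select loop: the model rewrites its own instruction, each candidate is scored against a small labeled eval set, and the best one is kept. Here is a rough, minimal sketch of that loop, not the actual method from the research: the `llm()` function, the eval set, and every name below are illustrative placeholders, with a toy stand-in so the example runs end to end.

```python
def llm(prompt: str) -> str:
    """Hypothetical stand-in for a language-model API call; swap in a real client."""
    # Toy behavior so the sketch executes: rewrite requests get a variant back,
    # and eval questions get a canned answer.
    if prompt.startswith("Rewrite"):
        return prompt.split(":", 1)[1].strip() + " Think step by step."
    return "Paris" if "capital of France" in prompt else "unknown"

# Small labeled eval set used to score candidate instructions (illustrative).
EVAL_SET = [("What is the capital of France?", "paris")]

def score(instruction: str) -> float:
    """Fraction of eval questions answered correctly under this instruction."""
    hits = 0
    for question, answer in EVAL_SET:
        reply = llm(f"{instruction}\n\nQ: {question}\nA:")
        hits += int(answer in reply.lower())
    return hits / len(EVAL_SET)

def optimize(seed: str, rounds: int = 3, width: int = 4) -> str:
    """Propose-score-select loop: the model generates rewrites of its own
    best instruction, and we keep whichever candidate scores highest."""
    best, best_score = seed, score(seed)
    for _ in range(rounds):
        candidates = [llm(f"Rewrite this instruction: {best}") for _ in range(width)]
        for cand in candidates:
            s = score(cand)
            if s > best_score:
                best, best_score = cand, s
    return best

print(optimize("Answer the question concisely."))
```

With a real model behind `llm()` and a larger eval set, the same loop searches prompt space automatically, which is the appeal: improvement comes from scored text rewrites rather than specialized fine-tuning.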
Our obsession with the parallels between human and machine intelligence.
We don't yet know what OpenAI will look like after the dust settles, but here are our main takeaways at the moment. Plus: AI Regulatory Capture, LLMs Thinking about Space and Time, and Generative AI Agents.
The Biden Administration's Executive Order has stirred up discussion about regulatory capture in AI. For good reason.
Rather than reactively banning technology or doubling down on ineffective surveillance, we must proactively develop new pedagogical muscles for this algorithmic age: scaffolding metacognitive discernment and critical thinking while leveraging AI as a valuable asset.