Bubbles | On Unpredictability and the Work of being Human | Something Entertaining
In This Issue: * Bubbles. Are We? Aren't We? Read more below for my current take... * On Unpredictability and
In our Artificiality Pro update for January, we covered our 10 research obsessions for 2024.
We're revisiting one of our most thought-provoking episodes, originally recorded in April 2022, featuring Barbara Tversky, the author of Mind in Motion: How Action Shapes Thought.
Artificiality co-founders Helen and Dave Edwards gave a presentation on AI & Higher Education for the Board of Regents of the Montana University System.
The emergence of complexity from simple algorithms is a phenomenon we see in both natural and artificial systems: even straightforward rules, iterated over time, can produce behavior of immense complexity.
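The piece's own examples aren't reproduced here, but a classic illustration of the idea is the logistic map, a one-line update rule whose behavior ranges from a simple fixed point to full chaos depending on a single parameter (the values of `r` below are the standard textbook choices, not ones taken from the article):

```python
def logistic_trajectory(r, x0, n):
    """Iterate the logistic map x -> r * x * (1 - x) for n steps.

    The rule is a single multiplication, yet its long-run behavior
    depends dramatically on r: it can settle, oscillate, or turn chaotic.
    """
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# r = 2.0: the trajectory quickly settles onto a single fixed point (0.5).
stable = logistic_trajectory(2.0, 0.2, 100)

# r = 4.0: the identical rule, with one constant changed, never repeats.
chaotic = logistic_trajectory(4.0, 0.2, 100)
```

The point is that nothing in the rule itself hints at which regime you'll get; the complexity emerges only from iteration.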
For the past year, we've lived in a world overwhelmed by news of large AI, especially large language models like GPT, the model behind OpenAI's ChatGPT. The broad versatility of large language models, however, comes at a cost, and in plenty of use cases that cost may not be worth it.
In our Artificiality Pro update for December, we covered several key industry updates in AI and introduced mechanistic interpretability and memory vs. margins.
In this episode, we provide updates from our Artificiality Pro presentation, including key developments in mechanistic interpretability for understanding AI models and considerations around the costs of large language models (aka memory vs. margins).
Developing expertise now requires fluency both in core disciplines and in leveraging AI for insight, an uneasy paradox.
In this episode, we speak with cognitive neuroscientist Stephen Fleming about theories of consciousness and how they relate to artificial intelligence.