It's Time for Better GenAI Design | What We Know Before Saying It
In This Issue:
* It's Time for Better GenAI Design
* What We Know Before Saying It

More below... Helen
ChatGPT is about to celebrate its first birthday. It’s time for it to graduate from an experiment into a true product.
A new paper argues for analyzing AI systems like GPT through a "teleological" lens focused on the specific problem they were optimized to solve during training.
Grounding her work in the problem of causation, Alicia Juarrero challenges the long-held belief that only forceful impacts count as causes. Constraints, she argues, bring about effects as well, and they enable the emergence of coherence.
New research from the Stanford Center for Research on Foundation Models shows how little is publicly known about how foundation models are built, trained, and deployed.
Jai Vipra is a research fellow at the AI Now Institute, where she focuses on competition issues in frontier AI models. She recently published the report Computational Power and AI, which examines compute as a core dependency in building large-scale AI.
Here’s the issue: the current business model doesn’t make sense, because every query costs real compute to serve, so growing usage drives costs up in direct conflict with profits.
Intimacy with technology has been the territory of science fiction. What happens if we are able to live those stories ourselves?
An interview with University of British Columbia professor Wendy Wong about her book We, the Data: Human Rights in the Digital Age.
Apple isn’t being left behind in generative AI; it’s playing a different game. While other tech companies spend billions on inference compute, Apple is being paid for it.
Higher ed grapples with AI: student learning and job impact top the list of concerns, while confidence and preparedness vary widely. Proactive dialogue about AI is needed.
An interview with Chris Summerfield about his book Natural General Intelligence.
Artificiality co-founders Helen and Dave Edwards gave a keynote presentation at Lane Community College on AI & Higher Education.