Bubbles | On Unpredictability and the Work of being Human | Something Entertaining
In This Issue: * Bubbles. Are We? Aren't We? Read more below for my current take... * On Unpredictability and the Work of Being Human
In this episode, we have a conversation with Scott Stephenson, co-founder and CEO of Deepgram, a company that has built an end-to-end deep learning speech recognition system.
The math of COVID-19 may mean some level of opt-in tracking is vital to stopping repeated outbreaks.
In this episode, I have the pleasure of talking with Will Griffin, Chief Ethics Officer of Hypergiant, an Austin, Texas-based AI product and services company.
Humans think in terms of 1, 2, 3, 4, lots and lots, while machines think in billions.
In this episode, I have the pleasure of interviewing Chelsea Barabas, a PhD candidate at MIT’s Media Lab. We talk about her work on bias in the criminal justice system, as well as her most recent work applying the concept of “studying up” from anthropology to the data science world.
A bumper issue including privacy, safety, barriers, and opportunities.
My key takeaways from the series I wrote for Quartz about AI's Power Problem.
In this episode, Dave interviews Helen about her recent article in Quartz, “Are AI ethicists making any difference?”
Artificiality co-founders, Helen and Dave Edwards, gave a presentation at the State of Oregon's Talent Summit on AI & the Future of Work.
The most dangerous AI bias is the bias of the more powerful over the less powerful.
A growing cadre of academics, activists, technologists, lawyers, and designers are confronting biases and attempting to understand and mitigate them. The attempt to grapple with AI bias will force us to confront the biases in ourselves.
Regulation needs to be proactive. Here are two ways that can happen.