It's Time for Better GenAI Design | What We Know Before Saying It
In This Issue:
* It's Time for Better GenAI Design. More below...
* What We Know Before Saying It. Helen reflects on a recent article by Sara Walker. More below...
For three years, I’ve criticized the design of AI chat systems. I have hoped the industry would slow down and design for people rather than use people as testers. But my hopes continue to be dashed, and it’s time for change.
Example: We took a moment to try the new (at least to us) group chat feature in ChatGPT. While a few of its capabilities are impressive, this feature should never have been released.
Let’s start with the good bits. ChatGPT ably handles multiple users at once. We haven’t thoroughly stress tested its ability to juggle simultaneous comments, but our first impression is that it does a pretty good job: more than one person can submit prompts at the same time, and ChatGPT responds to each individually or to all of them together.
Honestly, that’s it for the good bits.
The reality is that the rest of the experience is why this feature should never have been released. When you invite someone to a chat, you get no indication of what that means. You aren’t told that the existing chat will be shared in its entirety. You aren’t told that the system’s memory of you will be forgotten. Likewise, when you join a chat through an invitation, you don’t know that the ChatGPT in this new chat knows nothing about you: you are essentially a new user. That disconnect continues across all shared chats, even when the same users are in more than one shared chat.
All of this is a mess. Imagine building a relationship with a coach or mentor who then gets relationship amnesia every time you bring someone new into a conversation. It’s the same person, yet they break every mental model you have for how a person behaves. This disconnect is what OpenAI has decided to launch on all of us.
I’ll stop my critique now to say: it doesn’t have to be this way.
Yes, collaboration is hard. And designing AI to effectively work with multiple humans will be very hard. But getting this right will be incredibly helpful and important. And so I’m honestly glad that OpenAI is getting this wrong—because we plan to get it right.
Those at the Artificiality Summit will remember that we briefly mentioned our plans to launch a venture studio in 2026. Getting collaboration and collective intelligence right is one of our core missions—and we’re happy to see the big AI companies leaving this as an open opportunity.
If you’re interested in partnering, investing, or supporting this effort, we’d love to talk with you.
Helen reflects on a recent article by Sara Walker, asking: what if experience gets organized (compressed, translated into something holdable) before we ever reach for words? If that's true, then AI is downstream in a specific way. Every piece of training data is something a human already processed. Experience already pressed into symbols. AI works brilliantly with that output. But it never does the original organizing. It only sees what's already become language.
Don't miss the Artificiality Summit 2026!
October 22-24, 2026 in Bend, Oregon
Ticket prices increase in two weeks!
Our theme will be Unknowing. Why? For centuries, humans believed we were the only species with reason, agency, self-improvement. Then came AI. We are no longer the only system that learns, adapts, or acts with agency. And when the boundary of intelligence moves, the boundary of humanity moves with it.
Something is happening to our thinking, our being, our becoming. If AI changes how we think, and how we think shapes who we become, then how might AI change what it means to be human?
Unknowing is how we stay conscious and make space for emergence.
Becoming is what happens when we do.
Register now—Super Early discounted rate expires on December 31.