Three Things from Us | Takeaways from Building a Manifesto for AI with AI | Something Entertaining
Three Things
Thing One: Summit 2026. Tickets for the Artificiality Summit 2026 are now available! Same place (Bend, Oregon) and
We tried using AI to shape a collective manifesto for the future of AI. My takeaway is that AI doesn’t necessarily make this process easier, although I did see moments when it did.
At our recent Summit, we ran an experiment to write a manifesto for AI, using AI. Starting from a "seed" manifesto, the group was prompted to answer a series of questions designed to elicit ideas for a new, group-authored manifesto on society-scale decisions with AI. We took photos of answers written on stickies, uploaded them to the AI, and then watched the manifesto take shape on a shared screen.
We wanted to know what it would be like to include an AI in a social construct—would it help or hinder the process of collective authoring? Well, after talking to people who were there, it seems that half the room loved it and half hated it.
Setting aside setup and logistical challenges, the experiment ultimately revealed that there’s no single answer—or single experience—when it comes to using AI in a social context.
Jenna Fizel and Dave Vondle of IDEO designed it to be ambitious—an experiment in collective emergence. They worked hard on it, and I'm grateful. But like any experiment worth running, it surfaced tensions we needed to see. The people who hated it weren't wrong: they were responding to legitimate structural problems that cut right to the center of what makes collective work with AI so difficult.
First, the cognitive load was enormous. Wait, what? Contribute to a collective statement without knowing what framework others are using, what they mean by key terms, or where this is going? Difficult at the best of times. I think we all fell into a hidden assumption—that AI would naturally make a hard job easier, an inefficient process efficient. If your orientation requires structure before exploration—and this is completely valid—it feels like being asked to build a bridge from the middle out.
Then there’s the power question—mixed up with transparency, or maybe a new kind of transparency altogether. Who (or in this case, what) decides what makes it into the final manifesto? Whose voice gets weighted, and by how much? When you can't clearly see the selection mechanism (in this case, the algorithm), contributing feels risky. Some people felt that hazard more than others.
Some people found the open-ended prompts generative; others saw them as a kind of category error—you can’t will collective insight into existence. The resistance wasn’t to collaboration itself, but to a particular theory of how collaboration works. It reminded me of that awkward moment at the end of a seminar when the floor opens for questions and everyone has to figure out what they’re actually asking, and why. There’s a kind of frame conflict—your mental model has to meet the presenter’s on common ground. My takeaway is that AI doesn’t necessarily make this process easier, although I did see moments when it did.
We did have an LLM handle the coordination—synthesizing contributions, finding themes, producing the manifesto. And that's exactly where the trust problem showed up. For some people, the LLM handling complexity felt like relief—finally, something that can process all this input. For others, it felt opaque—I can't see how my contribution is being weighted, what's being kept or dropped, whose framing is winning.
The LLM moved the coordination problem rather than solving it. Now, instead of humans struggling to synthesize, we're struggling to trust a synthesis we can't see inside. Aekta Shah and Beth Rudden both hit on this in their workshops—transparency and trust aren't universal. They work differently for different people. Some people trust a process when they can see the algorithm. Others trust it when they can see the human judgment. We needed to calibrate that before running the exercise, not assume the LLM would paper over it.
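The traceability half of this feels tractable, at least in principle. Here is a minimal sketch—not what we actually ran at the Summit—of how sticky-note contributions could be handed to an LLM with note IDs attached, so the room can later see which notes fed which manifesto line. Every name in it (Contribution, build_synthesis_prompt, the "S-1" style IDs) is hypothetical.

```python
from dataclasses import dataclass


@dataclass
class Contribution:
    author: str   # or "anonymous" if the group prefers
    note_id: str  # an ID written on the sticky before it is photographed, e.g. "S-14"
    text: str     # the transcribed sticky-note text


def build_synthesis_prompt(seed_manifesto: str, contributions: list[Contribution]) -> str:
    """Build a prompt that asks the model to draft manifesto statements and to
    cite the note IDs behind each one, keeping the synthesis traceable."""
    notes = "\n".join(f"[{c.note_id}] {c.text}" for c in contributions)
    return (
        "Revise the seed manifesto below using the group's notes.\n"
        "After each statement, list in brackets the note IDs it draws on,\n"
        "and list separately any notes you did not use.\n\n"
        f"Seed manifesto:\n{seed_manifesto}\n\n"
        f"Group notes:\n{notes}\n"
    )


# Example: the bracketed IDs in the model's output would let participants check
# whose contributions were kept, merged, or dropped.
prompt = build_synthesis_prompt(
    "AI should widen, not narrow, who gets to decide.",
    [
        Contribution("anonymous", "S-1", "Decisions need a human veto."),
        Contribution("anonymous", "S-2", "Show us how our words were used."),
    ],
)
print(prompt)
```

Something like this wouldn't resolve the people who trust human judgment over visible mechanism, but it would at least make the weighting inspectable rather than invisible.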
Charan Ranganath said memory is about the future—it's the platform for prediction. Individual memory. But what should the AI have in its memory for an exercise like this? Each person's contribution history? Their stated preferences? The group's past coordination patterns? The whole of Western democratic society? I'm left with a bigger question now about what the AI needs to remember to actually help rather than just synthesize blindly.
Maggie Jackson's frame—uncertainty as wakefulness—is so good! But it takes skill and practice. People aren't in the same place with it. The manifesto split tracked uncertainty tolerance. Some people treat ambiguity as generative. Others treat it as risk, legitimately. You can't design as if everyone relates to uncertainty the same way.
Julian Yocum's provocation about realization makes me wonder whether we knew what we wanted in the first place. What does it take to move from vague preferences to actually realizing what you want?
Maybe the manifesto exercise revealed we don't actually know what we want from collective work with AI until we're in it. How different would it have been if everyone had clarity on what they wanted the process to do for them before contributing? Or would that just lock us in? Perhaps we asked people to build collectively before they'd realized what they wanted to build or how they wanted to build it. That's not a small thing to skip.
The manifesto showed me that the "We" layer—active coordination to build together—is where AI creates friction. Next time, we'll start with what each person needs in order to trust the process. We'll build shared memory about actual coordination patterns. And we'll design for multiple uncertainty orientations. If AI is in the room, we'll slow it down enough for human feedback to work.