Announcing the 2026 Summit | Cognitive Sovereignty | Tess Posner
We are thrilled to announce the Artificiality Summit 2026!
AI is dissolving our inherited boundaries—between knowing and not-knowing, between thinking and acting, between who we have been and who we might still become.
The world is changing faster than our mental models can keep up.
Our instinct is to search for answers.
This year, we choose to embrace something else: unknowing.
Don't miss an experience that our community has described as:
"I feel like I just attended the first TED."
"I feel like I just attended the first Royal Society."
"I feel like I just attended the first Burning Man."
For centuries, humans believed we were the only species with reason, agency, self-improvement.
Then came AI.
We are no longer the only system that learns, adapts, or acts with agency. And when the boundary of intelligence moves, the boundary of humanity moves with it.
We are entering a period of unknowing—not ignorance, but a necessary release of inherited assumptions. We don’t yet know what AI will become, and we don’t yet know what we will become in relationship to it.
Unknowing is the space between—the place where neither side is fixed, and something new can emerge.
Something is happening to our thinking, our being, our becoming. If AI changes how we think, and how we think shapes who we become, then how might AI change what it means to be human?
Unknowing is how we stay conscious and make space for emergence.
Becoming is what happens when we do.
This is the space we’re entering together.
Early registration is open now, with a limited early-commitment rate. If you’re ready to explore the space between what is known and what is possible, reserve your seat.
Following our keynote for the Oregon Community College Association Annual Conference, Helen has written an essay describing how our research on the human experience of AI can support cognitive sovereignty.
As she writes, cognitive sovereignty means staying the author of your own mind as you work with AI. That's it. The question is: how do you actually do that when AI can help you think, create, and decide in ways you couldn't before?
The Cube gives you a way to see where you are and whether you chose to be there. It maps three things that happen when you work with AI: how far it gets into your reasoning (cognitive permeability), how much it fuses with your sense of self (identity coupling), and how much it changes what things mean to you (symbolic plasticity). These three dimensions create eight roles you can assign to AI, from Doer to Co-Author.
Here's what most people get wrong about sovereignty: they think it means keeping AI at a distance. Keep AI in the Doer role, let it do the boring stuff, and you're safe. But that's not sovereignty; it's just avoidance.
Real sovereignty is knowing which role you've given AI, choosing that role deliberately, and being able to change it. Someone who assigns AI the Co-Author role with full awareness might have more sovereignty than someone who keeps AI solely as Doer because they're afraid to go deeper. The person who has tried using AI as Builder, understood what it does to their sense of self, and consciously decided when to assign that role? That's sovereignty. The person who won't let AI into their thinking because they think it compromises them? That might just be rigidity.
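To make the combinatorics concrete, here is a minimal sketch (in Python, purely illustrative) that enumerates the eight positions of the Cube as low/high settings on its three dimensions. Treating each dimension as a simple low/high axis is an assumption made here to show how three dimensions yield eight roles; which corner corresponds to a named role like Doer, Builder, or Co-Author is part of the essay's framework and is not reproduced in this sketch.

```python
from itertools import product

# The three Cube dimensions described in the essay.
DIMENSIONS = ("cognitive_permeability", "identity_coupling", "symbolic_plasticity")

def cube_positions():
    """Enumerate the eight corners of the Cube as low/high settings (2**3 = 8)."""
    for levels in product(("low", "high"), repeat=len(DIMENSIONS)):
        yield dict(zip(DIMENSIONS, levels))

if __name__ == "__main__":
    # Print each position; the essay assigns a role name to each corner,
    # but that mapping is not assumed here.
    for i, position in enumerate(cube_positions(), start=1):
        print(i, position)
```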
Want to learn more about cognitive sovereignty and the cube?
Bring us into your organization or association to talk about applying our research on the human experience of AI.
"A big thank you to Helen and Dave Edwards of the Artificiality Institute for their incredibly fascinating keynote on the human-AI experience!" — the Oregon Community College Association
We're excited to publish a podcast with Artificiality Summit 2025 speaker, Tess Posner. This episode was recorded prior to the Summit but held due to an issue with Spotify. Essentially, Spotify's automated system flagged our use of Jonathan Coulton's music as our theme song, but the process of proving we had a license was far from transparent. Ironically, we could have used AI-generated music without issue, but we like our human-generated music! Hopefully it's fixed now.
Why am I explaining all of this to you in this email? It seems appropriate since Tess is a musician in addition to being founding CEO and board member of AI4ALL. For nearly a decade—long before the current AI surge—Tess has led efforts to broaden access to AI education, starting from a 2016 summer camp at Stanford that demonstrated how exposure to hands-on AI projects could inspire high school students, particularly young women, to pursue careers in the field.
Take a listen to the podcast and keep an eye out for a recording of Tess's talk from the Artificiality Summit.
Support the Artificiality Institute
We are on a mission to shape the emerging human experience in an increasingly synthetic world through story-based research, immersive education, and cultural dialogue—so our future with AI is more meaningful, not just more efficient.
We can't do this without support from people like you.
Your support powers three essential programs:
1. Research: We document how people actually live and work with AI, creating a lasting record of this pivotal moment in history
2. Education: We develop practical programs for human-AI collaboration that prioritize meaning over metrics
3. Media: We publish essays and podcasts that make our research accessible and actionable for everyone
Every tax-deductible contribution helps build a future where technology serves human flourishing, not just profit margins.