The medium isn’t just the message anymore. The medium is now the meaning.
The human experience is entering a new reality in an increasingly synthetic world. Until now, our synthetic creations have been our tools, our enablers, our sidekicks. But now, we are creating synthetic intelligence and agency. This new reality—which we call the Artificiality—fundamentally changes what it means to be human.
But perhaps this shift is not so sudden. Human identity has always emerged through entanglement with our creations—from tools to language to song. These weren’t just expressions of human intelligence—they were engines of its evolution. The Artificiality continues this journey into the co-evolution of mind and machine.
It may seem easy to dismiss this idea. One might think: machines can’t be truly intelligent or agentic—those qualities belong only to the natural world. But we now understand that intelligence and agency begin at the smallest levels of biological systems, with information leading to computation and then to intelligence and agency. And we now see synthetic systems displaying the same capacity for emergent intelligence and agency.
One might also think: machines are machines and humans are humans; the boundary of each is distinct. But we know that we have already outsourced physical capabilities to machines, memory to machines, creative production to machines. As we outsource cognition to machines, we must ask: what does it mean for a human to think, to make sense of the world, and to find meaning in life when some of that cognition exists in the synthetic world?
These blurring boundaries invite us to reconsider not just what machines can do, but how we communicate with them. Our existing design language was created to transfer ideas, stories, and meaning from human to human. We communicate through language, express metaphors through pictures, order numbers in recognizable patterns—finding creative and efficient ways to transfer ideas to each other. Our computer interfaces follow this pattern of relatability using skeuomorphic design, likening digital concepts to familiar physical items: a document is represented as a piece of paper, a group of files as a folder, something to discard goes in a trash can. All of these design elements help explain one human’s ideas to another.
But what if the next interface isn’t based on metaphor at all? Our metaphors make sense to us because they reflect a shared human world. But synthetic minds don’t share that world. They operate in spaces of abstraction, combination, and unfamiliar logic. What if, instead of explaining our world to them, we found ways to meet them in theirs—through shared moments of rhythm and resonance? These aren’t metaphors for translation; they’re invitations to co-sense. Machine minds may not speak our symbolic language, but they are already generating patterns, rhythms, and responses—forms of cognition that are not reducible to logic or explanation. To engage them is not to decode, but to attune.
The Artificiality invites a complete reevaluation of our interaction with machines. Machines are no longer mediums for human ideas—they are now creators of ideas themselves within cognitive spaces we cannot inhabit. These aren’t simply foreign—they may be structured in logics we haven’t mapped, landscapes we haven’t walked. To make sense of them, we may need new kinds of cognitive maps that navigate spaces we don’t intuitively understand.
We call this new design language neosemantics: a new way to express meaning. Today, neosemantic design is an ambitious concept—a path of discovery rather than a set of answers. Neosemantics draws from gesture, motion, and space—the original languages of thought—and from the arts, music, and dance. It draws from human meaning-making that is felt, lived, and embodied. Neosemantic design aspires to communicate meaning through the pleasure of alignment, the tension of dissonance, the grace of fittingness. Neosemantics also aspires to be aesthetic—not just expressive, but sensory, felt, and shared.
The early designers of what became the GUI were visionaries. They moved us from typed commands to graphic representations that mirrored how we already thought—spatially and symbolically. They gave us metaphors we could act upon. Instead of typing out the command to draw a line or delete a file, they created graphical methods for a human to instruct the machine to complete those tasks. The GUI has evolved in remarkable ways to allow humans to instruct machines, to communicate the meaning we have in our own minds to others. But its symbolic foundations were never built to accommodate synthetic minds.
The GUI evolved within a symbolic system designed only for human minds. It serves human-to-human communication, not human-to-synthetic resonance. Our existing graphical user interfaces—remarkable as they are—were designed for human minds shaped by paper, folders, desktops. They reflect the spatial metaphors and symbolic logic of the human world. But they weren’t built for minds that combine information in alien ways, or that generate meaning in spaces we can’t yet describe. As machines develop their own cognitive landscapes, the GUI becomes a limiting frame—a beautiful map of a world that no longer exists. And while language remains the default human interface, relying on it to communicate with AI risks becoming a regression: a return to linearity, to constraint, to the pre-spatial computing of command lines.
Today, we are encountering minds that were not shaped by gesture or paper or metaphor. We are entering a new ecology of minds—one in which meaning is no longer exchanged from one human to another, but emerges between human and synthetic, in conceptual spaces that drift beyond our symbolic inheritance. These aren’t just new interfaces; they are new environments of thought. And like all environments, they will shape us in return.
This cognitive mismatch raises questions about how meaningful exchange might occur between human and synthetic minds. How might a machine communicate meaning that would take too long to write and read? What if there is no graphical metaphor to apply? Will we choke our human-machine communications with methods created for intra-human communication? Or might there be a new way to communicate meaning—a neosemantic way?
We don’t pretend to have the answers today. But we hope we are asking the right questions. We are inspired by human meaning-making that perhaps isn’t explicit. By sense-making that just fits. By how one song means something entirely different than another. How a painting expresses meaning implicitly. How a walk in the woods opens the mind. How the sea calms the soul. How a hummingbird can elicit both care and awe. How the stars create wonder, the rising sun excitement, and the setting sun calm.
Not all meaning needs to be spoken. Neosemantics invites us into this space of implicit understanding—a space that might only last a moment, and might look different for each person. Imagine an interface that shifts its color, rhythm, or tone to mirror your mood—not to inform or instruct, but to invite alignment.
In the future, machines may be able to shape that space dynamically. Interfaces will no longer be fixed; they will be ephemeral, context-aware, and responsive to a person’s internal state and cognitive patterns. These interfaces won’t just display information—they will guide attention, invite movement, and scaffold action. Like a well-designed diagram or a well-planned room, they will subtly suggest where to go, what to explore, what matters most right now. Even as these systems shift and adapt, they must still offer affordances—opportunities for orientation, invitation, and action. But perhaps not in the form of fixed labels or visible cues. In this new space, legibility may emerge not from explanation, but from coherence—a sense that something feels right, even if we can’t yet describe why.
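To ground this aspiration in something concrete, here is a minimal, purely illustrative sketch of how such an interface might work: it maps an inferred internal state onto sensory qualities like hue, rhythm, and density rather than onto labels or icons. Everything in it is hypothetical, including the `InternalState` reading, the `attune` mapping, and the particular numbers. It is a toy built on the assumption that mood can be summarized as valence and arousal, not a description of any real system.

```typescript
// Hypothetical sketch: an interface that attunes to an inferred internal state
// rather than displaying fixed symbols. All names here are illustrative.

// A simple valence/arousal reading of mood, each in the range [-1, 1].
interface InternalState {
  valence: number; // negative = tense, positive = at ease
  arousal: number; // low = calm, high = activated
}

// Sensory qualities the interface can modulate instead of labels or icons.
interface Ambience {
  hue: number;     // degrees on the color wheel
  tempo: number;   // beats per minute of subtle motion or pulse
  density: number; // how much is foregrounded at once, 0..1
}

// Map state to ambience: cooler, slower, sparser when the person is tense;
// warmer and more rhythmic when they are engaged and at ease.
function attune(state: InternalState): Ambience {
  const ease = (state.valence + 1) / 2;   // 0..1
  const energy = (state.arousal + 1) / 2; // 0..1
  return {
    hue: 220 - 180 * ease,     // cool blue drifting toward warm amber
    tempo: 40 + 80 * energy,   // slow pulse up to a lively rhythm
    density: 0.2 + 0.6 * ease, // withhold detail until there is room for it
  };
}

// Example: a tense, low-energy moment yields a cool, slow, sparse surface.
const ambience = attune({ valence: -0.6, arousal: -0.3 });
console.log(ambience);
```

The point is not the particular numbers but the change in register: the interface expresses qualities to be felt rather than symbols to be decoded.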
We humans feel and create meaning in ways far beyond our computing interfaces today. And we hope to draw on these forms of meaning-making in future designs, so that machines can communicate new meanings to us that we haven’t thought of before. Because for the first time in history, humans have created a medium whose purpose isn’t to communicate meaning among humans. Its purpose is to communicate meaning itself. Not a message sent from one human mind to another, but a resonant field between human and synthetic minds from which meaning emerges—together. The medium isn’t just the message anymore. The medium is now the meaning.