What We Know Before Saying It

AI works downstream of something it doesn't know exists

Polar blast, Wisconsin. Credit: Helen Edwards

I'm always thinking about the question of what makes humans different. Is it our ability to handle the unpredictable? Or is it something to do with our felt experience, the fact that we live in a physical and social world?

Sara Walker just published an article in Noema that gave me a new perspective—one I'd felt but hadn't actually had words for. Now if that sounds familiar, keep reading (and read her article) because this idea is kind of the point.

We tend to assume language is where thought happens. Writing is thinking. Talking through a problem is how we figure it out. But what if there's a layer before that—what if language is post-thinking? What if experience gets organized—compressed, translated into something holdable—before we ever reach for words?

If that's true, then AI is downstream in a specific way. Every piece of training data is something a human already processed. Experience already pressed into symbols. AI works brilliantly with that output. But it never does the original organizing. It only sees what's already become language.

Some think embodiment might change this. Give AI sensors, let it touch the world directly, and maybe it gets upstream too. But I don't think so. Sensors produce data. Data isn't experience being organized before it becomes something. There's no "before" for a sensor. It goes straight from signal to representation. A robot with cameras has inputs. That's different from having a layer where things get shaped before they become information at all.

So there's something in us—before language, before representation—that AI doesn't have and won't have.

I'm not being defensive. AI does know something real. It perceives the shape of human language at scale. Patterns across billions of outputs. The geometry of how we've collectively organized our symbols. No individual human can see that. AI sees it directly. But, if Sara is right, humans have access to something before we speak. A layer where experience gets organized before it becomes language. 

AI is all about prediction, and we've been told that prediction is all there is. But prediction operates inside what's already been said, which makes it different from a good explanation. Prediction finds patterns in existing language. Explanation articulates something that wasn't there before. AI can't do the latter, not because it isn't smart enough, but because it only ever sees the after.

AI gives us the map of our collective output. Useful for seeing patterns we couldn't see alone. But human thinking is different because it gives us explanation: the act of reaching into unprocessed experience and finding something new.

We need both. But we shouldn't confuse them. And we especially shouldn't measure one against the other as if they're on the same axis. They're not. One predicts from the already-said. One explains from the not-yet-said.

The conversation about AI keeps getting stuck on questions of capability and competition. Will it catch up? Will it surpass us? These questions assume a single road with positions along it. The better mental model is multiple territories. AI knows the shape of the maps. Humans know something about the territory before it gets mapped. That's a new way of thinking for me: the idea of pre-language, and the validation that there might just be something different there.

And it's not a claim about consciousness or souls or anything mystical. It's a claim about position. Where in the process of turning experience into language does a given system sit? AI sits after. We sit before and after both.
