Humans have now created machine intelligence, and it doesn't think like us. We prefer to frame AI as neither better nor worse than us, just different. And right now, millions of people are figuring out what happens when human minds collaborate with these foreign minds.
Some are discovering new forms of creativity they never knew they had. Others are losing track of their own thinking in ways that worry them. A few are building thinking partnerships that feel genuinely empowering. Many are confused. And almost all of us feel angry or afraid when we sense AI is being built to compete with us.
At the Artificiality Institute, we want to know how to think better with AI. What does it mean to adapt to this intelligence, both individually and collectively? How do we become aware of AI's influence on human thinking?
Over the past two and a half years, we've studied how over 1,000 people are adapting to this collision of intelligences. What we found challenges almost everything being said about AI and productivity.
Right now, there's an obsession with whether AI makes you more efficient. But we think this misses the point. The real point is this: how do we get the benefits of foreign intelligence without eroding our own? What do we need to understand now to enhance our human thinking?
Please consider supporting our Chronicle research with a donation to the Artificiality Institute. Every contribution is an investment in a future where technology is designed for people, not just for profit—and where meaning matters.
AI Is Changing How We Think
Most of us don't use AI systems the way we use other software. We form psychological relationships that reshape how we think, create, and understand ourselves. A CEO describes keeping ChatGPT open as a constant companion. An emergency doctor uses AI to find better ways to communicate with patients during medical crises. A teacher rebuilds her entire approach to education around AI collaboration.
The boundaries between human and AI thinking start dissolving through repeated interaction. A developer describes losing track of authorship: "I ended up 'autopiloting' my flow... After a few days I did not remember why some things were done like that." Ideas emerge from the conversation itself rather than from either human or AI alone.
Others discover cognitive capabilities they couldn't access independently. Someone with strong visual thinking who struggles with written expression finds they can externalize complex mental models through AI collaboration. The system enables forms of expression that feel impossible without the partnership.
But this creates genuine confusion about competence and authenticity. When a startup founder reflects, "Did the AI boost my creativity, or did I just tweak its ideas?" they're grappling with questions that didn't exist before. What does it mean to think with compressed collective human knowledge? How do you understand your own contribution when thinking becomes genuinely collaborative?
Our new whitepaper, and the research behind it, studies our cognitive adaptation to generative AI. AI combines apparent agency with no consciousness: it responds contextually without having experiences, and it participates in thinking without actually thinking.
The Psychology of Adaptation
When humans bump up against genuinely foreign artificial intelligence, three psychological patterns emerge that determine whether the encounter enhances or diminishes our thinking. We call these cognitive permeability (how easily AI responses weave into your reasoning), identity coupling (how tightly your sense of self binds to AI performance), and symbolic plasticity (your ability to spot relevant variables and rebuild meaning frameworks when reality stops fitting familiar categories).
Think of symbolic plasticity (SP) as contextual radar. High-SP people rapidly scan for what matters most in any given moment: expertise level, time crunch, social expectations, consequences of being wrong, and opportunities to learn new frameworks and tap into new creative architectures. They adapt their AI collaboration style accordingly rather than applying the same approach everywhere. Low-SP people either rigidly maintain one approach or drift unconsciously into patterns they can't see clearly.
People navigate five psychological territories as they adapt: Recognition (discovering AI can actually think with them), Integration (folding AI into daily workflows), Blurring (losing track of whose ideas are whose), Fracture (when something breaks down or feels wrong), and Reconstruction (deliberately building new frameworks). These states don't follow neat sequences—people zigzag between them based on context, pressure, and their psychological orientation.
A crucial insight is that individual success at AI adaptation doesn't guarantee collective success. Someone might develop sophisticated personal frameworks for AI collaboration while remaining blind to how their cognitive enhancement affects group thinking. High individual SP paired with low collective SP creates a dangerous pattern: people optimize personally while fragmenting the shared meaning that enables communities to think together. Whether the reverse also holds, and prioritizing collective AI collaboration would diminish our concern with individual memory, expertise, or skill, is an open question that challenges basic assumptions about human development and education.
What This Means For Collective Human Culture
The patterns we observe make us ask: how do we become better thinkers through AI collaboration while preserving the cognitive diversity that enables collective intelligence? Individual symbolic plasticity—the ability to recognize contextual variables and adapt frameworks appropriately—becomes crucial for conscious AI collaboration. But we also need collective frameworks for thinking together across different AI relationships.
This requires new cultural practices. We will need new norms and rituals to communicate AI involvement in collaborative work, shared standards for when AI consultation enhances versus undermines group reasoning, and explicit protocols for maintaining cognitive diversity within groups. Perhaps most importantly, we need conscious resistance to efficiency optimization that fragments collective intelligence. The cognitive convenience of AI collaboration creates pressure toward individual enhancement that may inadvertently undermine our ability to think together.
We resist both the idea that the goal of AI is to automate ourselves out of existence and the idea, at the other end of the spectrum, that we should remain purely human thinkers. We think our work shows that we need to be consciously hybrid.
We've invented intelligence that doesn't think like us. The question is whether we'll learn to think with it consciously, or just stumble into whatever emerges.
Read our full research to understand the psychological framework that can help you navigate AI relationships more consciously. And start paying attention to your own thinking: Are you developing AI collaboration skills that enhance your human capabilities, or drifting into patterns you can't see clearly? The future of human intelligence may depend on how well we learn to answer that question.