The Chronicle is an ongoing research initiative documenting how people are adapting to AI—through workshops, interviews, story analysis, and direct observation. Our first release offers an exploratory map of emerging psychological patterns.
At the Artificiality Institute, we want to know how to think better with AI. Over the past two and a half years, we've studied how over 1,000 people are adapting to this collision of intelligences. What we found challenges almost everything being said about AI and productivity.
People are forming psychological relationships with AI systems that feel unprecedented to them. The Chronicle maps the psychological changes happening as people incorporate AI into their thinking, creativity, and daily relationships.
Most decisions, and most deciders, are hybrids: part machine, part human. The trick is to anticipate all the ways humans work around, over, and through the machine when what they really want is to make the decision themselves, even at the cost of accuracy.
AI-based decision tools and data-driven decision-making are designed to reduce the variability of human decision-making. People assume that data offers an objective view of reality and that an AI decision is rational. With an objective, rational view of reality, decisions get easier because the answer is apparent and incontestable. In reality, more data isn't necessarily more meaningful; it reflects what someone has chosen to pay attention to. And what counts as rational depends entirely on the parameters people care about.
How would you expect individuals to react to a decision recommended by an AI? Would it depend on the context in which the AI made the recommendation? Or the level of confidence the AI expressed in the decision? Or the expertise of the person receiving the recommendation? The biggest factor in how people respond to AI-based decision-making is their own decision-making style.
Even when given identical AI inputs, people make entirely different choices. How someone uses input from AI depends on how they process information, how they regulate their emotions and behavior, and how urgent the decision is. Counter-intuitively, executives with the most rational, data-driven decision-making styles can be the most likely to reject the algorithm, probably because they also place a high value on their own agency and autonomy. Conversely, executives who dislike making decisions and tend to procrastinate are the most likely to delegate to AI, perhaps because it allows them to shift responsibility to the machine.
Humans do not have a single, universal response to AI, which means the accuracy of an AI prediction is only half the story. The more important question is: what is the purpose of AI in this decision? AI can reduce variability in human decision-making, but it's worth asking whether this is a decision where variability is desired and, if so, how much and why. Where autonomy is valued, imposing AI simply invites subversion. Using intuition feels good: it builds a sense of fluency in judgments and creates an emotional signal called judgment completion. Mastery over the nuance of a situation feels good.
More resources on this topic:
Article: MIT Sloan Review on the human factor in AI-based decision making. (Paywall)
Book: Noise: A Flaw in Human Judgment by Daniel Kahneman, Olivier Sibony, and Cass Sunstein
Essay: A thought-provoking yet practical piece on rewilding your attention. Just as ecosystems need diversity, so does what we pay attention to.
Book: God, Human, Animal, Machine by Meghan O'Gieblyn. I started this book and couldn't put it down. Humorous, topical, on-point: a wonderful series of essays on metaphor, meaning, and technology.
Podcast: Anil Seth on the Brain Inspired podcast, an in-depth listen on the science of consciousness. Topics include consciousness as controlled hallucination, free will, psychedelics, and whether consciousness relates more to life, intelligence, information processing, or substrate.
Helen Edwards is a Co-Founder of Artificiality. She previously co-founded Intelligentsia.ai (acquired by Atlantic Media) and worked at Meridian Energy, Pacific Gas & Electric, Quartz, and Transpower.