This post is part of our series on expertise and AI. If you're concerned about preserving and enhancing the value of your expertise as AI advances, consider joining our short course—starting August 6th—to learn the psychological strategies needed to make your expertise even more valuable with AI.
Join our AI Course for Leaders to learn how to Become More Essential with Artificial Intelligence. While most AI courses focus on tools, we focus on minds—helping you build the human capabilities that make you more essential in a machine-shaped world.
Our research with 1,000+ professionals reveals that the key difference between those who become more valuable with AI and those who become replaceable isn't technical skill; it's psychological strategy.
Join us—starting August 6th—for an immersive, personal learning experience.
Something interesting is happening in AI productivity research. You've probably seen how early studies show impressive gains: 40% faster coding, doubled output, transformed workflows, etc. But as researchers move from controlled experiments to real workplace settings, the picture gets more complex.
The difference seems to lie in how we study productivity itself. Earlier research often follows a familiar pattern: recruit participants, give them standardized tasks, provide AI tools, and measure the difference. The tasks are carefully designed to be clear, self-contained, and measurable. Think "write a marketing email" or "debug this code snippet" or "summarize this document." The results are often impressive, and they make for great headlines.
What these studies miss is everything that makes work actually work: the context.
The Context Problem
Context is the invisible infrastructure that determines whether any solution will actually succeed. It's knowing that your client prefers formal language, that this particular codebase has quirky legacy requirements, or that your team's definition of "urgent" differs from the customer's.
When researchers strip away context to create measurable tasks, they're also stripping away the main thing that separates experts from novices. Experts understand how tasks fit into larger systems, what the unwritten rules are, and what the real constraints look like.
This creates a systematic bias in how we measure AI impact. The more context-free a task, the more AI can help. The more embedded a task is in specific knowledge and relationships, the more AI assistance can actually get in the way.
Recent research on experienced software developers illustrates this. When seasoned contributors to major open-source projects used state-of-the-art AI tools on their actual work, they slowed down. By a lot—almost 20%! Not because the AI was technically wrong, but because it couldn't navigate the accumulated knowledge of why things worked the way they did in that specific environment.
How Context Hides
The challenge with context is that it's often invisible, even to the people who possess it. A marketing manager naturally adjusts tone for different clients without conscious thought. A developer instinctively avoids code patterns that seem elegant but will cause maintenance headaches six months later. An accountant spots the kind of expense report that always gets flagged, even when the numbers add up perfectly.
This invisibility is why productivity studies consistently overestimate AI impact. When you ask people to perform decontextualized versions of their work, you're measuring a different skill entirely. It's like testing driving ability in an empty parking lot versus rush-hour traffic—the fundamental competencies are related, but the environments demand entirely different kinds of expertise.
The gap becomes clearer when you look at what happens after any given study ends. In the case of developers, AI-generated code might run flawlessly but violate team conventions in ways that create maintenance problems months later.
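To make that concrete, here is a minimal, hypothetical sketch (the functions and the team convention are invented for illustration). Both versions run and would pass a simple test; only one respects an unwritten rule, say that all stored timestamps must be timezone-aware UTC because downstream reports assume it.

```python
from datetime import datetime, timezone

# Hypothetical illustration: both functions "work", but the first quietly
# violates an invented team convention that stored timestamps must be
# timezone-aware UTC, a detail no prompt or unit test would surface.

def stamp_record_naive(record: dict) -> dict:
    # The kind of code an assistant might plausibly generate: correct,
    # readable, and oblivious to how the timestamp is consumed downstream.
    record["created_at"] = datetime.now().isoformat()
    return record

def stamp_record_team(record: dict) -> dict:
    # What this (hypothetical) team's convention actually requires.
    record["created_at"] = datetime.now(timezone.utc).isoformat()
    return record

if __name__ == "__main__":
    print(stamp_record_naive({"id": 1}))  # local time, no offset
    print(stamp_record_team({"id": 1}))   # UTC with explicit +00:00 offset
```

The point isn't this particular convention; it's that the difference between the two versions is invisible unless you already know the context the code lives in.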
What This Actually Means
Understanding the context problem suggests we need to be more nuanced about when and how AI helps. Early evidence suggests AI may work differently in context-light environments, where tasks can be clearly defined and success is easy to measure, than in context-heavy environments, where success depends on navigating complex, unstated requirements. But the relationship between context and AI effectiveness isn't straightforward: it seems to depend heavily on who's using the tool, how well they understand the environment they're working in, and how well they understand their own psychological traits when working with AI.
This has several practical implications. First, the most dramatic productivity gains from AI will likely come in work that's already been standardized and systematized—areas where context has been deliberately minimized. Second, the most valuable human skills in an AI-augmented world will be precisely those that depend on contextual understanding—the ability to read situations, understand relationships, and navigate ambiguity.
And there's a deeper risk here. Because AI struggles with contextual work, people may be tempted to redesign the work itself to accommodate the technology. If AI can't handle context, why not simply remove it? Standardize the client interactions, eliminate the judgment calls, turn everything into clear procedures that machines can follow.
This would be a profound mistake. Context isn't organizational inefficiency waiting to be optimized away; it's the creative constraint that makes good work possible. Remove the context, and you don't just lose efficiency; you lose the very thing that allows experts to see opportunities, make unexpected connections, and solve problems that weren't even visible in the original brief.
There's an equal danger in the opposite direction: believing the hype that certain tasks are naturally context-free. The productivity gurus promising AI will revolutionize your email management or automate your travel booking are betting that these domains contain minimal context. But this is precisely where contextual gotchas tend to hide—in the work that looks routine but isn't.
The human assistant who knows which "urgent" requests can wait until Monday. The travel coordinator who understands that this manager's "any hotel is fine" actually means "nothing below four stars or she'll be miserable for the entire trip." Context emerges in the gap between what people say they want and what they actually desire and need.
As AI capabilities advance, we're learning something unexpected about the nature of expertise itself. The work that seemed most mechanical—following procedures, applying rules, executing tasks—turns out to be surprisingly automatable. But the work that seemed most intuitive—reading situations, navigating relationships, understanding what matters in a specific moment—remains stubbornly human.
We're the ones constantly creating context. The standards of good work in any field aren't fixed; they evolve as practitioners push boundaries, respond to new challenges, and collectively redefine what success looks like. AI systems trained on past patterns may excel at reproducing yesterday's definition of quality work, but they struggle to sense when the rules themselves are shifting.
Humans actively shape context. This capacity to redefine the rules of the game while playing it may prove to be our most enduring advantage.