How to Read AI Usage Studies: A Guide to Three Different Lenses

AI usage is too complex for any single research approach to capture completely. We summarize and compare three recent studies: Anthropic's Economic Index Report, OpenAI's How People Use ChatGPT, and the Artificiality Institute's Chronicle project.

You're trying to understand how people actually use AI, so you turn to the research. This week, both Anthropic and OpenAI released comprehensive studies analyzing millions of AI conversations. But the findings are confusing and contradictory. One study tells you that 36% of AI usage involves coding; another says it's only 4.2%. One finds that enterprise users automate 77% of their tasks; another that most usage is creative and personal rather than work-related. One focuses on economic productivity gains; another warns about psychological dependency and identity erosion.

How can studies of the "same" phenomenon produce such radically different conclusions?

As business leaders, educators, and policymakers grapple with AI's rapid adoption, these research findings shape critical decisions about product development, workplace policies, and resource allocation. But without understanding how each study's methodology influences its conclusions, you might draw the wrong lessons—or worse, make decisions based on data that doesn't actually apply to your situation.

The truth is that AI usage is complex enough that no single research approach can capture it completely. Different methodologies reveal different aspects of how humans interact with artificial intelligence. Understanding these differences helps you become a more sophisticated consumer of AI research and make better decisions about your own AI integration.

Let's examine three studies that each take a fundamentally different approach to understanding AI usage: Anthropic's Economic Index (which views AI through tasks and productivity), OpenAI's ChatGPT analysis (which examines user intent and demographics), and our own research at the Artificiality Institute (which focuses on identity and relationship formation). By comparing their methods, data, and conclusions, we can understand what each reveals—and what each misses.

The Studies

Anthropic Economic Index

What they studied: Economic patterns across millions of Claude conversations, with detailed geographic and enterprise deployment analysis.

Their approach: Task-based classification using the O*NET occupational database, tracking what work activities AI performs and how automation patterns vary across regions and user types.
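
Anthropic doesn't publish the classifier itself, but a minimal sketch can make the approach concrete: embed each conversation and each O*NET task description, then assign the conversation to its nearest task. The embedding model and the three-item task list below are illustrative assumptions, not the study's actual pipeline.

```python
# A minimal sketch of task-based classification, not Anthropic's actual
# pipeline: embed conversations and candidate O*NET task descriptions,
# then assign each conversation to the most similar task.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

# A handful of O*NET-style work activities (the real taxonomy has thousands).
onet_tasks = [
    "Write, analyze, review, and rewrite computer programs",
    "Prepare and edit written correspondence and reports",
    "Analyze financial or operational data to support decisions",
]

conversations = [
    "Can you help me debug this Python function? It raises a KeyError.",
    "Draft a polite email declining the vendor's renewal offer.",
]

task_emb = model.encode(onet_tasks, convert_to_tensor=True)
conv_emb = model.encode(conversations, convert_to_tensor=True)

# Cosine similarity between every conversation and every candidate task.
scores = util.cos_sim(conv_emb, task_emb)

for conv, row in zip(conversations, scores):
    best = int(row.argmax())
    print(f"{conv[:48]!r} -> {onet_tasks[best]!r} ({row[best].item():.2f})")
```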

Key findings:

  • 36% of usage involves coding and mathematical tasks
  • Enterprise API users show 77% automation patterns (delegating complete tasks rather than collaborating)
  • Geographic adoption follows historical technology diffusion patterns, concentrated in wealthy regions
  • "Directive" usage (give AI a task, it completes it) is growing faster than collaborative patterns

OpenAI ChatGPT Study

What they studied: Consumer behavior patterns across 700+ million ChatGPT users sending 2.5+ billion daily messages, with demographic and intent analysis.

Their approach: Intent-based classification (Asking for advice vs. Doing tasks vs. Expressing thoughts) combined with conversation topic analysis and demographic tracking.

Key findings:

  • 70% of usage is non-work related, growing faster than work usage
  • Only 4.2% involves computer programming
  • Three most common uses: Practical Guidance (29%), Writing (24%), and Seeking Information (24%)
  • Gender gap has closed (from 80% male early adopters to gender parity)
  • "Asking" messages (seeking advice/information) are most common and highest-rated

Our Research at the Artificiality Institute

What we studied: Psychological adaptation patterns through workshop observations of 1,000+ people learning to use AI, plus analysis of first-person online accounts.

Our approach: Identity-based analysis tracking how AI relationships affect thinking patterns, self-concept, and meaning-making frameworks through three psychological traits.

Our key findings:

  • People develop genuine psychological relationships with AI that feel unprecedented to them
  • Three traits determine outcomes: how AI blends into thinking (Cognitive Permeability), how identity becomes tied to AI (Identity Coupling), and ability to reframe meaning when contexts change (Symbolic Plasticity)
  • Same AI interaction can create empowerment or dependency depending on psychological approach
  • Five adaptation states: Recognition → Integration → Blurring → Fracture → Reconstruction

What the Studies Reveal Together

When you examine these three studies together, several important patterns emerge that none captures alone.

AI serves multiple simultaneous functions. The same person might use AI as a productivity tool (Anthropic's lens), seek creative inspiration (OpenAI's lens), and develop a particular psychological relationship based on how they make meaning of AI's role in their thinking and identity (our lens)—all within a single work session. The studies aren't contradicting each other—they're looking at different layers of the same complex phenomenon.

Context shapes everything. The dramatic difference in coding usage (36% vs 4.2%) isn't an error—it reflects genuinely different user populations. Claude attracts more technical users through its API and developer-focused features, while ChatGPT has achieved broader consumer adoption. Neither finding is wrong, but both are incomplete without understanding their specific contexts.

Our research reveals an additional layer of context that behavioral studies miss entirely: the psychological context of how people frame AI's role in their thinking. Two developers might both use AI for coding (same task), with the same intent (producing working code), but if one maintains awareness of AI's contribution while the other gradually loses track of their own reasoning processes, they're having fundamentally different experiences. The psychological context—how much AI blends into thinking, how identity becomes coupled with AI output, and whether someone can reframe the relationship when needed—determines whether identical usage patterns lead to empowerment or dependency.

The relationship matters more than the behavior. Two people might use AI in identical "directive" patterns (giving it a task and having it complete the work), but one maintains conscious control while another drifts into dependency. External observation of behavior—what most large-scale studies can measure—doesn't predict psychological impact.

Geography and demographics still shape adoption. Both Anthropic and OpenAI find that wealth, education, age, and cultural context influence how people engage with AI. Our research hasn't yet examined these demographic patterns—a clear limitation that we plan to address in future work. The patterns vary: geographic concentration in early adoption, rapid demographic expansion over time, and different usage patterns across socioeconomic groups.

Work and non-work usage are evolving differently. While Anthropic sees enterprise automation accelerating, OpenAI finds non-work usage growing faster than work applications. This suggests AI is becoming a general-purpose technology woven into daily life, not just a productivity enhancement. This makes our psychological research even more critical—if AI relationships are becoming as common as smartphone use, understanding their psychological impact becomes essential for human wellbeing. Moreover, the artificial separation between "business" and "personal" AI use misses the reality that business is fundamentally about humans serving other humans. The psychological patterns we document in individual AI relationships inevitably shape how people collaborate, make decisions, and relate to colleagues in professional contexts.

Our Conclusions About Their Conclusions

The differences between these studies reveal as much about AI usage as their individual findings do.

Why the Coding Discrepancy Matters

The 36% vs 4.2% programming usage gap points to multiple "AI ecosystems" emerging with different user bases, use cases, and relationship patterns. Anthropic's higher coding numbers reflect their technical user base and API-focused distribution. OpenAI's lower numbers show AI becoming mainstream consumer technology.

This matters because it reveals the danger of generalizing from any single study. If you're designing AI policy based on Anthropic's findings, you might overemphasize job displacement in technical fields and underestimate demand for AI personal assistants. If you rely only on OpenAI's data, you might underestimate AI's impact on software development and overemphasize consumer applications.

The lesson: AI is versatile enough to simultaneously serve fundamentally different needs, from professional automation to consumer assistance to psychological support. Research findings reflect which aspects researchers choose to study, not limitations of the technology itself. Understanding what each study can and cannot tell you is crucial for applying research insights to your specific context.

The Automation Paradox

Anthropic finds that 77% of enterprise API usage follows "automation" patterns, where users delegate complete tasks to AI. This sounds like humans becoming passive recipients of AI output. But our research shows that the same behavioral pattern can represent conscious collaboration (a "Partner" relationship) or problematic dependency (an "Outsourcer" dynamic).

The distinction lies in psychological factors that large-scale behavioral studies can't measure: Does the person maintain awareness of AI's contribution? Can they still perform the task independently? Do they feel empowered or displaced by the interaction?

This reveals a fundamental limitation of large-scale conversation analysis: identical actions can have opposite psychological effects. A manager who consciously delegates routine analysis to focus on strategic decisions has a very different AI relationship than one who gradually loses analytical capabilities through over-reliance.

The Scale vs. Depth Tradeoff

Large-scale studies like Anthropic's and OpenAI's capture broad patterns across millions of users but miss the nuanced experiences that determine individual outcomes. They can tell you what people do with AI but not how it affects their sense of agency, creativity, or professional identity.

Small-scale studies like our own reveal psychological complexity but raise questions about generalizability. Do workshop observations reflect natural usage patterns? Can findings from early adopters predict mainstream experiences?

The answer isn't choosing one approach over another—it's recognizing that both are necessary. Behavioral data shows what's happening at scale. Psychological research reveals why individual experiences vary so dramatically within those broad patterns.

A Note on the Data

Understanding how each study collected and analyzed data is crucial for interpreting their findings. Each approach has inherent strengths and blind spots that shape what they can and cannot discover.

Conversation Logs (Anthropic, OpenAI)

Both Anthropic and OpenAI analyze actual conversations between users and AI systems—millions of real interactions captured as they happen naturally. This provides unprecedented access to authentic usage patterns at massive scale.

Strengths: Behavioral accuracy (what people actually do vs. what they say they do), natural usage contexts (not laboratory settings), and scale that enables demographic and geographic analysis. These studies can track changes over time and identify patterns that would be invisible in surveys or interviews.

Weaknesses: No direct access to user motivations, psychological states, or outcomes. Privacy constraints limit how deeply researchers can probe individual experiences. Automated classification systems must reduce complex human behavior to predetermined categories, potentially missing important nuances.

Best for: Economic impact analysis, usage pattern identification, demographic trends, and geographic adoption tracking.

The classification challenge here is significant. Both studies use AI systems to categorize millions of conversations into topics and intents. Anthropic maps conversations to O*NET occupational tasks—a framework designed for job analysis, not AI usage. OpenAI creates their own "Asking/Doing/Expressing" taxonomy. These choices profoundly shape results.

Consider a user asking ChatGPT to help brainstorm solutions to a work problem. Is this "Asking" (seeking information) or "Doing" (producing output)? Is it "Thinking Creatively" or "Making Decisions and Solving Problems"? Different classification schemes yield different insights about the same interaction.
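
To see how much the taxonomy itself drives the result, here is a deliberately crude toy classifier, nothing like either lab's production system, scoring that same brainstorming request against both kinds of scheme. The keyword lists are invented for illustration.

```python
# Toy illustration only: the same message classified under two different
# taxonomies. Keyword lists are invented; the real studies use LLM-based
# classifiers, but the scheme-dependence of the output is the same.

message = "Help me brainstorm solutions to a staffing problem at work."

# OpenAI-style intent taxonomy.
intent_scheme = {
    "Asking":     ["help me", "what is", "should i", "how do i"],
    "Doing":      ["write", "draft", "generate", "summarize"],
    "Expressing": ["i feel", "i think", "just venting"],
}

# O*NET-style work-activity taxonomy, as a task mapping might use.
activity_scheme = {
    "Thinking Creatively":                   ["brainstorm", "ideas", "imagine"],
    "Making Decisions and Solving Problems": ["solutions", "problem", "decide"],
}

def classify(text: str, scheme: dict) -> str:
    """Return the label whose keywords appear most often in the text."""
    text = text.lower()
    return max(scheme, key=lambda label: sum(kw in text for kw in scheme[label]))

print(classify(message, intent_scheme))    # Asking
print(classify(message, activity_scheme))  # Making Decisions and Solving Problems
```

A human rater could just as defensibly label this request "Doing" (the user wants output) or "Thinking Creatively" (it's a brainstorm), which is exactly how the choice of scheme shapes the headline percentages.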

Workshop Observations (Our Research)

We take a fundamentally different approach: direct observation of people learning to use AI in structured environments, supplemented by analysis of first-person online accounts.

Strengths: Access to psychological processes, adaptation tracking over time, and ability to capture experiences that users might not articulate unprompted. Researchers can observe the gap between what people intend and what actually happens during AI interaction.

Weaknesses: Small sample sizes raise generalizability questions. Workshop settings may not reflect natural usage patterns. Observer effects could influence behavior. The focus on early adopters and workshop participants may miss mainstream experiences.

Best for: Understanding individual variation, relationship dynamics, psychological adaptation patterns, and the gap between conscious intentions and unconscious changes.

The Representation Problem

All three studies can speak only for their specific user populations. Anthropic's Claude users skew technical and professional. OpenAI's ChatGPT captures broader consumer adoption but still represents people willing to try new AI tools. Our workshops attract people curious enough about AI to attend educational sessions.

None represents "the general population" interacting with AI. They represent early adopters, technical users, and the curious—groups whose experiences may not predict how AI integration unfolds as the technology becomes truly mainstream.

This matters because early adoption patterns often don't persist. The gender gap that OpenAI documents closing (from 80% male to gender parity) shows how usage patterns can shift rapidly as technology moves from early adopters to broader populations.

Evaluating the Studies in Detail

Let's examine each study's methodology, limitations, and contributions with the same analytical rigor.

Anthropic Economic Index

Why it's important: First comprehensive economic analysis of enterprise AI deployment at scale, providing hard data on business AI adoption patterns rather than speculation or surveys.

Purpose: Track AI's economic impact through task-based analysis and geographic adoption patterns.

Headline result: 36% of usage involves coding/mathematical tasks, with 77% of enterprise API usage following automation patterns.

Timing: Geographic analysis shows AI adoption following historical technology diffusion patterns but at unprecedented speed.

Primary methodology: Task classification using O*NET occupational database, mapping millions of conversations to work activities and analyzing automation vs. augmentation patterns.

Limitations: The O*NET framework was designed for job analysis, not AI interaction analysis. "Automation" classification based on conversation patterns may miss psychological factors that determine whether delegation feels empowering or displacing. Enterprise API users may not represent broader AI adoption patterns.

What's interesting: Geographic concentration mirrors previous technology adoption (wealthy regions first, gradual diffusion) but compressed into months rather than years. The finding that complex tasks require longer context inputs suggests information access, not just model capability, may limit sophisticated AI deployment.

What's useful: Shows AI following familiar economic patterns while revealing specific bottlenecks (context requirements, information access) that businesses need to address for effective implementation.

OpenAI ChatGPT Study

Why it's important: Largest consumer AI usage study to date, tracking behavior across 700+ million users and 2.5+ billion daily messages with demographic breakdowns.

Purpose: Understand consumer adoption patterns, demographic expansion, and intent classification across the world's most-used AI chatbot.

Headline result: 70% of usage is non-work related and growing faster than work applications, with Asking (seeking advice) the most common and highest-rated interaction type.

Timing: Rapid demographic expansion with non-work usage outpacing professional applications over the study period.

Primary methodology: Intent-based classification (Asking/Doing/Expressing) combined with conversation topic analysis and privacy-preserving demographic matching.

Limitations: Automated classification of "intent" may miss context that determines whether interactions feel meaningful to users. Demographic analysis based on name patterns and self-reported age has inherent uncertainties. Focus on consumer plans excludes enterprise usage patterns.
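
To make that demographic uncertainty concrete, here is a toy sketch of name-based inference, not OpenAI's actual (unpublished) method: a lookup with explicit confidence, where ambiguous or unlisted names simply stay unclassified. Every name and prior below is invented.

```python
# Toy sketch of name-based gender inference with explicit uncertainty.
# Not OpenAI's method; the names and confidence priors are invented.
NAME_PRIORS = {
    "maria": ("female", 0.98),
    "james": ("male", 0.97),
    "alex":  ("male", 0.55),   # ambiguous: used across genders
}

def infer_gender(first_name: str, threshold: float = 0.90) -> str:
    """Classify only when the prior clears the threshold; otherwise abstain."""
    label, confidence = NAME_PRIORS.get(first_name.lower(), ("unknown", 0.0))
    return label if confidence >= threshold else "unclassified"

for name in ["Maria", "Alex", "Priya"]:
    print(name, "->", infer_gender(name))
# Maria -> female | Alex -> unclassified | Priya -> unclassified
```

Any population-level gender split then depends on how large the "unclassified" bucket is and whether those users resemble the classified ones.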

What's interesting: AI becoming mainstream consumer technology rather than just professional tool, with gender parity achieved remarkably quickly. The prominence of "Asking" behaviors suggests people value AI as advisor/consultant rather than just task executor.

What's useful: Demonstrates AI's evolution from technical tool to general-purpose technology for daily life. The high rating of "Asking" interactions suggests designing for consultation rather than just automation may improve user satisfaction.

Our Research at the Artificiality Institute

Why it's important: The only study of the three examining psychological relationship formation with AI systems and individual adaptation patterns.

Purpose: Map how people psychologically adapt to AI collaboration and what factors determine healthy vs. problematic relationships.

Headline result: Three psychological traits (Cognitive Permeability, Identity Coupling, Symbolic Plasticity) determine whether AI relationships feel empowering or displacing, with Symbolic Plasticity serving as a moderating factor.

Timing: Five psychological states from Recognition through Reconstruction, with non-linear movement between states based on context and individual differences.

Primary methodology: Direct observation of 1,000+ workshop participants, analysis of first-person online accounts, and identification of psychological patterns through qualitative analysis.

Limitations: Small sample relative to other studies raises generalizability questions. Workshop settings may not reflect natural usage patterns. Focus on conscious adaptation may miss unconscious integration patterns. Self-selected participants likely more reflective than general population.

What's interesting: Same AI interaction can create empowerment (Partner relationship) or dependency (Outsourcer dynamic) depending on psychological factors invisible to behavioral analysis. Our framework provides concrete guidance for conscious AI relationship management.

What's useful: Offers practical tools for individuals and organizations to navigate AI integration consciously rather than drifting into problematic patterns. Reveals why identical AI implementations produce such varied individual outcomes.

The Takeaway

Understanding these different research approaches helps you become a more sophisticated consumer of AI studies and make better decisions about your own AI integration.

For researchers: Each lens reveals different aspects of the same phenomenon. Economic analysis shows scale and distribution patterns. Behavioral studies reveal usage preferences and demographic trends. Psychological research uncovers individual variation and relationship dynamics. Combining approaches provides a fuller picture than any single methodology can capture.

For practitioners: Position yourself consciously across all three frameworks. Understand what tasks you're using AI for (Anthropic's lens), what outcomes you're seeking (OpenAI's lens), and how the relationship is affecting your identity and capabilities (our psychological lens). The same AI interaction might be economically productive and purposeful yet psychologically problematic, or the reverse.

For business leaders: Don't assume that productive task completion equals healthy AI integration. The enterprise automation patterns that Anthropic documents could represent efficient delegation or problematic dependency, depending on implementation. Consider both economic metrics and psychological factors when designing AI policies.

For policymakers: Economic data shows AI's scale and distribution, but psychological research reveals human impact. Effective AI governance requires understanding both productivity gains and individual adaptation patterns. The geographic concentration that both large-scale studies find suggests intervention may be needed to prevent AI from exacerbating existing inequalities.

For product designers: Users simultaneously want task completion (doing), information access (asking), and meaningful relationships (psychological connection). Products that address only one dimension may miss opportunities or create unintended negative experiences.

The Meta-Lesson

AI usage is too complex for any single research approach to capture completely. Be suspicious of studies—including these three—that claim to have "the" answer about how people use AI.

Each methodology brings inherent biases. Large-scale behavioral studies miss psychological nuance. Small-scale psychological research may not generalize. Enterprise-focused analysis doesn't predict consumer behavior. Consumer studies don't reveal workplace dynamics.

The most important insight may be that AI is becoming multiple things simultaneously: a productivity tool, a creative partner, an information source, a decision aid, and a psychological relationship. How people experience these different functions depends on context, individual differences, and conscious choices about integration approaches.

Rather than seeking definitive answers about "how people use AI," ask more specific questions: How do people in my context use AI? What outcomes are they seeking? How is it affecting their capabilities and sense of agency? What patterns am I seeing in my own AI relationships?

The research provides frameworks for these questions, not final answers. Use each lens to examine your own experience and make conscious choices about how AI fits into your work and life.

As AI capabilities expand and adoption deepens, we'll need continued research across all these dimensions—economic, behavioral, and psychological. The studies we have now capture early adoption patterns that may not predict mainstream integration. The most important finding across all three may be just how much individual and contextual variation exists in human-AI relationships.

Understanding this variation, rather than seeking universal patterns, may be the key to navigating AI's integration into human life successfully.


This analysis is based on: Anthropic's Economic Index Report (September 2025), OpenAI's "How People Use ChatGPT" (September 2025), and "How We Think and Live with AI: Early Patterns of Human Adaptation" by the Artificiality Institute (2025).
