Goodbye, Midjourney | Five Practices to Help Students Navigate AI Consciously
I've struggled with how to create images for the past couple of years. Without the budget for a...
Become More Essential with Artificial Intelligence. Our research-backed course helps experienced professionals co-evolve with AI through psychological strategy rather than technical training. Based on our Chronicle study of over 1,000 professionals, we've identified that success in AI collaboration depends on developing cognitive permeability, identity coupling, and symbolic plasticity. This intimate, four-session program, starting August 6th, guides participants through building conscious boundaries for AI collaboration while preserving the compressed expertise that makes human judgment irreplaceable.
As part of our series on AI and Expertise and ahead of our August course on Becoming More Essential with AI, this exploration reveals why early AI productivity research may be misleading us about AI's real workplace impact.
While controlled studies show impressive gains—40% faster coding, doubled output—research in actual workplace settings tells a different story. When experienced developers used cutting-edge AI tools on their real work, they slowed down by almost 20%. The culprit? Context.
Early productivity studies systematically strip away the invisible infrastructure that makes work actually work: knowing client preferences, understanding system quirks, recognizing when "urgent" doesn't mean urgent. This context is precisely what separates experts from novices, yet it's eliminated to create measurable tasks. The result is a measurement bias where AI appears most helpful on decontextualized work—exactly where human expertise matters least.
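To make the measurement-bias point concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it is invented for illustration (the share of real work that resembles benchmark tasks, the assumed speedup on those tasks, the assumed overhead on context-heavy work); none comes from the studies described above. It only shows how a large headline gain on decontextualized tasks can coexist with an overall slowdown once context-heavy work is weighted in.

```python
# Hypothetical illustration of measurement bias in productivity studies.
# All figures below are assumptions for the sketch, not results from any cited study.

benchmark_speedup = 1.40        # assumed: AI makes isolated, well-specified tasks 40% faster
contextual_slowdown = 1.20      # assumed: context-heavy work takes 20% longer with AI overhead

share_decontextualized = 0.30   # assumed share of real work that looks like benchmark tasks
share_contextual = 0.70         # the rest: client quirks, system history, implicit priorities

# Total time with AI, relative to working without it (lower = faster).
time_with_ai = (share_decontextualized / benchmark_speedup
                + share_contextual * contextual_slowdown)

print(f"Overall time with AI: {time_with_ai:.2f}x baseline")
# With these assumed weights: ~1.05x, i.e. roughly 5% slower overall,
# even though the benchmarked slice of work got 40% faster.
```

Under these invented weights, a study that measures only the benchmark-like slice reports a 40% gain, while the whole job gets slightly slower; which slice you measure determines the story you tell.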
The deeper insight: humans don't just navigate context, we actively create it, constantly redefining what good work looks like. This capacity to reshape the rules while playing the game may prove to be our most enduring advantage as AI capabilities advance.
At the Artificiality Summit 2024, Jamer Hunt, professor at the Parsons School of Design and author of Not to Scale, catalyzed our opening discussion on the concept of scale. This session explored how different scales—whether individual, organizational, community, societal, or even temporal—shape our perspectives and influence the design of AI systems. By examining the impact of scale on context and constraints, Jamer guided us to a clearer understanding of the appropriate levels at which we can envision and build a hopeful future with AI. This interactive session set the stage for a thought-provoking conference.
Jamer will be back at the Artificiality Summit this October 23-25.
Watch/Listen to Jamer's lecture...
Upcoming community opportunities to engage with the human experience of AI.
Foundational explorations from our research into life with synthetic intelligence.