I've been teaching a course called "Living with AI" this semester, and we keep circling back to a question that feels increasingly urgent: Who's actually in charge here?
Not in some dystopian, Skynet-takes-over sense. I mean in the mundane, everyday sense of how we're using these tools. Are we directing AI to serve our purposes, or are we subtly reshaping our purposes to fit what AI can do?
Enter Iain McGilchrist and the brilliant framework at the heart of his book The Master and His Emissary.
The Original Story
McGilchrist tells us about a wise master who sends out a clever emissary to handle specific tasks in the world. The emissary is good at his job—focused, analytical, efficient. But over time, he forgets his place. He starts to believe that his narrow, systematic way of seeing things is the only way. He mistakes the map for the territory, the representation for reality itself.
The emissary, McGilchrist argues, is like our left hemisphere—brilliant at breaking things down, categorizing, manipulating parts. The master is our right hemisphere—attuned to context, relationships, the living whole of experience.
When I first encountered this idea years ago, it felt like a useful metaphor for understanding cognition. Now, teaching about AI, it feels like a warning.
The New Emissaries
Here's what I'm noticing: AI tools are spectacular emissaries. They're narrow, focused, and efficient. They work with the known and fixed, analyzing parts and categorizing information at inhuman speed. They deal in explicit, literal data—the kind of thing that can be measured, optimized, and scaled.
Sound familiar? These are precisely the characteristics McGilchrist attributes to the left hemisphere, to the emissary.
And like any good emissary, AI can be incredibly useful. It can draft emails, summarize articles, generate code, analyze patterns in data. When it serves our broader vision—our "master" capacities for context, meaning, and embodied judgment—it's transformative.
But here's where it gets tricky.
When We Forget Who's Master
I'm watching my students (and myself, if I'm honest) start to defer to AI in ways that should concern us. Not because AI is dangerous, but because we're outsourcing the very capacities that make us human.
A student asks ChatGPT to write an essay about a poem. The AI produces something technically competent—it analyzes parts, categorizes themes, provides explicit interpretations. But it completely misses what the student might have discovered through their own broad, vigilant, open engagement with the text. It can't access the implicit meaning that emerges when you sit with ambiguity. It has no embodied experience to draw from.
The student gets a decent grade. The emissary has done its job. But the master—the student's own capacity for meaning-making—has been bypassed entirely.
Or consider how we're using AI in education more broadly. We're asking it to grade essays, detect plagiarism, personalize learning paths. All tasks that require narrow, focused analysis of known, fixed criteria. But education, at its best, is about attention to the new and unique in each student, about seeing context and relationships, about fostering the kind of living engagement with ideas that can't be reduced to metrics.
When we let AI's capabilities define what education is—when we optimize for what can be measured rather than what matters—the emissary has become the master.
The Trusting Students Connection
This connects directly to something I've been writing about for years: the importance of trusting students.
Traditional education often operates from a place of suspicion. We design systems to catch cheaters, to control outcomes, to manipulate student behavior through grades and penalties. This is emissary thinking—narrow, focused on parts (individual assignments, test scores), working with the known and fixed (rubrics, standards, policies).
But real learning requires master thinking. It requires broad, open trust that students are capable of growth. It demands attention to context and relationships—understanding that a student's work exists within the larger story of their development. It means being attentive to the new and unique rather than forcing everyone through the same standardized process.
AI can amplify whichever approach we choose.
If we use it to police and control—to detect AI-generated text, to automate grading, to enforce compliance—we're letting the emissary run the show. We're reducing education to what can be analyzed, categorized, and manipulated.
But if we use it as a genuine tool—to free up time for deeper conversations, to provide scaffolding for struggling students, to handle administrative tasks so we can focus on embodied, relational teaching—then we keep the master in charge.
Staying Master
So how do we stay master in our relationship with AI?
First, we need to recognize what AI is good at—and what it isn't. It excels at emissary tasks: analyzing, categorizing, working with explicit information. It's terrible at master tasks: grasping implicit meaning, understanding context, making embodied judgments about what matters.
Second, we need to use AI to serve our broader purposes, not let its capabilities define our purposes. Ask: "What am I trying to accomplish?" Then ask: "Can AI help with that?" Not the other way around.
Third, we need to cultivate our master capacities. Spend time with ambiguity. Practice seeing wholes, not just parts. Trust your embodied sense of what's meaningful, even when you can't quantify it. These are the muscles that atrophy when we defer too much to algorithmic thinking.
Finally, in education specifically, we need to design for trust, not control. Use AI to support student agency, not to police it. Focus on the living, unique relationship between teacher and student, not just the fixed, mechanical delivery of content.
The Choice
McGilchrist argues that Western culture has been increasingly dominated by left-hemisphere, emissary thinking for centuries. We've privileged what can be measured over what matters, representations over reality, control over connection.
AI didn't create this problem. But it can certainly accelerate it—if we let it.
The good news? We still get to choose. Every time we use these tools, we're making a decision about who's in charge. Every time we design a lesson, grade an assignment, or respond to a student, we're choosing between emissary thinking and master thinking.
The algorithm doesn't get to decide. We do.
Let's make sure we remember that.
What's your experience with AI as emissary vs. master? I'd love to hear your thoughts in the comments.