Sunday, May 11, 2025

Scaling Impact: Using AI to Create Just-in-Time Faculty Development Resources

When I first began experimenting with AI as a tool for faculty development, I wondered if it could truly capture the nuanced needs of our teaching community. After all, professional development has always been a deeply human endeavor – built on relationships, contextual understanding, and responsive design. Could a tool like Claude really help bridge the persistent gap between what faculty need and what our limited resources can provide?

The answer, I've discovered, is both more complex and more promising than I initially imagined.

The "PD Resource Creation" project emerged from a practical challenge: how to provide high-quality, contextualized professional development resources for faculty across diverse disciplines at Eastern Kentucky University without overwhelming our small faculty development team. The traditional approach – developing each resource from scratch through extensive research and writing – simply wasn't sustainable given the breadth of needs identified in our faculty needs assessment.

Bespoke Resources

The key has been creating a process that leverages AI's capacity for synthesis and organization while preserving the essential human elements that make professional development meaningful. This requires a careful balance – providing enough structure for the AI to produce valuable content while maintaining space for human expertise and judgment.

The process involves five steps (a minimal prompt-assembly sketch follows the list):

  1. Contextualization: Providing Claude with detailed information about EKU's specific context, including student demographics, regional characteristics, and institutional priorities

  2. Expert Guidance: Sharing established frameworks and research-based best practices in the specific area of teaching and learning being addressed

  3. Disciplinary Tailoring: Adapting resources for specific colleges and disciplines by incorporating relevant examples and applications

  4. Human Review: Having faculty experts review and refine the AI-generated resources, adding nuanced insights from their practical experience

  5. Faculty Feedback: Collecting user feedback on the resources to continually improve both the content and our prompting approach
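To make the first three steps concrete, here is a minimal sketch of how a contextualized prompt might be assembled and sent to Claude using the Anthropic Python SDK. The file names, model alias, prompt wording, and the topic/discipline values are illustrative placeholders of my own, not the project's actual materials.

```python
# Hypothetical sketch of steps 1-3: combine institutional context, an expert
# framework, and a disciplinary tailoring request into one prompt for Claude.
from pathlib import Path

import anthropic  # pip install anthropic


def build_resource_prompt(topic: str, discipline: str) -> tuple[str, str]:
    """Return (system_prompt, user_prompt) for one draft PD resource."""
    context = Path("eku_context.md").read_text()             # step 1: institutional context
    framework = Path(f"frameworks/{topic}.md").read_text()   # step 2: expert guidance

    system_prompt = (
        "You are drafting a faculty development resource for Eastern Kentucky "
        "University. Ground every recommendation in the institutional context "
        "and research framework provided."
    )
    user_prompt = (
        f"Institutional context:\n{context}\n\n"
        f"Research framework:\n{framework}\n\n"
        f"Draft a practical resource on '{topic}' tailored to faculty in "
        f"{discipline}. Include discipline-specific examples,"  # step 3: disciplinary tailoring
        " and flag places where a faculty reviewer should add lived classroom experience."
    )
    return system_prompt, user_prompt


client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
system_prompt, user_prompt = build_resource_prompt(
    "developing-student-self-advocacy", "Nursing"
)
draft = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model name
    max_tokens=4000,
    system=system_prompt,
    messages=[{"role": "user", "content": user_prompt}],
)
print(draft.content[0].text)  # the draft then goes to a faculty expert for step 4 review
```

Steps 4 and 5 stay deliberately outside the script: expert review and faculty feedback are where human judgment does its work.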

What's fascinating is how this process mirrors what we know about effective teaching. Just as students benefit from clear structures and expert guidance but need space for autonomy and personal meaning-making, our AI partnership works best when we provide clear parameters while leaving room for creative synthesis.

Analyzing the Results: What Works and What Doesn't

The project has yielded some remarkable successes, particularly in creating comprehensive resources on topics like "Developing Student Self-Advocacy Skills" and "Creating Supportive Learning Environments." These resources have provided faculty with research-based foundations, practical implementation strategies, and discipline-specific adaptations that would have required weeks of development time using traditional methods.

However, challenges remain. The AI sometimes produces content that sounds theoretically sound but lacks the nuanced understanding that comes from lived classroom experience. It may over-generalize research findings or suggest approaches that don't account for practical constraints. And perhaps most significantly, the resources sometimes lack the authentic voice and emotional resonance that can inspire faculty to try new approaches.

This is where human expertise remains irreplaceable. Our subject matter experts play a crucial role in refining these resources, adding concrete examples from their own teaching, adjusting recommendations based on what they know works in our specific context, and infusing the more technical sections with a sense of purpose and possibility.

Implications for Faculty Development Practice

This project has significant implications for how we approach faculty development in resource-constrained environments:

  1. Democratizing Access: By significantly reducing the time required to create comprehensive resources, we can address a wider range of faculty needs and disciplinary contexts

  2. Just-in-Time Support: Faculty can access specific guidance when they need it rather than waiting for scheduled workshop offerings

  3. Consistency with Customization: We can maintain consistency in quality and approach while tailoring resources to different disciplines and teaching contexts

  4. Sustainable Scaling: The approach allows our small team to have a broader impact without proportional increases in workload

  5. Continuous Improvement: Each iteration helps us refine our prompting approach, creating a virtuous cycle of improvement

Perhaps most importantly, this approach allows our human facilitators to focus their energy on the aspects of faculty development that genuinely require human connection – coaching conversations, observational feedback, community building, and addressing the emotional dimensions of teaching challenges.

Questions for Further Exploration

As this project continues to evolve, several questions guide my thinking:

  1. How do we effectively balance efficiency with authenticity in AI-assisted resource development?

  2. What are the ethical implications of using AI in creating content that guides teaching practice?

  3. How might we better capture and incorporate the tacit knowledge of experienced educators into these resources?

  4. What faculty development needs remain best addressed through fully human-created approaches?

  5. How does the use of AI in resource creation model the critical digital literacy we hope to develop in our students?

The journey of exploring this partnership continues to both challenge and inspire my thinking about what's possible in faculty development. I'm curious how others in faculty development roles are navigating similar questions and what insights you might share from your own experimentation with these emerging tools.

AI Use Acknowledgment: I acknowledge that I have used Generative AI tools in the analysis of professional development resources created with AI assistance. This blog post represents my own critical reflection and evaluation of this process, supported by AI-assisted analysis of patterns and themes across multiple resources developed through the project.

Saturday, May 10, 2025

Beyond Coding: Using AI to Enrich Qualitative Analysis of Faculty PD Needs

One of the persistent challenges in faculty development work is making sense of qualitative data from needs assessments. When I recently distributed a professional development survey to faculty across departments, I received nearly 200 responses with rich, nuanced open-ended comments. The quantity was overwhelming, but the potential insights were too valuable to reduce to mere word counts or simplistic themes.

This scenario presents a familiar tension in qualitative analysis: how to honor the complexity and nuance of faculty voices while still identifying actionable patterns? Traditional coding approaches, while methodologically sound, are extraordinarily time-intensive. Meanwhile, basic word-frequency analyses miss the contextual richness that makes qualitative data so valuable in the first place.

With these constraints in mind, I decided to experiment with using Claude, a large language model, as a collaborative analysis partner rather than simply a data processing tool. The results challenged my thinking about qualitative analysis in surprising and productive ways.

The Collaborative Analysis Process

Rather than approaching Claude as a replacement for human analysis, I used it as an initial sense-making tool and thought partner. The process unfolded through several iterative phases:

  1. Data preparation: I anonymized all responses, removing identifiable information while preserving contextual elements like department type and faculty rank that might inform interpretation (see the anonymization sketch after this list).

  2. Initial pattern recognition: I asked Claude to identify recurring themes, tensions, and outlier perspectives across the responses without imposing predefined categories.

  3. Dialogic refinement: Instead of simply accepting Claude's initial analysis, I engaged in a back-and-forth process, questioning categorizations, asking for evidence, and pushing for more nuanced interpretations.

  4. Counter-narrative exploration: Critically, I asked Claude to identify perspectives that might be missing or underrepresented in the data, and to highlight tensions between dominant narratives.

  5. Synthesis and verification: Finally, I returned to the raw data myself to verify key themes and ensure the analysis reflected the actual responses rather than algorithmic artifacts.
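For the data preparation phase, the sketch below shows roughly what I mean by removing identifiable information while keeping interpretive context. The column names, file paths, and roster list are hypothetical stand-ins for the actual survey export.

```python
# Illustrative anonymization pass: strip emails and known names from open-ended
# responses while preserving department type and faculty rank for interpretation.
import csv
import re

EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")


def anonymize(text: str, known_names: list[str]) -> str:
    """Replace email addresses and any roster names appearing in a response."""
    text = EMAIL_RE.sub("[email]", text)
    for name in known_names:
        text = re.sub(re.escape(name), "[name]", text, flags=re.IGNORECASE)
    return text


def prepare_responses(in_path: str, out_path: str, known_names: list[str]) -> None:
    """Write an anonymized copy of the survey export, keeping only needed fields."""
    with open(in_path, newline="", encoding="utf-8") as f_in, \
         open(out_path, "w", newline="", encoding="utf-8") as f_out:
        reader = csv.DictReader(f_in)
        writer = csv.DictWriter(
            f_out, fieldnames=["department_type", "faculty_rank", "response"]
        )
        writer.writeheader()
        for row in reader:
            writer.writerow({
                "department_type": row["department_type"],  # contextual elements preserved
                "faculty_rank": row["faculty_rank"],
                "response": anonymize(row["response"], known_names),
            })


# Example: prepare_responses("survey_raw.csv", "survey_anon.csv", roster_names)
```

The anonymized file, not the raw export, is what enters the conversation with Claude for the pattern-recognition and dialogic phases.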

The goal throughout was not to automate analysis but to enrich it—to use the model's pattern recognition capabilities to identify connections I might otherwise miss, while maintaining my responsibility for interpretive decisions.

What Emerged: Beyond Binary Needs

What surprised me most was how this approach revealed complex tensions in faculty needs that binary survey questions would have obscured.

For instance, a traditional coding approach might have simply tallied mentions of "time constraints" as a barrier to implementing new teaching approaches. But the collaborative analysis revealed a more nuanced reality: many faculty were actually expressing something more complex than mere time scarcity.

They described a tension between their intrinsic desire to innovate pedagogically and institutional reward structures that didn't recognize that work. As one faculty member put it, "It's not really about not having hours in the day—it's about what those hours are valued for."

This distinction has significant implications for how we design professional development offerings. Rather than simply making workshops more "efficient" (the solution to a time scarcity problem), we need to address the deeper tension between faculty values and institutional incentives.

Other rich patterns emerged as well:

  • Faculty across ranks expressed a desire for more engagement with learning science research, but wanted it translated into disciplinary contexts rather than presented generically
  • Early-career faculty sought technical skill development, while mid-career faculty more often requested community and reflective practice opportunities
  • Nearly a third of respondents described a tension between student expectations and evidence-based teaching approaches

Each of these insights suggests specific directions for our professional development programming that wouldn't have been as clear from a more traditional analysis.

Methodological Reflections: Trust and Verification

As with my classroom ungrading practices, I found that approaching this analysis through a lens of "trust, not compulsion" yielded the richest results. I trusted the AI to help identify patterns but didn't compel it to fit responses into predetermined categories. Simultaneously, I maintained a critical stance, consistently verifying suggested patterns against the raw data.

There were limitations, of course. The model occasionally overweighted particularly eloquent responses and sometimes missed subtle disciplinary differences in how faculty expressed similar needs. These limitations reinforced the importance of the human analyst in the process—the partnership worked precisely because it combined different analytical strengths.

Implications for Faculty Development Practice

This approach to needs assessment analysis has immediate implications for how we design professional development offerings:

  1. Moving beyond binary needs: Instead of designing workshops based on surface-level needs, we can address the deeper tensions faculty experience between competing values and constraints.

  2. Differentiating by career stage: The analysis revealed distinct developmental trajectories that suggest tailored programming for different faculty career stages.

  3. Creating disciplinary translations: Faculty consistently expressed desire for learning science principles translated into their specific disciplinary contexts.

  4. Addressing institutional tensions: Many faculty needs can only be addressed through institutional policy changes, not just better workshops.

Perhaps most importantly, this approach models a commitment to truly hearing faculty voices rather than simply tallying their "votes" for predefined options. It's an approach to professional development needs assessment that aligns with the same principles I advocate for in student assessment: prioritizing rich understanding over reductive measurement.

Questions for Further Exploration

As I continue to refine this approach to qualitative analysis, several questions remain:

  • How might we combine this approach with more traditional methods to maximize both depth and methodological rigor?
  • What are the ethical considerations of using AI tools in analyzing faculty responses about their professional needs?
  • How can we most effectively share these complex findings with institutional decision-makers accustomed to more simplified data presentations?
  • What does this approach suggest about how we might analyze student feedback in our courses?

I'm particularly interested in how others are experimenting with similar approaches to qualitative analysis in educational contexts. Are you finding ways to leverage AI tools while maintaining intellectual integrity and methodological soundness? What tensions or opportunities have you encountered?

I'd love to hear your experiences as we collectively reimagine how we understand and respond to faculty professional development needs.

AI Use Acknowledgment: I acknowledge that I have used Generative AI tools in the preparation of this blog post, specifically Claude. The AI tool was utilized in analyzing qualitative response data from faculty surveys and helping to structure this narrative about the process. I have verified the accuracy of all representations of the analysis process and findings through comparison with the original survey data. This analysis represents my own critical thinking and evaluation, supported by AI-assisted data processing and narrative development.

Sunday, February 16, 2025

Rethinking Assessment: Self-Determination Theory and Ungrading in an Online Social Psychology Course

In Spring 2025, I implemented a novel approach to teaching PSY 300 Social Psychology that prioritized student autonomy and intrinsic motivation. Drawing from self-determination theory (SDT), which emphasizes autonomy, competence, and relatedness as key drivers of motivation, the course was designed to give students unprecedented control over their learning journey.

Course Design

The course structure departed from traditional grading methods by adopting an "ungrading" approach, in which students propose their own midterm and final grades based on their engagement and learning. Rather than imposing strict deadlines, the course used "Maximum Benefit Completion Dates" – suggested timelines intended to optimize learning, with no penalty for later completion.

Four core components structured the learning experience:

  1.  Engagement with course materials through Perusall, a social annotation platform
  2.  Collaborative group work on shared documents
  3.  Regular reflection through learning journals
  4.  Optional, repeatable quizzes for self-assessment and learning reinforcement

The quizzes were explicitly framed as learning tools rather than evaluation instruments. Students could take them as many times as they wished, with no impact on their grade unless they chose to include their quiz engagement in their grade justification.

Midterm Reflection Process

At midterm, students were asked to reflect deeply on their learning experience through several prompts:

  • How they benefited from engaging with course texts on Perusall
  • How they benefited from collaborative group work
  • How they benefited from the learning journal process
  • Their progress toward course learning outcomes
  • How the course affected their life
  • External factors influencing their engagement

Based on this comprehensive reflection, students proposed their own midterm grades, supporting their proposals with specific evidence of their engagement and learning. This process encouraged students to think metacognitively about their learning journey and take ownership of their academic progress.

Analysis of Midterm Reflections

Using Claude.ai to analyze the midterm reflection data revealed fascinating patterns in how students approached self-assessment. The grade distribution showed (a brief tally sketch follows the list):

  • A: 67 students (57.8%)
  • B: 36 students (31.0%)
  • C: 11 students (9.5%)
  • D/F: 2 students (1.7%)
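The distribution itself is simple arithmetic, but for transparency, here is a minimal sketch of the tally, assuming the self-proposed grades were exported as one letter per student and that D and F are pooled as in the summary above. The example counts in the comment mirror the figures reported here; the function is illustrative, not the project's analysis code.

```python
# Minimal tally of self-proposed grades, pooling D and F as "D/F".
from collections import Counter


def grade_distribution(grades: list[str]) -> dict[str, tuple[int, float]]:
    """Return {grade: (count, percent of all students)}, most common first."""
    pooled = ["D/F" if g in ("D", "F") else g for g in grades]
    counts = Counter(pooled)
    total = len(pooled)
    return {g: (n, round(100 * n / total, 1)) for g, n in counts.most_common()}


# With 67 A's, 36 B's, 11 C's, and 2 D/F's (116 students in all), this returns
# {'A': (67, 57.8), 'B': (36, 31.0), 'C': (11, 9.5), 'D/F': (2, 1.7)}.
```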

Key findings emerged from the analysis:

  • Students who reported high engagement with course materials typically assigned themselves higher grades
  • Those facing significant personal challenges often chose more moderate grades despite demonstrating meaningful learning
  • Students consistently considered both their engagement level and learning outcomes when self-assigning grades
  • Even students assigning themselves lower grades frequently reported significant learning benefits

Perhaps most notably, the flexible course structure appeared to encourage honest self-assessment. Students were remarkably candid about their participation levels and challenges, while still acknowledging their learning achievements.

Course Impact

The data revealed that removing traditional grading pressures had profound effects:

  • Students with anxiety disorders reported being able to learn without crippling stress
  • Those balancing work, family, and health challenges appreciated the flexibility to engage meaningfully despite life circumstances
  • Many students noted deeper engagement with the material when freed from grade pressure
  • The course structure fostered genuine peer learning and community building
  • Students used the optional quizzes as true learning tools rather than stress-inducing evaluations

This experience suggests that combining SDT principles with ungrading can create a learning environment that supports both academic achievement and student wellbeing. The high level of self-reported learning, even among students assigning themselves moderate grades, indicates that removing traditional grading structures may enhance rather than diminish educational outcomes.

The midterm reflection process itself served as a powerful learning tool, helping students articulate their growth and identify areas for improvement in the second half of the course. Their detailed reflections demonstrated remarkable self-awareness and a genuine focus on learning rather than grade achievement.

AI Use Acknowledgment:

I acknowledge that I have used Generative AI tools in the preparation of this blog post, specifically Claude.ai. The AI tool was utilized in the following ways:

  • Analyzing qualitative student reflection data to identify patterns and themes
  • Calculating grade distribution statistics from the dataset
  • Helping to structure and organize the findings into a coherent narrative
  • Assisting in clear articulation of course design principles and outcomes

I have verified the accuracy of all statistical data through multiple reviews of the original dataset, and all interpretations of student experiences have been carefully checked against the original reflection texts. This analysis represents my own critical thinking and evaluation of the course outcomes, supported by AI-assisted data processing.

This research was conducted with attention to student privacy; all data was analyzed in aggregate form with no identifying information.