Expanding the Design Space for Explainable AI in Human-AI Interactions
Explainable AI (XAI) has largely been designed and evaluated through the lens of four recurring metrics: Trust, Reliance, Acceptance, and Performance (TRAP). While these metrics are essential for developing safe and responsible AI, they can also trap us in a constrained design space for how explanations provide value in human-AI interactions. Furthermore, mixed results on whether XAI actually helps calibrate reliance or foster appropriate trust raise the question of whether we are designing XAI with the right goals in mind. This talk explores how we can expand the design space for XAI by moving beyond the TRAP goals. I will discuss how domain experts appropriate AI explanations for purposes unanticipated by designers, how AI explanations can mediate understanding between physicians and other stakeholders, and how we can repurpose generative AI as an explanation tool to support various goals. By reframing XAI as a practical tool for reasoning and human-human interaction, rather than solely as a transparency mechanism, this talk invites us to consider what’s next for explainable AI.
About our speaker
Katelyn Morrison is a 5th-year Ph.D. candidate in the Human-Computer Interaction Institute at Carnegie Mellon University’s School of Computer Science, advised by Adam Perer. Her research bridges technical machine learning approaches and human-centered methods to design and evaluate human-centered explainable AI (XAI) systems in high-stakes contexts, such as healthcare. In recognition of her work at the intersection of AI and health, she was awarded a Digital Health Innovations Fellowship from the Center for Machine Learning and Health at Carnegie Mellon University. Her research experience spans industry, government, and non-profit organizations, including the Software Engineering Institute, Microsoft Research, and IBM Research. Before joining Carnegie Mellon University, Katelyn earned her bachelor’s degree in Computer Science with a certificate in Sustainability from the University of Pittsburgh. She is currently on the job market for faculty, postdoc, and research scientist positions.