
Expanding the Design Space for Explainable AI in Human-AI Interactions

By BostonCHI

BostonCHI in partnership with NU Center for Design at CAMD presents a hybrid talk by Katelyn Morrison

Date and time

Nov 3 · 6:00 PM EST

Location

Northeastern University College of Arts, Media and Design (CAMD)

11 Leon Street #102, Boston, MA 02115

Agenda

6:00 PM - 6:30 PM

Connect & Share: Pizza and Networking with the BostonCHI Community

6:30 PM - 7:00 PM

Speaker's Talk

7:00 PM - 8:00 PM

Q&A and post-event networking

Good to know

Highlights

  • 2 hours
  • In person

Refund Policy

Refunds up to 7 days before event

About this event

Science & Tech • Other


Expanding the Design Space for Explainable AI in Human-AI Interactions 

Explainable AI (XAI) has largely been designed and evaluated through the lens of four recurring metrics: Trust, Reliance, Acceptance, and Performance (TRAP). While these metrics are essential for developing safe and responsible AI, they can also trap us in a constrained design space for how explanations provide value in human-AI interactions. Furthermore, mixed results on whether XAI actually helps calibrate reliance or foster appropriate trust raise the question of whether we are designing XAI with the right goals in mind. This talk explores how we can expand the design space for XAI by moving beyond the TRAP goals. I will discuss how domain experts appropriate AI explanations for purposes unanticipated by designers, how AI explanations can mediate understanding between physicians and other stakeholders, and how we can repurpose generative AI as an explanation tool to support various goals. By reframing XAI as a practical tool for reasoning and human-human interaction, rather than solely as a transparency mechanism, this talk invites us to consider what's next for explainable AI.

About our speaker
Katelyn Morrison is a 5th-year Ph.D. candidate in the Human-Computer Interaction Institute at Carnegie Mellon University’s School of Computer Science, advised by Adam Perer. Her research bridges technical machine learning approaches and human-centered methods to design and evaluate human-centered explainable AI (XAI) systems in high-stakes contexts, such as healthcare. In recognition of her work at the intersection of AI and health, she was awarded a Digital Health Innovations Fellowship from the Center for Machine Learning and Health at Carnegie Mellon University. Her research experience spans industry, government, and non-profit organizations, including the Software Engineering Institute, Microsoft Research, and IBM Research. Before joining Carnegie Mellon University, Katelyn earned her bachelor’s degree in Computer Science with a certificate in Sustainability from the University of Pittsburgh. She is currently on the job market for faculty, postdoc, and research scientist positions.

Navigation: Enter the building through this gate and turn left.

Organized by

BostonCHI

$0 – $15