PhD Seminar • Artificial Intelligence | Human–Computer Interaction • Public Explanations in Human-AI Interaction

Thursday, April 9, 2026 10:00 am - 11:00 am EDT (GMT -04:00)

Please note: This PhD seminar will take place online.

Marvin Pafla, PhD candidate
David R. Cheriton School of Computer Science

Supervisors: Professors Mark Hancock, Kate Larson

Explainability is often framed as a property of an AI model, with explanations extracted from its internals and shown to users. In this seminar, I instead offer an embodied account of explainability, drawing on Dourish's work on embodied interaction and enactivist theories of cognition, in which understanding is created in use as people act on affordances in shared practice. Using demonstrations and conceptual analysis, I then discuss the ontological obstacles that arise when we instead try to "look inside" large language models: surrogate models import external abstractions that can be mistaken for the model's own, and a focus on internal reasoning overlooks the fact that explainers participate in their own understanding. I also briefly address how these obstacles appear in XAI practice, arguing that many so-called explanations are misnamed, which skews their purpose and can increase overreliance.

The main focus of the seminar is public explanation: how explanations can reorganize sense-making by publicly highlighting what matters, making relevant distinctions available for action, and providing affordances through which people can probe, coordinate, and repair behaviour in situated practice. On this view, explanations do not work primarily by revealing an internal chain of reasoning, but by making sense publicly available in ways that others can take up and use. Finally, I propose an empirical study that tests the effects of different kinds of AI explanations, including this form of ostensive explanation.


Attend this PhD seminar virtually on MS Teams.