Please note: This master’s thesis presentation will take place online.
Andy Yu, Master’s candidate
David R. Cheriton School of Computer Science
Supervisor: Professor Lukasz Golab
Explainable AI methods have been proposed to help interpret complex models, for example by assigning importance scores to model features or by perturbing the features in a way that changes the prediction. These methods apply to one model at a time, but in practice engineers usually select from many candidate models and hyperparameter settings. To assist with this task, we formulate a space of comparison operations for multiple models and demonstrate CAMEO, a web-based tool that explains consensus and expertise among multiple models. Users can interact with CAMEO, using a variety of models and datasets, to explore 1) consensus patterns, such as subsets of the test dataset or intervals within feature domains where models disagree; 2) data perturbations that would make conflicting models agree (and consistent models disagree); and 3) expertise patterns, such as subsets of the test dataset where a particular model performs surprisingly well or poorly compared with other models.
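
To give a flavour of the first idea, a "consensus pattern" can be illustrated in a few lines of scikit-learn. The sketch below is a hypothetical illustration, not CAMEO's implementation: it trains two candidate classifiers on the same data, then ranks intervals of one feature's domain by how often the models disagree. The dataset, models, and binning scheme are all assumptions chosen for brevity.

```python
# Hypothetical sketch of a consensus pattern: intervals of a feature's
# domain where two candidate models disagree most. Not CAMEO's code.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Two candidate models an engineer might be choosing between.
m1 = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)
m2 = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Per-example disagreement on the test set.
disagree = m1.predict(X_te) != m2.predict(X_te)

# Bin the first feature's domain into 5 equal-frequency intervals
# and report the disagreement rate in each.
feature = X_te[:, 0]
edges = np.quantile(feature, np.linspace(0, 1, 6))
for lo, hi in zip(edges[:-1], edges[1:]):
    mask = (feature >= lo) & (feature <= hi)
    if mask.any():
        print(f"[{lo:.1f}, {hi:.1f}]: {disagree[mask].mean():.0%} disagreement")
```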