Please note: This master’s thesis presentation will be given online.
Nivasini Ananthakrishnan, Master’s candidate
David R. Cheriton School of Computer Science
Supervisor: Professor Shai Ben-David
Quantifying the probability of a label prediction being correct on a given test point or a given sub-population enables users to better decide how to use and when to trust machine-learning-derived predictors. In this work, combining aspects of prior work on conformal prediction and selective classification, we provide a unifying framework for confidence requirements that distinguishes between various sources of uncertainty in the learning process as well as various region specifications. We then consider a set of common prior assumptions on the data generation process and show how these assumptions allow learning justifiably trusted predictors.
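(For background: the conformal prediction literature referenced above, not the thesis's own framework, typically formalizes such confidence requirements as a marginal coverage guarantee. A set-valued predictor C_alpha built from exchangeable calibration data satisfies, for a user-chosen error level alpha,

  \Pr\big[\, Y_{\mathrm{test}} \in C_{\alpha}(X_{\mathrm{test}}) \,\big] \;\ge\; 1 - \alpha .

The region specifications mentioned in the abstract refine where such a guarantee is required, e.g., on a single test point versus a sub-population.)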
To join this master’s thesis presentation on Zoom, please go to https://us02web.zoom.us/j/81290732372?pwd=QmEvSHcvaTRuZFhHQjBZUGhBaWhtQT09.