Please note: This PhD seminar will take place online.
Vasisht Duddu, PhD candidate
David R. Cheriton School of Computer Science
Supervisor: Professor N. Asokan
Practitioners must demonstrate properties of an ML model, its training process, and its training data to a verifier (e.g., a regulator or customer). Such claims are typically communicated via ML property cards (e.g., model, data, and inference cards). However, these property cards are not verifiable, so a malicious practitioner can make false claims.
I propose ML property attestations: technical mechanisms that allow provers (e.g., model trainers) to demonstrate ML properties to verifiers while preserving the confidentiality of the proprietary model and data. I show that existing software-based attestations are either inefficient (e.g., cryptographic mechanisms) or ineffective and easily evaded (e.g., ML-based mechanisms). I then identify hardware-assisted mechanisms using trusted execution environments as an effective, efficient, and scalable alternative for providing ML property attestations. Such attestations can be used to build verifiable ML property cards, hold practitioners accountable for their claims, and demonstrate compliance with regulations.