Please note: This PhD seminar will take place in DC 2314 and online.
Frédéric Bouchard, PhD candidate
David R. Cheriton School of Computer Science
Supervisor: Professor Krzysztof Czarnecki
Autonomous vehicles benefit from an interpretable knowledge representation to specify and verify the safety, reliability, and legal conformance of the vehicle's behaviour. Rules are typically more interpretable than alternative approaches, so in previous work we presented the architecture of a fully rule-based behaviour planner that is successfully deployed in a real autonomous vehicle.
In this work, we formalize the counterexample-driven learning procedure that we use to construct a rule base that aims to make decisions with understandable causality. Starting from a set of requirements and a precise notion of their violation, we show how an expert can efficiently learn a driving policy from counterexample situations. During operation, these situations provide an explanation of why a rule applies. We demonstrate our computer-aided learning procedure using the publicly accessible CARLA simulator, which provides a realistic and non-trivial operational design domain.