Date: Tuesday, May 3, 2016 (9am-5pm)
Location: Davis Centre, University of Waterloo
Cost: $115 (academics), $0 (industry)
Contact for information: Nancy Day (nday AT uwaterloo DOT ca)
Dr. Jeffrey Joyce is a principal of an engineering consultancy, Critical Systems Labs, which provides clients with expertise in the specification, analysis and review of critical systems. He has contributed to international standards in both aerospace (RTCA DO 178C) and automotive (ISO 26262). Dr. Joyce is co-inventor of US Patent 8618922 ("Method and system for ensuring operation of limited-ability autonomous driving vehicles"). His recent and current client projects address elements of technology used in aircraft engines, autonomous vehicles, submarines, hydro-electricity generation and high-energy physics. Dr. Joyce earned a doctorate in Computer Science from Cambridge University, with earlier degrees from the University of Calgary and the University of Waterloo.
This workshop is intended to stimulate thinking among researchers about the challenges of assuring safety for complex software-intensive systems. The emerging technology of self-driving cars will be used as context for this workshop. ISO 26262 is an international standard that addresses the functional safety of electronic control systems in passenger vehicles in production for use on roads. ISO 26262 requires the development of a safety case for safety-related items in the form of an "argument that the safety requirements for an item are complete and satisfied by evidence compiled from work products of the safety activities during development". The workshop is organized as a series of presentations, small group tasks in breakout sessions, and plenary sessions (see below for agenda). Attendees can be participants or observers. (All students must be participants.) Participants will be grouped in teams. Observers will be invited to "hover" around the edge of team discussions, available to serve as a "sounding board" for each team, but otherwise leaving the participants to develop their own solutions to the task.
The main meeting room will be DC 1304. The breakout rooms will be:
|9:00||DC 1304||Introduction (Nancy Day, Michal Antkiewicz)|
|9:15||DC 1304||Presentation 1 (Jeff Joyce): Objectives of this workshop followed by brief introduction to ISO 26262 and the concept of a safety case|
|9:30||Breakout rooms||Task 1: Participants will be divided into four teams. Each team will be assigned one of four hypothetical high-level descriptions of a system intended to provide passenger road vehicles with an autonomous driving capability. After moving to an assigned breakout room, each team has 30 minutes to elaborate their assigned high-level description with a preliminary sketch of their proposed architecture and design. Observers are invited to join a discussion about the high-level descriptions and what specific challenges they may pose for assuring safety.|
|10:15||DC 1304||Plenary 1: Each team will be asked to give a 5 minute presentation to the entire workshop about their proposed approach to the architecture and design of their system.|
|10:35||DC 1304||Coffee Break (10 minutes)|
|10:45||DC 1304||Presentation 2 (Jeff Joyce): A brief presentation on the formulation of safety claims for complex systems, including examples from other technical domains.|
| ||Breakout rooms||Task 2: Each team will be asked to develop a set of written safety claims for their system, taking into account how their proposed architecture and design will facilitate the achievement of these claims.|
|11:30||DC 1304||Plenary 2: Each team will be asked to give a 10 minute presentation of their proposed safety claims (with time for brief questions for the purpose of clarification only).|
|12:10||DC 1301||Lunch Break (30 minutes)|
|12:40||DC 1304||Presentation 3 (Jeff Joyce): A brief presentation on the formulation of safety evidence, i.e., what is the evidence that can/should be used as a basis of the safety arguments.|
|1:00||Breakout rooms||Task 3: Each team will review another team's proposed safety claims with respect to the following questions: (1) are the claims sufficiently precise to be meaningful and demonstrable? (2) is it reasonable to expect that the claims are achievable? (3) could the claims conflict with how the automaker might intend to market the vehicle? Observers will be invited to discuss each team's proposed safety claims.|
|1:20||DC 1304||Plenary 3: Each team will be asked to challenge another team's set of proposed safety claims, with an opportunity for a response by the team who wrote the safety claims. Observers will also be invited to make brief comments and/or ask questions about any of the proposed safety claims.|
|2:20||DC 1304||Coffee Break (20 minutes)|
|2:40||DC 1304||Presentation 4 (Jeff Joyce): A brief presentation on the development of safety arguments.|
|3:00||Breakout rooms||Task 4: Each team will prepare a sketch of a safety argument in support of at least one of their safety claims. (They may also revise their safety claims to take into account feedback received in the previous plenary.) Observers are invited to "hover" around the edge of team discussions, available to serve as a "sounding board" for each team, but otherwise leaving the participants to develop their own solutions to the task.|
|3:45||DC 1304||Plenary 4: Each team will have 10 minutes to present their sketch of a safety argument, with time for questions. An additional 20 minutes will be available for observers to comment on the safety arguments presented by each group.|
|4:45||DC 1304||Closing Remarks (Michal Antkiewicz, Nancy Day, and Krzysztof Czarnecki)|
The following high-level descriptions are for hypothetical systems. These descriptions were created exclusively for instructional purposes. Any similarity to a commercial product is not intended.
The system is a set of advanced driver assistance features whose combined behaviour is capable of providing a "hands-free" driving experience under a well-defined set of conditions, e.g., a divided freeway. These features collectively ensure that (1) the vehicle maintains a constant headway with a forward vehicle subject to the posted speed limit; (2) the vehicle stays centered in the current lane, except when performing a required steering maneuver; and (3) the vehicle will brake to avoid a collision with a forward vehicle. Depending on the vehicle model and options, different configurations of the system will enable/disable particular features. The validation process involves demonstrating that both functional and safety requirements for each feature are satisfied, including fault tolerance. The safety argument for this system is expected to rest primarily on a clearly stated limitation of the intended use of the system; in particular, that the driver must be fully aware of the driving situation at all times and ready to take back control of the vehicle within seconds; along with a "fail operational" design that ensures that the system will always alert the driver about a problem soon enough for the driver to intervene.
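The interaction of features (1) and (3) with the posted speed limit can be illustrated with a small sketch. This is purely an instructional toy, in keeping with the hypothetical nature of the description: the function name, the time-headway parameter, and the control strategy are assumptions, not part of any real product.

```python
def target_speed(lead_speed_mps, gap_m, headway_s, speed_limit_mps):
    """Toy speed selection for the hands-free feature set: follow the
    forward vehicle at a roughly constant time headway, never exceeding
    the posted speed limit, and demand a stop if the gap is gone."""
    if gap_m <= 0:
        return 0.0  # no gap left: brake to avoid a collision
    # Speed at which the current gap corresponds to the desired headway.
    headway_speed = gap_m / headway_s
    return min(lead_speed_mps, headway_speed, speed_limit_mps)
```

For example, with a 2 s headway, a 60 m gap, and a lead vehicle at 30 m/s, the speed-limit cap (27.8 m/s, i.e., 100 km/h) dominates; even this trivial rule already shows how feature behaviour is bounded by the stated limitations of intended use.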
Using a variety of sensor inputs, the software maintains an internal model of the situation including vehicles, road topology/geometry and obstacles. Given this internal model of the situation and a set of rules (e.g., don't exceed the posted speed limit, don't request more than 0.5 g of braking force except in an emergency), the software will continuously calculate a planned trajectory that both maximizes safety margins and ensures a comfortable passenger experience, e.g., the car will avoid weaving back and forth over the center line. Another layer of software decides on a combination of actions (steering, braking and propulsion) to follow the planned trajectory. The validation process involves generating sequences of simulated sensor inputs for millions of situations and checking that the planned trajectory violates neither safety constraints nor passenger comfort metrics. The safety argument for this system is expected to rest primarily on the belief that the precise calculation of a safe trajectory for the vehicle several times each second is far superior to the ability of an average driver to choose a safe trajectory.
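The rule check at the heart of this validation process can be sketched as follows. This is a minimal illustration of checking a planned trajectory against the two example rules from the description (posted speed limit, 0.5 g braking cap outside an emergency); the trajectory representation and function names are assumptions made for this sketch.

```python
G = 9.81  # standard gravity, m/s^2

def violates_rules(trajectory, speed_limit_mps, emergency=False):
    """Return True if any point of a planned trajectory breaks a rule.

    trajectory: list of (speed_mps, accel_mps2) points, where negative
    acceleration is braking.  Mirrors the validation step that checks
    each simulated situation's planned trajectory against the rule set.
    """
    for speed, accel in trajectory:
        if speed > speed_limit_mps:
            return True  # exceeds the posted speed limit
        if not emergency and accel < -0.5 * G:
            return True  # more than 0.5 g of braking outside an emergency
    return False
```

A validation harness of the kind described would run such a check over millions of simulated situations, alongside analogous checks for passenger comfort metrics.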
The software deployed on production vehicles is the product of a development process that uses machine-learning methods to "train" the software to mimic the behaviour of skilled drivers. The training phase is carefully designed to expose the software to every conceivable situation that requires a specific decision by a driver. The validation process for the final production version of the software involves comparing actions initiated by the software with driver actions over a very large variety of situations, with a minimum of 98% consistency between software-initiated actions and driver actions. The safety argument for this system is expected to rest primarily on the belief that an autonomous driving system that closely mimics skilled drivers is inherently a safe system.
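The 98% consistency criterion can be made concrete with a short sketch, assuming each recorded situation pairs one software-initiated action with one driver action; how actions are discretized and matched is left open in the description and is assumed here for illustration.

```python
def consistency(software_actions, driver_actions):
    """Fraction of situations where the software's action matches the
    skilled driver's action.  Both lists are aligned per situation."""
    matches = sum(s == d for s, d in zip(software_actions, driver_actions))
    return matches / len(driver_actions)

def passes_validation(software_actions, driver_actions, threshold=0.98):
    """The stated acceptance criterion: at least 98% consistency."""
    return consistency(software_actions, driver_actions) >= threshold
```

Note that a criterion of this form invites exactly the kind of challenge Task 3 asks for: whether a 2% divergence budget is meaningful depends heavily on which situations the divergences fall in.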
The system is a platform for applications ("apps") that the vehicle owner can purchase separately from a third party. The platform includes a built-in capability that maintains a dynamic model of the driving situation derived from a variety of sensor inputs and stored maps, e.g., the location of all nearby moving vehicles. The third-party apps are required to use this dynamic model as their exclusive source of sensor-based data. The platform also includes a built-in arbiter that resolves potential conflicts between apps, e.g., a request for braking always takes priority over a request for acceleration. The arbiter also enforces a fixed set of safety rules, e.g., a maximum level of braking force. Some third-party apps are officially certified by the vehicle manufacturer. The vehicle owner may also download other apps which are not officially certified. However, the use of non-certified apps is inhibited unless the owner accepts all responsibility for the installation of the non-certified app.
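The arbiter's two example rules (braking beats acceleration; braking force is capped) can be sketched as follows. The request representation, function name, and cap value are assumptions for illustration only.

```python
MAX_BRAKE = 0.5  # assumed cap, as a fraction of full braking force

def arbitrate(requests):
    """Resolve conflicting app requests into one action to actuate.

    requests: list of (kind, magnitude) pairs from apps, where kind is
    'brake' or 'accelerate'.  Braking always takes priority over
    acceleration, and the fixed safety rule caps braking force.
    """
    brakes = [m for kind, m in requests if kind == "brake"]
    if brakes:
        return ("brake", min(max(brakes), MAX_BRAKE))
    accels = [m for kind, m in requests if kind == "accelerate"]
    if accels:
        return ("accelerate", max(accels))
    return ("coast", 0.0)
```

An architecture of this kind concentrates the safety argument in the platform (dynamic model plus arbiter) rather than in the individually developed apps, which is presumably what makes certified and non-certified apps tractable to support side by side.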
David R. Cheriton School of Computer Science, University of Waterloo