The NIPS-08 Workshop on
Model Uncertainty and Risk in Reinforcement Learning

To be held at the Twenty-Second Annual Conference on Neural Information Processing Systems (NIPS-08)
December 13, 2008 in Whistler, British Columbia, Canada

The schedule and the list of talks and posters are now available.


Reinforcement Learning (RL) problems are typically formulated in terms of Stochastic Decision Processes (SDPs) or a specialization thereof, Markov Decision Processes (MDPs), with the goal of identifying an optimal control policy. In contrast to planning problems, RL problems are characterized by the lack of complete information concerning the transition and reward models of the SDP. Hence, algorithms for solving RL problems need to estimate properties of the system from finite data, and any such estimated quantity has inherent uncertainty. One of the interesting and challenging aspects of RL is that the algorithms have partial control over the data sample they observe, allowing them to actively control the amount of this uncertainty and potentially trade it off against performance.
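As a concrete illustration of the uncertainty in a quantity estimated from finite data, the following minimal sketch computes a Hoeffding confidence interval around an empirical mean reward. The reward samples and the confidence level are hypothetical, chosen purely for illustration.

```python
import math

def hoeffding_radius(n, delta, reward_range=1.0):
    # With probability >= 1 - delta, the true mean of n bounded samples
    # lies within +/- radius of their empirical mean (Hoeffding's inequality).
    return reward_range * math.sqrt(math.log(2.0 / delta) / (2.0 * n))

rewards = [0.9, 0.4, 0.7, 0.6]           # hypothetical observed rewards in [0, 1]
mean = sum(rewards) / len(rewards)
radius = hoeffding_radius(len(rewards), delta=0.05)
print(mean - radius, mean + radius)       # interval containing the true mean w.h.p.
```

Note how the radius shrinks as more samples are gathered, which is precisely what an RL algorithm exploits when it actively chooses where to collect data.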

Reinforcement Learning, as a field of research, has seen renewed interest over the past few years in methods that explicitly consider the uncertainties inherent in the learning process. Indeed, interest in data-driven models that take uncertainties into account goes beyond RL, to the fields of Control Theory, Operations Research, and Statistics. Within the RL community, relevant lines of research may be classified into the following (partially overlapping) sub-fields:

  • Bayesian RL. Bayesian methods attempt to model uncertainties explicitly using posterior probability distributions computed via Bayes' rule. Such Bayesian modeling may be used to estimate the MDP's transition and reward distributions, or to estimate other quantities more directly related to performance, such as the value function and the policy gradient. 
  • Risk-sensitive and robust dynamic decision making. These methods use information beyond the expected return to compute policies that are robust to inaccuracies in the estimated model. Such quantities include quantiles, as well as higher-order moments of the return random variable. A closely related family of methods uses expectations of non-linear mappings of the return as its measures of performance. 
  • RL with confidence intervals. This research is concerned with methods that employ frequentist measures of model uncertainty, based on confidence intervals. Much of this research focuses on on-line algorithms, whose performance is evaluated concurrently with the learning process. 
  • Applications of risk-aware and uncertainty-aware decision making. These arise in mission-critical tasks, finance, and other risk-sensitive domains, where uncertainties must be taken into account in order to establish a level of worst-case performance, or to guarantee a minimum level of performance achievable with high probability.
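As a small sketch of the Bayesian approach listed above, the following maintains a Dirichlet posterior over the transition probabilities of a single (state, action) pair. The conjugacy of the Dirichlet prior with multinomial observations makes the posterior update a simple count increment; the states, counts, and prior below are hypothetical.

```python
class DirichletTransitionModel:
    """Conjugate posterior over P(s' | s, a) for a fixed (state, action) pair."""

    def __init__(self, n_states, prior=1.0):
        # Symmetric Dirichlet prior: alpha_i = prior for every next state.
        self.alpha = [prior] * n_states

    def update(self, next_state):
        # Bayes' rule for the Dirichlet-multinomial pair reduces to
        # incrementing the pseudo-count of the observed next state.
        self.alpha[next_state] += 1.0

    def mean(self):
        # Posterior mean estimate of the transition distribution.
        total = sum(self.alpha)
        return [a / total for a in self.alpha]

    def variance(self, next_state):
        # Posterior variance quantifies remaining model uncertainty;
        # it shrinks as more transitions are observed.
        total = sum(self.alpha)
        m = self.alpha[next_state] / total
        return m * (1.0 - m) / (total + 1.0)


model = DirichletTransitionModel(n_states=3)
for s_next in [0, 0, 2, 0, 1]:   # hypothetical observed transitions
    model.update(s_next)
print(model.mean())              # posterior mean transition probabilities
```

The posterior variance, not just the mean, is what distinguishes this from a simple frequency estimate: it gives the learner a handle on how uncertain its model still is.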


This workshop aims to bring together researchers working in these and related fields, allow them to present their current research, and discuss possible directions for future work. We intend to focus on possible interactions between the sub-fields listed above, as well as on interactions with other related fields that lie outside the current RL mainstream.

We would like to have panel discussions on topics such as
  • "Models that work and those that don't." In this panel, participants will discuss specific applications and theoretical models and share experience regarding the effectiveness of different approaches. We will prepare several examples and test cases and discuss the approaches taken to handle uncertainty. 
  • "Benchmarks and challenges." The objective of this panel is to arrive at a few sample problems from different fields that encompass the core challenges of control under uncertainty. We will solicit initial proposals for such problems/domains/challenges before the workshop, to avoid the awkward silence that often accompanies such meetings.

Workshop Information



Workshop Schedule


Important Dates

Submissions Due 
October 30, 2008

Notification of Acceptance 
November 4, 2008

Workshop Date 
December 13, 2008

Workshop Format

This one-day workshop will consist of two to three invited talks and six to eight paper presentations. Two panel discussions will be held, when appropriate, to facilitate discussion of clusters of closely related talks. The remainder of the workshop will consist of a poster session to encourage more in-depth discussion. The eventual mix of contributed talks and posters will depend on the submissions.

Call for Contributions

The organizing committee is currently seeking technical papers (eight pages in the conference format) or abstracts (up to two pages) describing research relevant to the workshop. Submissions should be sent by email to Pascal Poupart at ppoupart@cs.uwaterloo.ca in Postscript, PDF, or MS Word format. Previously published work that is reworded, summarized, or extended may be submitted to the workshop; however, priority will be given to novel work. If the papers are of sufficient quantity and quality, we will seek to publish them as an edited book or a journal special issue.

Confirmed Invited Speakers

Workshop Organizing Committee

Yaakov Engel
Email: yakiengel@gmail.com
WWW: http://www.cs.ualberta.ca/~yaki

Mohammad Ghavamzadeh
INRIA Lille - Team SequeL
Email: mgh@cs.ualberta.ca
WWW: http://www.cs.ualberta.ca/~mgh

Shie Mannor
McGill University
Department of Electrical and Computer Engineering
Email: shie@ece.mcgill.ca
WWW: http://www.ece.mcgill.ca/~smanno1

Pascal Poupart
School of Computer Science
University of Waterloo
Email: ppoupart@cs.uwaterloo.ca
WWW: http://www.cs.uwaterloo.ca/~ppoupart

Last changed Monday, October 27, 2008