The primary goal of this tutorial is to raise awareness in the research community of Bayesian methods, their properties, and their potential benefits for the advancement of Reinforcement Learning. An introduction to Bayesian learning will be given, followed by a historical account of Bayesian Reinforcement Learning and a description of existing Bayesian methods for Reinforcement Learning. The properties and benefits of Bayesian techniques for Reinforcement Learning will be discussed, analyzed, and illustrated with case studies.

1. Introduction to Reinforcement Learning and Bayesian learning

2. History of Bayesian RL

3. Model-based Bayesian RL

3.1 Policy optimization techniques

3.2 Encoding of domain knowledge

3.3 Exploration/exploitation tradeoff and active learning

3.4 Bayesian imitation learning in RL

3.5 Bayesian multi-agent coordination and coalition formation in RL

4. Model-free Bayesian RL

4.1 Gaussian process temporal difference (GPTD)

4.2 Gaussian process SARSA

4.3 Bayesian policy gradient

4.4 Bayesian actor-critic algorithms

5. Demo

5.1 Control of an octopus arm using GPTD

- Pascal Poupart (http://www.cs.uwaterloo.ca/~ppoupart,
ppoupart[at]cs[dot]uwaterloo[dot]ca)

Pascal Poupart received a Ph.D. degree in Computer Science from the University of Toronto in 2005. Since August 2004, he has been an Assistant Professor in the David R. Cheriton School of Computer Science at the University of Waterloo. Poupart's research focuses on the design and analysis of scalable algorithms for sequential decision making under uncertainty (including Bayesian reinforcement learning), with applications to assistive technologies in eldercare, spoken dialogue management, and information retrieval. He has served on the program committee of several international conferences, including AAMAS (2006, 2007), UAI (2005, 2006, 2007), ICML (2007), AAAI (2005, 2006, 2007), NIPS (2007) and AISTATS (2007).

- Mohammad Ghavamzadeh
(http://www.cs.ualberta.ca/~mgh,
mgh[at]cs[dot]ualberta[dot]ca)

Mohammad Ghavamzadeh received a Ph.D. degree in Computer Science from the University of Massachusetts Amherst in 2005. Since September 2005, he has been a postdoctoral fellow in the Department of Computing Science at the University of Alberta, working with Prof. Richard Sutton. The main objective of his research is to investigate the principles of scalable decision making grounded in real-world applications. Over the last two years, Ghavamzadeh's research has focused mainly on using recent advances in statistical machine learning, especially Bayesian reasoning and kernel methods, to develop more scalable reinforcement learning algorithms.

- Yaakov Engel (http://www.cs.ualberta.ca/~yaki,
yaki[at]cs[dot]ualberta[dot]ca)

Yaakov Engel received a Ph.D. degree from the Hebrew University of Jerusalem in 2005. Since April 2005, he has been a postdoctoral fellow with the Alberta Ingenuity Centre for Machine Learning (AICML) in the Department of Computing Science at the University of Alberta.