Pascal Poupart's Current Projects


Efficient Algorithms for Partially Observable Markov Decision Processes

The design of automated systems capable of accomplishing complicated tasks is at the heart of computer science.  Such systems can be viewed abstractly as taking inputs from the environment and producing outputs toward the realization of some goals.  An important problem is the design of good control policies that produce suitable outputs based on the inputs received.  In many application domains (e.g., robotics, assistive technologies, spoken dialogue systems), the design of control policies is complicated by imprecise inputs from noisy sensors and uncertain outputs from stochastic actuators.

Partially observable Markov decision processes (POMDPs) provide a natural framework for modeling complex control problems with partial observability, uncertain action effects, incomplete knowledge of the environment dynamics and multiple interacting objectives.  To date, the use of POMDPs in real-world problems has been limited by the poor scalability of existing solution algorithms.  In collaboration with Craig Boutilier and Jesse Hoey at the University of Toronto, I have developed several new algorithms that significantly increase the size of POMDPs that can be tackled.  I have also applied my work on POMDPs to intelligent assistive technologies, spoken dialogue management, trust modeling in electronic markets and Bayesian reinforcement learning.
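
To make the framework concrete, here is a minimal sketch of the belief-state update that sits at the core of most POMDP algorithms: since the true state is hidden, the controller maintains a probability distribution over states and revises it after every action and observation.  The toy two-state models and all numbers below are illustrative, not drawn from any of the papers.

    import numpy as np

    # Belief update for a discrete POMDP:
    #   b'(s') is proportional to O(o | s', a) * sum_s T(s' | s, a) * b(s)
    def belief_update(b, T, O, a, o):
        """b: current belief over states, shape (S,)
        T: transition model, T[a][s][s'] = P(s' | s, a)
        O: observation model, O[a][s'][o] = P(o | s', a)"""
        predicted = b @ T[a]              # predict the next state: sum_s T(s'|s,a) b(s)
        unnorm = predicted * O[a][:, o]   # weight by the likelihood of the observation
        return unnorm / unnorm.sum()      # renormalize to a distribution

    # Toy problem: one "observe" action, two hidden states, a noisy sensor.
    T = np.array([[[1.0, 0.0],
                   [0.0, 1.0]]])          # observing does not change the state
    O = np.array([[[0.85, 0.15],
                   [0.15, 0.85]]])        # sensor reports the true state 85% of the time
    b = np.array([0.5, 0.5])
    b = belief_update(b, T, O, a=0, o=0)
    print(b)                              # belief shifts toward state 0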

Relevant papers include:

Intelligent Ubiquitous System to Help Elderly Persons with Dementia

It is estimated that 1 in 3 people over the age of 85 has dementia, with Alzheimer's disease accounting for 60-70% of cases.  At the onset of dementia, a family member will often assume the role of caregiver.  Unfortunately, as dementia worsens, the caregiver will experience greater feelings of burden, which frequently result in the care recipient being placed in a long-term care facility.  One way to relieve some of the financial and physical burden placed upon caregivers and health care facilities is a ubiquitous, autonomous system that supports aging in place, improving the quality of life of both the care recipient and the caregiver.

People with advanced dementia may have difficulty completing even simple activities of daily living (ADL) and require assistance from a caregiver to guide them through the steps needed to complete an activity.  Examples of ADL are handwashing, dressing, and toileting.  While there have been several cognitive aids designed to assist ADL completion, all of them require explicit feedback from the user, such as a button press, to indicate that a step has been completed.  This makes them unsuitable for persons with moderate-to-severe dementia, as this group does not possess the capacity to learn the required interactions.

In collaboration with Jesse Hoey, Alex Mihailidis, Geoff Fernie, Jennifer Boger and Craig Boutilier at the University of Toronto, I am designing robust control systems based on partially observable Markov decision processes (POMDPs) that will guide patients with memory deficiencies through the steps of handwashing by monitoring their progress with video cameras and, when necessary, prompting the next step with a verbal cue.  Modeling this problem as a POMDP allows the system to gradually adapt to each patient by learning their preferences through interaction and to robustly monitor patients despite noisy or incomplete sensor information such as obscured camera views.
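
As a rough illustration of how such a controller behaves, the sketch below reduces the policy to a simple threshold rule: stay quiet while the user appears to be progressing, and prompt the most likely next step otherwise.  The step names, probabilities and threshold are hypothetical placeholders; the actual system computes its policy from the POMDP rather than from a hand-set rule.

    # Hypothetical prompting rule standing in for a POMDP policy.
    STEPS = ["wet hands", "apply soap", "rinse hands", "dry hands"]

    def choose_prompt(step_belief, p_stalled, stall_threshold=0.7):
        """step_belief: dict mapping a step name to the probability that the
        user is currently at that step; p_stalled: probability that the user
        has stopped making progress.  Returns a verbal cue, or None to keep
        observing silently."""
        if p_stalled < stall_threshold:
            return None                                # likely progressing: do not interrupt
        current = max(step_belief, key=step_belief.get)
        i = STEPS.index(current)
        if i + 1 < len(STEPS):
            return "Please " + STEPS[i + 1] + " now."  # cue the next step
        return None

    print(choose_prompt({"wet hands": 0.1, "apply soap": 0.8,
                         "rinse hands": 0.05, "dry hands": 0.05}, p_stalled=0.9))
    # -> Please rinse hands now.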

Relevant papers include:


Spoken Dialogue Management

Spoken dialogue systems help users achieve some goal through spoken language.  Within a spoken dialogue system, a dialogue manager interprets evidence from the conversation and decides what system action to take to reliably and efficiently satisfy a user's goal.  Actions might include asking a question, confirming a user's goal, querying a database, or stating information.

The dialogue management task is complex for several reasons.  First, the system observes the user's actions via automated speech recognition and language parsing: imperfect technologies which corrupt the evidence available to the system.  Second, each user action (even if it could be observed accurately) provides incomplete information about the user's goal, so the system must assemble evidence over time to infer that goal.  Because the user might change their goal at any point, inconsistent evidence could be due either to a channel (speech recognition) error or to a changed user goal, so deciding how to interpret conflicting evidence is a challenge.  Finally, the system must trade off the "cost" of gathering additional information (increasing its certainty of the user's goal, but prolonging the conversation) against the "cost" of committing to an incorrect user goal.  That is, the system must perform planning to decide what sequence of actions will best achieve the user's goal despite having imperfect information about that goal.  For all of these reasons, dialogue management can be regarded as planning under uncertainty.
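
The gather-versus-commit trade-off can be made concrete with a one-step (myopic) cost comparison.  The costs and the assumed information gain below are invented for the example; a real dialogue manager plans over whole action sequences rather than a single step.

    # Myopic sketch of the trade-off between asking and committing.
    def best_action(p_goal, cost_ask=1.0, cost_wrong_commit=10.0, info_gain=0.2):
        """p_goal: belief that the top hypothesis is the user's true goal.
        Compare committing now against asking one clarifying question,
        which costs a turn but sharpens the belief."""
        expected_commit = (1.0 - p_goal) * cost_wrong_commit
        p_after_ask = min(1.0, p_goal + info_gain)     # assumed effect of the question
        expected_ask = cost_ask + (1.0 - p_after_ask) * cost_wrong_commit
        return "commit" if expected_commit <= expected_ask else "ask"

    for p in (0.5, 0.8, 0.95):
        print(p, best_action(p))   # ask while uncertain, commit once confident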

In collaboration with Jason Williams and Steve Young at Cambridge University, I am developing robust dialogue managers based on partially observable Markov decision processes (POMDPs).

Relevant papers include:


Trust Modeling in Electronic Markets

Trust is a desirable property of any market because it reduces the friction of doing business.  A good example of the ease that trust provides is a business agreement sealed with a handshake instead of a legal contract.  The success of trust in streamlining transactions in traditional markets motivates the search for a comparable notion of trust in emerging electronic markets, which can be populated by agents that automate the transactions between buyers and sellers.

In collaboration with Kevin Regan and Robin Cohen at the University of Waterloo, I am developing trust models based on partially observable Markov decision processes.
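
For a flavour of probabilistic trust modeling, the snippet below maintains a standard Beta-distribution belief over a seller's reliability and updates it after each transaction.  This is the classic beta-reputation idea, shown only as background; it is not the POMDP-based model under development.

    # Standard Beta-distribution reputation update (illustrative background only).
    def update_trust(alpha, beta, outcome_good):
        """Belief over a seller's reliability is Beta(alpha, beta);
        each observed transaction adds one pseudo-count."""
        if outcome_good:
            alpha += 1
        else:
            beta += 1
        return alpha, beta

    alpha, beta = 1, 1                                 # uniform prior over reliability
    for good in (True, True, False, True):
        alpha, beta = update_trust(alpha, beta, good)
    print("expected reliability:", alpha / (alpha + beta))   # 4/6, about 0.67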

Relevant papers include:

Preference Elicitation

In many situations, a set of hard constraints encodes the feasible configurations of some system or product, over which multiple users have distinct preferences.  However, making suitable decisions requires that a specific user's preferences for the different configurations be articulated or elicited, a process generally acknowledged to be onerous.

In collaboration with Craig Boutilier (University of Toronto), Relu Patrascu (University of Toronto) and Dale Schuurmans (University of Alberta), I have developed interactive algorithms to incrementally elicit user preferences.
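
As a toy illustration of the incremental flavour of such algorithms (not the specific algorithms in the papers below), the sketch repeatedly asks the user to compare two feasible configurations and prunes the loser, until one candidate remains or the query budget runs out.

    # Toy incremental elicitation loop over feasible configurations.
    def elicit(feasible, ask_user, max_queries=10):
        """feasible: configurations that already satisfy the hard constraints.
        ask_user(a, b) -> True if the user prefers a over b."""
        candidates = list(feasible)
        queries = 0
        while len(candidates) > 1 and queries < max_queries:
            a, b = candidates[0], candidates[1]
            winner = a if ask_user(a, b) else b        # query the user
            candidates = [winner] + candidates[2:]     # prune the loser
            queries += 1
        return candidates[0]

    # Example: a user who simply prefers cheaper configurations.
    configs = [{"price": 900}, {"price": 700}, {"price": 800}]
    print(elicit(configs, lambda a, b: a["price"] < b["price"]))   # {'price': 700}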

Relevant papers include:

Bayesian Reinforcement Learning

Reinforcement learning (RL) was originally proposed as a framework to allow agents to learn in an online fashion as they interact with their environment.  Existing RL algorithms fall short of this goal because the amount of exploration required is often too costly and/or the amount of computation is too large for timely online learning.  As a result, RL is mostly used for offline learning in simulated environments.

In collaboration with Nikos Vlassis (University of Amsterdam) and Kevin Regan (University of Waterloo), I am working on the design of effective online learning algorithms that are computationally efficient while minimizing the amount of exploration.  We are pursuing a Bayesian model-based approach, framing RL as a partially observable Markov decision process.  This approach has the advantage of naturally optimizing the exploration/exploitation tradeoff and allowing domain experts to encode informative priors that can significantly reduce the amount of exploration needed in practice.
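
The sketch below shows the Bayesian bookkeeping behind this approach under simplifying assumptions: a Dirichlet posterior over each state-action pair's transition distribution, a conjugate count update, and model sampling in the style of Thompson sampling.  Reward learning, the planning step and the actual algorithms under development are omitted.

    import numpy as np

    S, A = 3, 2                          # toy state and action spaces
    # Dirichlet parameters: prior pseudo-counts plus observed transition counts.
    # A uniform prior is used here; an informative expert prior would instead
    # concentrate pseudo-counts on the transitions believed to be likely.
    counts = np.ones((S, A, S))

    def observe(s, a, s_next):
        counts[s, a, s_next] += 1.0      # conjugate Dirichlet posterior update

    def sample_model():
        """Draw one plausible transition model from the posterior, as in
        Thompson-sampling style exploration."""
        return np.array([[np.random.dirichlet(counts[s, a]) for a in range(A)]
                         for s in range(S)])

    observe(0, 1, 2)
    print(sample_model()[0, 1])          # sampled P(. | s=0, a=1), tilted toward s'=2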

Relevant papers include:


Ontology Learning

An ontology consists of a database of concepts and relationships encoding general knowledge in some domain.  Ontologies can be used as knowledge references by humans or by automated systems for natural language understanding.  Traditionally, ontologies are hand-coded, making them quite expensive to build and maintain.  Alternatively, one could automatically "learn" ontologies from large corpora of text.  However, this is a very difficult task given the inherent uncertainty of semantic analysis.

In collaboration with Andy Chiu and Chrysanne DiMarco at the University of Waterloo, I am working on algorithms for ontology learning and, more generally, for automated semantic analysis.
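
As a small illustration of why corpus-based extraction is attractive yet brittle, the snippet below applies the classic Hearst pattern "X such as Y" to harvest candidate is-a pairs.  This is a textbook heuristic shown for flavour, not the algorithms being developed, and its rigidity hints at the uncertainty a full semantic analysis must handle.

    import re

    # Classic Hearst-pattern heuristic: "X such as Y" suggests Y is-a X.
    # Single-word matches only; real text needs far more robust analysis.
    HEARST = re.compile(r"(\w+) such as (\w+)", re.IGNORECASE)

    def extract_isa(text):
        """Return candidate (hyponym, hypernym) pairs found in the text."""
        return [(y, x) for x, y in HEARST.findall(text)]

    print(extract_isa("He studied diseases such as dementia in older adults."))
    # -> [('dementia', 'diseases')]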

Relevant papers include: