Assisted Handwashing using a Partially Observable Markov Decision Process (POMDP)

Older adults living with cognitive disabilities (such as Alzheimer's disease or other forms of dementia) have difficulty completing activities of daily living (ADLs). They forget the proper sequence of tasks that need to be completed, or they lose track of the steps they have already completed. The current solution is to have a human caregiver assist the patient at all times, prompting them through each task or reminding them of their situation. This dependence on a caregiver is difficult for the patient, and can lead to anger and feelings of helplessness, particularly for private ADLs such as using the washroom.

Here we present our real-time system for assisting persons with dementia during handwashing. Assistance is given in the form of verbal and/or visual prompts, or through the enlistment of a human caregiver's help. The system uses only video inputs, and combines a Bayesian sequential estimation framework for tracking the hands and towel with a decision-theoretic framework for computing policies of action -- specifically a partially observable Markov decision process (POMDP). A key element of the system is the ability to estimate and adapt to user states, such as awareness, responsiveness and overall dementia level.

This project is part of the COACH project.

Overall System

[Figure: overall system architecture]
The overall system works as shown above. Video is grabbed by an overhead Point Grey Research Dragonfly II IEEE-1394 camera and fed to a hand and towel tracker. The tracker reports the positions of the hands and towel to a belief monitor, which estimates where in the task the user currently is: what they have managed to do so far, and what their internal mental state is. This belief about the user's state is then passed to the policy. The policy maps belief states into actions: audio-visual prompts or calls for human assistance.
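The belief monitor described above is a Bayesian filter over the POMDP's hidden state. A minimal sketch of one filtering-plus-policy step, assuming a discrete state space with transition model T and observation model O (the matrices and dimensions here are invented for illustration; the actual system's models are far richer):

```python
import numpy as np

def belief_update(belief, T, O, action, obs):
    """One Bayesian filtering step: push the belief through the
    transition model for the chosen action, weight each resulting
    state by the likelihood of the observation, and renormalize."""
    predicted = T[action].T @ belief          # sum_s T(s'|s,a) b(s)
    updated = O[action][:, obs] * predicted   # multiply by O(o|s',a)
    return updated / updated.sum()

# Toy example: 3 task steps, 2 tracker observations,
# 2 actions (0 = do nothing, 1 = prompt).
rng = np.random.default_rng(0)
n_s, n_o, n_a = 3, 2, 2
T = rng.dirichlet(np.ones(n_s), size=(n_a, n_s))   # T[a][s, s']
O = rng.dirichlet(np.ones(n_o), size=(n_a, n_s))   # O[a][s', o]

belief = np.ones(n_s) / n_s                        # uniform prior
belief = belief_update(belief, T, O, action=1, obs=0)
```

The policy then maps the resulting belief vector to an action, e.g. by a lookup over a discretized belief space or by maximizing an expected value function.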
Talks, Videos
You can browse the ICVS talk, which should link to the videos; in case those links are broken, they are repeated here. Each video looks as follows (still shot):
video still
  • On the left, you see video taken from an independent video camera showing the whole scene. This video is not used by the system.
  • On the right, you see the video from the overhead camera.
  • In the middle, you see the belief state over the planstep (PS), the awareness (AW), the responsiveness (RE) and the overall dementia level (DL).
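The display above shows marginals of a factored hidden state. A hypothetical sketch of that factorization (variable names follow the overlay labels, but the value sets and probabilities are invented for illustration):

```python
# Marginal beliefs over each hidden variable shown in the overlay.
belief = {
    "PS": [0.1, 0.7, 0.1, 0.1],   # which handwashing planstep the user is on
    "AW": [0.2, 0.8],             # low / high awareness
    "RE": [0.6, 0.4],             # responsive to audio / to video prompts
    "DL": [0.3, 0.5, 0.2],        # mild / moderate / severe dementia level
}

def map_estimates(belief):
    """Most likely value of each hidden variable, as one might
    display it on screen alongside the full distributions."""
    return {var: max(range(len(p)), key=p.__getitem__)
            for var, p in belief.items()}

print(map_estimates(belief))  # {'PS': 1, 'AW': 1, 'RE': 0, 'DL': 1}
```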

Scenario B actor trial - person needs some assistance, but is responsive to audio prompts.

Scenario C actor trial - person needs assistance for most steps, and is only responsive to video prompts. The system learns this after an initial attempt with audio prompts, and then switches to using video only.
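The switch from audio to video prompts in Scenario C falls out of belief updating on the responsiveness variable. A hedged sketch of the idea, reduced to a single binary variable with made-up likelihoods (the real system carries this out inside its full POMDP belief):

```python
def update_responsiveness(p_audio, responded, p_respond_if_audio=0.8,
                          p_respond_if_not=0.1):
    """Bayes rule over the binary variable 'responsive to audio
    prompts', given whether the user reacted to an audio prompt."""
    like_yes = p_respond_if_audio if responded else 1 - p_respond_if_audio
    like_no = p_respond_if_not if responded else 1 - p_respond_if_not
    num = like_yes * p_audio
    return num / (num + like_no * (1 - p_audio))

p = 0.5                                         # uninformed prior
p = update_responsiveness(p, responded=False)   # failed audio prompt
# p drops well below 0.5, so the policy shifts toward video prompts
```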


More details can be found in the following papers:

Details on the hand tracker can be found in the following paper, presented at BMVC 2006:

Still more papers on this topic can be found on my publications page, or through the IATSL web page.
The system was tested in clinical trials in Toronto in the summer of 2007. For collaborating researchers, results and example videos from real trials are available by emailing me. Alternatively, you can watch two professional actors playing the roles of a user and a carer in this video.

A paper describing our work in validating the use of actors to simulate dementia appears in
We plan to extend this system in a number of directions.