Research Overview

My research group consists of two post-doctoral fellows and nine graduate students whom I supervise or co-supervise. We currently pursue research projects in the areas of intelligent user interfaces, persuasive technology, multi-touch and mobile interaction, and powerwall design.
Current Talk Slides
I recently gave a research presentation at GRAND's Sustainability Workshop at UBC. My slides are available here.
In this project, we explore the design of large interactive screens. We are deploying a 90-square-foot, 16-megapixel display in the lobby of the Davis Centre at the University of Waterloo. As well, our colleague Dan Vogel has deployed a large display that uses body position to support interaction within an art gallery.
Of particular interest to us is how on-screen territory can be allocated and manipulated to control parallel use of the display. For example, we explore whether we can encourage users to make room for others, control grouping and spacing, and manage overall engagement with the display.
Mass Interactions with Large Displays
In conjunction with local start-ups CineClick and NetClick, we have been exploring how to support hundreds of users simultaneously sharing large, public displays. In these mass interaction environments, users cannot localize using on-screen pointers or physical touch, as the number of pointers and physical space limitations would make interaction too slow. Our initial work has examined Spatial Correspondence Targeting as a mechanism for supporting these mass interactions. We are currently working on gamification and cooperative work using mass interactions with these partners. The image on the left and this YouTube video are good starting points for understanding some of our early work in this space.
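As a rough illustration of the spatial-mapping idea behind this style of interaction, a tap on a personal device showing a thumbnail of the shared display can be scaled into display coordinates. The function and coordinate conventions below are hypothetical, not our actual system:

```python
def map_tap_to_display(tap, phone_size, display_size):
    """Map a tap on a phone-sized thumbnail of the shared display to
    display coordinates by scaling each axis proportionally."""
    tx, ty = tap
    pw, ph = phone_size
    dw, dh = display_size
    return (tx / pw * dw, ty / ph * dh)

# A tap at (160, 240) on a 320x480 thumbnail lands at the centre
# of a 4096x2048 display: (2048.0, 1024.0).
centre = map_tap_to_display((160, 240), (320, 480), (4096, 2048))
```

Because every user's device carries its own copy of the mapping, hundreds of taps can be resolved in parallel without any on-screen pointers.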
Motion Gesture Interaction
We are interested in the use of physical movement as a mechanism for issuing commands to smartphone devices. In particular, we have been looking at questions surrounding how to design motion gestures, how to map motion gestures onto commands, how to improve recognition of motion gestures in everyday use, and how to teach users to effectively use motion gestures for input.
In this project, we explore the design of persuasive technology for sustainability. In particular, we are examining demand management programs for electricity consumption (time-of-use pricing, PeakSaver) in Ontario, with a focus on their shortcomings. We are exploring persuasive technology as a tool to foster collective action among large groups of electricity users so that they can take more aggressive action when it matters most.
Our lab is currently investigating the use of self-monitoring systems (e.g., the Fitbit) alongside tests of cognitive performance to encourage people to engage in healthier habits. In particular, we are looking at sleep among university students and trying to encourage students to get more sleep.
Enhanced Input on Multi-touch Tablets
Modern multi-touch tablets are becoming a primary computing device for many users. While multi-touch offers a set of gestures to control behaviour, the gestural language could be significantly enhanced. This is particularly true because, as tablets become a primary computing device, users are increasingly willing to learn the intricacies of the gestural language they use every day to control these devices.
We are currently working on enhancements to the pinch-to-zoom gesture, which was introduced in the 1980s and popularized by Apple on the iPad. While two fingers support natural zooming, one question we ask is whether three-finger variants of pinch could enhance the basic pinch-to-zoom behaviour specified more than 20 years ago.
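As background, the basic two-finger pinch that these variants build on maps the change in finger separation to a zoom factor. The sketch below is an illustrative simplification, not our implementation:

```python
import math

def pinch_scale(p1_start, p2_start, p1_now, p2_now):
    """Zoom factor for a two-finger pinch: the ratio of the current
    finger separation to the separation when the gesture began."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return dist(p1_now, p2_now) / dist(p1_start, p2_start)

# Fingers start 100 px apart and spread to 200 px apart,
# so the content zooms in by a factor of 2.
scale = pinch_scale((0, 0), (100, 0), (-50, 0), (150, 0))
```

A three-finger variant would need an analogous invariant over three contact points, which is part of what makes the design space interesting.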
Focus + Context Sketching on Smartphones
Modern smartphones support basic text entry, but other forms of content are still difficult to enter. To address this, one popular smartphone (the Samsung Galaxy Note) uses an active stylus to enhance the input of non-textual graphical content such as sketches and diagrams.
In this project, we ask whether an active stylus is necessary to support sketch input on a smartphone, or whether enhanced finger-sketching techniques could also produce high-precision input.
Endpoint Prediction Using Motion Kinematics
We have worked for several years on developing models of movement in interfaces. These models allow us to predict the goal of a user's trajectory, i.e., the motion endpoint. With this information, we can simplify pointing tasks in interfaces or pre-compute operations to improve response times.
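As a simplified illustration (not our actual model), one family of kinematic predictors extrapolates the endpoint from the cursor's instantaneous velocity by assuming it decelerates at a constant rate until it stops:

```python
def predict_endpoint(pos, vel, decel):
    """Extrapolate a 1-D motion endpoint assuming the cursor decelerates
    at a constant rate: stopping distance = v^2 / (2 * decel)."""
    direction = 1.0 if vel >= 0 else -1.0
    return pos + direction * (vel * vel) / (2.0 * decel)

# A cursor at x = 300 px moving at 400 px/s, decelerating at
# 2000 px/s^2, is predicted to stop at 300 + 400**2 / 4000 = 340 px.
endpoint = predict_endpoint(300.0, 400.0, 2000.0)  # → 340.0
```

With such a prediction available mid-movement, an interface can, for example, expand the likely target or prefetch the result of the likely command before the pointer arrives.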
Recognizing Hand-Drawn Diagrams
Hand-drawn diagram recognition has been a focus of mine since my Ph.D. Most recently, I have been working with the MathBrush Project on designing interfaces to support the recognition, correction, and manipulation of mathematical expressions. I also study the usability of multi-touch systems for sketching input, both in diagram input and recognition and in free-form drawing tasks.
I have several active funding sources, including:
- NCE GRAND Program
- ORF Program