Project Leader: Prof. Dan Vogel
New camera-based input devices like the Microsoft Kinect and Leap Motion have generated a lot of excitement and press coverage. The dream is that these will make interacting with computers more natural, but in reality, waving your arm or pointing your finger to navigate something like Netflix can leave a lot to be desired.
In this mini-project, we’ll look at a fundamental Human-Computer Interaction problem associated with all computer vision-based input: the “Midas Touch Problem.” Like the Greek myth in which everything King Midas touches turns to gold, with computer vision everything you do may be interpreted as input, even if you’re just waving at a friend. What interaction techniques can minimize, and hopefully eliminate, this problem? Discovering a universal solution could also unlock related problems like delimiting gestures and signalling input events. The challenge is to find techniques that work well with noisy computer-vision algorithms, aren’t too tiring, and are easy and fast to perform. We’ll review current approaches and go through a mini bootcamp for computer vision coding so we can prototype and test new interaction techniques, and experience firsthand what Human-Computer Interaction research is like.
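One well-known mitigation we might prototype is dwell-based selection: an action fires only after the tracked hand lingers near one spot, so incidental motion (the Midas Touch) is ignored. Below is a minimal sketch of that idea, assuming we receive 2D hand coordinates with timestamps from some tracker; the `DwellSelector` class, its thresholds, and its API are hypothetical illustrations, not code from any particular library.

```python
import math

DWELL_TIME = 0.8      # seconds the hand must hover before a selection fires (assumed value)
DRIFT_RADIUS = 40.0   # pixels the hand may wander and still count as dwelling (assumed value)

class DwellSelector:
    """Fire a selection only after the tracked point dwells near one spot,
    so incidental movement is not interpreted as input."""

    def __init__(self, dwell_time=DWELL_TIME, radius=DRIFT_RADIUS):
        self.dwell_time = dwell_time
        self.radius = radius
        self.anchor = None    # (x, y) where the current dwell started
        self.anchor_t = None  # timestamp when the current dwell started

    def update(self, x, y, t):
        """Feed each tracked hand position (x, y) at time t (seconds).
        Returns the (x, y) of a completed selection, or None."""
        if self.anchor is None:
            self.anchor, self.anchor_t = (x, y), t
            return None
        if math.dist(self.anchor, (x, y)) > self.radius:
            # Hand moved away: restart the dwell at the new position.
            self.anchor, self.anchor_t = (x, y), t
            return None
        if t - self.anchor_t >= self.dwell_time:
            selection = self.anchor
            self.anchor = self.anchor_t = None  # reset for the next selection
            return selection
        return None
```

Note the trade-off this sketch exposes: a longer dwell time suppresses more false activations but makes every selection slower and more tiring, which is exactly the tension the project asks us to explore.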