Wave goodbye to smart phones that don't recognize gesture commands

Thursday, June 27, 2019

Researchers at the Cheriton School of Computer Science have developed a strategy that could reduce the level of frustration users experience when giving gesture commands to smart devices and smart environments.

In a study that outlines the new strategy, the researchers found that when developing smart devices to recognize gesture input, the adage, “If at first you don’t succeed, try, try again,” can be applied to boost users’ perceptions of system reliability.

Gesture input is growing in popularity as technology companies incorporate it in their smart devices. Some smartphones, for example, allow users to simply flip the device over to silence notifications, or make a chopping motion to turn the flashlight on or off.

“Any time a computer tries to do something a little intelligent, like recognizing users’ actions or interpreting something that a user says, there is the possibility of an error in that interpretation,” said Edward Lank, a professor at the David R. Cheriton School of Computer Science. “The errors might be because the users aren’t precise enough in their input, or users have accents, and the computer struggles to interpret them.”

“The idea behind our strategy is that if you don’t give a command with enough precision for the device to recognize it and you immediately try again, and the second attempt is still imprecise, the device can use those two unclear commands to determine what they’re closest to and execute that task.”

The new recognition strategy — bi-level thresholding — turns users’ near-misses into information that can be used to determine what they’re trying to do.
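The idea can be sketched in a few lines of code. The example below is a hypothetical illustration, not the researchers’ implementation: it assumes a recognizer that assigns each candidate gesture a confidence score, with two invented thresholds (`ACCEPT` and `REJECT`). A score above the high threshold executes immediately; a score between the two thresholds is treated as a near-miss, and two consecutive near-misses are combined to infer the intended gesture.

```python
# Hypothetical sketch of bi-level thresholding (illustrative only).
# Scores at or above ACCEPT execute immediately; scores between REJECT
# and ACCEPT are kept as "near-misses"; two consecutive near-misses
# are combined to pick the most likely intended gesture.

ACCEPT = 0.90   # high threshold: confident enough to execute
REJECT = 0.60   # low threshold: below this, treat as noise

class BiLevelRecognizer:
    def __init__(self):
        self.pending = None  # scores from a previous near-miss, if any

    def recognize(self, scores):
        """scores: dict mapping gesture name -> confidence in [0, 1].
        Returns the gesture to execute, or None if nothing fires."""
        best = max(scores, key=scores.get)
        if scores[best] >= ACCEPT:
            self.pending = None
            return best                      # clear success: execute
        if scores[best] < REJECT:
            self.pending = None
            return None                      # clearly not a gesture
        # Near-miss: too ambiguous on its own.
        if self.pending is None:
            self.pending = scores            # remember it, await a retry
            return None
        # Second near-miss in a row: combine the two attempts.
        combined = {g: self.pending.get(g, 0) + scores.get(g, 0)
                    for g in set(self.pending) | set(scores)}
        self.pending = None
        return max(combined, key=combined.get)

r = BiLevelRecognizer()
print(r.recognize({"flip": 0.70, "chop": 0.65}))  # near-miss -> None
print(r.recognize({"flip": 0.75, "chop": 0.62}))  # retry -> flip
```

With a single-threshold recognizer, both attempts above would simply fail; here the second imprecise attempt resolves to the gesture the user most plausibly meant.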

To assess the potential of bi-level thresholding, the researchers looked at free-space gesture input, the style of gesture input found in movies like Minority Report. They assessed the potential of bi-level thresholding in three ways. First, they conducted a small pilot study that showed that bi-level thresholding could boost user success when trying to perform gesture input. A second study showed that, even when recognition accuracy was identical, users perceived bi-level thresholding systems to be more accurate. Finally, a third study showed that bi-level thresholding has the potential to boost overall recognition accuracy in free-space gesture input.

“Our studies demonstrated that bi-level thresholding has the potential for improving gesture recognition,” said Keiko Katsuragawa, an adjunct professor at the Cheriton School of Computer Science. “We also found that when users give commands that are not recognized, they view the first instance of failure as relatively minor and the person will simply try again, but persistent failure is what really frustrates users.”


A paper detailing the new strategy, “Bi-Level Thresholding: Analyzing the Effect of Repeated Errors in Gesture Input,” authored by Cheriton School of Computer Science Adjunct Professor Keiko Katsuragawa, Professor Edward Lank and his master’s students Ankit Kamal, Qi Feng Liu and Matei Negulescu, was recently published in ACM Transactions on Interactive Intelligent Systems.
