Please note: This PhD seminar will take place in DC 2310 and online.
Liwei Alan Wu, PhD candidate
David R. Cheriton School of Computer Science
Supervisor: Professor Jian Zhao
With progress in Large Language Models (LLMs) and the rapid development of wearable smart devices such as smart glasses, there is a growing opportunity for users to interact with on-device virtual assistants through voice and gestures with ease. Although voice user interfaces (VUIs) have been widely studied, the potential of full-body gestures in VUIs that can fully understand users’ surroundings and movements remains relatively unexplored.
In this two-phase research using a Wizard-of-Oz approach, we investigated the role of gestures in VUI interactions and explored their design space. In an initial exploratory user study with six participants, we identified influential factors for VUI gestures and established an initial design space. In the second phase, we conducted a user study with 12 participants to validate and refine our initial findings.
Our results showed that users are ready to adopt gestures when interacting with multi-modal VUIs, especially in scenarios with poor voice-capture quality. The study also highlighted three key categories of gesture functions for enhancing multi-modal VUI interactions: context reference, alternative input, and flow control. Finally, we present a design space for multi-modal VUI gestures, along with demonstrations, to inform future designs that couple multi-modal VUIs with gestures.
To attend this PhD seminar in person, please go to DC 2310. You can also attend virtually using Zoom.