Please note: This PhD seminar will take place online.
Shaikh Shawon Arefin Shimon, PhD candidate
David R. Cheriton School of Computer Science
Supervisor: Professor Jian Zhao
Virtual Reality (VR) users with limited physical space or lower-limb disabilities often rely on seated VR experiences on stationary furniture, which rules out locomotion techniques that depend on lower-limb movement. Existing bare-hand VR locomotion approaches typically rely on gaze-based input or on manual midair and on-skin gestures performed below the neck or within the field of view (FOV) of head-mounted displays (HMDs); these can interfere with object manipulation and offer limited support for complex locomotion actions (e.g., turning, crouching, and jumping). Recent advances in head-mounted and wearable sensing enable reliable detection of subtle unimanual gestures performed in above-neck regions outside the user’s FOV.
Building on these capabilities, we introduce Beyond-FOV locomotion, a bare-hand, unimanual interaction technique performed around the VR HMD that uses above-neck gestures to control directional navigation and the complex locomotion actions above while seated, enabling simultaneous locomotion and within-FOV object manipulation without explicit mode switching. We evaluate this interaction model using an imaginary user interface paradigm in a user study (N=16), comparing it with a controller-based baseline and two within-FOV bare-hand techniques. Results suggest that Beyond-FOV locomotion performs comparably to or better than within-FOV manual approaches while reducing task completion time and physical strain, indicating promise for improving seated bare-hand VR interaction.
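To illustrate the mode-switch-free dispatch such a design implies, here is a minimal Python sketch that maps above-neck gesture labels to locomotion actions on a separate channel from in-FOV manipulation. The gesture names, mappings, and the `LocomotionState` fields are hypothetical assumptions for illustration, not the gesture set or implementation from this work.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Gesture(Enum):
    """Hypothetical above-neck gesture labels (assumed, not from the paper)."""
    SWIPE_FORWARD = auto()
    SWIPE_BACK = auto()
    CIRCLE_CW = auto()
    CIRCLE_CCW = auto()
    TAP_DOWN = auto()

@dataclass
class LocomotionState:
    forward_velocity: float = 0.0  # m/s, assumed units for illustration
    yaw_deg: float = 0.0
    crouching: bool = False

def apply_gesture(state: LocomotionState, gesture: Gesture) -> LocomotionState:
    """Map an above-neck gesture to a locomotion action.

    Because these gestures arrive on a beyond-FOV channel, this dispatch
    can run alongside within-FOV object manipulation without an explicit
    mode switch.
    """
    if gesture is Gesture.SWIPE_FORWARD:
        state.forward_velocity += 0.5      # move forward
    elif gesture is Gesture.SWIPE_BACK:
        state.forward_velocity -= 0.5      # slow down / move backward
    elif gesture is Gesture.CIRCLE_CW:
        state.yaw_deg += 30.0              # turn right
    elif gesture is Gesture.CIRCLE_CCW:
        state.yaw_deg -= 30.0              # turn left
    elif gesture is Gesture.TAP_DOWN:
        state.crouching = not state.crouching  # toggle crouch
    return state

if __name__ == "__main__":
    state = LocomotionState()
    for g in (Gesture.SWIPE_FORWARD, Gesture.CIRCLE_CW, Gesture.TAP_DOWN):
        state = apply_gesture(state, g)
    print(state)
```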