Researchers have developed a new voice assistant that lets people with visual impairments retrieve web content from smart speakers and similar devices as quickly and effortlessly as possible.
In a new study done in collaboration with Microsoft, researchers found a way to merge the best elements of voice assistants with screen readers to create a tool that makes free-form web searches easier. The tool is called VERSE — Voice Exploration, Retrieval, and Search. The study was led by Alexandra (Sasha) Vtyurina, a PhD student at Waterloo’s Cheriton School of Computer Science, along with Adam Fourney, Meredith Ringel Morris and Ryen W. White from Microsoft Research, and Leah Findlater, an assistant professor at the University of Washington.
“People with visual impairments often rely on screen readers, and increasingly on voice-based virtual assistants, when interacting with computer systems,” said Alexandra Vtyurina, who undertook the study during her internship at Microsoft Research. “Virtual assistants are convenient and accessible but lack the ability to deeply engage with content, such as reading beyond the first few sentences of an article or listing alternative search results and suggestions. In contrast, screen readers allow for deep engagement with accessible content, and provide fine-grained navigation and control, but at the cost of reduced walk-up-and-use convenience.
“Our prototype, VERSE, adds screen reader-like capabilities to virtual assistants, and allows other devices, such as smartwatches, to serve as input accelerators to smart speakers.”
The primary input method for VERSE is voice, so users can say “next,” “previous,” “go back” or “go forward.” VERSE can also be paired with an app, which runs on a smartphone or a smartwatch.
These devices can serve as input accelerators, similar to keyboard shortcuts. For example, rotating the crown on a smartwatch advances VERSE to the next search result, section, or paragraph, depending on the navigation mode.
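The navigation model described above can be sketched as a small dispatcher in which voice commands and crown rotations drive the same cursor. This is a hypothetical illustration, not code from the VERSE prototype; the class and method names (`VerseNavigator`, `handle_voice`, `handle_crown`) are invented for the example.

```python
class VerseNavigator:
    """Illustrative sketch of VERSE-style eyes-free navigation."""

    def __init__(self, content, mode="results"):
        # content maps each navigation mode ("results", "sections",
        # "paragraphs") to an ordered list of items.
        self.content = content
        self.mode = mode
        self.position = 0

    def handle_voice(self, command):
        # Spoken commands such as "next" and "previous" move the cursor.
        if command == "next":
            self._move(+1)
        elif command == "previous":
            self._move(-1)

    def handle_crown(self, ticks):
        # A smartwatch crown acts as an input accelerator: each tick of
        # rotation is equivalent to one "next"/"previous" command.
        self._move(ticks)

    def _move(self, delta):
        # Clamp the cursor to the items available in the current mode.
        items = self.content[self.mode]
        self.position = max(0, min(len(items) - 1, self.position + delta))

    def current(self):
        return self.content[self.mode][self.position]


nav = VerseNavigator({"results": ["result 1", "result 2", "result 3"]})
nav.handle_crown(2)            # two crown ticks forward
print(nav.current())           # result 3
nav.handle_voice("previous")   # spoken command moves back one item
print(nav.current())           # result 2
```

The key design point sketched here is that both input channels converge on one navigation state, so a user can mix speech and touch freely without the assistant losing its place.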
In the study, 53 visually impaired web searchers were surveyed. More than half of the respondents reported using voice assistants multiple times a day, on a wide range of devices such as smart speakers, phones and smart TVs.
The data collected from the survey informed the design of the VERSE prototype, after which a user study was conducted to gather feedback.
“At the outset, VERSE resembles other virtual assistants, as the tool allows people to ask a question and have it answered verbally with a word, phrase or passage,” Alexandra said. “VERSE is differentiated by what happens next. If people need more information, they can use VERSE to access other search verticals, for example, news, facts, and related searches, and can visit any article that appears as a search result.
“For articles, VERSE showcases its screen reader superpowers by allowing people to navigate along words, sentences, paragraphs or sections.”
The study, titled VERSE: Bridging Screen Readers and Voice Assistants for Enhanced Eyes-Free Web Search, authored by Alexandra Vtyurina, Adam Fourney, Meredith Ringel Morris, Leah Findlater and Ryen W. White, will be presented at the 21st International ACM SIGACCESS Conference on Computers and Accessibility, which will be held in Pittsburgh, Pennsylvania from October 28–30, 2019.