
Cuing up your ’80s playlist, scheduling a smart washer to start a cycle or controlling home lighting while you’re away can be as simple as giving instructions to a digital voice assistant through an Amazon Echo or Apple HomePod device.
But for people with disabilities—especially those with speech impairments such as vocal cord damage, mutism or severe stuttering—interacting with Alexa or Siri can be difficult and frustrating.
Computer science researchers at the University of Maryland are working on a voice-free alternative—a system that would allow users to simply write words in the air using innovative technology based on voice processing software and a handwriting interface.
The digital tool, called Scribe, uses sophisticated sensors to pick up the movements of a stylus-like device, transferring the data to the voice assistant platform, where it is analyzed as if the text were spoken out loud. The technology is unique in that it does not interfere with voice commands, allowing users to mix speech and air-writing as needed.
The researchers say Scribe could also be useful for people with autism and other neurodivergent conditions who often have minimal verbal output.
“For some people, handwriting can feel less stressful and more intuitive than speaking,” says Yang Bai Ph.D. ’25. “It reduces cognitive load by allowing more time to process and formulate thoughts, which is especially helpful for those with language processing challenges. By supporting nonverbal and delayed communication, handwriting interfaces greatly enhance accessibility and independence.”
Bai co-developed the system with fellow doctoral students Irtaza Shahid, who is graduating in December of this year, and Harshvardhan Takawale, a third-year student, along with their adviser, Nirupam Roy, an assistant professor of computer science with an appointment in the University of Maryland Institute for Advanced Computer Studies.
Powered by a novel acoustic sensing technique called cross-frequency continuous wave sonar, Scribe uses an ultrasonic stylus and a small add-on speaker to transmit high-frequency signals—a kind of “sonic ink.” Though inaudible to humans, these signals are picked up by standard microphones—allowing the system to track movement with high precision while leaving voice functions unaffected.
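The article does not detail Scribe’s signal processing, but the core idea behind continuous wave sonar tracking can be sketched: a steady near-ultrasonic tone reaches the microphone with a propagation phase that shifts as the stylus moves, and unwrapping that phase recovers displacement at a fraction of a wavelength. The sketch below is an illustrative assumption, not the team’s actual implementation; the carrier frequency, sample rate and demodulation scheme are all hypothetical choices.

```python
import numpy as np

FS = 48_000       # microphone sample rate (Hz), typical of commodity hardware (assumed)
F_TONE = 20_000   # near-ultrasonic carrier (Hz), inaudible to most listeners (assumed)
C = 343.0         # speed of sound in air (m/s)

def phase_to_displacement(received, fs=FS, f_tone=F_TONE, c=C):
    """Recover displacement from the phase of a continuous ultrasonic tone.

    I/Q-demodulate at the carrier frequency, low-pass the result, and
    unwrap the baseband phase: a 2*pi phase shift corresponds to one
    extra wavelength of acoustic path length.
    """
    t = np.arange(len(received)) / fs
    i = received * np.cos(2 * np.pi * f_tone * t)   # in-phase mix
    q = received * np.sin(2 * np.pi * f_tone * t)   # quadrature mix
    win = int(fs * 0.001)                           # 1 ms moving average suppresses
    kernel = np.ones(win) / win                     # the double-frequency mixing term
    i_lp = np.convolve(i, kernel, mode="same")
    q_lp = np.convolve(q, kernel, mode="same")
    phase = np.unwrap(np.arctan2(q_lp, i_lp))
    wavelength = c / f_tone                         # ~1.7 cm at 20 kHz
    return (phase - phase[0]) * wavelength / (2 * np.pi)

# Simulate a stylus receding 2 mm over 0.1 s and recover the motion.
t = np.arange(int(0.1 * FS)) / FS
path = 0.002 * t / t[-1]                            # extra path length (m)
rx = np.cos(2 * np.pi * F_TONE * t - 2 * np.pi * F_TONE * path / C)
est = phase_to_displacement(rx)
print(f"recovered: {est[-1] * 1000:.2f} mm")        # close to 2 mm
```

At a 1.7 cm wavelength, even a small fraction of a phase cycle corresponds to sub-millimeter motion, which is how a purely acoustic approach can track handwriting with high precision while leaving the audible voice channel untouched.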
In tests, Scribe achieved a 94.1% accuracy rate in recognizing handwritten text, rivaling traditional input methods. To make the interface intuitive, the team conducted hands-on studies and found that a stylus-shaped device offered the best control and user experience. As one participant noted, “It actually feels like my own handwriting on paper.”
Beyond communication, Scribe could also support new forms of interaction, like multifactor authentication. For example, a user could physically sign an agreement or approve a payment on a voice-first device while simultaneously confirming their identity through voice recognition. This added layer of security could make smart devices more accessible and reliable for sensitive tasks.
As part of future work, the team is exploring the possibility of American Sign Language detection on voice-enabled devices using the same sensing principles behind Scribe. This advancement could unlock new applications—such as placing drive-through orders via sign language or allowing households with members who have speech difficulties to use existing voice interfaces through gestures.
Bai envisions even broader possibilities. Similar motion tracking could support remote surgery, enabling doctors to perform complex procedures from a distance with millimeter-level accuracy, potentially transforming telemedicine.
But for now, Scribe’s greatest impact may lie in giving people with disabilities a voice—without requiring them to speak.
—Story by Melissa Brachfeld, UMIACS communications group