Designing for an accessibility-first society. One gesture at a time.

 

Alexa, Google Assistant, Cortana, Siri: voice assistants are changing the way we shop, search, communicate, and even live. At least for most people. But what about those without a voice? What about those who cannot hear?

With Project SIGNS, we invented an application programming interface (API) that puts digital accessibility and inclusion at brands’ fingertips, allowing people with hearing disabilities to interact with smart voice assistants.

 

Insight

Around 11% of the global population with internet access has a hearing or speech disability. In other words, 466 million people worldwide can’t access voice assistants, which will be responsible for over 40% of digital interactions by 2022.

Idea

Many people with hearing loss use their hands to speak, and that’s all they need to talk to SIGNS. It’s a smart tool that recognizes and translates English, Arabic, and German sign language in real time, then communicates directly with a selected voice assistant (Alexa, Google Assistant, Siri, or Cortana). The assistant processes the request in real time and sends its reply back to SIGNS, where the answer is immediately displayed as text or through visual feedback.
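As a rough illustration of this flow, here is a minimal sketch of the interaction loop: recognize a signed phrase, turn it into a text query, forward it to the chosen assistant, and render the reply as text plus a visual cue. All names here (the `Assistant` class, `recognize_sign`, `signs_loop`) are illustrative assumptions, not the actual SIGNS implementation.

```python
# Minimal sketch of the SIGNS interaction loop (hypothetical names throughout).
from dataclasses import dataclass


@dataclass
class Reply:
    text: str          # answer rendered as on-screen text
    visual_hint: str   # e.g. an icon or animation shown alongside the text


class Assistant:
    """Stand-in for a voice assistant backend (Alexa, Google Assistant, Siri, Cortana)."""

    def ask(self, query: str) -> Reply:
        # A real integration would call the assistant's API here.
        return Reply(text=f"(assistant answer to: {query})", visual_hint="speech-bubble")


def recognize_sign(frame) -> str:
    """Translate one signed phrase (captured by the camera) into text.
    Placeholder for the sign-language recognition model."""
    return "what is the weather today"


def signs_loop(frames, assistant: Assistant) -> None:
    for frame in frames:
        query = recognize_sign(frame)                 # sign language -> text
        reply = assistant.ask(query)                  # text -> voice assistant
        print(reply.text, f"[{reply.visual_hint}]")   # text / visual feedback to the user


if __name__ == "__main__":
    signs_loop(frames=[None], assistant=Assistant())
```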

Intelligence

SIGNS is built on an intelligent machine-learning framework trained to identify body gestures through an integrated camera. These gestures are converted into a data format that the selected voice assistant understands.
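Below is a minimal sketch of what that camera-to-data step could look like, assuming OpenCV for frame capture; the gesture classifier, label vocabulary, and output payload are illustrative assumptions rather than the framework SIGNS actually uses.

```python
# Illustrative gesture-recognition loop (assumes OpenCV; the classifier is a stub).
import cv2

GESTURE_LABELS = ["hello", "weather", "play music", "stop"]  # example vocabulary


def classify_gesture(frame) -> str:
    """Placeholder for a trained gesture classifier.
    A real model would extract hand/body keypoints from the frame and predict a label."""
    return GESTURE_LABELS[0]


def capture_and_classify(camera_index: int = 0, max_frames: int = 100):
    cap = cv2.VideoCapture(camera_index)   # integrated camera
    try:
        for _ in range(max_frames):
            ok, frame = cap.read()
            if not ok:
                break
            label = classify_gesture(frame)   # gesture -> label
            yield {"intent": label}           # data format the assistant layer consumes
    finally:
        cap.release()


if __name__ == "__main__":
    for payload in capture_and_classify(max_frames=5):
        print(payload)
```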
