
Sign 2 Voice - UX Research/Design
Sign 2 Voice
Sign 2 Voice is an in-browser web extension that embeds into common online interviewing platforms to help people who are hard of hearing communicate confidently with interviewers who may not be familiar with sign language.
Team
James Jiang, Emmy Ni, Tazik Shahjahan
Timeline
36 hours
Hack Western 6 (Western University)
First Hackathon Experience
Role
UX/UI Designer
Problem Context
“Some deaf people choose not to speak because it is difficult for them to regulate the sound, pitch and volume of their voice.”
“It is difficult for people who are hard of hearing to be able to find employment.”
With the hope of breaking down employment barriers, we identified a design opportunity: make increasingly popular video-call job interviews more accessible to deaf candidates, so they can communicate with interviewers who may struggle to understand them without an interpreter. Although there are ways to request accommodations, we wanted to envision a future where people who are hard of hearing can lean on the comfort of American Sign Language. By providing an outlet for their voice through gestures, the product helps these individuals feel more confident in themselves while reducing the self-consciousness that speaking aloud can bring.
The Essence of our Idea
Using Google Firebase and other API libraries, our team captured the user's gestures through the webcam and compared them against a model trained on the full American Sign Language alphabet, printing out comprehensible words and phrases that the interviewer on the other end of the call could read.
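The step from per-frame gesture predictions to readable text can be sketched as follows. This is a minimal illustration, not our actual implementation: it assumes a hypothetical classifier that emits one predicted ASL letter (or `None` during a pause) per webcam frame, and shows how jittery frame-level predictions could be debounced into committed letters.

```python
from collections import deque

def frames_to_text(frame_predictions, hold_frames=3):
    """Collapse per-frame letter predictions into typed text.

    A letter is committed only after it has been held steady for
    `hold_frames` consecutive frames, which filters out the jitter
    of an imperfect classifier. A `None` prediction marks a pause,
    resetting state so doubled letters (e.g. "LL") can be signed.
    """
    text = []
    streak = deque(maxlen=hold_frames)   # sliding window of recent frames
    last_committed = None
    for letter in frame_predictions:
        if letter is None:
            last_committed = None        # pause: allow repeating a letter
            streak.clear()
            continue
        streak.append(letter)
        if (len(streak) == hold_frames
                and len(set(streak)) == 1     # window is all one letter
                and letter != last_committed):
            text.append(letter)
            last_committed = letter
    return "".join(text)

# Example: noisy frame stream spelling "HELLO", with a pause between the Ls.
frames = ["H", "H", "H", "E", "E", "E", "L", "L", "L",
          None, "L", "L", "L", "O", "O", "O"]
print(frames_to_text(frames))  # → HELLO
```

Holding a letter across several frames is a common trick for smoothing noisy classifiers; the real pipeline would sit between the webcam model's output and the chat window.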
Design Mockups

Landing Page
I decided to leverage the hand as a design metaphor when exploring ways to convey our logo. I wanted the product to communicate its mission: turning an individual's hand signs into an outlet for their voice.

Onboarding & Webcam
I used familiar stop and record icons during the interview call to evoke familiarity for the user. The icons on the right let the user switch the output between a male and a female voice, since technical feasibility meant the choice could affect recognition, and adjust the voice output volume for accessibility reasons.
Reflection
The result of our project was not as planned. I spent too much time conducting research to find a social problem we were all interested in and laying out the structure of the web extension, which left the developers little time to finish implementing the UI. From this experience, I learned the importance of a minimum viable product and the need to prioritize: a login screen was unnecessary when the product was just as good without it, and the developers could have focused on training our model to recognize sign language gestures from the webcam to ensure accuracy. The main challenge our team faced was that the solution achieved only about 60% accuracy, so the letters printed to the chat window were not entirely valid. Still, I learned a lot about working in a team of developers in a time-pressured environment. Even though we were strangers at first meet-up, we ended up communicating with each other quite well and worked towards our team's goal. We even kept in touch afterwards, which made my first, daunting hackathon an even better experience!
