Google uses AI to translate sign language into speech

  • Google has announced AI algorithms that make it possible for a smartphone to interpret and "read aloud" sign language.

    The technology was built with MediaPipe, Google's open-source framework for processing video and other media, and is not a fully developed app. However, Google hopes the algorithms it has published will help other developers to make their own smartphone apps.

    Google research engineers Valentin Bazarevsky and Fan Zhang wrote in an AI blog post that the freely published technology was intended to serve as "the basis for sign language understanding".

    A spokeswoman for Google told the BBC: "We're excited to see what people come up with. For our part, we will continue our research to make the technology more robust and to stabilise tracking, increasing the number of gestures we can reliably detect."

    Google acknowledges this is a basic first step, and it is not without fault. Campaigners from the hearing-impaired community say an app that produced audio from hand signals alone would miss facial expressions and the speed of signing, both of which can change the meaning of what is being signed. Regional variations would also be left out, as there are over 200 dialects of sign language worldwide.

    Users of American Sign Language (ASL) have arguably been the best served so far, as a number of startups and research projects at large US tech companies are dedicated to translating ASL in real time. More research is now being called for elsewhere.

    In 2018, the video consultation company Coviu produced a working prototype that translated signs from the Auslan (Australian Sign Language) alphabet into English text in real time. The company, which specialises in healthcare, stated that “with the growing adoption of telehealth, deaf people need to be able to communicate naturally with their healthcare network, regardless of whether the practitioner knows sign language.”

    Coviu wanted to create a web app that uses a webcam to capture a person signing Auslan and translate it in real time. This involved collecting data, training a machine learning model to recognise the Auslan alphabet, and building the user interface.
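    Coviu has not published its implementation, but the recognition step of such a pipeline can be sketched. The minimal Python example below is a hypothetical stand-in: it assumes each webcam frame has already been reduced to a vector of 21 hand-landmark coordinates, and it trains a classifier to map those vectors to fingerspelled letters. The random data is a placeholder for a real labelled dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Placeholder for a real labelled dataset: each row is one webcam frame
# reduced to 21 hand landmarks x (x, y, z) = 63 features, labelled with
# the fingerspelled letter it shows.
rng = np.random.default_rng(seed=0)
X = rng.random((600, 63))                 # landmark feature vectors
y = rng.choice(list("ABCDEF"), size=600)  # letter labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# A simple off-the-shelf classifier is enough to illustrate the idea;
# Coviu's actual model architecture has not been published.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))

# At inference time, each incoming frame's landmark vector would be
# classified the same way and the predicted letters streamed to the UI.
print("prediction:", clf.predict(X_test[:1])[0])
```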

    One contention is that this type of technology is a good first step for hearing people, but more needs to be done to fully benefit the hearing impaired: it allows hearing people to understand sign language, but not vice versa.

    Jesal Vishnuram, Action on Hearing Loss's technology manager, said: "From a deaf person's perspective, it'd be more beneficial for software to be developed which could provide automated translations of text or audio into British Sign Language (BSL) to help everyday conversations and reduce isolation in the hearing world."

    Tracking hands on video has long been difficult because finger bends and flicks of the wrist hide other parts of the hand from the camera. Combined with the speed and dynamic nature of sign language, this confused earlier versions of this kind of software.

    Google's approach maps a graph of 21 points across the fingers, palm and back of the hand, making it easier to follow a hand signal even when the hand and arm twist or two fingers touch.
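    Those 21 landmarks are exposed directly by MediaPipe's hand-tracking solution. The Python sketch below shows one way to read them from a single webcam frame; the camera index and confidence threshold are illustrative defaults rather than anything Google prescribes.

```python
import cv2
import mediapipe as mp

# MediaPipe's hand-tracking solution returns 21 (x, y, z) landmarks per
# detected hand, covering the fingers, palm and back of the hand.
with mp.solutions.hands.Hands(max_num_hands=1,
                              min_detection_confidence=0.5) as hands:
    cap = cv2.VideoCapture(0)  # default webcam; index 0 is an assumption
    ok, frame = cap.read()
    if ok:
        # MediaPipe expects RGB input, while OpenCV captures frames in BGR.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            for i, lm in enumerate(results.multi_hand_landmarks[0].landmark):
                # Coordinates are normalised to the frame's width and height.
                print(f"landmark {i}: x={lm.x:.3f} y={lm.y:.3f} z={lm.z:.3f}")
    cap.release()
```

    A downstream classifier, like the sketch above, would consume these landmark vectors frame by frame.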

    Until now, this type of software has worked only on PCs, and even then not without errors.

    In 2018, Microsoft teamed up with the National Technical Institute for the Deaf to bring a presentation translator to desktop computers in classrooms, helping students with hearing disabilities.

    In a blog post, students described having previously missed some of what their professors said because they had to keep switching attention between the human sign language interpreters and what was being written on the board.

    With all of that information presented on a single desktop screen, the problem was solved.

    Elsewhere in the world, innovators have created their own home-grown tech.

    One 25-year-old developer in Kenya has built a pair of haptic gloves that relay sign language gestures to an Android application, which reads the resulting text aloud. Roy Allela made the gloves for his hearing-impaired niece, and his innovation recently won an award from the American Society of Mechanical Engineers.
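    Allela's design has not been released as code, so the fragment below is only a hypothetical sketch of its final step: once the gloves' sensor readings have been decoded into text on the phone, that text is spoken aloud. The pyttsx3 library stands in here for Android's own text-to-speech engine.

```python
import pyttsx3

def speak(text: str) -> None:
    """Read decoded glove output aloud, standing in for the Android app."""
    engine = pyttsx3.init()
    engine.setProperty("rate", 150)  # speaking rate; illustrative value
    engine.say(text)
    engine.runAndWait()

# Hypothetical string decoded from the gloves' finger-flex sensors.
speak("Hello, how are you?")
```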

    In 2017, a partnership comprising linguists from the Deafness Cognition and Language Research Centre at University College London, the Engineering Science team at the University of Oxford, and experts at the Centre for Vision Speech and Signal Processing (CVSSP) at the University of Surrey was granted around £1m to develop AI that recognises not only hand motion and shape, but also the facial expression and body posture of the signer.

    In the same year, Lloyds Bank became the first financial services company to trial technology developed by Signly, a British Sign Language (BSL) translation startup that provides translations into BSL through augmented reality.

    Work is still ongoing, and it is unclear when AI will be able to translate sign language flawlessly, but the progress and sustained research worldwide over recent years is a step in the right direction.


    Source: BBC News

    About the author

    Niamh is a Sync NI writer with a previous background of working in FinTech and financial crime. She has a special interest in sports and emerging technologies. To connect with Niamh, feel free to send her an email or connect on Twitter.


