A team of researchers from the Barcelona Supercomputing Center (BSC-CNS) and the Universitat Politècnica de Catalunya (UPC) has developed an artificial intelligence (AI) tool that aims to bridge communication gaps for individuals who are deaf or hard of hearing. 

The tool focuses on automatic sign language translation, providing a promising solution to a common challenge faced by sign language users.


Sign Language in AI

While voice assistants like Alexa and Siri have advanced significantly, they have yet to support sign language.

This gap puts individuals who rely on sign language as their primary means of communication at a disadvantage when interacting with technology and digital services designed exclusively around spoken language.

The researchers' open-source software paves the way for improved communication accessibility, combining computer vision, natural language processing, and machine learning to advance automatic sign language translation.

The system, currently in an experimental phase, employs a machine learning architecture known as the Transformer to convert sign language sentences captured on video into written text, facilitating communication for people who rely on sign language.
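To illustrate the general idea, here is a minimal sketch of a video-to-text Transformer in PyTorch: per-frame video features feed an encoder, and a decoder emits text tokens. All dimensions, layer counts, and names below are illustrative assumptions, not the team's actual implementation.

```python
# Hypothetical sketch of the video-to-text setup the article describes.
# Feature dimensions, vocabulary size, and module names are assumptions.
import torch
import torch.nn as nn

class SignTranslationModel(nn.Module):
    def __init__(self, feat_dim=1024, d_model=512, vocab_size=8000):
        super().__init__()
        # Project per-frame video features (e.g., from a pretrained
        # vision backbone) into the Transformer's embedding space.
        self.video_proj = nn.Linear(feat_dim, d_model)
        self.token_emb = nn.Embedding(vocab_size, d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=8,
            num_encoder_layers=6, num_decoder_layers=6,
            batch_first=True,
        )
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, video_feats, text_tokens):
        # video_feats: (batch, frames, feat_dim); text_tokens: (batch, len)
        src = self.video_proj(video_feats)
        tgt = self.token_emb(text_tokens)
        # Causal mask so each output token attends only to earlier ones.
        mask = nn.Transformer.generate_square_subsequent_mask(tgt.size(1))
        hidden = self.transformer(src, tgt, tgt_mask=mask)
        return self.out(hidden)  # logits over the text vocabulary

# Toy forward pass: 2 clips of 64 frames, target sentences of 12 tokens.
model = SignTranslationModel()
feats = torch.randn(2, 64, 1024)
tokens = torch.randint(0, 8000, (2, 12))
logits = model(feats, tokens)
print(logits.shape)  # torch.Size([2, 12, 8000])
```

In a setup like this, translation at inference time would generate the output sentence one token at a time, feeding each predicted token back into the decoder.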

While initially focused on American Sign Language (ASL), the system has the potential to be adapted to other languages, given the availability of relevant data for training and translation. 

Read Also: G-7 Leaders Want to Develop an AI Framework Called the 'Hiroshima AI Process' After the Recent Summit

80 Hours of Videos

According to Laia Tarrés, a researcher at BSC and UPC, the team built on its previous work, How2Sign, which published the data needed to train such models.

That dataset comprises more than 80 hours of video in which ASL interpreters translate video tutorials, including cooking recipes and DIY tips.

Leveraging this data, the team developed new open-source software capable of learning the mapping between video and text.
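Conceptually, each training example pairs the features extracted from one signed sentence with its English transcript. The sketch below shows one plausible way to organize such pairs in PyTorch; the tokenizer, feature shapes, and sample sentences are invented for illustration and are not taken from How2Sign.

```python
# Hypothetical pairing of per-clip video features with transcripts.
import torch
from torch.utils.data import Dataset

def toy_tokenizer(text):
    # Stand-in tokenizer: maps each word to a dummy integer id.
    # A real system would use a trained subword vocabulary.
    return [hash(w) % 8000 for w in text.lower().split()]

class VideoTextPairs(Dataset):
    """Pairs precomputed per-frame features with English transcripts."""
    def __init__(self, pairs, tokenizer):
        self.pairs = pairs          # list of (features, transcript)
        self.tokenizer = tokenizer

    def __len__(self):
        return len(self.pairs)

    def __getitem__(self, idx):
        feats, transcript = self.pairs[idx]
        tokens = torch.tensor(self.tokenizer(transcript))
        return feats, tokens

# Two toy clips: random "frame features" paired with their transcripts.
pairs = [
    (torch.randn(48, 1024), "chop the onions finely"),
    (torch.randn(72, 1024), "sand the surface before painting"),
]
dataset = VideoTextPairs(pairs, toy_tokenizer)
feats, tokens = dataset[0]
print(feats.shape, tokens.shape)  # torch.Size([48, 1024]) torch.Size([4])
```

Since clips vary in length, batching examples like these would also require padding both the frame sequences and the token sequences.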

While the researchers acknowledge that there is still ample room for improvement, they consider the project a crucial step toward concrete applications that can benefit users.

The ultimate goal is to refine the tool further, paving the way for the development of accessible technologies that cater to the needs of deaf and hard-of-hearing individuals. 

The project has already been showcased at the Fundación Telefónica space in Madrid as part of 'Code and Algorithms. Sense in a Calculated World,' an exhibition of artificial intelligence projects that prominently features contributions from BSC.

It will also be featured at the Centre de Cultura Contemporània de Barcelona (CCCB) in a major exhibition on artificial intelligence set to open in October.

"This open tool for automatic sign language translation is a valuable contribution to the scientific community focused on accessibility, and its publication represents a significant step towards the creation of more inclusive and accessible technology for all," Tarrés said in a statement.

The team's findings were published on the preprint server arXiv.


