Science Daily

Software developed to convert conversation into sign language video

LONDON: A new system for the deaf has been developed that renders spoken words as sign language performed by a video model, or digital avatar. This makes spoken content far more accessible to people who are deaf or hard of hearing.

In the UK, an artificial intelligence (AI) company has developed software that uses a neural network to generate a digital character. As soon as someone speaks, the words are rendered as a video of the digital avatar performing sign language.

The resulting video looks photorealistic, as if a human interpreter were explaining the content with gestures. The conversion happens in real time, and many people can watch the video stream at once.
For this project, Ben Saunders of the University of Surrey and his colleagues trained a neural network, SignGAN, that converts spoken words into sign language. The system first produces a human skeleton pose sequence and a 3D model to drive the animation, while footage of real sign language supplied the training content and imagery.
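The article only sketches the pipeline (speech, then sign glosses, then skeletal poses, then video), so the following is a minimal illustrative stand-in, not SignGAN's actual code: every function name, the tiny gloss lexicon, and the pose records are placeholders invented here for clarity.

```python
# Hypothetical sketch of a speech-to-sign pipeline; none of these names
# come from the SignGAN project described in the article.

GLOSS_LEXICON = {
    "hello": "HELLO",
    "thank": "THANK",
    "you": "YOU",
}

def transcribe(utterance: str) -> list[str]:
    """Stand-in for the speech-recognition step; here we just tokenize text."""
    return utterance.lower().split()

def text_to_glosses(words: list[str]) -> list[str]:
    """Stand-in for the neural network mapping words to sign glosses."""
    return [GLOSS_LEXICON[w] for w in words if w in GLOSS_LEXICON]

def glosses_to_poses(glosses: list[str]) -> list[dict]:
    """Stand-in for driving the 3D skeleton: one pose record per gloss."""
    return [{"gloss": g, "frames": 30} for g in glosses]

words = transcribe("Hello thank you")
poses = glosses_to_poses(text_to_glosses(words))
print([p["gloss"] for p in poses])  # ['HELLO', 'THANK', 'YOU']
```

In the real system each stage would be a learned model producing photorealistic video rather than a dictionary lookup; the sketch only shows how the stages chain together.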

Google previously developed an app that recognizes sign language during video calls, but it had some shortcomings; the new SignGAN algorithm performs considerably better.

Naeem Ur Rehman

Pakistan's youngest blogger and the CEO of Raabta.net. He is currently a student of BS Environmental Sciences at the University of the Punjab, Lahore. He also works as a senior advisor to Aagahi.pk, Mukaalma, and Pylon TV.
