LONDON: A new system for the deaf has been developed that renders spoken words through a video model, or digital avatar. This makes it an effective way to reach people who are deaf or hard of hearing.
In the UK, an artificial intelligence (AI) company has developed neural-network software that generates a digital character. As soon as someone speaks, the words are played back as a video of the digital avatar performing sign language.
The resulting video looks realistic, as if an expert were explaining the content through gestures. The conversion happens in real time, and many people can watch the video at once.
For this project, Ben Sanders of the University of Surrey and his colleagues created a neural network that converts spoken words into sign language. The Signage software then builds a human skeleton and a 3D model from that output, while another team supplied the content and imagery for the actual signs.
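The article describes a pipeline with distinct stages: speech is transcribed, a neural network maps the words to sign-language glosses, the glosses drive a skeleton of pose keyframes, and a 3D avatar renders them as video. The sketch below illustrates that flow only in outline; every function name, data shape, and lookup table here is a hypothetical stand-in, not the actual University of Surrey or Signage implementation.

```python
# Hypothetical sketch of the described pipeline: speech -> glosses ->
# skeleton poses -> avatar rendering. All names and shapes are invented
# for illustration; the real system uses trained neural networks.

def speech_to_text(audio: bytes) -> str:
    """Stage 1: speech recognition (stubbed with a fixed transcript)."""
    return "hello world"

def text_to_glosses(text: str) -> list[str]:
    """Stage 2: in the real system, a neural network maps spoken words
    to sign-language glosses; a toy lookup stands in for it here."""
    lexicon = {"hello": "HELLO", "world": "WORLD"}
    return [lexicon.get(word, word.upper()) for word in text.split()]

def glosses_to_poses(glosses: list[str]) -> list[dict]:
    """Stage 3: each gloss becomes skeleton keyframes (joint positions)
    that would drive the 3D model."""
    return [{"gloss": g, "keyframes": [(0.0, 0.0)] * 4} for g in glosses]

def render_avatar(poses: list[dict]) -> str:
    """Stage 4: the pose sequence is rendered as avatar video frames;
    here we simply report the sequence of signs that would be shown."""
    return " -> ".join(p["gloss"] for p in poses)

transcript = speech_to_text(b"")
poses = glosses_to_poses(text_to_glosses(transcript))
print(render_avatar(poses))
```

Splitting the work this way mirrors the article's description, where one team handles the language-to-gloss translation and another supplies the skeleton, 3D model, and sign imagery.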
Google previously developed an app that shows sign language during video calls, but it had some flaws; the new Signage algorithm performs much better.