
This robotic arm translates text to sign language in real time


Project Aslan, an initiative launched by a team of engineers at the University of Antwerp (Belgium), is developing a robotic arm that can translate text into sign language in real time. This electronic interpreter could make it easier than ever for deaf people to follow any conversation or speech.

This is not the first technological device aimed at bridging sign language and spoken language. However, the solutions seen so far focus exclusively on converting sign language into audible speech, and are not capable of performing the opposite task.

There have also been proposals for the reverse direction, such as a robot Toshiba introduced in 2014, but none has come into widespread use. This new robotic arm aims to do the job in an accessible and inexpensive way, so that anyone can communicate in sign language without knowing it themselves.

The device developed by Project Aslan consists of 25 plastic parts made with a 3D printer, so that it is economical to produce and can be repaired easily anywhere. To these parts are added 16 servomotors, three motor controllers, an Arduino Due microcontroller board and other electronic components. The plastic parts take about 139 hours to print, while the final assembly of the robot takes about 10 hours.
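As a rough illustration of how such hardware might be driven (the article does not describe Project Aslan's actual firmware, so every pin number and angle below is a hypothetical placeholder), an Arduino sketch could command the 16 servomotors through the standard Servo library:

```cpp
#include <Servo.h>

// Hypothetical sketch: one servo per joint of the hand. The pin
// assignments and angles are illustrative placeholders, not
// Project Aslan's real values.
const int NUM_SERVOS = 16;
const int SERVO_PINS[NUM_SERVOS] = {
  2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 22, 23, 24, 25
};

Servo joints[NUM_SERVOS];

// Move every joint to the angles in `pose` (degrees, 0-180).
void setPose(const int pose[NUM_SERVOS]) {
  for (int i = 0; i < NUM_SERVOS; i++) {
    joints[i].write(pose[i]);
  }
}

void setup() {
  for (int i = 0; i < NUM_SERVOS; i++) {
    joints[i].attach(SERVO_PINS[i]);
  }
}

void loop() {
  // Example: a neutral, open-hand pose (placeholder angles).
  const int openHand[NUM_SERVOS] = {
    90, 90, 90, 90, 90, 90, 90, 90,
    90, 90, 90, 90, 90, 90, 90, 90
  };
  setPose(openHand);
  delay(1000);
}
```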

Another advantage of the design is that the structure can be modified easily, so it could be updated at very low cost.

The current version of the robotic arm translates written text into fingerspelling, one of the communication systems used by deaf people, in which each hand sign corresponds to a single letter. Conceptually, that makes the translation a lookup from each character to a stored hand pose, as the sketch below illustrates.
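A minimal sketch of that idea (continuing the hypothetical example above; the `LetterPose` table and the two poses shown are placeholders, not real manual-alphabet signs) might look like this:

```cpp
// Hypothetical continuation of the sketch above: spell a word by
// looking up one stored pose per letter.
struct LetterPose {
  char letter;
  int angles[NUM_SERVOS];
};

// A real table would hold one verified pose per letter of the
// manual alphabet; two placeholder entries are shown here.
const LetterPose POSES[] = {
  {'a', {20, 20, 20, 20, 160, 90, 90, 90, 90, 90, 90, 90, 90, 90, 90, 90}},
  {'b', {170, 170, 170, 170, 30, 90, 90, 90, 90, 90, 90, 90, 90, 90, 90, 90}},
};

// Play the pose for each character in sequence, holding each one
// briefly so the sign can be read.
void fingerspell(const char *word) {
  for (const char *c = word; *c != '\0'; c++) {
    for (const LetterPose &p : POSES) {
      if (p.letter == *c) {
        setPose(p.angles);
        delay(800);
        break;
      }
    }
  }
}
```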

The team is now working on extending its capabilities so that, beyond fingerspelling, it can also translate into full sign language, in which meaning is conveyed not letter by letter but through a combination of gestures, body posture and facial expressions. That will require adding an expressive face to the system. In addition, future versions will integrate a webcam and other sensors to translate spoken language, not just written messages.

[Source: New Atlas]
