AI glasses translate sign language on the fly
With the help of a Raspberry Pi Zero and cloud-based AI, 3D-printed glasses can translate sign language into spoken words.
- Daniel Schwabe
Maker Nekhil has built a pair of glasses that translate sign language into spoken language. A Raspberry Pi Camera Module 3 handles image capture for the project; it offers a 12-megapixel resolution of 4608 by 2592 pixels. The camera's images are forwarded over the Internet to the Viam cloud service. At the heart of the glasses is a Raspberry Pi Zero 2 W, equipped with a quad-core processor of Cortex-A53 cores clocked at 1 GHz and 512 MB of LPDDR2 SDRAM.
Viam can be used to equip projects with cloud-based services such as AI data processing. Here, the data is processed by an AI model that translates the hand signs into individual letters. The model, called asl-letters-yolov8, was trained to recognize the letters of American Sign Language.
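Because a detector like this classifies every camera frame, a raw stream of per-frame letters is noisy. A minimal sketch (not taken from Nekhil's project, and with hypothetical names and thresholds) of how such per-frame predictions could be smoothed into a stable letter sequence with a sliding-window majority vote:

```python
from collections import Counter, deque

def stable_letter(predictions, min_agreement=4):
    """Return the dominant letter in a window of per-frame
    predictions, or None if no letter is stable enough."""
    counts = Counter(p for p in predictions if p is not None)
    if not counts:
        return None
    letter, votes = counts.most_common(1)[0]
    return letter if votes >= min_agreement else None

# Sliding window over the last 5 frames of hypothetical model output;
# None stands for a frame with no confident detection.
window = deque(maxlen=5)
stream = ["H", "H", "H", "H", "E", "E", None, "E", "E", "E"]
letters = []
for frame_prediction in stream:
    window.append(frame_prediction)
    letter = stable_letter(window)
    # Only append when the stable letter changes, so holding a
    # sign for many frames still yields a single letter.
    if letter and (not letters or letters[-1] != letter):
        letters.append(letter)
print("".join(letters))  # HE
```

The window length and agreement threshold trade responsiveness against false positives; real values would have to be tuned against the model's frame rate.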
The letters are then converted into spoken language using a text-to-speech service from Google and played back locally through a speaker driven by a PAM8403 amplifier module connected to a USB sound card.
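Before the text-to-speech step, the recognized letters have to be assembled into an utterance. A small sketch of one way to do that (the function name and the "space" pause token are assumptions, not part of the project); the resulting string would then be handed to the text-to-speech service:

```python
def letters_to_text(letters, space_token="space"):
    """Join recognized fingerspelled letters into a sentence,
    treating a (hypothetical) pause/space token as a word break."""
    words, current = [], []
    for token in letters:
        if token == space_token:
            if current:
                words.append("".join(current))
                current = []
        else:
            current.append(token)
    if current:
        words.append("".join(current))
    return " ".join(words)

# The returned string is what would be sent to a TTS service
# (the project uses one from Google) and played over the speaker.
text = letters_to_text(["H", "I", "space", "B", "O", "B"])
print(text)  # HI BOB
```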
A demo of the glasses in action is available on Nekhil's YouTube channel.
In the future, there are plans to expand the capabilities of the glasses from individual letters to more complex elements of sign language.
If you would like to find out more about how makers optimize medical aids, you can find more information in our article.
(das)