SIGN LANGUAGE GIVEN A VOICE WITH RASPBERRY PI

In an era of rapid technical progress, impressive innovations have become commonplace. Some projects, however, stand out for their humanistic orientation and their aim of easing communication between people. An enthusiast known as Nekhil has created innovative glasses built around a Raspberry Pi single-board computer that can recognize sign language and voice it using text-to-speech.

The system uses artificial intelligence and the video stream from a camera to track and interpret hand gestures. A trained neural network recognizes individual letters and voices them immediately, allowing others to follow the conversation even without knowing sign language.
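One practical detail in a recognize-and-voice loop of this kind is that per-frame predictions flicker, so a letter is usually only spoken once it has been stable for several consecutive frames. The sketch below illustrates that idea in plain Python; the class name, frame counts, and the injected speak function are illustrative assumptions, not part of the original project.

```python
from collections import deque

class LetterSpeaker:
    """Hypothetical debouncer: voice a letter only after the classifier
    has agreed on it for several consecutive video frames."""

    def __init__(self, speak_fn, stable_frames=5):
        self.speak = speak_fn          # e.g. a text-to-speech call
        self.recent = deque(maxlen=stable_frames)
        self.last_spoken = None

    def feed(self, letter):
        """Feed one per-frame prediction; speak when it stabilises."""
        self.recent.append(letter)
        if (len(self.recent) == self.recent.maxlen
                and len(set(self.recent)) == 1
                and letter != self.last_spoken):
            self.speak(letter)
            self.last_spoken = letter

# Usage: collect spoken letters instead of calling a real TTS engine.
spoken = []
speaker = LetterSpeaker(spoken.append, stable_frames=3)
for prediction in ["A", "A", "B", "B", "B", "B", "C"]:
    speaker.feed(prediction)
print(spoken)  # → ['B']  ("A" and "C" never stabilised for 3 frames)
```

In a real device the `speak_fn` would be a text-to-speech call, and `feed` would receive one prediction per camera frame.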

To implement the project, Nekhil used Viam, an open platform focused on building smart devices with AI. He initially planned to use the latest Raspberry Pi 5, but settled on the more compact and energy-efficient Raspberry Pi Zero 2 W, which is quite up to the task.

The Camera Module 3 is mounted at the front of the glasses frame, letting it capture images and video of whatever is in front of the user. As long as the interlocutor is within the camera's field of view, the system can "see" and recognize their gestures. The frame itself was designed in Fusion 360 and 3D-printed specifically for this project.

For gesture recognition, Nekhil used a YOLOv8 computer-vision model trained to recognize hand signs corresponding to individual letters of American Sign Language (ASL). Once a letter is recognized, the system voices it via speech synthesis. Although the Viam platform the device is built on supports TensorFlow Lite models, the more capable YOLOv8 model was chosen here for more accurate gesture recognition.
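An object-detection model like YOLOv8 returns a set of detections per frame, each with a class index and a confidence score, so some post-processing is needed to turn them into a single letter. The sketch below shows one plausible way to do that; the label map, threshold, and tuple format are illustrative assumptions and do not come from the original project.

```python
# Illustrative subset of a class-index -> letter label map;
# the project's real ASL model would cover the full alphabet.
ASL_CLASSES = {0: "A", 1: "B", 2: "C"}

def best_letter(detections, threshold=0.6):
    """Pick the most confident letter from per-frame detections.

    `detections` is assumed to be a list of (class_id, confidence)
    pairs, e.g. extracted from a YOLOv8 result object. Returns None
    when nothing clears the confidence threshold.
    """
    above = [(conf, cid) for cid, conf in detections if conf >= threshold]
    if not above:
        return None
    conf, cid = max(above)  # highest-confidence detection wins
    return ASL_CLASSES.get(cid)

# Usage with mock detections: (class_id, confidence) pairs.
print(best_letter([(1, 0.55), (2, 0.82), (0, 0.70)]))  # → C
print(best_letter([(0, 0.20)]))                        # → None
```

The chosen letter would then be handed to the speech-synthesis step described above.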
