ASL Translator
Summary
I’m making a necklace-based ASL translator with a few friends, since we thought it’d be a good way to give a deaf friend more independence. It’ll look like this, but with the camera on the top edge (the wearer looks down and signs into it, and the signs are converted to speech), plus a power button, a potentiometer for volume adjustment, and a speaker inside.
Plan
- We’ll connect our camera to our Raspberry Pi with its ribbon cable and use the picamera module to verify that it works (a minimal verification sketch follows this list).
- Then, we’ll train an LSTM that maps hand/face keypoints to ASL glosses, using this video as a reference, and export it to our Pi along with the keypoint detection itself. The glosses can then be converted to (very rough) speech on the Pi with the pyttsx3 text-to-speech module (see the inference-loop sketch after this list).
- We’ll then wire the rest of our components to our Pi: the speaker and its amp with this tutorial, the battery and its charging board with this tutorial, and the rest on our own (we know how to wire a button). The main Python script doing the keypoint detection and LSTM inference will, of course, be modified to interface with these components (see the final sketch below).
- Finally, we’ll design and 3D print a case for our necklace-based device (with a loop for the paracord+buckle).
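First, a minimal sketch of the camera check, assuming the legacy picamera library (on newer Raspberry Pi OS images, picamera2 is its replacement):

```python
# Verify the ribbon-cable camera end to end: preview, then capture a still.
from time import sleep

from picamera import PiCamera

camera = PiCamera()
camera.resolution = (640, 480)
camera.start_preview()      # live preview on an attached display
sleep(2)                    # let the sensor settle on an exposure
camera.capture("test.jpg")  # a still on disk confirms the whole pipeline
camera.stop_preview()
camera.close()
```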
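Next, a rough shape for the translation loop. Only pyttsx3 is part of the plan above; MediaPipe Holistic for the hand/face keypoints, the TorchScript checkpoint `gloss_lstm.pt`, the `GLOSSES` vocabulary, and the 30-frame window are all placeholders for whatever we end up training:

```python
# Keypoints in, glosses out, speech out: classify fixed windows of frames.
import cv2
import mediapipe as mp
import numpy as np
import pyttsx3
import torch

GLOSSES = ["HELLO", "THANK-YOU", "HELP"]  # hypothetical gloss vocabulary
WINDOW = 30                               # frames of keypoints per prediction

def frame_features(results):
    # Concatenate hand and face landmarks (zero-filled when a part isn't
    # detected) so every frame yields a fixed-length vector for the LSTM.
    parts = []
    for lms, n in [(results.left_hand_landmarks, 21),
                   (results.right_hand_landmarks, 21),
                   (results.face_landmarks, 468)]:
        if lms is None:
            parts.append(np.zeros(n * 3))
        else:
            parts.append(np.array([[p.x, p.y, p.z] for p in lms.landmark]).ravel())
    return np.concatenate(parts)

model = torch.jit.load("gloss_lstm.pt")   # hypothetical exported checkpoint
model.eval()
tts = pyttsx3.init()
holistic = mp.solutions.holistic.Holistic()
cap = cv2.VideoCapture(0)

frames = []
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    frames.append(frame_features(results))
    if len(frames) == WINDOW:
        batch = torch.tensor(np.stack(frames), dtype=torch.float32).unsqueeze(0)
        with torch.no_grad():
            gloss = GLOSSES[int(model(batch).argmax())]
        tts.say(gloss)        # very rough speech, one gloss at a time
        tts.runAndWait()
        frames.clear()
```

Classifying fixed windows keeps the loop simple; if the chopped-up glosses sound too stilted, a streaming decoder would be the obvious upgrade.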
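Finally, a sketch of how the button and volume pot could hook into that script, assuming gpiozero. The Pi has no analog inputs, so the potentiometer is read through an MCP3008 ADC here; the GPIO pin, ADC channel, and the button's behavior are placeholders:

```python
# Hardware hooks: volume pot via an MCP3008 ADC, power button on GPIO 17.
from signal import pause

import pyttsx3
from gpiozero import MCP3008, Button

tts = pyttsx3.init()
pot = MCP3008(channel=0)  # potentiometer wiper -> ADC channel 0
power = Button(17)        # momentary power button on GPIO 17 (placeholder pin)

def apply_volume():
    # gpiozero reports the wiper position as 0.0-1.0, which happens to
    # match pyttsx3's volume property range.
    tts.setProperty("volume", pot.value)

def on_press():
    apply_volume()
    tts.say("translator ready")  # placeholder; the real script would
    tts.runAndWait()             # start/stop the translation loop instead

power.when_pressed = on_press
pause()  # keep the script alive, waiting for button events
```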