Mentor/s
Tolga Kaya
Participation Type
Poster
Abstract
This project aims to develop a robotic arm that demonstrates American Sign Language (ASL) letters. The system features a custom-designed, 3D-printed forearm with thirteen servo motors that enable a range of wrist and finger movements, enhancing the dexterity of the arm. To support ASL letter recognition, a dataset was created consisting of the letters A through I, captured against seventeen different colored backgrounds with five to seven variations per letter. A convolutional neural network (CNN) model was then developed using TensorFlow, Keras, and scikit-learn to process the input images and classify them into the corresponding ASL letters. The main Python program uses OpenCV and MediaPipe to analyze webcam footage. The user enters the letter they want to learn, prompting the robotic arm to perform the corresponding gesture. The program then monitors the user’s attempt to replicate the arm’s movement through the webcam, providing positive feedback when the user’s hand position matches the arm’s gesture and prompting the user to try again when it does not. This system overcomes previous limitations by allowing the arm to perform a wider range of movements, such as radial adduction and abduction, so that it accurately demonstrates ASL letter formation. Expected outcomes include expanding the dataset with additional images. Overall, this work seeks to provide a comprehensive understanding of the development of an AI-driven robotic arm for teaching ASL using advanced image processing techniques.
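To illustrate the classification stage described above, the following is a minimal sketch of a Keras CNN of the kind the abstract mentions. The input size (64x64 RGB images) and layer dimensions are assumptions for illustration only; the abstract does not specify the actual architecture.

    import tensorflow as tf

    # Hypothetical illustration: a small CNN that classifies hand images into
    # the nine ASL letters A-I. Input shape and layer sizes are assumptions.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(64, 64, 3)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(9, activation="softmax"),  # one output per letter A-I
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

Such a model, once trained on the A-I dataset, would take a preprocessed webcam frame and return a probability for each letter, which the main program could then compare against the letter the user is practicing.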
College and Major available
Computer Engineering BS, Computer Science BS
Academic Level
Undergraduate student
Location
Digital Commons & West Campus West Building University Commons
Start Day/Time
4-25-2025 12:00 PM
End Day/Time
4-25-2025 2:00 PM
Creative Commons License
This work is licensed under a Creative Commons Attribution-Noncommercial 4.0 License
Prize Categories
Most Scholarly Impact or Potential, Most Creative, Best Technology Prototype
Artificial Intelligence Robotic Arm for Teaching American Sign Language
Digital Commons & West Campus West Building University Commons
This project aims to develop a robotic arm that demonstrates American Sign Language (ASL) letters. The system features a custom-designed 3D-printed forearm with thirteen servo motors that allow for various wrist and finger movements, enhancing the dexterity of the arm. To support ASL letter recognition, a dataset was created consisting of letters from A to I, captured against seventeen different colored backgrounds with five to seven variations per letter. A convolutional neural network (CNN) model was then developed using TensorFlow, Keras, and scikit-learn to process the input images and classify them into corresponding ASL letters. The main Python program utilizes OpenCV and MediaPipe to analyze webcam footage. Users input the desired letter they want to learn, prompting the robotic arm to perform the corresponding gesture. The program then monitors the user’s replication of the arm’s movement through the webcam. It provides positive feedback if the user’s hand position matches the arm’s gesture correctly or prompts the user to try again if they did it incorrectly. This system overcomes previous limitations by allowing the arm to do various movements, such as radial adduction and abduction, accurately demonstrating the ASL letter formation. The expected outcomes include increasing the number of images. Overall, this abstract seeks to provide a comprehensive understanding of the development of an AI-driven robotic arm for teaching ASL, using advanced image processing techniques.
Students' Information
Julia Piascik, Honors Computer Science and Computer Engineering Student, graduating May 2026.
Winner, Most Creative 2025 Award
Winner, Best Technology Prototype 2025 Award