Who is Priyanjali Gupta, student behind AI sign language translation?

Priyanjali Gupta, a final-year student at VIT, created an AI model that translates American Sign Language into English in real-time using TensorFlow. Her project gained viral attention and aims to bridge communication gaps for the deaf community.

By Ground Report Desk

Priyanjali Gupta, a student behind AI sign language translation. Photograph: (@Priyanjali Gupta/GitHub)


Priyanjali Gupta, a final-year student at Vellore Institute of Technology (VIT), has captured the tech world’s attention with AI work aimed at improving inclusivity for the hearing-impaired community. The 20-year-old computer science and data science student developed a real-time AI model that translates American Sign Language (ASL) into English. Her project, which uses a webcam and AI-based object detection, gained recognition after her viral LinkedIn post received over one lakh (100,000) likes and praise from the tech community.


The idea for Gupta’s AI model came from a conversation with her mother, who encouraged her to use her engineering skills for something practical and impactful. That challenge pushed her to think about how technology could help the deaf and hearing-impaired community.

Reflecting on this moment, Gupta said, “One day, amid conversations with Alexa, the idea of inclusive technology struck me. That triggered a set of plans.”

This inspiration led her to combine her passion for computer science with a desire to help bridge the communication gap for the hearing-impaired. By February 2022, she had developed her AI model, which uses the TensorFlow object detection API and a pre-trained SSD MobileNet model. The model can recognise basic ASL signs and translate them into text in real-time, making it accessible and easy to use through a simple webcam.
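To illustrate the kind of post-processing such a pipeline involves (a sketch, not Gupta’s actual code): the TensorFlow object detection API returns, for each frame, parallel arrays of class IDs and confidence scores, which can be mapped to sign labels once they clear a confidence threshold. The label map and threshold below are illustrative assumptions.

```python
# Hedged sketch: converting one frame's object-detection output into
# caption text. The label map and threshold are illustrative
# assumptions, not details from Gupta's implementation.

# Maps detector class IDs to the six ASL signs described in the article.
CATEGORY_INDEX = {
    1: "Hello", 2: "I Love You", 3: "Thank You",
    4: "Please", 5: "Yes", 6: "No",
}

def detections_to_text(detection_classes, detection_scores, threshold=0.7):
    """Keep only the signs whose detection confidence clears the threshold."""
    return " ".join(
        CATEGORY_INDEX[cls]
        for cls, score in zip(detection_classes, detection_scores)
        if score >= threshold and cls in CATEGORY_INDEX
    )

# Example with mocked detector output; in a real pipeline these arrays
# would come from the SSD MobileNet model run on each webcam frame.
print(detections_to_text([2, 5, 1], [0.93, 0.41, 0.88]))  # → "I Love You Hello"
```

In a live setup, a loop would grab webcam frames, run the detector on each, and overlay the returned text on the video feed.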

What is her AI Sign language project?

Gupta’s AI project aims to bridge the communication gap between the deaf and hearing communities. She focused on six common ASL signs—“Hello,” “I Love You,” “Thank You,” “Please,” “Yes,” and “No”—and trained the model to recognise them using a small dataset. The model, initially trained on single frames, is now being developed to handle video inputs, allowing recognition of sign language sequences in motion. Gupta explained, “I’m researching how to use Long Short-Term Memory (LSTM) networks to improve the system’s accuracy and detect sign language in videos.”
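A minimal sketch of what an LSTM-based sign-sequence classifier could look like in Keras, where each video clip is a sequence of per-frame feature vectors (for example, hand landmark coordinates). The sequence length, feature size, and layer widths here are illustrative assumptions, not details from Gupta’s work.

```python
# Hedged sketch of an LSTM classifier over per-frame features.
# All sizes below are illustrative assumptions.
import numpy as np
import tensorflow as tf

SIGNS = ["Hello", "I Love You", "Thank You", "Please", "Yes", "No"]
SEQ_LEN = 30       # frames per clip (assumption)
N_FEATURES = 126   # e.g. 2 hands x 21 landmarks x 3 coords (assumption)

def build_sign_lstm():
    """Stack LSTM layers over the frame sequence, then classify the clip."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(SEQ_LEN, N_FEATURES)),
        tf.keras.layers.LSTM(64, return_sequences=True),
        tf.keras.layers.LSTM(32),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(len(SIGNS), activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_sign_lstm()
# One random (untrained) clip: output is a probability over the six signs.
probs = model.predict(np.random.rand(1, SEQ_LEN, N_FEATURES), verbose=0)
print(probs.shape)  # (1, 6)
```

Training such a model would require many labelled clips per sign; the single-frame object-detection approach sidesteps that by treating each sign as a static detection target.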

The project has sparked conversations about AI’s potential for social change. In an interview, Gupta emphasised that the technology is still in its early stages. “I believe understanding sign language is really big and hard. A small-scale object detection model cannot solve it,” she said. She hopes to encourage more researchers to explore AI’s potential in inclusive technology, particularly in recognising the facial expressions and shoulder movements integral to ASL.

Who is Priyanjali Gupta?

Priyanjali Gupta is a computer science and data science student from Delhi, now in her final year at VIT. Driven by a deep interest in development and research, she pursued a career in engineering. Her passion for using technology to solve real-world problems led to her work on AI-driven solutions for the deaf and hearing-impaired communities. Despite being a self-taught coder, Gupta’s determination and commitment have brought remarkable success, gaining attention from experts in the AI field.

Gupta’s AI model combines object detection and transfer learning techniques. She used the TensorFlow object detection API and a pre-trained SSD MobileNet model to create an AI system that recognises ASL signs in real time. Users can sign chosen words through a webcam, and the system translates them into English text. Gupta’s model has made a huge impact, with her viral LinkedIn post inspiring others to explore similar technologies.

Gupta shares her experience on the project, saying, “I’m self-taught with a keen interest in development and research. I seek opportunities to apply my technical knowledge to build solutions for current global problems.” Her future plans involve improving her model, incorporating video-based recognition, and building robust solutions for the deaf community.

