Real-Time Sign Language Translation Using MediaPipe: Bridging Communication Gaps Through Gesture Recognition

ORDINARY APPLICATION

Published

Filed on 13 November 2024

Abstract

Sign language serves as a vital communication method for individuals who are hearing impaired, utilizing hand gestures rather than spoken words. For those unfamiliar with sign language, understanding and interpreting these gestures can be challenging, which often leads to difficulties for deaf individuals in communicating with others. Typically, they rely on interpreters who are proficient in sign language. To bridge this communication gap and encourage greater participation from the deaf and hard-of-hearing community, as well as to facilitate their ability to express themselves freely without needing an interpreter, this project aims to develop a sign language translation program. This program will convert sign gestures into text, making it easier for users to comprehend the conveyed message. The translation process will utilize a webcam to capture sign language gestures, which will then be processed. The processed images will be compared against a dataset, and matching results will be provided.

Patent Information

Application ID: 202441087364
Invention Field: PHYSICS
Date of Application: 13/11/2024
Publication Number: 47/2024

Inventors

Name | Address | Country | Nationality
Dr. RRAJARAMESH MERUGU | Associate Professor, Department of Information Technology, Vishnu Institute of Technology, Bhimavaram, Andhra Pradesh, India. | India | India
Mr. NAGESWARA RAO ARAMANDA | Assistant Professor, Department of Computer Science and Engineering, Shri Vishnu Engineering College for Women (Autonomous), Bhimavaram, Andhra Pradesh, India. | India | India
Mr. PHANI BABU KOMARAPU | Assistant Professor, Department of Information Technology, Vishnu Institute of Technology, Bhimavaram, Andhra Pradesh, India. | India | India

Applicants

Name | Address | Country | Nationality
Vishnu Institute of Technology | Vishnu Institute of Technology, Vishnupur, Bhimavaram-534202, Andhra Pradesh, India. | India | India
Shri Vishnu Engineering College for Women | Shri Vishnu Engineering College for Women, Vishnupur, Bhimavaram, Andhra Pradesh, India. | India | India
Dr. RRAJARAMESH MERUGU | Associate Professor, Department of Information Technology, Vishnu Institute of Technology, Bhimavaram, Andhra Pradesh, India. | India | India
Mr. NAGESWARA RAO ARAMANDA | Assistant Professor, Department of Computer Science and Engineering, Shri Vishnu Engineering College for Women (Autonomous), Bhimavaram, Andhra Pradesh, India. | India | India
Mr. PHANI BABU KOMARAPU | Assistant Professor, Department of Information Technology, Vishnu Institute of Technology, Bhimavaram, Andhra Pradesh, India. | India | India

Specification

Description: Real-Time Sign Language Translation Using MediaPipe: Bridging Communication Gaps Through Gesture Recognition
Field of Invention
This invention focuses on assistive communication technology that translates sign language gestures into text using computer vision and machine learning. By capturing hand gestures through a webcam and processing them against a gesture dataset, the system enables individuals with hearing or speech impairments to communicate seamlessly without an interpreter, promoting inclusion and self-expression.
Objectives of the Invention
The primary objective of this invention is to enable seamless communication between individuals who use sign language and those unfamiliar with it, reducing reliance on interpreters and fostering more inclusive interactions. By translating sign language gestures into text in real time, the system promotes social inclusion and empowers individuals with hearing and speech impairments to participate more freely in social, educational, and professional activities. Finally, this invention aims to increase accessibility in communication, raise awareness and understanding of sign language, and reduce communication barriers in everyday life.
Background of the Invention
Sign language translation systems aim to bridge communication gaps for individuals who rely on sign language. Early research, such as "Human Hand Gesture Recognition Using a Convolution Neural Network" by Lin et al. (2014), focused on gesture recognition using convolutional neural networks. Recent advancements leverage modern machine learning, MediaPipe, and deep learning for real-time, accurate hand gesture interpretation, as seen in "Sign Language Translation" by Harini et al. (2020) and "A Vision-Based System for Recognition of Words Used in Indian Sign Language Using MediaPipe" by Adhikary et al. (2021). Researchers such as Kemkar et al. in "Sign Language to Text Conversion Using Hand Gesture Recognition" (2023) and Kushwaha et al. in "Hand Gesture Based Sign Language Recognition Using Deep Learning" (2023) have improved accessibility by converting sign language into text with high precision, particularly for Indian and American Sign Language. These systems ultimately promote inclusivity, reducing reliance on interpreters and enabling freer communication for hearing-impaired individuals.
Summary of the Invention
Sign language is a means of communication within the deaf and hard-of-hearing community, using hand movements in place of spoken words. Most people are unfamiliar with the signs and cannot tell what a given sign means, so this project helps fill that gap by making sign language understandable. It captures hand gestures with a webcam and OpenCV, processes them with MediaPipe, compares the processed data against a database, and displays the corresponding text message as output. To bridge this gap, the project proposes an innovative sign language translation program that interprets sign language gestures as text automatically and in real time. Through computer vision and machine learning, it empowers the deaf community to communicate more directly and participate fully in society. Sign language gestures are captured through a webcam, after which the program processes the images with the Python libraries OpenCV and MediaPipe. The application identifies and classifies signs against a robust dataset and returns instant textual responses corresponding to the input gestures. The chief benefit is the elimination of dependence on interpreters, making communication more accessible and inclusive while raising awareness of sign language in broader society. Ultimately, the aim is a tool that strengthens connections across diverse communities through meaningful communication. A minimal sketch of the capture step follows.
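As a concrete illustration of the capture pipeline, the following minimal Python sketch reads webcam frames with OpenCV and extracts hand landmarks with MediaPipe's Hands solution. It is an assumption-laden sketch, not code from the patent: the window title, key binding, and single-hand setting are illustrative choices.

import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands
mp_draw = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)  # default webcam, as described in the summary
with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.5) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV captures frames in BGR.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            hand = results.multi_hand_landmarks[0]
            # 21 landmarks per hand, each with x/y normalized to [0, 1].
            coords = [(lm.x, lm.y) for lm in hand.landmark]
            mp_draw.draw_landmarks(frame, hand, mp_hands.HAND_CONNECTIONS)
            # coords would next be matched against the gesture dataset.
        cv2.imshow("Sign Language Translator", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to stop capture
            break
cap.release()
cv2.destroyAllWindows()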
Detailed Description of the Invention
The proposed sign language translation model follows a systematic process for capturing, processing, and translating sign language gestures into text. First, the system uses a webcam to record real-time video of sign language gestures, with OpenCV handling the video stream and frame extraction. Each captured frame is then preprocessed to improve image quality: it is resized, normalized, and noise-reduced to maintain clarity during analysis. Hand landmarks, the features most important for gesture recognition, are recorded in a CSV (comma-separated values) file that stores the coordinates of the hand landmarks corresponding to particular sign language gestures. MediaPipe detects and tracks these landmarks in the captured frames, and the model then recognizes gestures by comparing the landmark coordinates against the data in the CSV file. Once a gesture has been identified, the system compares the processed landmarks to the CSV entries to find a match; when a match is found, the corresponding text representation of the recognized sign is retrieved from the CSV file and displayed in real time, so that users see the translation immediately. A sketch of this lookup follows below. The interface also supports user interaction, allowing users to start and halt video capture and to read the translated text on screen during that time. These features make the application easy and interactive to use, empowering deaf and hard-of-hearing users to communicate, be understood more easily, and take part in greater social interaction.
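The CSV lookup described above can be realized as a simple nearest-neighbour match. The sketch below assumes each CSV row holds a gesture label followed by 42 values (the 21 landmarks' x and y coordinates, flattened); the file name gestures.csv, the Euclidean metric, and the 0.5 threshold are illustrative assumptions, since the specification does not fix them.

import csv
import math

def load_gesture_table(path="gestures.csv"):
    # Each row: label, x0, y0, x1, y1, ..., x20, y20 (assumed layout).
    table = []
    with open(path, newline="") as f:
        for row in csv.reader(f):
            table.append((row[0], [float(v) for v in row[1:]]))
    return table

def match_gesture(coords, table, threshold=0.5):
    # coords: list of 21 (x, y) pairs from MediaPipe, flattened to R^42.
    flat = [v for point in coords for v in point]
    best_label, best_dist = None, float("inf")
    for label, stored in table:
        dist = math.dist(flat, stored)  # Euclidean distance
        if dist < best_dist:
            best_label, best_dist = label, dist
    # Reject matches that are too far from every stored gesture.
    return best_label if best_dist <= threshold else None

In the capture loop sketched earlier, a per-frame call such as label = match_gesture(coords, table) would yield the text to overlay on the video, for example with cv2.putText.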
Claims:

1. A system for "Real-Time Sign Language Translation Using MediaPipe: Bridging Communication Gaps Through Gesture Recognition", said system comprising:
(a) a system designed to reduce communication barriers between the deaf and hard-of-hearing community and the hearing population, fostering inclusivity and understanding in diverse social settings.
2. The system as claimed in claim 1, wherein, by enabling real-time translation of sign language, the model empowers individuals in the deaf community to express themselves more freely, enhancing their confidence and participation in everyday interactions.
3. The system as claimed in claim 1, wherein, by utilizing MediaPipe for precise detection of hand landmarks, the model effectively recognizes gestures from coordinate data, ensuring a high degree of accuracy in sign language interpretation.
4. The system as claimed in claim 1, wherein the system retrieves and displays the text representation of recognized gestures instantly, providing users with immediate feedback and facilitating seamless interaction.

Documents

Name | Date
202441087364-COMPLETE SPECIFICATION [13-11-2024(online)].pdf | 13/11/2024
202441087364-DECLARATION OF INVENTORSHIP (FORM 5) [13-11-2024(online)].pdf | 13/11/2024
202441087364-DRAWINGS [13-11-2024(online)].pdf | 13/11/2024
202441087364-FIGURE OF ABSTRACT [13-11-2024(online)].pdf | 13/11/2024
202441087364-FORM 1 [13-11-2024(online)].pdf | 13/11/2024
202441087364-FORM-9 [13-11-2024(online)].pdf | 13/11/2024
202441087364-REQUEST FOR EARLY PUBLICATION(FORM-9) [13-11-2024(online)].pdf | 13/11/2024
