"THE SYSTEM OF USER PREFERRED VARIABLE VOICE AND TEXT FROM SIGN LANGUAGE AND ITS METHODS THEREOF"

ORDINARY APPLICATION

Published

Filed on 4 November 2024

Abstract

Humans communicate with one another to share their thoughts, feelings, and experiences with those around them. But for those who are deaf-mute, this is not the case. Sign language provides a method of communication for deaf-mute people; by means of sign language, a deaf-mute person can communicate without acoustic techniques. The purpose of this endeavor is to create a technique for decoding sign language that offers communication to those who have trouble speaking and narrows the communication chasm between them and ordinary people. Compared with other gestures (arm, face, head, and body), the hand gesture is crucial because it conveys the user's thoughts in less time. Our idea is to capture gestures made by the hands and translate them into text and a user preferred variable voice based on age, gender and emotion. In this way a deaf-mute person and a person without any knowledge of sign language are able to communicate with each other seamlessly.

Patent Information

Application ID: 202441083964
Invention Field: PHYSICS
Date of Application: 04/11/2024
Publication Number: 45/2024

Inventors

Name | Address | Country | Nationality
Sakthish Kumaar P | Department of ECE, SRI SAIRAM ENGINEERING COLLEGE, SAI LEO NAGAR, WEST TAMBRAM, CHENNAI, PIN CODE-600044 | India | India
Devi Priya P | Department of ECE, SRI SAIRAM ENGINEERING COLLEGE, SAI LEO NAGAR, WEST TAMBRAM, CHENNAI, PIN CODE-600044 | India | India
Mohamed Ameen M | Department of ECE, SRI SAIRAM ENGINEERING COLLEGE, SAI LEO NAGAR, WEST TAMBRAM, CHENNAI, PIN CODE-600044 | India | India
Dheena Dhayalan R | Department of ECE, SRI SAIRAM ENGINEERING COLLEGE, SAI LEO NAGAR, WEST TAMBRAM, CHENNAI, PIN CODE-600044 | India | India
N Nazeeya Anjum | Assistant Professor, Department of ECE, SRI SAIRAM ENGINEERING COLLEGE, SAI LEO NAGAR, WEST TAMBRAM, CHENNAI, PIN CODE-600044 | India | India

Applicants

Name | Address | Country | Nationality
SRI SAIRAM ENGINEERING COLLEGE | Mrs. N. NAZEEYA ANJUM, ASSISTANT PROFESSOR, SRI SAIRAM ENGINEERING COLLEGE, SAI LEO NAGAR, WEST TAMBRAM, CHENNAI, PIN CODE-600044. TEL: 044-22512229, MOB: 9791059520, nazeeyaanjum.ece@sairam.edu.in | India | India
Sakthish Kumaar P | Department of ECE, SRI SAIRAM ENGINEERING COLLEGE, SAI LEO NAGAR, WEST TAMBRAM, CHENNAI, PIN CODE-600044 | India | India
Devi Priya P | Department of ECE, SRI SAIRAM ENGINEERING COLLEGE, SAI LEO NAGAR, WEST TAMBRAM, CHENNAI, PIN CODE-600044 | India | India
Mohamed Ameen M | Department of ECE, SRI SAIRAM ENGINEERING COLLEGE, SAI LEO NAGAR, WEST TAMBRAM, CHENNAI, PIN CODE-600044 | India | India
Dheena Dhayalan R | Department of ECE, SRI SAIRAM ENGINEERING COLLEGE, SAI LEO NAGAR, WEST TAMBRAM, CHENNAI, PIN CODE-600044 | India | India
N Nazeeya Anjum | Assistant Professor, Department of ECE, SRI SAIRAM ENGINEERING COLLEGE, SAI LEO NAGAR, WEST TAMBRAM, CHENNAI, PIN CODE-600044 | India | India

Specification

THE SYSTEM OF USER PREFERRED VARIABLE VOICE AND TEXT FROM SIGN LANGUAGE AND ITS METHODS THEREOF
Field of Invention
This invention is about creating a system for understanding sign language, which enables interaction between people with speech impediments and the broader population, bridging the communication gap. In comparison to other gestures (arm, face, head, and body), the hand gesture is significant since it may convey the user's thoughts in a brief amount of time. Our system captures the gesture along with the emotion of the person and provides accurate text and variable voice output accordingly.
Background
1) Alphabetical Gesture Recognition of American Sign Language using E-Voice Smart Glove
- Muhammad Saad Amin
This paper deals with hand gesture translation into speech. E-Voice is a smart glove designed to be an interpreter between deaf-mute individuals and the general public. Gesture translation is performed by idealizing the standard ASL template. The speaking disability of a mute person is overcome through the E-Voice glove by translating gestures into speech and display form. The main features of the prototype include a gesture recognizer that is portable and reduces the need for ordinary people to learn sign language gestures.
2) Sign language to digital voice conversion device
- Shivani Kute
This paper deals with a smart glove system which can convert sign language to speech output. The glove can produce artificial speech, enabling daily communication for speech-impaired persons. Compared to other gestures of the body, such as those of the face and head, hand gestures play a crucial role, as they can be expressed as promptly as an individual's reaction.
3) Aphonic's Voice: A Hand Gesture Based Approach to Convert Sign Language to Speech
- Surendra Kumar Keshari
This paper discusses a design to recognize hand gestures, as this is one of the fastest ways to communicate. It further discusses recognizing digits and performing operations, in addition to recognizing the English alphabet to form words. The paper reviewed the current state of research on applications aiming to recognize hand gestures.
4) Hand gesture recognition and voice conversion system for dumb people
- Vigneshwaran
This paper deals with vision-based techniques, in which cameras are used for gesture detection, and non-vision-based techniques, which use sensors. In that project, non-vision-based techniques are used. Most mute people are also deaf, so the voice of hearing people can be converted into sign language for them. In an emergency, a message is automatically sent to their relations or friends.
5) Sign language to speech conversion
- Aarthi M
This paper deals with work to develop a system for recognizing sign language, which provides communication between people with speech impairment and hearing people, thereby reducing the communication gap between them. In that work, a flex sensor-based gesture recognition module is developed to recognize English alphabets, and a text-to-speech synthesizer converts the corresponding text into speech.
Summary
Deaf-mute individuals often face challenges in their daily interactions with the hearing community. While sign language serves as their primary mode of communication, comprehension is limited to those who have received specialized training in this nonverbal language. Sign language, characterized by intricate hand gestures and nuanced facial expressions, conveys meaning through a complex interplay of movements and gestures. It integrates hand shapes, orientations, arm and body movements, as well as facial expressions, to articulate thoughts with fluidity and precision.
Recognizing the importance of effective communication for deaf-mute individuals, the concept of a sign language to speech conversion system emerges. Such a system aims to bridge the communication gap between deaf-mute individuals and the hearing population by translating sign language gestures into spoken language comprehensible to all. The primary objective of this endeavor is to develop and implement a robust system capable of accurately translating fingerspelling, a fundamental aspect of sign language, into audible speech.
The recognition component is tasked with deciphering and interpreting the intricate hand movements and gestures inherent in fingerspelling, leveraging advanced recognition techniques to accurately capture and analyze these gestures in real time. Beyond facilitating communication for deaf-mute individuals, hand-gesture recognition systems find applications across diverse domains. From character recognition and gesture-based television control to home automation and robotic arm manipulation, the versatility of such systems is evident. Moreover, gesture recognition technology extends its utility to assistive devices, enabling individuals with mobility impairments to control wheelchairs with intuitive hand movements.
In essence, the development of a sign language to speech conversion system represents a significant step towards fostering inclusivity and accessibility for deaf-mute individuals. By harnessing cutting-edge recognition and synthesis techniques, such systems hold the potential to enhance communication and empower individuals with hearing impairments to engage more fully in society.
Objectives
The primary aim of this product is to devise a comprehensive system tailored
for the understanding of sign language, thus fostering interaction between
individuals grappling with speech impediments and the wider populace. This
endeavor seeks to bridge the communication chasm prevalent in our society. In
contrast to other forms of non-verbal communication, such as gestures involving
the arm, face, head, and body, hand gestures wield paramount significance due
to their ability to encapsulate the user's thoughts swiftly and succinctly.
Delving deeper, this initiative entails the development of a sophisticated
technological framework capable of deciphering the intricate nuances embedded
within sign language. Recognizing the complexity of hand movements inherent in
sign language, this system necessitates the employment of advanced algorithms
encompassing image processing, pattern recognition, and machine learning.
Mastery over these intricate details is pivotal in ensuring the creation of a robust
and dependable system.
Moreover, the system's efficacy hinges on its aptitude for facilitating real-time
interaction. Given that sign language conversations often unfold in the immediacy
of the moment, the system must possess the agility to promptly process hand
gestures and furnish timely feedback or responses. Consequently, minimizing
latency and maximizing responsiveness constitute pivotal objectives in the system's
design.
Furthermore, adaptability and personalization emerge as key tenets in the development process. Acknowledging the diversities prevalent in sign language, ranging from regional dialects to individual idiosyncrasies, necessitates a flexible framework that can accommodate these variations seamlessly. This might involve permitting users to input custom gestures, fine-tuning recognition algorithms to accommodate differing signing styles, or tailoring feedback mechanisms to align with individual preferences.
By encapsulating these multifaceted dimensions, the envisioned system aspires not
merely to decode sign language but to cultivate an inclusive environment wherein
individuals with speech impediments can engage in meaningful interactions with
the broader populace, thus fostering empathy, understanding, and mutual respect.
Thus, this research aspires to make significant strides in the field of assistive
technologies by developing an innovative, comprehensive system for sign
language interpretation. Our objective is not only to advance the technical
capabilities in this area but also to contribute to a more inclusive society where
every individual has the opportunity to communicate freely and effectively.
Brief Descriptions of Drawings
Figure 1 - FLOW CHART OF SIGN LANGUAGE TO SPEECH AND TEXT MODULE
Figure 1 explains the flowchart of the flex sensor. To start with, the Arduino board serves as a central processing unit that coordinates the various components of the system. The Arduino receives input signals from sensors capturing sign language gestures. These sensors could be flex sensors, accelerometers, or any other type of sensor capable of detecting hand movements or gestures. Once the Arduino receives the input signals from the flex sensor, it processes this data to interpret the sign language gestures. This may involve analyzing the amplitude, frequency, or pattern of the gestures to recognize specific signs. Flex sensors play a crucial role in recognizing and interpreting sign language gestures. Each sign in sign language involves specific hand shapes, movements, and orientations. Flex sensors help capture these movements by detecting the bending and flexing of the fingers and hand joints. The data from the flex sensors is then analyzed to identify the gestures being made. Using programmed algorithms, the Arduino matches the captured gestures with predefined sign language gestures stored in its memory or database. It identifies the corresponding sign for each detected gesture. Upon recognizing a sign, the Arduino converts it into text format. This involves mapping the recognized sign to its corresponding word or phrase in the spoken language. Finally, the Arduino controls the output device, typically a speaker or headphones, to play the synthesized speech corresponding to the recognized sign language gesture.
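To make this flow concrete, the following is a minimal, hypothetical Arduino sketch of the pipeline: it reads five flex sensors, finds the nearest stored gesture template, and prints the matched word over serial, which a speech module or host application could then voice. Pin assignments, template values, and the match threshold are illustrative assumptions, not values taken from this specification.

```cpp
// Hypothetical sketch of the Figure 1 flow: read flex sensors, match the
// readings against stored gesture templates, and emit the recognized word
// as text over serial for a downstream speech stage.

const int NUM_FLEX = 5;                       // one flex sensor per finger (assumed)
const int flexPins[NUM_FLEX] = {A0, A1, A2, A3, A4};

struct GestureTemplate {
  int flex[NUM_FLEX];    // expected ADC readings for each finger
  const char *word;      // spoken-language word for this sign
};

// Illustrative templates only; real values would come from calibration.
GestureTemplate gestures[] = {
  {{200, 800, 800, 800, 800}, "HELLO"},
  {{800, 200, 200, 800, 800}, "THANK YOU"},
};
const int NUM_GESTURES = sizeof(gestures) / sizeof(gestures[0]);

void setup() {
  Serial.begin(9600);
}

void loop() {
  int reading[NUM_FLEX];
  for (int i = 0; i < NUM_FLEX; i++) {
    reading[i] = analogRead(flexPins[i]);     // 0..1023
  }

  // Find the stored template closest to the current reading.
  long bestDist = -1;
  int bestIndex = -1;
  for (int g = 0; g < NUM_GESTURES; g++) {
    long dist = 0;
    for (int i = 0; i < NUM_FLEX; i++) {
      long d = reading[i] - gestures[g].flex[i];
      dist += d * d;
    }
    if (bestDist < 0 || dist < bestDist) {
      bestDist = dist;
      bestIndex = g;
    }
  }

  // Accept the match only if it is reasonably close (threshold assumed).
  if (bestIndex >= 0 && bestDist < 40000L) {
    Serial.println(gestures[bestIndex].word); // text output; speech stage voices this
  }
  delay(200);
}
```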
Figure 2 - SCHEMATIC DIAGRAM OF FLEX SENSORS AND
TACTILE SENSOR INTEGRATION WITH ARDUINO UNO
Figure 3 - SCHEMATIC DIAGRAM OF 3-AXIS ACCELEROMETER INTEGRATION WITH ARDUINO UNO
Detailed explanation of the development with relevance to the
architecture diagram
Gesture detection:
Gesture recognition forms the cornerstone of the system, serving as the
mechanism through which sign language gestures are discerned and
comprehended. This pivotal functionality is executed through the utilization of
an input device, notably the glove model, which captures the intricate hand
movements inherent in sign language communication. At its core, the process of
gesture recognition involves a multifaceted approach encompassing several key
steps. Initially, the system undertakes the task of extracting pertinent features
from the input data, which comprises the captured sign language gesture from
flex sensor. These features may encompass various aspects of the gesture, such
as the position, orientation, and movement of the fingers and hand. Following
the extraction of features, the system endeavors to map these discerned
characteristics to their corresponding signs within the lexicon of sign language.
This mapping process relies heavily on sophisticated algorithms that are adept at
recognizing patterns and discerning subtle variations within the captured
gestures. Machine learning techniques, including neural networks and support vector machines, may be employed to facilitate this mapping process, enabling the system to learn and adapt to a diverse array of sign language gestures. Furthermore, the efficacy of the gesture recognition system hinges on its ability to operate in real time, facilitating seamless and fluid communication between users. Achieving low latency and high accuracy is paramount, ensuring that the system can promptly interpret and respond to user gestures with minimal delay. In essence, gesture recognition serves as the linchpin of the system, enabling individuals with speech impediments to convey their thoughts and ideas effectively through the medium of sign language. By harnessing advanced technologies and algorithms, the system endeavors to bridge the communication gap between users, fostering inclusivity, understanding, and mutual respect within society.
Arduino UNO:
A microcontroller development board called Arduino Uno is based on the ATmega328P microcontroller. It is part of the Arduino ecosystem and is known for its simplicity and versatility, making it a popular choice for hobbyists, students, and professionals in the field of electronics and programming. The Arduino UNO has three main categories of pins, namely digital pins, analog pins, and power pins; a brief usage sketch follows the list below.
1. Digital Pins (D0-D13):
• The digital pins are used for both input and output operations and can be configured as digital inputs or digital outputs.
• D0 and D1 pins - these pins are used for serial communication with other devices and can also be used for programming.
• D0 to D13 pins - these pins serve as general-purpose digital input and output pins.
2. Analog Pins (A0-A5):
• These pins measure voltage levels within a specific range (from 0 to 5 volts) and convert them into digital values.
• Analog pins can also be used for reading analog voltage levels from sensors and other devices.
3. Power Pins:
• 5V - provides a regulated 5-volt power supply for external components.
• 3.3V - provides a 3.3-volt power supply for external components.
• Vin - the voltage input pin. An external power supply can be connected to this pin if more voltage is needed than the Arduino's onboard regulator provides.
• GND (Ground) - these pins are used as the ground reference for all power and signal connections.
4. Other Pins and Connections:
• Reset (RESET) - this pin is used to reset the Arduino when it is pulled LOW.
• Crystal Oscillator Pins (XTAL1 and XTAL2) - these pins are connected to an external crystal oscillator for accurate timekeeping (typically 16 MHz).
• ARef (Analog Reference) - this pin allows an external reference voltage to be supplied for the analog inputs.
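The sketch below briefly illustrates the pin categories listed above: a digital output, a digital input with the internal pull-up enabled, and an analog input read as a 0-1023 value. Pin numbers are arbitrary examples, not assignments from this specification.

```cpp
// Minimal pin-usage sketch for the categories listed above.

const int LED_PIN    = 13;   // digital output (on-board LED on the Uno)
const int BUTTON_PIN = 2;    // digital input
const int SENSOR_PIN = A0;   // analog input

void setup() {
  Serial.begin(9600);
  pinMode(LED_PIN, OUTPUT);
  pinMode(BUTTON_PIN, INPUT_PULLUP);          // use the internal pull-up resistor
}

void loop() {
  // Light the LED while the button pulls the input LOW.
  digitalWrite(LED_PIN, digitalRead(BUTTON_PIN) == LOW ? HIGH : LOW);
  Serial.println(analogRead(SENSOR_PIN));     // 0..1023 for 0..5 V
  delay(100);
}
```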
Flex sensor:
A flex sensor or bend sensor is a sensor that measures the amount of deflection or bending. Usually, the sensor is stuck to a surface, and the resistance of the sensor element is varied by bending the surface. Since the resistance is directly proportional to the amount of bend, it can be used as a goniometer, and is often called a flexible potentiometer. A flex sensor changes its resistance value depending upon the amount of bend applied to the sensor. By measuring the resistance, we determine how much the sensor is being bent. An unflexed sensor has 10K to 30K ohm resistance, and when it is bent, the resistance value increases to about 100K ohm. A voltage-divider reading sketch follows the specifications below.
• Flat Resistance - 25k Ohms
• Resistance Tolerance - ±30%
• Bend Resistance Range - 45k to 125k Ohms
• Power Rating - 1 Watt Peak
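As a sketch of how such a sensor might be read, the code below assumes the flex sensor forms the upper leg of a voltage divider with a fixed 47k resistor to ground and the midpoint on A0; the resistor value, pin choice, and the linear resistance-to-angle mapping are assumptions rather than design values from this specification.

```cpp
// Minimal flex-sensor reading sketch, assuming a voltage divider:
// 5 V -- flex sensor -- A0 -- 47k resistor -- GND.

const int FLEX_PIN = A0;
const float VCC = 5.0;          // supply voltage
const float R_FIXED = 47000.0;  // fixed divider resistor (ohms, assumed)
const float R_FLAT = 25000.0;   // flat resistance (ohms)
const float R_BENT = 125000.0;  // approximate resistance at full bend (ohms)

void setup() {
  Serial.begin(9600);
}

void loop() {
  int adc = analogRead(FLEX_PIN);               // 0..1023
  float v = adc * VCC / 1023.0;                 // voltage at divider midpoint
  float rFlex = R_FIXED * (VCC / v - 1.0);      // solve the divider for sensor resistance
  // Map resistance linearly onto an approximate bend angle (0..90 degrees).
  float angle = (rFlex - R_FLAT) * 90.0 / (R_BENT - R_FLAT);
  Serial.print("R = "); Serial.print(rFlex);
  Serial.print(" ohm, bend ~ "); Serial.println(angle);
  delay(100);
}
```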
Force sensor:
Force sensors are also called tactile sensors. A force sensor is a type of transducer, specifically a force transducer. It converts an input mechanical force such as load, weight, tension, compression or pressure into another physical variable, in this case into an electrical output signal that can be measured, converted and standardized. As the force applied to the force sensor increases, the electrical signal changes proportionally. Force transducers have become an essential element in many industries, from automotive, high-precision manufacturing, aerospace and defense, industrial automation, and medical and pharmaceuticals to robotics, where reliable and high-precision measurement is paramount. Most recently, with advancements in collaborative robots (cobots) and surgical robotics, many novel force measurement applications are emerging. "Tactile" means touch: something that is designed to be distinguishable by touch and noticeable to the touch. Tactile sensors measure touch information based on physical contact with the environment. The touch sensor architecture is based on biological skin touch sensing, which detects a range of mechanical stimulations as well as temperature changes. Touch sensors are used in robotics, computer hardware, and security systems, to mention a few. Tactile sensors act as a switch: upon contact, pressure, or force, they activate and behave like a closed switch, and when the contact pressure is released, they behave like an open switch.
3-axis accelerometer:
An accelerometer is a tool that measures proper acceleration. Proper acceleration is the acceleration (the rate of change of velocity) of a body in its own instantaneous rest frame; this is different from coordinate acceleration, which is acceleration in a fixed coordinate system. Accelerometers can be used to measure vehicle acceleration. Accelerometers can be used to measure vibration on cars, machines, buildings, process control systems and safety installations. They can also be used to measure seismic activity, inclination, machine vibration, dynamic distance and speed with or without the influence of gravity. Accelerometers that measure gravity, wherein an accelerometer is specifically configured for use in gravimetry, are called gravimeters.
The accelerometer used within the gesture recognition system is employed as a tilt sensing element, used for finding the hand movement and orientation. It measures static as well as dynamic acceleration. Static forces include gravity, while dynamic forces can include vibrations and movement. Accelerometers can measure acceleration on one, two, or three axes. The sensor has a g-select input, which switches the measurement range of the accelerometer between ±1.5g and ±6g. The accelerometer has a signal conditioning unit with a single-pole low-pass filter. Provision for temperature compensation, self-testing, and 0g-detect (for detecting linear free fall) is also present.
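The following sketch illustrates tilt sensing with an analog three-axis accelerometer of this kind. The pin assignments, zero-g offset, and sensitivity are assumed values for a generic 3.3 V analog part, not figures taken from this specification.

```cpp
// Sketch of static tilt sensing from an analog 3-axis accelerometer.
#include <math.h>

const int X_PIN = A1, Y_PIN = A2, Z_PIN = A3;
const float ZERO_G = 1.65;       // volts at 0 g (assumed)
const float SENS = 0.8;          // volts per g in the +/-1.5 g range (assumed)

float readG(int pin) {
  float v = analogRead(pin) * 5.0 / 1023.0;
  return (v - ZERO_G) / SENS;    // convert voltage to acceleration in g
}

void setup() {
  Serial.begin(9600);
}

void loop() {
  float gx = readG(X_PIN), gy = readG(Y_PIN), gz = readG(Z_PIN);
  // Static tilt angles (roll and pitch) derived from the gravity vector.
  float roll  = atan2(gy, gz) * 180.0 / PI;
  float pitch = atan2(-gx, sqrt(gy * gy + gz * gz)) * 180.0 / PI;
  Serial.print("roll="); Serial.print(roll);
  Serial.print(" pitch="); Serial.println(pitch);
  delay(100);
}
```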
Hand tracking module:
The hand tracking module is a crucial component dedicated to the precise detection and monitoring of the user's hand movements within the captured sign language input. It operates by leveraging a combination of advanced sensors including flex sensors, force sensors, and accelerometers. These sensors work synergistically to capture the intricate nuances of hand gestures, facilitating accurate and reliable tracking. Through the utilization of flex sensors, the module can discern the bending and flexing of the fingers, enabling it to decipher the specific hand shapes and configurations inherent in sign language. Additionally, force sensors provide valuable insight into the pressure exerted by the user's hand, further enhancing the granularity of gesture recognition. Moreover, the inclusion of accelerometers enables the module to capture dynamic aspects of hand movements such as acceleration and orientation changes, ensuring comprehensive tracking capabilities. Collectively, these sensor technologies empower the hand tracking module to precisely locate and track hand gestures, thereby facilitating seamless interaction and communication in sign language.
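One plausible way the module could bundle these readings is sketched below: raw flex, force, and accelerometer values are packed into a fixed-length, normalized feature vector for the later recognition stages. The channel counts and the 0-1023 ADC scale are assumptions.

```cpp
// Sketch of a feature vector the hand tracking module might assemble.

struct HandFrame {
  int flex[5];     // one bend reading per finger (ADC counts)
  int force[2];    // tactile/force readings between adjacent fingers
  int accel[3];    // raw x, y, z accelerometer readings
};

// Normalize a raw frame into a fixed-length feature vector in [0, 1]
// so that later matching treats all channels on a common scale.
void toFeatures(const HandFrame &f, float out[10]) {
  for (int i = 0; i < 5; i++) out[i]     = f.flex[i]  / 1023.0;
  for (int i = 0; i < 2; i++) out[5 + i] = f.force[i] / 1023.0;
  for (int i = 0; i < 3; i++) out[7 + i] = f.accel[i] / 1023.0;
}

void setup() { Serial.begin(9600); }

void loop() {
  // Example values standing in for real sensor reads.
  HandFrame f = {{400, 800, 810, 790, 805}, {120, 30}, {510, 512, 700}};
  float feat[10];
  toFeatures(f, feat);
  for (int i = 0; i < 10; i++) { Serial.print(feat[i], 3); Serial.print(' '); }
  Serial.println();
  delay(500);
}
```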
Emotion recognition module:
In this module we recognize the emotion of the person by capturing their voice using a microphone. We analyse this voice input with our trained emotion detection model, which provides corresponding scores for all the emotions it was trained on. The emotion with the maximum score is sent back to the translation module.
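The "highest score wins" step can be sketched as below. Here classifyEmotion() is a hypothetical stand-in for the trained model (which would typically run on a host computer attached to the microphone), and its scores are placeholders rather than real model output.

```cpp
// Minimal sketch of selecting the emotion with the maximum score.
#include <cstdio>

const char *EMOTIONS[] = {"happy", "sad", "angry", "neutral"};
const int NUM_EMOTIONS = 4;

// Hypothetical model output: one confidence score per emotion.
void classifyEmotion(const float *audioFeatures, float scores[]) {
  (void)audioFeatures;                      // a real model would use these
  float demo[] = {0.62f, 0.10f, 0.08f, 0.20f};
  for (int i = 0; i < NUM_EMOTIONS; i++) scores[i] = demo[i];
}

int main() {
  float features[1] = {0.0f};               // placeholder audio features
  float scores[NUM_EMOTIONS];
  classifyEmotion(features, scores);

  int best = 0;
  for (int i = 1; i < NUM_EMOTIONS; i++)
    if (scores[i] > scores[best]) best = i; // arg-max over emotion scores

  std::printf("detected emotion: %s (%.2f)\n", EMOTIONS[best], scores[best]);
  return 0;
}
```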
Feature extraction module:
The feature extraction module plays a pivotal role in the system by
extracting pertinent characteristics from the tracked hand movements. Through
sophisticated techniques such as hand shape analysis, motion trajectory analysis,
and hand posture recognition, this module captures distinctive features that are
indicative of different sign gestures.
For instance, certain sign gestures, such as the letters M, N, and T, exhibit
similarities in their hand movements. Likewise, the gestures for the letters U
and V share common characteristics. To address the challenge of distinguishing
between these similar gestures, tactile sensors are employed. By integrating
tactile sensors into the system, particularly in areas where gestures overlap or
resemble each other, accuracy in recognizing these letters is significantly
enhanced.
The tactile sensors serve to provide additional data points that augment
the feature extraction process, enabling the system to discriminate more
effectively between subtle variations in hand movements. Consequently, this
refinement in the feature extraction stage contributes to improved accuracy and
robustness in sign language recognition, enhancing the overall performance and usability of the system.
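A hedged sketch of this disambiguation idea follows: once the flex sensors have matched the shared fist-like shape of M, N, and T, tactile contact readings decide which letter it is. The contact-to-letter rules, thresholds, and pin choices below are illustrative simplifications, not the exact ASL definitions or values from this specification.

```cpp
// Sketch of resolving the similar letters M/N/T using tactile contact pads.

const int TACTILE_INDEX_PIN  = A4;   // contact pad between thumb and index finger
const int TACTILE_MIDDLE_PIN = A5;   // contact pad between thumb and middle finger
const int CONTACT_THRESHOLD  = 300;  // ADC level treated as "touching" (assumed)

// Called only after the flex sensors have already matched the shared
// fist-like hand shape of M, N and T. The mapping is illustrative.
char resolveMNT() {
  bool indexContact  = analogRead(TACTILE_INDEX_PIN)  > CONTACT_THRESHOLD;
  bool middleContact = analogRead(TACTILE_MIDDLE_PIN) > CONTACT_THRESHOLD;
  if (indexContact && middleContact) return 'M';  // thumb tucked under more fingers
  if (indexContact)                  return 'N';
  return 'T';
}

void setup() { Serial.begin(9600); }

void loop() {
  Serial.println(resolveMNT());
  delay(300);
}
```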
Database module:
The database module consists of words, phrases, or sentences in various spoken languages with various voice modulation schemes, such as based on gender (either male or female), based on age (for example, either boy or man) and based on emotion (for example, happy, sad, or excited).
It also comprises the above-mentioned words, phrases, or sentences in various spoken languages in text format.
Translation module:
The translation module is integrated with our database module and is tasked with converting recognized sign gestures into textual representations or spoken language equivalents. It operates by mapping the identified signs to their respective words, phrases, or sentences in the desired spoken language. This process involves intricate algorithms that analyze the recognized signs and correlate them with linguistic elements in the target language. For instance,
upon recognizing a sign gesture representing a particular word or phrase in sign
language, the translation module swiftly identifies the corresponding textual or
spoken language equivalent. This mapping enables seamless communication
between individuals using sign language and those who rely on spoken language
for communication.
Furthermore, the translation module may incorporate machine learning
techniques to improve accuracy and adaptability over time. By continuously
refining its algorithms based on user interactions and feedback, the module
enhances its ability to accurately translate sign gestures into spoken language,
thereby facilitating effective communication across linguistic barriers.
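A minimal, self-contained sketch of this mapping step is shown below: the recognized sign label is looked up in a small table of entries keyed by sign, gender, age group, and emotion, and the matching text and voice clip are returned, falling back to the first available clip when no exact variant exists. The entry contents, field names, and clip names are illustrative assumptions about how the database module's records could be organized.

```cpp
// Sketch of mapping a recognized sign to text plus a preferred voice variant.
#include <cstring>
#include <cstdio>

struct Entry {
  const char *sign, *text;                       // gesture label and text output
  const char *gender, *ageGroup, *emotion, *clip; // voice-variant tags and audio clip
};

Entry table[] = {
  {"HELLO",     "Hello",     "female", "child", "happy", "hello_f_child_happy.wav"},
  {"HELLO",     "Hello",     "male",   "adult", "sad",   "hello_m_adult_sad.wav"},
  {"THANK YOU", "Thank you", "male",   "adult", "happy", "thanks_m_adult_happy.wav"},
};

const Entry *translate(const char *sign, const char *gender,
                       const char *age, const char *emotion) {
  const Entry *fallback = nullptr;
  for (const Entry &e : table) {
    if (std::strcmp(e.sign, sign) != 0) continue;
    if (!fallback) fallback = &e;                // first clip found for this sign
    if (!std::strcmp(e.gender, gender) && !std::strcmp(e.ageGroup, age) &&
        !std::strcmp(e.emotion, emotion))
      return &e;                                 // exact preference match
  }
  return fallback;                               // closest available, or nullptr
}

int main() {
  const Entry *e = translate("HELLO", "female", "child", "happy");
  if (e) std::printf("text: %s  voice clip: %s\n", e->text, e->clip);
  return 0;
}
```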
User interface module:
The user interface module serves as a crucial component within the
system, offering a user-friendly platform for interaction between the user and
the technology. It incorporates various elements, including a visual display to
present recognized signs, translated text, and voice output. Through this
interface, users can seamlessly interact with the system, receiving feedback in a
comprehensible manner. Whether through visual cues or auditory prompts, the
user interface module effectively communicates the recognized sign gestures,
translated text, or corresponding spoken language output. This enables users to
engage with the system intuitively, facilitating smooth and efficient
communication. By providing a well-designed interface, the system enhances
accessibility and inclusivity, catering to users with diverse needs and preferences. Additionally, the interface may offer customization options, allowing users to tailor the display or auditory feedback according to their individual requirements, thereby optimizing the user experience.
Claims
We Claim,
Claim [1]: A sign language translation hand glove device and its method comprising a user preferred variable voice with emotion output and a text output.
Claim [2]: As said in Claim [1], our technology recognises hand gestures and displays the recognised hand gestures, followed by providing the appropriate variable voice according to the user, based on gender and age, with the emotion appropriate to the situation.
Claim [3]: As stated in Claim [2], our hand glove recognises hand gestures by combining flex sensors to record finger bends, a tactile sensor to record force applied to nearby fingers, and an accelerometer sensor to record hand movements in space.
Claim [4]: Our trained emotion recognition model will recognize the emotion of the person by capturing the nature of the tone and the pace with which the person communicates.
Claim [5]: Our translation module provides the precise words that the user wants to convey to another person, with the appropriate emotion as stated in Claim [4].
Claim [6]: Our technology, which integrates the output of Claim [5] along with the hand glove technology as said in Claim [3], will provide accurate variable voice and text communication for sign language.

Documents

Name | Date
202441083964-Form 1-041124.pdf | 06/11/2024
202441083964-Form 2(Title Page)-041124.pdf | 06/11/2024
202441083964-Form 3-041124.pdf | 06/11/2024
202441083964-Form 5-041124.pdf | 06/11/2024
202441083964-Form 9-041124.pdf | 06/11/2024
