SMART GLOVE SYSTEM

ORDINARY APPLICATION

Published

Filed on 13 November 2024

Abstract

Disclosed herein is a smart glove system (100) for translating sign language into spoken and written language. The system (100) comprises a glove (102) configured to process and transmit input signals from a plurality of sensors (104), the sensors (104) configured to detect finger and hand movements. The sensors (104) further comprise a plurality of flex sensors (106) configured to detect the bending and movement of each finger individually, an inertial measurement unit (108) configured to track the orientation and motion of the hand. The system (100) also includes a microcontroller (112) configured to receive and process data collected from the flex sensors (106), inertial measurement unit (108), and pressure sensors (110) to translate sign language gestures. The microcontroller (112) further comprises a data input module (114) configured to receive raw sensor data from the flex sensors (106), inertial measurement unit (108), and pressure sensors (110); a processing module (116).

Patent Information

Application ID: 202441087674
Invention Field: COMPUTER SCIENCE
Date of Application: 13/11/2024
Publication Number: 47/2024

Inventors

Name: KARTHIK
Address: ASSISTANT PROFESSOR, DEPARTMENT OF ECE, NMAM INSTITUTE OF TECHNOLOGY, NITTE (DEEMED TO BE UNIVERSITY), NITTE - 574110, KARNATAKA, INDIA
Country: India
Nationality: India

Applicants

Name: NITTE (DEEMED TO BE UNIVERSITY)
Address: 6TH FLOOR, UNIVERSITY ENCLAVE, MEDICAL SCIENCES COMPLEX, DERALAKATTE, MANGALURU, KARNATAKA 575018
Country: India
Nationality: India

Specification

Description:

FIELD OF DISCLOSURE
[0001] The present disclosure generally relates to a smart glove system, and more specifically, to a system for translating sign language into spoken and written language.
BACKGROUND OF THE DISCLOSURE
[0002] Deaf and mute communities often face significant communication barriers when interacting with individuals who do not understand sign language. This lack of understanding often requires the presence of skilled interpreters, but interpreters are not always available, creating an accessibility challenge for both personal and professional communication.
[0003] Traditional sign language interpretation methods depend heavily on human interpreters or manual tools, which are not always practical or accessible in all environments. The advent of technology has brought forward some solutions, but many are either expensive, require large and complex setups, or do not offer real-time communication capabilities.
[0004] This smart glove-based system addresses the communication gap by providing a wearable, portable, and real-time sign language translation device. By integrating advanced sensors and leveraging machine learning algorithms, this system allows users to translate gestures into spoken or written language instantly. This invention enhances the independence and confidence of deaf and mute individuals, fostering more inclusive interactions.
[0005] Existing systems for sign language translation face several key drawbacks, primarily in their reliance on slow and complex computational algorithms. These systems often struggle to differentiate between subtle, similar gestures, resulting in reduced accuracy and slower response times. Additionally, they tend to lack machine learning capabilities, meaning they cannot adapt to new gestures or regional variations, limiting their effectiveness in diverse cultural contexts where sign language can vary significantly.
[0006] Conventional systems also suffer from a lack of customization and real-time feedback. Most conventional systems only support a fixed set of predefined gestures, offering no flexibility for users to personalize or add their own gestures. The absence of haptic or visual feedback also leads to higher error rates during communication, as users have no way to immediately verify whether their gestures were correctly interpreted. Furthermore, many systems suffer from processing delays, which disrupt the natural flow of conversation and make communication less fluid.
[0007] Prior technologies are often bulky, expensive, and reliant on external devices like computers or mobile phones for processing. This limits portability and makes them inconvenient for users who need real-time translation on the go. Moreover, these systems are not equipped to handle multilingual outputs or customizable language settings, reducing their utility in multilingual environments. The lack of adaptability, combined with these design flaws, results in lower overall accuracy, particularly when dealing with more complex or conversational gestures.
[0008] The present disclosure offers a smart glove system that provides real-time, accurate translation of sign language into spoken or written language. It incorporates advanced sensors and machine learning algorithms, enabling the system to learn and adapt to new gestures over time. This enhances its flexibility and usability across different sign languages and regional variations, making it a more inclusive solution compared to existing systems.
[0009] Unlike conventional solutions, this invention eliminates the need for external devices such as computers or mobile phones for processing. The system is fully integrated into the glove, making it lightweight, portable, and highly practical for everyday use. Additionally, it provides real-time haptic or visual feedback to users, ensuring greater accuracy and reducing errors in gesture recognition.
[0010] The system is designed to be customizable, allowing users to add their own gestures and adapt the device to their specific needs. It also supports multilingual outputs, making it suitable for use in multilingual environments where communication barriers are common. With its real-time processing and adaptive learning capabilities, the system offers a seamless communication experience.
[0011] The system solves several major challenges faced by existing technologies, including slow processing times, lack of portability, and limited adaptability. By integrating machine learning and sensor technologies, the smart glove system ensures higher accuracy, faster translation, and greater flexibility, ultimately providing a more efficient and user-friendly tool for sign language communication.
[0012] Thus, in light of the above-stated discussion, there exists a need for a smart glove system for translating sign language into spoken and written language.
SUMMARY OF THE DISCLOSURE
[0013] The following is a summary description of illustrative embodiments of the invention. It is provided as a preface to assist those skilled in the art to more rapidly assimilate the detailed design discussion which ensues and is not intended in any way to limit the scope of the claims which are appended hereto in order to particularly point out the invention.
[0014] According to illustrative embodiments, the present disclosure focuses on a system for translating sign language into spoken and written language, which overcomes the above-mentioned disadvantages or provides the users with a useful or commercial choice.
[0015] An objective of the present disclosure is to provide a smart glove system that can translate sign language into spoken and written language in real-time, enhancing communication for deaf and mute individuals.
[0016] Another objective of the present disclosure is to develop a system that eliminates the need for human interpreters, enabling independent communication for users without the assistance of third parties.
[0017] Another objective of the present disclosure is to incorporate advanced sensor technology, such as flex sensors and accelerometers, to ensure precise detection of finger and hand movements for accurate gesture recognition.
[0018] Another objective of the present disclosure is to integrate machine learning algorithms that allow the system to learn new gestures and adapt to regional or personalized sign languages, making it highly customizable for users.
[0019] Another objective of the present disclosure is to create a portable and lightweight glove system that is comfortable to wear and can be used in various environments, ensuring ease of use for everyday communication.
[0020] Yet another objective of the present disclosure is to provide multilingual support and customization, allowing users to select their preferred language for output, making the system suitable for diverse, multilingual settings.
[0021] In light of the above, in one aspect of the present disclosure, a smart glove system for translating sign language into spoken and written language is disclosed herein. The system comprises a glove configured to process and transmit input signals from a plurality of sensors integrated into the glove, sensors integrated into the glove, positioned at various points on the fingers and palm, and configured to detect finger and hand movements. The sensors further comprise a plurality of flex sensors connected to the fingers, positioned along each finger, and configured to detect the bending and movement of each finger individually, an inertial measurement unit comprising an accelerometer and gyroscope, connected to the glove, positioned on the back of the hand, and configured to track the orientation and motion of the hand, and a plurality of pressure sensors connected to the glove, positioned on the fingertips and palm, and configured to detect touch and grip pressure. The system also includes a microcontroller connected to the sensors, and configured to receive and process data collected from the flex sensors, inertial measurement unit, and pressure sensors to translate sign language gestures. The microcontroller further comprises a data input module, integrated into the microcontroller, and configured to receive raw sensor data from the flex sensors, inertial measurement unit, and pressure sensors, a processing module, integrated into the microcontroller, and configured to analyze and interpret sensor data using a gesture recognition algorithm, a gesture recognition module, integrated into the microcontroller, and configured to convert recognized gestures into text, and a text-to-speech module, integrated into the microcontroller, and configured to translate recognized text into spoken language. The system also includes a user device, connected to the microcontroller via a wireless communication network, and configured to receive and display the translated gesture data as written and spoken language. The system also includes a display unit integrated into the glove, and configured to present real-time visual output of translated gestures in the form of written text.
[0022] In one embodiment, the system further comprises conductive wires integrated within the fabric of the glove, and configured to connect and transmit sensor signals efficiently to the microcontroller.
[0023] In one embodiment, the system further comprises vibration motors, integrated into the glove, and configured to provide haptic feedback during gesture recognition for user interaction.
[0024] In one embodiment, the glove is made of flexible lycra spandex fabric, providing a comfortable fit and allowing flexibility for sensor integration and user movement.
[0025] In one embodiment, the system further comprises a real-time gesture recognition algorithm implemented on the microcontroller, and configured to process and interpret sensor data instantly, providing immediate spoken and written output.
[0026] In one embodiment, the system further comprises a power supply unit, connected to the microcontroller and sensors, and configured to provide extended power and recharge capabilities for extended use.
[0027] In one embodiment, the user device further comprises a speaker, and configured to output the recognized speech in real-time.
[0028] In one embodiment, the communication network comprises a Bluetooth module connected to the microcontroller, and configured to transmit processed data wirelessly to an external smartphone application for display and further interaction.
[0029] In one embodiment, the system further comprises a mobile application configured to receive translated gestures from the glove and display them as text and speech output, enhancing user interaction.
[0030] In light of the above, in another aspect of the present disclosure, a method for translating sign language into spoken and written language is disclosed herein. The method comprises processing and transmitting input signals from a plurality of sensors via a glove. The method also includes detecting finger and hand movements via sensors. The method also includes detecting the bending and movement of each finger individually via a plurality of flex sensors. The method also includes tracking the orientation and motion of the hand via an inertial measurement unit. The method also includes detecting touch and grip pressure via a plurality of pressure sensors. The method also includes receiving and processing data collected from the flex sensors, inertial measurement unit, and pressure sensors to translate sign language gestures via a microcontroller. The method also includes receiving raw sensor data from the flex sensors, inertial measurement unit, and pressure sensors via a data input module. The method also includes analyzing and interpreting sensor data using a gesture recognition algorithm via a processing module. The method also includes converting recognized gestures into text via a gesture-to-text module. The method also includes translating recognized text into spoken language via a text-to-speech module. The method also includes receiving and displaying the translated gesture data as written and spoken language via a user device. The method also includes presenting real-time visual output of the translated gestures via a display unit.
[0031] These and other advantages will be apparent from the present application of the embodiments described herein.
[0032] The preceding is a simplified summary to provide an understanding of some embodiments of the present invention. This summary is neither an extensive nor exhaustive overview of the present invention and its various embodiments.
[0033] The summary presents selected concepts of the embodiments of the present invention in a simplified form as an introduction to the more detailed description presented below. As will be appreciated, other embodiments of the present invention are possible utilising, alone or in combination, one or more of the features set forth above or described in detail below.
[0034] These elements, together with the other aspects of the present disclosure and various features are pointed out with particularity in the claims annexed hereto and form a part of the present disclosure. For a better understanding of the present disclosure, its operating advantages, and the specified object attained by its uses, reference should be made to the accompanying drawings and descriptive matter in which there are illustrated exemplary embodiments of the present disclosure.
BRIEF DESCRIPTION OF THE DRAWINGS
[0035] To describe the technical solutions in the embodiments of the present disclosure or in the prior art more clearly, the following briefly describes the accompanying drawings required for describing the embodiments or the prior art. Apparently, the accompanying drawings in the following description merely show some embodiments of the present disclosure, and a person of ordinary skill in the art can derive other implementations from these accompanying drawings without creative efforts. All of the embodiments or the implementations shall fall within the protection scope of the present disclosure.
[0036] The advantages and features of the present disclosure will become better understood with reference to the following detailed description taken in conjunction with the accompanying drawing, in which:
[0037] FIG. 1 illustrates a block diagram of a smart glove system, in accordance with an exemplary embodiment of the present disclosure;
[0038] FIG. 2 illustrates a block diagram of a sign language translator having a smart glove system, in accordance with an exemplary embodiment of the present disclosure;
[0039] FIG. 3 illustrates a flow chart of a sign-to-speech converter, in accordance with an exemplary embodiment of the present disclosure;
[0040] FIG. 4 illustrates the integration of MIT App Inventor 2 in developing the mobile application for the smart glove-based sign language translator system, in accordance with an exemplary embodiment of the present disclosure.
[0041] FIG. 5 illustrates a system flow diagram of a method, outlining the sequential steps involved in the smart glove system for translating sign language into spoken and written language, in accordance with an exemplary embodiment of the present disclosure.
[0042] Like reference numerals refer to like parts throughout the description of several views of the drawing.
[0043] The smart glove system is illustrated in the accompanying drawings, in which like reference letters indicate corresponding parts in the various figures. It should be noted that the accompanying figure is intended to present illustrations of exemplary embodiments of the present disclosure. This figure is not intended to limit the scope of the present disclosure. It should also be noted that the accompanying figure is not necessarily drawn to scale.
DETAILED DESCRIPTION OF THE DISCLOSURE
[0044] The following is a detailed description of embodiments of the disclosure depicted in the accompanying drawings. The embodiments are in such detail as to communicate the disclosure. However, the amount of detail offered is not intended to limit the anticipated variations of embodiments; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present disclosure.
[0045] In the following description, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the present disclosure. It may be apparent to one skilled in the art that embodiments of the present disclosure may be practiced without some of these specific details.
[0046] Various terms as used herein are shown below. To the extent a term is used, it should be given the broadest definition persons in the pertinent art have given that term as reflected in printed publications and issued patents at the time of filing.
[0047] The terms "a" and "an" herein do not denote a limitation of quantity, but rather denote the presence of at least one of the referenced items.
[0048] The terms "having", "comprising", "including", and variations thereof signify the presence of a component.
[0049] Referring now to FIG. 1 to FIG. 5, various exemplary embodiments of the present disclosure are described. FIG. 1 illustrates a block diagram of a smart glove system 100, in accordance with an exemplary embodiment of the present disclosure.
[0050] The system 100 may include a glove 102, sensors 104, a plurality of flex sensors 106, an inertial measurement unit 108, a plurality of pressure sensors 110, a microcontroller 112, a data input module 114, a processing module 116, a gesture recognition module 118, a text-to-speech module 120, a user device 122 and a display unit 128.
[0051] The glove 102 is a wearable device specifically designed to translate sign language gestures into spoken or written language in real time, thereby bridging the communication gap for deaf and mute individuals. The glove 102 incorporates multiple advanced components, including flex sensors 106, accelerometers, gyroscopes, and pressure sensors 110, which are critical for accurately capturing the movements and positions of the fingers and hands. These components work together to process gestures efficiently and provide immediate translation output. In the preferred embodiment of the present invention, the glove 102 is constructed from lycra, spandex or similar stretchable, durable fabrics, which not only ensures a comfortable fit for the user but also enhances the glove 102's durability and compatibility with the embedded sensors.
[0052] In one embodiment of the present invention, the system further comprises conductive wires or threads, integrated within the fabric of the glove 102, and configured to connect and transmit sensor signals efficiently to the microcontroller 112. These conductive elements ensure that sensor signals are transmitted efficiently, minimizing signal loss and enhancing the accuracy of gesture recognition. This integration helps in maintaining a lightweight and unobtrusive design, making the glove 102 practical for continuous use.
[0053] In one embodiment of the present invention, the system further comprises vibration motors, integrated into the glove 102, and configured to provide haptic feedback during gesture recognition for user interaction. This feature significantly enhances the user experience by offering real-time feedback, confirming that the gesture has been correctly recognized by the system. The haptic feedback also improves the reliability of the translation process, reducing potential communication errors.
[0054] In one embodiment of the present invention, the glove 102 is made of flexible lycra, spandex fabric, providing a comfortable fit and allowing flexibility for sensor integration and user movement. This flexibility ensures that the sensors 104 can accurately detect and process hand movements without restricting the user's natural gestures. The fabric's compatibility with embedded sensors makes it ideal for long-term use, ensuring that the sensors remain securely in place, even during extended wear, without compromising comfort.
[0055] Additionally, the glove 102 incorporates a plurality of pressure sensors 110, strategically located on the fingertips or the palm. These sensors detect the amount of pressure applied by the user during gestures that involve touching or gripping, providing another layer of information that enhances the system's ability to interpret more nuanced gestures. The pressure sensors 110 work in conjunction with the flex sensors 106 and IMU to deliver a comprehensive understanding of the user's hand movements.
[0056] The sensors 104 are critical components for detecting and interpreting hand gestures made by the user. They are responsible for capturing the various physical movements of the hand and fingers. In the preferred embodiment of the present invention, the sensors 104 are flex sensors 106, inertial measurement units (IMUs), and pressure sensors 110, each serving distinct roles in translating sign language into digital signals.
[0057] The plurality of flex sensors 106 is embedded within the glove 102 to detect the bending and movement of the user's fingers. These sensors capture the degree of flexion in each finger, which is essential for interpreting different sign language gestures. The flex sensors 106 are designed to detect the bending of fingers as the user performs sign language gestures. These sensors are embedded along the length of each finger and operate by measuring changes in resistance as the fingers flex. The flex sensors 106 ensure precise tracking of finger positions, which is essential for translating the intricate finger movements that form the basis of sign language. The system uses a plurality of five flex sensors 106, one for each finger, to capture the full range of hand and finger movements. In the preferred embodiment of the present invention, the plurality of flex sensors 106 consists of five sensors, one for each finger, and they work by measuring the change in resistance as the fingers bend, which is then translated into gesture data.
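The following is a minimal illustrative sketch of how the flex sensor 106 readings described above could be acquired on an Arduino-class microcontroller 112, assuming each flex sensor is wired as a simple voltage divider into an analog input. The pin assignments, divider resistor value, and calibration resistances are assumptions for illustration and are not taken from the present disclosure.
```cpp
// Minimal sketch: read five flex sensors wired as voltage dividers on A0-A4
// and convert each reading into an approximate bend angle. All electrical
// constants below are assumed values, not values from the specification.
const int FLEX_PINS[5] = {A0, A1, A2, A3, A4};  // one sensor per finger
const float VCC = 5.0;             // supply voltage
const float R_DIV = 47000.0;       // fixed divider resistor (ohms), assumed
const float R_STRAIGHT = 25000.0;  // sensor resistance when flat, assumed
const float R_BENT = 100000.0;     // sensor resistance at ~90 deg bend, assumed

// Convert one raw ADC reading into an approximate bend angle in degrees.
float readBendAngle(int pin) {
  int adc = analogRead(pin);                 // 0..1023
  if (adc == 0) return 90.0;                 // fully bent / open divider
  float vOut = adc * VCC / 1023.0;           // divider output voltage
  float rFlex = R_DIV * (VCC / vOut - 1.0);  // flex sensor resistance
  // Linear map from the assumed resistance range onto 0..90 degrees of bend.
  float angle = (rFlex - R_STRAIGHT) * 90.0 / (R_BENT - R_STRAIGHT);
  return constrain(angle, 0.0, 90.0);
}

void setup() {
  Serial.begin(9600);
}

void loop() {
  for (int i = 0; i < 5; i++) {
    Serial.print(readBendAngle(FLEX_PINS[i]));
    Serial.print(i < 4 ? '\t' : '\n');
  }
  delay(50);  // roughly 20 Hz sampling
}
```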
[0058] The inertial measurement unit 108 is a combination of accelerometers and gyroscopes that tracks the orientation and motion of the hand. It detects changes in hand position, rotation, and speed, helping to determine the spatial context of the gesture. The inertial measurement unit 108, composed of an accelerometer and a gyroscope, tracks the overall motion and orientation of the hand. This unit measures the speed, direction, and angle of the hand as it moves through space. The IMU is placed on the back of the hand, allowing it to monitor both static and dynamic gestures, such as waving or rotating the hand. By combining data from the flex sensors 106 and IMU, the system is capable of recognizing a wide variety of gestures with high accuracy. In the preferred embodiment of the present invention, the IMU is placed on the back of the hand and continuously monitors the dynamic movements of the hand, ensuring precise gesture recognition.
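A minimal sketch of reading the inertial measurement unit 108 over I2C is shown below, assuming an MPU-6050 class accelerometer/gyroscope part. The specific part number, I2C address, register map, and scale factors are illustrative assumptions; the disclosure only states that an accelerometer and a gyroscope are used.
```cpp
// Minimal sketch: wake an assumed MPU-6050 and stream accelerometer and
// gyroscope readings over the serial port.
#include <Wire.h>

const uint8_t MPU_ADDR = 0x68;  // default MPU-6050 I2C address (assumed part)

// Read two consecutive bytes from the current I2C burst as one signed 16-bit value.
int16_t read16() {
  int16_t hi = Wire.read();
  int16_t lo = Wire.read();
  return (hi << 8) | lo;
}

void setup() {
  Serial.begin(9600);
  Wire.begin();
  Wire.beginTransmission(MPU_ADDR);
  Wire.write(0x6B);             // PWR_MGMT_1 register
  Wire.write(0);                // clear sleep bit to wake the sensor
  Wire.endTransmission(true);
}

void loop() {
  // Accel X/Y/Z, temperature, and gyro X/Y/Z occupy 14 bytes from register 0x3B.
  Wire.beginTransmission(MPU_ADDR);
  Wire.write(0x3B);
  Wire.endTransmission(false);
  Wire.requestFrom(MPU_ADDR, (uint8_t)14);

  float ax = read16() / 16384.0;   // +/-2 g full scale: 16384 LSB per g
  float ay = read16() / 16384.0;
  float az = read16() / 16384.0;
  read16();                        // skip the temperature word
  float gx = read16() / 131.0;     // +/-250 deg/s full scale: 131 LSB per deg/s
  float gy = read16() / 131.0;
  float gz = read16() / 131.0;

  Serial.print("accel[g]: ");  Serial.print(ax); Serial.print(' ');
  Serial.print(ay); Serial.print(' '); Serial.println(az);
  Serial.print("gyro[dps]: "); Serial.print(gx); Serial.print(' ');
  Serial.print(gy); Serial.print(' '); Serial.println(gz);
  delay(100);
}
```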
[0059] The plurality of pressure sensors 110 is integrated into the glove 102, positioned at key points such as the fingertips or the palm, to detect the force applied during gestures. These sensors are crucial for capturing grip or touch-based gestures that are commonly used in sign language. In the preferred embodiment of the present invention, the plurality of pressure sensors 110 consists of five sensors, each located at the fingertips or palm, allowing the system to detect the subtle differences in pressure applied during various gestures.
[0060] The microcontroller 112 is the central processing unit that governs the data flow and computation of the system. It is responsible for receiving sensor input, running gesture recognition algorithms, and translating the data into spoken or written language in real time. In the preferred embodiment of the present invention, the microcontroller 112 is a high-performance unit such as an Arduino or Raspberry Pi, equipped with at least 32 KB of flash memory and multiple I/O pins for sensor integration. This ensures that the system can handle large amounts of data while maintaining the speed necessary for real-time translation.
[0061] In one embodiment of the present invention, the system further comprises a real-time gesture recognition algorithm implemented on the microcontroller 112, and configured to process and interpret sensor data instantly, providing immediate spoken and written output. The algorithm analyzes the input from flex sensors 106, IMUs, and pressure sensors 110, identifying specific patterns corresponding to sign language gestures. It then maps these gestures to predefined words or phrases stored in a database, generating immediate spoken or written output. This real-time processing ensures that users experience no delays during communication, making interactions seamless and fluid.
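A minimal sketch of the pattern-matching idea described above is shown below: the current feature vector is compared against a small table of stored gesture templates and mapped to the nearest predefined word. The feature layout (five flex bend angles plus three accelerometer axes), the template values, and the rejection threshold are illustrative assumptions; the disclosure describes the recognition algorithm only at a functional level.
```cpp
// Minimal sketch: nearest-template gesture matching against a tiny database.
#include <math.h>

const int N_FEATURES = 8;   // 5 flex bend angles (deg) + 3 accel axes (g), assumed layout
const int N_GESTURES = 3;

struct GestureTemplate {
  const char *word;
  float features[N_FEATURES];
};

// Hypothetical calibration data for three example gestures.
GestureTemplate gestureDb[N_GESTURES] = {
  {"HELLO", {10, 10, 10, 10, 10, 0.0, 0.0, 1.0}},
  {"YES",   {80, 80, 80, 80, 80, 0.0, 0.0, 1.0}},
  {"NO",    {10, 80, 80, 10, 10, 0.0, 1.0, 0.0}},
};

// Return the word whose template lies nearest (Euclidean distance) to the
// current readings, or NULL when nothing is within the rejection threshold.
const char *recognizeGesture(const float current[N_FEATURES]) {
  const float MAX_DISTANCE = 25.0;   // assumed rejection threshold
  const char *best = NULL;
  float bestDist = MAX_DISTANCE;
  for (int g = 0; g < N_GESTURES; g++) {
    float sum = 0;
    for (int f = 0; f < N_FEATURES; f++) {
      float d = current[f] - gestureDb[g].features[f];
      sum += d * d;
    }
    float dist = sqrt(sum);
    if (dist < bestDist) {
      bestDist = dist;
      best = gestureDb[g].word;
    }
  }
  return best;
}

void setup() {
  Serial.begin(9600);
}

void loop() {
  // In the full system these values would come from the sensor-reading stages.
  float sample[N_FEATURES] = {12, 9, 11, 10, 8, 0.0, 0.1, 0.9};
  const char *word = recognizeGesture(sample);
  Serial.println(word != NULL ? word : "(no match)");
  delay(1000);
}
```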
[0062] The data input module 114 is responsible for capturing and transmitting gesture data from the sensors 104 embedded in the glove 102. It collects information from flex sensors 106, IMUs, and pressure sensors 110, providing the raw data needed for gesture interpretation. In the preferred embodiment of the present invention, the data input module 114 consists of conductive wires or threads that efficiently transmit the sensor signals to the processing unit, ensuring accurate and real-time data collection.
[0063] The processing module 116 is the central unit that manages the incoming data and prepares it for interpretation by the gesture recognition algorithm. It filters and digitizes the raw sensor inputs to generate structured data, which can be interpreted by the system. In the preferred embodiment of the present invention, the processing module 116 is implemented on a microcontroller 112 such as an Arduino or Raspberry Pi, providing the computational power necessary to handle multiple sensor inputs in real-time.
[0064] The gesture recognition module 118 is a critical software component that processes the data collected from the sensors and identifies specific hand gestures. It runs machine learning algorithms or pattern recognition techniques to map the collected data to predefined gestures in the system's database. In the preferred embodiment of the present invention, the gesture recognition module 118 is powered by machine learning algorithms implemented on the microcontroller 112, enabling the system to interpret and learn new gestures over time.
[0065] The text-to-speech module 120 is responsible for converting the recognized gestures into spoken language. It takes the output from the gesture recognition module 118 and translates it into an audible form, allowing real-time spoken communication. In the preferred embodiment of the present invention, the text-to-speech module 120 consists of a speaker system integrated with the microcontroller 112 to generate real-time audio output corresponding to the recognized gestures.
[0066] In one embodiment of the present invention, the system further comprises a power supply unit, connected to the microcontroller 112 and sensors 104, and configured to provide reliable and extended power for continuous use of the smart glove 102 system. The power supply unit, typically a Lithium Polymer (Li-Po) or Lithium-Ion battery, is designed to provide sufficient voltage and current to support the glove 102's various components, including the microcontroller 112, sensors 104, and haptic feedback systems. It ensures uninterrupted operation of the device during communication sessions and is designed with recharge capabilities, allowing the glove 102 to be conveniently recharged for repeated use.
[0067] Additionally, the power supply unit is optimized for energy efficiency, ensuring that the glove 102 can function for extended periods without frequent recharging. This is particularly useful in daily use scenarios where the user may need the glove 102 for long communication sessions without the need to frequently recharge. The integration of power management features ensures that the system intelligently conserves energy during idle periods, further enhancing the overall efficiency and usability of the device.
[0068] The user device 122 is a key component that interacts with the smart glove 102 system, providing an interface for displaying or transmitting the translated gestures as text or speech output. It serves as a communication bridge between the smart glove 102 and the user, ensuring that the translated sign language gestures are accessible in spoken or written formats. In the preferred embodiment of the present invention, the user device 122 is a smartphone, tablet, or any external device capable of receiving and displaying data from the glove 102's microcontroller 112 via Bluetooth or Wi-Fi communication.
[0069] In one embodiment of the present invention, the user device 122 further comprises a speaker, and is configured to output the recognized speech in real-time. It allows the translated gestures to be heard immediately, ensuring seamless communication for users who need spoken language output. This real-time audio output enhances communication by providing immediate feedback without delays.
[0070] In one embodiment of the present invention, the communication network comprises a Bluetooth module connected to the microcontroller 112, which is configured to transmit processed data wirelessly to an external smartphone application for display and further interaction. The Bluetooth module facilitates a wireless connection between the glove 102 and the user device 122, allowing the system to function without physical connections, thereby improving portability and ease of use. This wireless communication ensures the data is transmitted efficiently and accurately, enhancing the user experience.
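A minimal sketch of this wireless transmission path is shown below, assuming an HC-05 style Bluetooth serial module attached to the microcontroller 112 over a software serial port. The module type, pin choice, and baud rate are assumptions for illustration; the disclosure only requires a Bluetooth module transmitting processed data to a smartphone application.
```cpp
// Minimal sketch: send each recognized word to a paired smartphone app over
// an assumed HC-05 Bluetooth serial module on pins 10/11.
#include <SoftwareSerial.h>

SoftwareSerial btSerial(10, 11);  // RX, TX (assumed wiring)

void setup() {
  btSerial.begin(9600);  // common default baud rate for HC-05 modules
}

// Send one recognized word, newline-terminated so the receiving application
// can split the incoming stream into individual messages.
void sendRecognizedWord(const char *word) {
  btSerial.println(word);
}

void loop() {
  // In the full system this would be called after gesture recognition;
  // a fixed word is sent once per second here for illustration only.
  sendRecognizedWord("HELLO");
  delay(1000);
}
```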
[0071] In one embodiment of the present invention, the system further comprises a mobile application configured to receive translated gestures from the glove 102 and display them as text and speech output, enhancing user interaction. The application allows the user to view the recognized gestures in written form and hear the corresponding speech, making the system more versatile. The mobile application also allows users to customize settings, such as preferred languages or gesture profiles, further improving accessibility and usability.
[0072] The display unit 128 is an optional but important component that provides a visual representation of the translated gestures. It serves as a user interface, allowing the user or others around them to see the text output of the translated sign language gestures. This is particularly useful in environments where audio output might not be practical or when the user prefers visual feedback. In the preferred embodiment of the present invention, the display unit 128 is a small-sized OLED screen (such as a 0.96-inch OLED) integrated into the glove 102 or connected to an external user device 122, such as a smartphone or tablet, providing a clear and real-time visual display of the recognized gestures. The display unit 128 plays a critical role in enhancing the versatility of the system by allowing both text and speech outputs, ensuring that the system is adaptable to various communication needs. It ensures that users can interact with the system in real time, verifying the accuracy of the gesture recognition and the corresponding text output.
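A minimal sketch of driving such a display is shown below, assuming a 128x64 SSD1306 OLED on I2C at address 0x3C and the Adafruit SSD1306 library; the display controller, address, and library choice are illustrative assumptions beyond the 0.96-inch OLED mentioned above.
```cpp
// Minimal sketch: show the most recently recognized word on an assumed
// SSD1306 OLED as real-time visual feedback.
#include <Wire.h>
#include <Adafruit_GFX.h>
#include <Adafruit_SSD1306.h>

Adafruit_SSD1306 oled(128, 64, &Wire, -1);  // width, height, no reset pin

void setup() {
  oled.begin(SSD1306_SWITCHCAPVCC, 0x3C);  // assumed I2C address
  oled.clearDisplay();
  oled.setTextSize(2);
  oled.setTextColor(SSD1306_WHITE);
}

// Clear the screen and print the recognized text at the top-left corner.
void showRecognizedText(const char *text) {
  oled.clearDisplay();
  oled.setCursor(0, 0);
  oled.print(text);
  oled.display();
}

void loop() {
  showRecognizedText("HELLO");  // placeholder word for illustration
  delay(1000);
}
```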
[0073] FIG. 2 illustrates a block diagram of a sign language translator having a smart glove system 100, in accordance with an exemplary embodiment of the present disclosure.
[0074] FIG. 2 illustrates a block diagram of the smart glove system 100, which operates by capturing, processing, and translating sign language gestures into spoken or written language. This system combines various hardware components and sophisticated software algorithms to enable seamless real-time translation of sign language. The system 100 integrates sensors within a wearable glove 102, a processing unit, and wireless communication network 126, all working together to provide a practical, portable solution for communication.
[0075] The core of the system is the smart glove 102, which is embedded with different sensors to detect the user's hand and finger movements. The glove is constructed from flexible, durable materials like lycra or spandex, which ensure comfort and ease of movement. Integrated into the glove are flex sensors, which measure the bending of the fingers and provide critical information for gesture recognition. Additionally, the glove incorporates pressure sensors 110, strategically placed on the fingertips and palms, to detect the amount of force exerted during specific gestures, particularly those involving gripping or touch-based movements. These sensors work in conjunction with an inertial measurement unit 108, which tracks the orientation, motion, and speed of the user's hand using a combination of an accelerometer and a gyroscope.
[0076] At the heart of the processing unit is the Arduino microcontroller 112, which serves as the system's central processing hub. The microcontroller receives data from the sensors embedded in the glove and runs advanced gesture recognition algorithms. These algorithms analyze the sensor data to identify specific sign language gestures, translating raw data into digital signals that can be interpreted by the system. The Bluetooth transmitter 202 connected to the microcontroller then transmits the processed data wirelessly to an external device, eliminating the need for physical connections and enhancing portability.
[0077] The external device, such as a user device 122, serves as the primary interface for displaying or audibly outputting the translated gestures. The Bluetooth receiver 204 on the user device receives the data from the glove and relays it to a text-to-speech module 120 and a display unit 128. The text-to-speech module converts the recognized gestures into spoken language, which is then output through a speaker 124 for real-time communication. Alternatively, the gestures can be displayed as text on the display unit, providing a visual representation of the translated gestures. This dual-mode output ensures that the system is flexible enough to cater to various communication needs.
[0078] Overall, the smart glove system 100 provides a practical, efficient, and portable solution for bridging the communication gap between deaf or mute individuals and those who do not understand sign language. By combining advanced sensor technology, real-time data processing, and wireless communication, the system enables seamless, real-time translation of sign language gestures into spoken or written language, significantly enhancing communication accessibility.
[0079] FIG. 3 illustrates a flow chart of a sign-to-speech converter, in accordance with an exemplary embodiment of the present disclosure.
[0080] At 302, the system begins by initializing all components necessary for operation. This includes preparing the flex sensors, contact sensors, and the accelerometer (part of the inertial measurement unit) embedded within the glove. These sensors are crucial for detecting and capturing hand and finger movements. Once the initialization process is complete, the system is ready to start collecting data.
[0081] At 304, the system starts reading input data from the sensors. The flex sensors detect the bending of the fingers, while the contact sensors measure pressure or touch applied during the gesture. The accelerometer tracks the orientation and movement of the hand in space. Together, these sensors provide comprehensive data about the user's hand movements, which are crucial for accurately interpreting the sign language gestures.
[0082] At 306, the collected sensor data is processed and mapped to specific letters or signs. Using predefined algorithms, the system interprets the gestures and converts them into corresponding letters. These letters are temporarily stored in memory as part of the ongoing word formation. The system's gesture recognition algorithm, as described in the disclosure, relies on real-time signal processing and machine learning techniques to accurately identify the hand movements and convert them into meaningful output.
[0083] At 308, the system checks whether the current sequence of gestures marks the end of a word. This check is essential for determining when the system has recognized a complete word. If the end of the word has not been reached, the system loops back and continues reading additional gestures. However, if the system detects that the word is complete, it proceeds to the next steps, which involve generating output.
[0084] At 310, the recognized word is displayed on an LCD screen. This visual output allows users to see the translated sign language gestures in text format, ensuring accessibility in scenarios where spoken language may not be practical or preferred. The LCD display provides immediate visual feedback, making the communication process more inclusive and user-friendly.
[0085] At 312, simultaneously with the text display, the system uses a text-to-speech module to convert the recognized word into spoken language. The TTS module takes the recognized word and generates an audible output through a speaker, allowing users to communicate verbally in real-time. This dual-output system, which includes both text and speech, makes the sign-to-speech converter highly versatile and accessible in a variety of settings, from public spaces to personal interactions.
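A minimal sketch of the control flow of FIG. 3 is shown below: letters recognized from the sensor data are accumulated into a word, and when an end-of-word gesture is detected the completed word is sent to both the display and the speech output. The readLetter() stub, the end-of-word marker, and the serial-print output stages are hypothetical stand-ins for the recognition, display, and text-to-speech stages described above.
```cpp
// Minimal sketch of the flow in FIG. 3: read gesture -> map to letter ->
// accumulate until end of word -> display and speak the word.
const char END_OF_WORD = ' ';   // assumed end-of-word gesture code
char wordBuf[32];
int wordLen = 0;

// Stand-in for steps 304-306 (read sensors, map gesture to a letter):
// cycles through the letters of "HI" plus an end-of-word mark for demonstration.
char readLetter() {
  static const char demo[] = "HI ";
  static int i = 0;
  char c = demo[i];
  i = (i + 1) % 3;
  delay(500);
  return c;
}

void setup() {
  Serial.begin(9600);
}

void loop() {
  char letter = readLetter();

  if (letter != END_OF_WORD) {           // step 308: end of word reached?
    if (wordLen < (int)sizeof(wordBuf) - 1) {
      wordBuf[wordLen++] = letter;       // keep accumulating letters
    }
    return;                              // loop back for more gestures
  }

  wordBuf[wordLen] = '\0';               // word complete
  Serial.print("DISPLAY: ");             // step 310: show text on the screen
  Serial.println(wordBuf);
  Serial.print("SPEAK:   ");             // step 312: hand the word to the TTS stage
  Serial.println(wordBuf);
  wordLen = 0;                           // reset buffer for the next word
}
```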
[0086] FIG. 4 illustrates the integration of MIT App Inventor 2 in developing the mobile application for the smart glove-based sign language translator system 100, in accordance with an exemplary embodiment of the present disclosure.
[0087] MIT App Inventor 2, a web-based platform developed by the Massachusetts Institute of Technology (MIT), allows users to create Android applications using a visual, drag-and-drop interface, making it ideal for rapid prototyping. In the context of the invention, this platform is used to create the mobile app that facilitates seamless communication between the glove and external devices, enabling the translation of sign language gestures into text and speech.
[0088] The figure highlights the blocks-based interface of the platform, showing how Bluetooth connectivity and text-to-speech functionalities are implemented. The Bluetooth blocks handle the reception of data from the glove's sensors, including flex sensors, contact sensors, and the accelerometer. This data corresponds to the user's gestures and is processed by the mobile app. The text-to-speech block then converts the processed text into speech, providing real-time auditory feedback, allowing the system to bridge communication gaps instantly by speaking out the recognized gestures.
[0089] By employing MIT App Inventor 2, the invention achieves real-time data processing and seamless communication. The app, created using the platform, enables the system to receive gesture data wirelessly, process it efficiently, and deliver both visual and auditory outputs in real-time. This enhances the portability and usability of the system, making it an accessible solution for translating sign language into spoken language, aligning with the overall objectives of the smart glove system.
[0090] FIG. 5 illustrates a flow diagram of a method 500, outlining the sequential steps involved in the smart glove system 100 for translating sign language into spoken and written language, in accordance with an exemplary embodiment of the present disclosure.
[0091] The method 500 may include at 502, processing and transmitting input signals from a plurality of sensors via a glove; at 504, detecting finger and hand movements via sensors; at 506, detecting the bending and movement of each finger individually via a plurality of flex sensors; at 508, tracking the orientation and motion of the hand via an inertial measurement unit; at 510, detecting touch and grip pressure via a plurality of pressure sensors; at 512, receiving and processing data collected from the flex sensors, inertial measurement unit, and pressure sensors to translate sign language gestures via a microcontroller; at 514, receiving raw sensor data from the flex sensors, inertial measurement unit, and pressure sensors via a data input module; at 516, analyzing and interpreting sensor data using a gesture recognition algorithm via a processing module; at 518, converting recognized gestures into text via a gesture-to-text module; at 520, translating recognized text into spoken language via a text-to-speech module; at 522, receiving and displaying the translated gesture data as written and spoken language via a user device; and at 524, presenting real-time visual output of the translated gestures via a display unit.
[0092] The system begins by processing and transmitting input signals from a plurality of sensors embedded within the glove. These sensors are responsible for detecting hand and finger movements that constitute sign language gestures. Flex sensors, placed on the fingers, detect the bending or movement of each finger individually, while an inertial measurement unit tracks the orientation and overall motion of the hand in space. The combination of flex sensors and the Inertial measurement unit provides a comprehensive dataset that accurately captures both static and dynamic gestures. Additionally, pressure sensors located on the fingertips and palms detect the amount of force applied during certain gestures, adding another layer of detail, particularly for signs involving touch or grip pressure.
[0093] Once the data is collected, it is transmitted to the system's microcontroller, where the signals from the flex sensors, Inertial measurement unit, and pressure sensors are processed. The microcontroller receives the raw sensor data and organizes it for further interpretation by the system. A gesture recognition algorithm then analyzes and interprets this sensor data. Using machine learning techniques, the algorithm identifies patterns in the data that correspond to specific letters or words in sign language. These recognized gestures are then converted into text by a gesture-to-text module, allowing the system to map the physical movements of the user to corresponding alphabetic characters or words.
[0094] After the gestures are converted into text, the system translates this text into spoken language using a text-to-speech module. This allows for real-time auditory feedback, enabling verbal communication between sign language users and those who do not understand sign language. Simultaneously, the translated gesture data is also displayed as written text on a user device, such as a smartphone or tablet, ensuring that the translation is available both visually and audibly. The system concludes by presenting a real-time visual output of the translated gestures on a display unit, providing immediate feedback to the user and those around them. This combination of visual and auditory outputs makes the system highly versatile and accessible, enabling seamless communication across different environments.
[0095] While the invention has been described in connection with what is presently considered to be the most practical and various embodiments, it will be understood that the invention is not to be limited to the disclosed embodiments, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims.
[0096] A person of ordinary skill in the art may be aware that, in combination with the examples described in the embodiments disclosed in this specification, units and algorithm steps may be implemented by electronic hardware, computer software, or a combination thereof.
[0097] The foregoing descriptions of specific embodiments of the present disclosure have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the present disclosure to the precise forms disclosed, and many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described to best explain the principles of the present disclosure and its practical application, and to thereby enable others skilled in the art to best utilize the present disclosure and various embodiments with various modifications as are suited to the particular use contemplated. It is understood that various omissions and substitutions of equivalents are contemplated as circumstances may suggest or render expedient, but such omissions and substitutions are intended to cover the application or implementation without departing from the scope of the present disclosure.
[0098] Disjunctive language such as the phrase "at least one of X, Y, Z," unless specifically stated otherwise, is otherwise understood with the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present.
[0099] In a case that no conflict occurs, the embodiments in the present disclosure and the features in the embodiments may be mutually combined. The foregoing descriptions are merely specific implementations of the present disclosure, but are not intended to limit the protection scope of the present disclosure. Any variation or replacement readily figured out by a person skilled in the art within the technical scope disclosed in the present disclosure shall fall within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.
Claims:

I/We Claim:
1. A smart glove system (100) for translating sign language into spoken and written language, the system (100) comprising:
a glove (102) configured to process and transmit input signals from a plurality of sensors (104) integrated into the glove (102);
sensors (104) integrated into the glove (102), positioned at various points on the fingers and palm, and configured to detect finger and hand movements, wherein the sensors (104) further comprise:
a plurality of flex sensors (106) connected to the fingers, positioned along each finger, and configured to detect the bending and movement of each finger individually;
an inertial measurement unit (108) comprising an accelerometer and gyroscope, connected to the glove (102), positioned on the back of the hand, and configured to track the orientation and motion of the hand;
a plurality of pressure sensors (110) connected to the glove (102), positioned on the fingertips and palm, and configured to detect touch and grip pressure;
a microcontroller (112) connected to the sensors (104), and configured to receive and process data collected from the flex sensors (106), inertial measurement unit (108), and pressure sensors (110) to translate sign language gestures, wherein the microcontroller (112) further comprises:
a data input module (114), integrated into the microcontroller (112), and configured to receive raw sensor data from the flex sensors (106), inertial measurement unit (108), and pressure sensors (110);
a processing module (116), integrated into the microcontroller (112), and configured to analyze and interpret sensor data using a gesture recognition algorithm;
a gesture recognition module (118), integrated into the microcontroller (112), and configured to convert recognized gestures into text;
a text-to-speech module (120), integrated into the microcontroller (112), and configured to translate recognized text into spoken language;
a user device (122), connected to the microcontroller (112) via a wireless communication network (126), and configured to receive and display the translated gesture data as written and spoken language; and
a display unit (128) integrated into the glove (102), and configured to present real-time visual output of translated gestures in the form of written text.
2. The system (100) as claimed in claim 1, wherein the system further comprises conductive wires integrated within the fabric of the glove (102), and configured to connect and transmit sensor signals efficiently to the microcontroller (112).
3. The system (100) as claimed in claim 1, wherein the system further comprises vibration motors (130), integrated into the glove (102), and configured to provide haptic feedback during gesture recognition for user interaction.
4. The system (100) as claimed in claim 1, wherein the glove (102) is made of flexible lycra spandex fabric, providing a comfortable fit and allowing flexibility for sensor integration and user movement.
5. The system (100) as claimed in claim 1, wherein the system further comprises a real-time gesture recognition algorithm implemented on the microcontroller (112), and configured to process and interpret sensor data instantly, providing immediate spoken and written output.
6. The system (100) as claimed in claim 1, wherein the system further comprises a power supply unit, connected to the microcontroller (112) and sensors (104), and configured to provide extended power and recharge capabilities for extended use.
7. The system (100) as claimed in claim 1, wherein the user device (122) further comprises a speaker (124), and configured to output the recognized speech in real-time.
8. The system (100) as claimed in claim 1, wherein the communication network (126) comprises a Bluetooth module connected to the microcontroller (112), and configured to transmit processed data wirelessly to an external smartphone application for display and further interaction.
9. The system (100) as claimed in claim 1, wherein the system further comprises a mobile application configured to receive translated gestures from the glove (102) and display them as text and speech output, enhancing user interaction.
10. A method (500) for translating sign language into spoken and written language, the method (500) comprising:
processing and transmitting input signals from a plurality of sensors (104) via a glove (102);
detecting finger and hand movements via sensors (104);
detecting the bending and movement of each finger individually via a plurality of flex sensors (106);
tracking the orientation and motion of the hand via an inertial measurement unit (108);
detecting touch and grip pressure via a plurality of pressure sensors (110);
receiving and processing data collected from the flex sensors (106), inertial measurement unit (108), and pressure sensors (110) to translate sign language gestures via a microcontroller (112);
receiving raw sensor data from the flex sensors (106), inertial measurement unit (108), and pressure sensors (110) via a data input module (114);
analyzing and interpreting sensor data using a gesture recognition algorithm via a processing module (116);
converting recognized gestures into text via a gesture-to-text module (118);
translating recognized text into spoken language via a text-to-speech module (120);
receiving and displaying the translated gesture data as written and spoken language via a user device (122); and
presenting real-time visual output of the translated gestures via a display unit (128).

Documents

Name    Date
202441087674-FORM-26 [30-11-2024(online)].pdf    30/11/2024
202441087674-Proof of Right [30-11-2024(online)].pdf    30/11/2024
202441087674-COMPLETE SPECIFICATION [13-11-2024(online)].pdf    13/11/2024
202441087674-DECLARATION OF INVENTORSHIP (FORM 5) [13-11-2024(online)].pdf    13/11/2024
202441087674-DRAWINGS [13-11-2024(online)].pdf    13/11/2024
202441087674-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [13-11-2024(online)].pdf    13/11/2024
202441087674-FORM 1 [13-11-2024(online)].pdf    13/11/2024
202441087674-FORM FOR SMALL ENTITY(FORM-28) [13-11-2024(online)].pdf    13/11/2024
202441087674-REQUEST FOR EARLY PUBLICATION(FORM-9) [13-11-2024(online)].pdf    13/11/2024
