A SYSTEM AND METHOD FOR TRANSLATING REAL-TIME SPOKEN OR WRITTEN LANGUAGE TO INDIAN SIGN LANGUAGE (ISL)
ORDINARY APPLICATION
Published
Filed on 6 November 2024
Abstract
The present invention relates to translating real-time spoken or written language to Indian Sign Language (ISL). The method (300) includes detecting, using the speech recognition module (101), real-time audio from a first user. The method further includes determining a context, semantics, and intent of each word of the text data using the NLP engine (103). The method further includes converting each word of the text data to Indian Sign Language (ISL) using the translation module (105). The method further includes applying a Natural Language Processing (NLP) technique to the received user input. The method further includes displaying the converted ISL as an animation to a second user using the 3D animation generation module (107), wherein the animation captures subtle nuances of ISL, including facial expressions, hand shapes, and movement dynamics. Figure 1.
Patent Information
Application ID | 202421084935 |
Invention Field | COMPUTER SCIENCE |
Date of Application | 06/11/2024 |
Publication Number | 48/2024 |
Inventors
Name | Address | Country | Nationality |
---|---|---|---|
Mr. Sumit Jain | Acropolis Institute of Technology and Research, Bypass Rd, Manglia Square, Manglia, Indore, Madhya Pradesh 453771 India | India | India |
Vihaan Vijayvargiya | 79-N, Sanchar Nagar Extension, Indore, Madhya Pradesh Pin:452016 India | India | India |
Sakshi Raut | 463, Panchvati Colony, Indore, Madhya Pradesh Pin:452001 India | India | India |
Rishabh Chormare | 3245, Sector-E, Sudama Nagar, Indore, Madhya Pradesh Pin:452009 India | India | India |
Yashika Sharma | 739/8, Nanda Nagar, Indore, Madhya Pradesh Pin:452011 India | India | India |
Yash Mishra | E-592, Scheme No. 51, Near Sangam Nagar, Indore, Madhya Pradesh Pin:452007 India | India | India |
Megha Tomar | Ward No. 16, Village- Kartana, Post-Nousar, Tehsil-Timarni, District- Harda, Madhya Pradesh Pin:461228 India | India | India |
Dr. Namrata Tapaswi | Professor & Head, Department of Computer Science & Engg. (Artificial Intelligence & Machine Learning), Acropolis Institute of Technology and Research, Bypass Rd, Manglia Square, Manglia, Indore, Madhya Pradesh 453771 India | India | India |
Applicants
Name | Address | Country | Nationality |
---|---|---|---|
ACROPOLIS INSTITUTE OF TECHNOLOGY AND RESEARCH | Bypass Rd, Manglia Square, Manglia, Indore, Madhya Pradesh 453771 India | India | India |
Specification
Description:FIELD OF THE INVENTION:
[002] The present invention is in the area of assistive technology; more specifically, the present invention relates to a system and method that help the deaf and mute community to communicate. The system and method provide real-time translation between spoken/written Indian language and Indian Sign Language (ISL) through the integration of speech recognition, natural language processing (NLP), computer vision, and 3D modelling.
BACKGROUND OF THE INVENTION:
[003] In India, where there is diversity in the languages spoken, the ability to quickly translate spoken language into Indian Sign Language (ISL) is important for better communication in social, educational, and work places.
[004] Currently, deaf and hearing-impaired people face significant challenges in getting information and joining conversations that happen mostly in spoken languages. They often rely on interpreters or written text, which can be limiting and may miss important details, especially in busy places like classrooms and meetings. This can lead to feelings of isolation, making it more difficult for them to socialize.
[005] Tools like Google Cloud Speech-to-Text are very good at converting spoken words into written text. However, they do not directly translate spoken language into Indian Sign Language (ISL). Instead, these tools are designed to listen to audio and write down what is said, rather than create visual representations of sign language. Because of this limitation, they are not sufficient for facilitating real-time communication with people who use ISL. This means that while these tools can help with understanding spoken content in written form, they cannot provide the immediate visual translations that ISL users need for effective communication.
[006] Computer vision systems for American Sign Language (ASL) use advanced technology to analyze recorded videos and translate the signs into text or other formats. These systems are designed specifically for ASL and are not able to translate Indian Sign Language (ISL). Additionally, their main focus is on processing visual input from videos, which means they do not provide the ability to convert spoken audio into sign language in real time. This limitation makes them less effective for immediate communication, as they cannot help users who need instant translations from spoken language to ISL during live conversations.
[007] Augmented Reality (AR) glasses, such as Microsoft HoloLens and Google Glass, are used for many different accessibility purposes. These devices can show visual aids and provide information without needing to use your hands. However, they do not have the ability to translate spoken language into Indian Sign Language (ISL) in real time. Their current functions do not meet the specific requirement for converting spoken audio into sign language instantly. This means that while they can be helpful for certain tasks, they do not address the important need for immediate communication support for ISL users during live conversations.
[008] Educational platforms for learning sign language are designed mainly to help new learners understand and use sign language. These tools often provide video lessons and interactive activities that teach various signs and their meanings. However, they do not have the capability to convert spoken language into sign language in real time. This limitation makes them less useful for situations where deaf and non-deaf individuals need to communicate live, such as in conversations or discussions. While these platforms are great for learning, they do not help with immediate communication needs during real-world interactions.
[009] Therefore, there exists a need for an enhanced real-time translation technology that allows ISL users to communicate and understand in an effective manner and solves the above-mentioned problems.
OBJECT OF THE INVENTION:
[010] An objective of the present invention is to improve upon the conventional problems as described above, and to provide a comprehensive technology-driven solution tailored for translating real-time spoken or written language to Indian sign language (ISL).
[011] Another object of the invention is to provide a system that facilitates seamless communication in professional workplaces by translating spoken language into ISL during meetings, presentations, and team discussions, enabling deaf employees to actively participate.
[012] Another object of the invention is to provide a system that offers a real-time ISL translation of audio announcements in public spaces such as railway stations, airports, bus terminals, and shopping malls, ensuring that deaf individuals have access to critical information.
[013] Another object of the invention is to provide a system that enhances communication between healthcare providers and deaf patients by translating spoken medical information and consultations into ISL, improving patient care and access to medical services.
[014] Another object of the invention is to provide a system that ensures that the critical alerts and safety instructions are instantly translated into ISL, enhancing safety and awareness for the deaf community in emergency situations.
[015] Another object of the invention is to provide a system that prioritizes ease of use and intuitive interaction. By creating a user-friendly interface and ensuring accurate, contextually relevant translations, it enhances overall communication experiences for both deaf individuals and those who do not know ISL.
[016] Another object of the invention is to provide a system that is optimized to handle various accents, dialects, and speech patterns, ensuring that it can effectively capture the diversity of spoken language inputs.
[017] Another object of the invention is to provide a system that ensures that the text is interpreted correctly, considering nuances such as idiomatic expressions, slang, and contextual meaning, which are critical for accurate translation into ISL.
[018] Another object of the invention is to provide a system that employs Blender, a powerful 3D animation software, to create high-fidelity, lifelike animations of ISL gestures and expressions. The animations are meticulously designed to capture the subtle nuances of sign language, including facial expressions, hand shapes, and movement dynamics, which are essential for conveying accurate meaning in ISL.
SUMMARY OF THE INVENTION:
[019] The invention discloses a system for translating real-time spoken or written language to Indian sign language (ISL). The system comprises a memory comprising a speech recognition module, a Natural Language Processing (NLP) engine, a translation module, and a 3D animation generation module. The system further comprises a user interface and at least one processor communicatively coupled with the memory and the user interface.
[020] The at least one processor is configured to detect, using the speech recognition module, first audio data from a first user. The at least one processor is further configured to transcribe the detected first audio data into first text data.
[021] In an embodiment, the at least one processor is configured to determine a context, semantics, and intent of each word of the first text data using the NLP engine.
[022] In an embodiment, the at least one processor is configured to convert each word of the first text data to Indian Sign Language (ISL) using the translation module (105).
[023] In an embodiment, the at least one processor is configured to display the converted ISL as an animation to a second user using the 3D animation generation module (107), wherein the animation captures subtle nuances of ISL, including facial expressions, hand shapes, and movement dynamics of the first user.
[024] In some example embodiments, the invention provides a method for translating real-time spoken or written language to Indian sign language (ISL).
[025] The method comprises detecting, using the speech recognition module, real-time audio from a first user. The method further includes transcribing the detected audio into text data.
[026] In an embodiment, the method further includes determining a context, semantics, and intent of each word of the text data using the NLP engine.
[027] In an embodiment, the method further includes converting each word of the text data to Indian Sign Language (ISL) using the translation module.
[028] In an embodiment, the method further includes displaying the converted ISL into an animation to a second user using the 3D animation generation module. The animation captures subtle nuances of ISL, including facial expressions, hand shapes, and movement dynamics.
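The method steps summarized above can be pictured, purely for illustration, as a minimal software pipeline. The following Python sketch is not the patented implementation; every function body, data shape, and name is an assumption that merely mirrors the roles of the modules (101), (103), (105), and (107) described in this specification.

```python
# Illustrative sketch of the claimed pipeline. Each stage is a toy
# stand-in for the corresponding module in Figure 1.

def detect_and_transcribe(audio_frames):
    """Speech recognition module (101): stand-in that 'transcribes'
    pre-labelled audio frames into text."""
    return " ".join(frame["word"] for frame in audio_frames)

def analyze(text):
    """NLP engine (103): attach a placeholder context/intent tag to each word."""
    return [{"word": w, "intent": "statement"} for w in text.split()]

def translate_to_isl(tokens, gloss_db):
    """Translation module (105): map each word to an ISL gloss,
    falling back to the uppercased word when no gloss is known."""
    return [gloss_db.get(t["word"], t["word"].upper()) for t in tokens]

def render(glosses):
    """3D animation generation module (107): stand-in that returns the
    clip sequence a renderer would play."""
    return [f"clip_{g}.anim" for g in glosses]

if __name__ == "__main__":
    frames = [{"word": "hello"}, {"word": "friend"}]
    text = detect_and_transcribe(frames)
    glosses = translate_to_isl(analyze(text), {"hello": "HELLO", "friend": "FRIEND"})
    print(render(glosses))  # ['clip_HELLO.anim', 'clip_FRIEND.anim']
```

Each placeholder corresponds to one method step (detect/transcribe, analyze, translate, display); the real modules would replace these bodies with actual speech recognition, NLP, and animation logic.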
BRIEF DESCRIPTION OF THE DRAWINGS:
[029] The present invention will hereinafter be described in conjunction with the accompanying drawings, wherein like numerals denote like elements. Additional embodiments of the invention will become evident upon reviewing the non-limiting embodiments described in the specification in conjunction with the accompanying drawings, wherein:
[030] Figure 1 illustrates a block diagram of a system for translating real-time spoken or written language to Indian sign language (ISL), in accordance with an embodiment of the present invention.
[031] Figure 2 illustrates a network environment for translating real-time spoken or written language to Indian sign language (ISL), in accordance with an embodiment of the present invention.
[032] Figures 3A-3C illustrate a method for translating real-time spoken or written language to Indian sign language (ISL), in accordance with an embodiment of the present invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS:
[033] Before the present configuration of a system and a method for translating real-time spoken or written language to Indian sign language (ISL) is described, it is to be understood that this disclosure is not limited to the particular assembly, configuration, or arrangement described, since these may vary within the scope indicated in the specification. It is further to be understood that the terminology used in the description is only for the purpose of describing the particular versions or embodiments and is not intended to limit the scope of the present invention.
[034] The words, "comprising", "having", "including" & "containing" and other forms thereof, are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items or meant to be limited to only the listed item or items.
[035] The proposed invention addresses key challenges in existing communication technologies for the deaf and mute community by providing a real-time, accessible, and user-friendly solution. It translates spoken language into Indian Sign Language (ISL) using standard mobile devices and AR glasses and also translates Indian regional language to Indian Sign Language, overcoming limitations such as lack of real-time conversion and inadequate support for regional variations.
[036] The proposed Indian Sign Language (ISL) translation platform is an innovative solution designed to bridge the communication gap between the hearing and deaf or hard-of-hearing communities. This platform integrates several advanced technologies, including speech recognition, natural language processing (NLP), and 3D animation, to provide real-time, accurate, and context-sensitive translations from spoken language into ISL.
[037] Figure 1 and Figure 2 are taken together for explaining the specification. Figure 1 illustrates a block diagram of a system for translating real-time spoken or written language to Indian sign language (ISL), in accordance with an embodiment of the present invention. Figure 2 illustrates a network environment for translating real-time spoken or written language to Indian sign language (ISL), in accordance with an embodiment of the present invention.
[038] Referring to Figure 2, there is shown a first user (200a). The first user (200a) is a user with no disability and is associated with an electronic device or user equipment (UE) (100a). The network environment further includes a second user (200b). The second user is a user with a disability and is also associated with an electronic device or user equipment (UE) (100b). The user equipment of the first user (200a) and the second user (200b) have the same capabilities and are collectively referred to as User Equipment (UE) (100). The UE (100a) of the first user and the UE (100b) of the second user (200b) may be connected via a communication network.
[039] In one embodiment, the communication network includes one or more networks such as a data network, a wireless network, a telephony network, or any combination thereof. It is contemplated that the data network may be any local area network (LAN), metropolitan area network (MAN), wide area network (WAN), a public data network (e.g., the Internet), short range wireless network, or any other suitable packet-switched network, such as a commercially owned, proprietary packet-switched network, e.g., a proprietary cable or fiber-optic network, and the like, or any combination thereof. In addition, the wireless network may be, for example, a cellular network and may employ various technologies including enhanced data rates for global evolution (EDGE), general packet radio service (GPRS), global system for mobile communications (GSM), Internet protocol multimedia subsystem (IMS), universal mobile telecommunications system (UMTS), etc., as well as any other suitable wireless medium, e.g., worldwide interoperability for microwave access (WiMAX), Long Term Evolution (LTE) networks, code division multiple access (CDMA), wideband code division multiple access (WCDMA), wireless fidelity (Wi-Fi), wireless LAN (WLAN), Bluetooth®, Internet Protocol (IP) data casting, satellite, mobile ad-hoc network (MANET), and the like, or any combination thereof.
[040] The electronic device (100) includes a user interface (230) and an application (240). The electronic device (100) may include the system (120) for translating real-time spoken or written language to Indian sign language (ISL). By way of example, the UE (100) is any type of mobile terminal, fixed terminal, or portable terminal including a mobile handset, station, unit, device, multimedia computer, multimedia tablet, Internet node, communicator, desktop computer, laptop computer, notebook computer, netbook computer, tablet computer, personal communication system (PCS) device, personal navigation device, personal digital assistants (PDAs), audio/video player, digital camera/camcorder, positioning device, fitness device, television receiver, radio broadcast receiver, electronic book device, game device, devices associated with one or more vehicles or any combination thereof, including the accessories and peripherals of these devices, or any combination thereof. It is also contemplated that the UE (100) can support any type of interface to the user (such as "wearable" circuitry, etc.).
[041] By way of example, the application (240) may be any type of application that is executable at the UE (100), such as mapping applications, location-based service applications, navigation applications, content provisioning services, camera/imaging applications, media player applications, social networking applications, calendar applications, and the like. In one embodiment, one of the applications (240) at the UE (100) may be the system (120) for translating real-time spoken or written language to Indian sign language (ISL).
[042] The UE (100) includes a processor (210), a memory (220), and a user interface (230). The processor (210) may also be accompanied by one or more specialized components to perform certain processing functions and tasks, such as one or more digital signal processors (DSP) or one or more application-specific integrated circuits (ASIC). A DSP is typically configured to process real-world signals (e.g., sound) in real time independently of the processor. Similarly, an ASIC can be configured to perform specialized functions not easily performed by a general-purpose processor. Other specialized components to aid in performing the inventive functions described herein include one or more field programmable gate arrays (FPGA) (not shown), one or more controllers (not shown), or one or more other special-purpose computer chips. The processor (210) and accompanying components have connectivity to the memory (220) via the bus.
[043] The memory (220) includes both dynamic memory (e.g., RAM, magnetic disk, writable optical disk, etc.) and static memory (e.g., ROM, CD-ROM, etc.) for storing executable instructions that when executed perform the inventive steps described herein for translating real-time spoken or written language to Indian sign language (ISL). The memory (220) also stores the data associated with or generated by the execution of the inventive steps. In an embodiment, the memory includes a speech recognition module (101), a Natural language Processing (NLP) engine (103), a translation module (105), and a 3D animation generation module (107).
[044] The user interface (230) may include a microphone, a keyboard, a touch display, a display, or virtual reality (VR) headsets. Such AR or VR devices include a head-mounted display (HMD) that is worn by a user and is configured to present a plurality of graphics.
[045] In an embodiment, the UE (100) may include an imaging device (109) for example, a camera configured to adjust a field of view (FOV) in real-time, enabling operators to focus on specific areas of interest. The camera (109) may be configured to capture one or more images of any objects.
[046] Figures 3A-3C illustrate a method for translating real-time spoken or written language to Indian sign language (ISL), in accordance with an embodiment of the present invention. Referring to Figures 1 to 3C, in an embodiment, when the first user (200a) initiates the application (240) in the UE (100a), at step 301, the processor (210) is configured to detect first audio data from the first user using the speech recognition module (101). The first audio data refers to the audio or speech outputted by the first user (200a). The speech recognition module (101) captures spoken language in real time.
[047] In an embodiment, at step 303, the processor (210) is configured to transcribe the detected first audio data into first text data. Utilizing technologies such as the Google Speech-to-Text API or similar, the speech recognition module (101) accurately transcribes spoken words into text. The speech recognition module (101) is optimized to handle various accents, dialects, and speech patterns, ensuring that it can effectively capture the diversity of spoken language inputs.
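As an illustration of the kind of post-processing a speech recognition module might apply before handing a noisy transcript to the NLP engine, a simple normalization pass could look as follows. The filler-word list and the regular expression here are assumptions for illustration, not part of the disclosure.

```python
import re

# Assumed set of spoken fillers to strip from raw transcripts.
FILLERS = {"um", "uh", "umm", "like"}

def normalize_transcript(raw: str) -> str:
    """Lowercase the transcript, strip punctuation, and drop filler
    words so that downstream NLP receives a clean token stream."""
    words = re.findall(r"[a-z']+", raw.lower())
    return " ".join(w for w in words if w not in FILLERS)

print(normalize_transcript("Um, the TRAIN... leaves at NINE!"))
# the train leaves at nine
```

A real module would layer this on top of the recognizer's output; the cleaning rules would be tuned per language and accent.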
[048] In an embodiment, at step 305, the processor (210) is configured to determine a context, semantics, and intent of each word of the first text data using the NLP engine (103). Once the spoken language is converted into text, the NLP engine (103) processes it to understand the context, semantics, and intent behind the words. The NLP engine (103) ensures that the text is interpreted correctly, considering nuances such as idiomatic expressions, slang, and contextual meaning, which are critical for accurate translation into ISL.
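A toy sketch of how an NLP engine might attach an intent label to each word is shown below. The cue-word heuristic is an assumed simplification of the engine (103), which in practice would perform full contextual and semantic analysis.

```python
# Assumed cue list for a toy intent classifier.
QUESTION_WORDS = {"what", "where", "when", "who", "why", "how"}

def tag_tokens(text: str):
    """Stand-in for the NLP engine (103): label each word of the text
    with a sentence-level intent inferred from a simple leading cue."""
    words = text.lower().rstrip("?!.").split()
    intent = "question" if words and words[0] in QUESTION_WORDS else "statement"
    return [{"word": w, "intent": intent} for w in words]

print(tag_tokens("Where is the station?"))
```

In ISL, questions are signed with distinct non-manual markers, so even a coarse intent tag like this would matter to the downstream animation stage.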
[049] In an embodiment, at step 307, the processor (210) is configured to convert each word of the first text data to Indian Sign Language (ISL) using the translation module (105). The core of the system (120) is the translation module (105), which converts the processed text into Indian Sign Language. This module (105) contains a comprehensive database of ISL signs, mapped to corresponding words and phrases in the spoken language. The translation process is dynamic, allowing for real-time adjustments based on context provided by the NLP engine. This ensures that the ISL output is not only accurate but also contextually appropriate, making the communication natural and effective. The comprehensive database of ISL includes pre-recorded sign language libraries. The system uses a database of pre-recorded ISL signs and gestures that users can access for translating spoken content, though a pre-recorded library alone would not offer real-time capabilities.
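A minimal sketch of the word-to-gloss lookup described above, with a fingerspelling fallback for out-of-vocabulary words, could look as follows. The stop-word list, the gloss database excerpt, and the "FS:" fallback convention are all assumptions for illustration only, not the actual database of the invention.

```python
# Assumed function words that sign languages such as ISL typically omit.
STOP_WORDS = {"is", "are", "the", "a", "an", "to"}

# Assumed excerpt of a pre-recorded ISL sign library (word -> gloss).
GLOSS_DB = {
    "train": "TRAIN",
    "late": "LATE",
    "hello": "HELLO",
}

def to_isl_glosses(words):
    """Toy translation module (105): drop function words and look up
    each remaining word; unknown words fall back to fingerspelling,
    marked here with an 'FS:' prefix."""
    out = []
    for w in words:
        if w in STOP_WORDS:
            continue
        out.append(GLOSS_DB.get(w, "FS:" + w.upper()))
    return out

print(to_isl_glosses(["the", "train", "is", "late"]))  # ['TRAIN', 'LATE']
```

The fingerspelling fallback is how sign-language systems commonly handle names and rare words; the real module would also reorder glosses to ISL grammar based on the NLP context.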
[050] In an embodiment, at step 309, the processor is configured to display the converted ISL as an animation to a second user using the 3D animation generation module (107). The animation captures subtle nuances of ISL, including facial expressions, hand shapes, and movement dynamics of the first user. In an embodiment, to view the ISL through the animation, the second user may initiate the application using the UE (100b). The second user refers to a user with a disability to speak or listen. The second user may choose an avatar or 3D animation as per their convenience. The application may include several avatars from which the user may choose any avatar. The processor (210) is configured to display the ISL through the selected avatar to the second user. In an embodiment, the display may be embedded in the UE (100), or the display may be provided on the AR or VR glasses. To visually represent the ISL translations, the system (120) uses Blender, a powerful 3D animation software. Blender is employed to create high-fidelity, lifelike animations of ISL gestures and expressions. These animations are generated in real time, driven by the translated ISL output from the previous module. The animations are meticulously designed to capture the subtle nuances of sign language, including facial expressions, hand shapes, and movement dynamics, which are essential for conveying meaning accurately in ISL.
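Because a renderer such as Blender plays pre-built gesture actions back to back, the translated glosses must be laid out on a frame timeline before playback. The sketch below is an assumed, simplified schedule builder; the per-gesture clip lengths, default length, and five-frame transition gap are illustrative values, not part of the disclosure.

```python
# Assumed per-gesture clip lengths, in frames.
CLIP_FRAMES = {"TRAIN": 24, "LATE": 30}

def build_timeline(glosses, gap=5):
    """Lay ISL gesture clips end to end on a timeline, leaving a small
    gap between signs for blending, in the style of a Blender NLA schedule."""
    timeline, cursor = [], 0
    for g in glosses:
        length = CLIP_FRAMES.get(g, 20)  # assumed default for unknown clips
        timeline.append({"gloss": g, "start": cursor, "end": cursor + length})
        cursor += length + gap
    return timeline

print(build_timeline(["TRAIN", "LATE"]))
# [{'gloss': 'TRAIN', 'start': 0, 'end': 24}, {'gloss': 'LATE', 'start': 29, 'end': 59}]
```

In an actual Blender pipeline, each timeline entry would map to an action strip on the avatar's armature, with the gap used to interpolate hand positions between consecutive signs.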
[051] The user interface (UI) of the platform is designed for accessibility and ease of use, ensuring a smooth user experience across different devices, including smartphones, tablets, and computers. The UI is responsive, allowing users to interact with the platform intuitively, whether they are viewing animations, adjusting settings, or switching between different languages or sign language dialects. The platform supports touch, voice, and gesture inputs, providing flexibility and ease of use.
[052] In an embodiment, Figure 3C illustrates translation of the ISL into speech or text. Referring to Figure 3C, at step 311, the processor (210) is configured to control capture of one or more images of the second user using an imaging device (109). The second user may initiate the application and switch on the camera (109) to capture the gesture of the second user.
[053] In an embodiment, at step 313, the processor (210) is configured to determine a gesture of the second user based on the one or more images.
[054] At step 315, the processor (210) is configured to convert the determined gesture of the second user into second audio data or second text data. In some example embodiments, the gesture may be displayed to the first user.
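A highly simplified stand-in for the gesture recognition of steps 313 and 315 is a nearest-template match over features extracted from the captured images. The two-dimensional feature vectors and the sign templates below are toy assumptions; a real recognizer would use hand-landmark features from a computer vision model.

```python
import math

# Assumed toy "templates": averaged feature vectors per known sign.
TEMPLATES = {
    "HELLO": (0.9, 0.1),
    "THANKS": (0.2, 0.8),
}

def classify_gesture(features):
    """Match an extracted feature vector to the nearest known sign
    template (a stand-in for the gesture recognizer at step 313)."""
    return min(TEMPLATES, key=lambda name: math.dist(TEMPLATES[name], features))

print(classify_gesture((0.85, 0.15)))  # HELLO
```

The recognized gloss would then be rendered as text or synthesized to speech for the first user, completing the two-way exchange of steps 315-317.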
[055] In an embodiment, at step 317, the processor (210) is configured to output at least one of the second audio data or the second text data to the first user.
[056] The Indian sign language (ISL) translation platform represents a significant advancement in the field of speech support technologies, specifically aimed at addressing the ongoing communication challenges faced by individuals within the deaf and hearing-impaired communities. The ISL translation platform is not merely a technological solution but a transformative tool designed to empower the deaf and hard-of-hearing communities in India. By harnessing cutting-edge technologies, this platform seeks to create a more inclusive society, one where communication barriers are significantly reduced, and individuals can interact seamlessly, regardless of their hearing abilities.
[057] The system disclosed herein is highly customizable, allowing developers to extend its capabilities by adding support for additional languages, regional ISL dialects, or even other sign languages. The modular architecture of the system makes it easy to integrate new features, such as additional speech recognition languages, expanded NLP capabilities, or enhanced animation details. This extensibility ensures that the system can evolve with user needs and technological advancements.
[058] The proposed ISL translation system represents a comprehensive approach to enhancing communication for the deaf and hard-of-hearing community. By integrating advanced speech recognition, NLP, and 3D animation technologies, the platform provides a powerful, user-friendly tool that delivers real-time, accurate ISL translations. Its scalability, customization options, and seamless integration with existing devices make it a versatile and practical solution, poised to make a significant impact on accessibility and inclusivity in communication.
[059] The invention enhances communication in both professional and personal settings by translating spoken language into Indian Sign Language (ISL) in real time. In the workplace, it can be used during meetings, presentations, and discussions, enabling deaf employees to fully participate and interact with their colleagues. On a personal level, it assists in everyday conversations, allowing deaf individuals to communicate more effectively with friends, family, and others who do not know sign language, bridging communication gaps and fostering inclusivity.
[060] The system provides real-time ISL translation of spoken announcements in public spaces like railway stations, airports, bus terminals, shopping malls, and government offices. This ensures that the deaf and hard-of-hearing community receives important information, such as travel updates, safety instructions, emergency alerts, and public service announcements, at the same time as hearing individuals. By converting audio announcements into visual ISL displays on screens or AR devices, the invention enhances accessibility and safety in public environments.
[061] In healthcare settings, the invention provides a critical tool for communication between healthcare providers and deaf patients. By translating spoken medical information, diagnoses, treatment plans, and instructions into ISL, it ensures that deaf patients fully understand their health conditions and treatment options. This application reduces misunderstandings, improves the quality of care, and enhances patient satisfaction by making healthcare services more accessible to the deaf community, whether in hospitals, clinics, or telemedicine platforms.
[062] The present invention may be implemented in public information terminals, such as kiosks in public spaces (e.g., railway stations, airports), that offer ISL translation of spoken announcements via a touch screen and animated avatar.
[063] The invention may be implemented to incorporate real-time ISL translation into smart speaker systems, providing voice-to-sign language translation through a connected display or virtual avatar.
[064] Technical advantages of the present invention:
• The system uses advanced speech recognition technology that allows for accurate and immediate translation of spoken language into Indian Sign Language (ISL). This real-time conversion is a significant advancement, providing an instant communication bridge between deaf individuals and those who do not know ISL.
• The invention employs a unique method of generating ISL gestures through dynamic animation, utilizing humanoid rigs in platforms like Blender. This approach ensures that the translated signs are visually accurate and expressive, closely mimicking human sign language to facilitate better understanding.
• The invention supports multiple output methods, including visual displays on screens, AR glasses, and mobile devices. This versatility allows for deployment in various settings, such as public spaces, workplaces, and educational institutions, making it accessible to a wide audience.
• Unlike existing solutions that often require specialized hardware, the proposed invention operates on widely available devices, such as smartphones and computers. This cost-effective approach makes it scalable and easier to implement in various environments, from public infrastructure to personal devices.
• Real-Time Translation: Instantly converts spoken language to ISL, offering immediate communication support.
• ISL-Specific Support: Designed specifically for Indian Sign Language, ensuring relevance and accuracy.
• No Specialized Hardware Required: Functions on standard devices (smartphones, computers), enhancing accessibility.
• Adaptable to Regional Variations: Considers diverse dialects within Indian languages for precise translations across regions.
• User-Friendly Interface: Intuitive design caters to both deaf users and those unfamiliar with sign language.
• Versatile Output Options: Compatible with various display methods, including screens, AR glasses, and mobile apps.
• Broad Applicability: Effective in diverse settings like public announcements, education, healthcare, and everyday conversations.
[065] Although the subject matter has been described in language specific to structural features and/or methods in considerable detail with reference to certain preferred embodiments thereof, it is to be understood that the implementations and/or embodiments are not necessarily limited to the specific features or methods described. The examples described in detail here are only some possible embodiments of the invention among others, and the invention may be subjected to many alterations and variants within the grasp of those skilled in the art. As such, the spirit and scope of the appended claims should not be limited to the description of the preferred embodiments contained herein.

Claims:

WE CLAIM:
1. A system (120) for translating real-time spoken or written language to Indian sign language (ISL), comprising:
at least one processor (210);
a memory (220) comprising a speech recognition module (101), a Natural Language Processing (NLP) engine (103), a translation module (105), and a 3D animation generation module (107);
a user interface (230);
the at least one processor (210) being communicatively coupled with the memory (220) and the user interface (230), wherein the at least one processor (210) is configured to:
detect, using the speech recognition module (101), first audio data from a first user;
transcribe the detected first audio data into first text data;
determine a context, semantics, and intent of each word of the first text data using the NLP engine (103);
convert each word of the first text data to Indian Sign Language (ISL) using the translation module (105); and
display the converted ISL as an animation to a second user using the 3D animation generation module (107).
2. The system (120) as claimed in claim 1, wherein the at least one processor (210) is configured to:
control capture of one or more images of the second user using an imaging device (109);
determine a gesture of the second user based on the one or more images;
convert the determined gesture of the second user into second audio data or second text data; and
output at least one of the second audio data or the second text data to the first user.
3. The system (120) as claimed in claim 1, wherein the user interface (230) is configured to enable the first user or second user to adjust animations, adjust settings, or switch between different languages or sign language dialects.
4. The system (120) as claimed in claim 1, wherein the first user is a user with no disability.
5. The system (120) as claimed in claim 1, wherein the second user is a user with a disability.
6. The system (120) as claimed in claim 1, wherein the animation captures subtle nuances of ISL, including facial expressions, hand shapes, and movement dynamics of the first user.
7. A method for translating real-time spoken or written language to Indian sign language (ISL), comprising:
detecting, using the speech recognition module (101), real-time audio from a first user;
transcribing the detected audio into text data;
determining a context, semantics, and intent of each word of the text data using the NLP engine (103);
converting each word of the text data to Indian Sign Language (ISL) using the translation module (105); and
displaying the converted ISL as an animation to a second user using the 3D animation generation module (107), wherein the animation captures subtle nuances of ISL, including facial expressions, hand shapes, and movement dynamics.
8. The method as claimed in claim 7, comprising:
controlling capture of one or more images of the second user using an imaging device (109);
determining a gesture of the second user based on the one or more images;
converting the determined gesture of the second user into second audio data or second text data; and
outputting at least one of the second audio data or the second text data to the first user.
9. The method as claimed in claim 7, wherein the first user is a user with no disability.
10. The method as claimed in claim 7, wherein the second user is a user with a disability.
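Claims 2 and 8 above describe the reverse channel: images of the second user are captured, a gesture is recognized, and the result is output as text or audio to the first user. The following is an illustrative sketch only (not part of the claims); the gesture labels, vocabulary, and `Frame` placeholder are hypothetical, and a real system would use a trained vision model (e.g., hand-landmark classification) in place of the lookup table.

```python
# Hypothetical sketch of the reverse channel (claims 2 and 8):
# captured images -> recognized gesture -> text output to the first user.

from dataclasses import dataclass

# Assumed toy gesture vocabulary; a real system would use a trained
# vision model over images from the imaging device (109).
GESTURE_TO_TEXT = {
    "FLAT_HAND_FORWARD": "hello",
    "THUMB_UP": "yes",
    "INDEX_SHAKE": "no",
}

@dataclass
class Frame:
    """Placeholder for one captured image from the imaging device (109)."""
    gesture_label: str  # stand-in for a classifier's per-frame prediction

def frames_to_text(frames: list[Frame]) -> str:
    """Collapse per-frame gesture predictions into output text,
    dropping consecutive duplicates (one word per held gesture)."""
    words, last = [], None
    for f in frames:
        word = GESTURE_TO_TEXT.get(f.gesture_label)
        if word and word != last:
            words.append(word)
        last = word
    return " ".join(words)

if __name__ == "__main__":
    frames = [Frame("FLAT_HAND_FORWARD"), Frame("FLAT_HAND_FORWARD"),
              Frame("THUMB_UP")]
    print(frames_to_text(frames))  # hello yes
```

The duplicate-dropping step stands in for the temporal segmentation a real recognizer would perform, since a held sign spans many consecutive frames.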
Documents
Name | Date |
---|---|
Abstract 1.jpg | 27/11/2024 |
202421084935-Proof of Right [14-11-2024(online)].pdf | 14/11/2024 |
202421084935-COMPLETE SPECIFICATION [06-11-2024(online)].pdf | 06/11/2024 |
202421084935-DRAWINGS [06-11-2024(online)].pdf | 06/11/2024 |
202421084935-EDUCATIONAL INSTITUTION(S) [06-11-2024(online)].pdf | 06/11/2024 |
202421084935-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [06-11-2024(online)].pdf | 06/11/2024 |
202421084935-FORM 1 [06-11-2024(online)].pdf | 06/11/2024 |
202421084935-FORM 3 [06-11-2024(online)].pdf | 06/11/2024 |
202421084935-FORM FOR SMALL ENTITY [06-11-2024(online)].pdf | 06/11/2024 |
202421084935-FORM FOR SMALL ENTITY(FORM-28) [06-11-2024(online)].pdf | 06/11/2024 |
202421084935-FORM-26 [06-11-2024(online)].pdf | 06/11/2024 |
202421084935-FORM-5 [06-11-2024(online)].pdf | 06/11/2024 |