System and Method of Kannada Speech-to-Braille Translation for Individuals with Visual and Hearing Impairments

ORDINARY APPLICATION

Published

Filed on 16 November 2024

Abstract

Existing assistive technologies for individuals with visual and hearing impairments often fall short in providing real-time communication solutions. While some systems convert between audio, text, and Braille or translate static documents into Braille, they typically lack support for specific regional languages like Kannada and do not offer real-time, wearable solutions. Current technologies may also be limited to cloud-based systems, which can hinder accessibility in offline environments. The present invention addresses these gaps by offering a wearable device that dynamically converts spoken Kannada into Braille. The device utilizes piezoelectric actuators to provide precise, real-time Braille display, ensuring immediate and effective communication. Unlike existing solutions, this system operates offline, making it portable and accessible in various environments. By integrating advanced components into a compact, user-friendly design, this invention enhances communication for individuals with visual and hearing impairments, providing a practical, real-time solution for everyday interactions.

Patent Information

Application ID: 202441088793
Invention Field: PHYSICS
Date of Application: 16/11/2024
Publication Number: 47/2024

Inventors

Name | Address | Country | Nationality
Harpreet Kaur Thind | Department of Computer Science and Engineering, Dayananda Sagar College of Engineering, Bangalore-560111 | India | India
Keerthi K S | Department of Computer Science and Engineering, Dayananda Sagar College of Engineering, Bangalore-560111 | India | India
Khushi V | Department of Computer Science and Engineering, Dayananda Sagar College of Engineering, Bangalore-560111 | India | India
Lavanya HU | Department of Computer Science and Engineering, Dayananda Sagar College of Engineering, Bangalore-560111 | India | India
Kavana N S | Department of Computer Science and Engineering, Dayananda Sagar College of Engineering, Bangalore-560111 | India | India

Applicants

Name | Address | Country | Nationality
Dayananda Sagar College of Engineering | Shavige Malleshwara Hills, Kumaraswamy Layout, Bangalore | India | India

Specification

Description:
FIELD OF INVENTION
[001] The present disclosure generally falls within the broad area of assistive technology, specifically focusing on devices that facilitate communication and accessibility for individuals with both visual and auditory impairments. This invention is a subclass of audio-to-Braille devices, designed to convert spoken language into tactile Braille output.
BACKGROUND AND PRIOR ART
[002] The field of assistive technology has seen significant advancements, yet deaf-blind individuals continue to face challenges in accessing spoken information. Existing tools often rely on static Braille displays or delayed translations, limiting the ability of deaf-blind users to engage with their surroundings in real-time.
[003] The proposed wearable device is readily available to individuals with visual and hearing impairments at any time, converting spoken Kannada into Braille script.
[004] The field of assistive technology has seen significant advancements, yet deaf-blind individuals continue to face challenges in accessing spoken information. Existing tools often rely on static Braille displays or delayed translations, limiting the ability of deaf-blind users to engage with their surroundings in real-time. This invention is focused on solving these challenges by creating a system that dynamically converts speech into Braille, making communication more seamless for Kannada-speaking individuals. The device is portable and wearable, providing users with real-time, tactile feedback for better accessibility. Below, prior art developments are examined to highlight existing solutions and how they fall short of addressing the real-time needs of people with multi-modal disabilities.
[005] Patent AU 2021100994 A4 describes a multi-modal communication device designed to help individuals with various disabilities by converting between audio, text, and Braille. The system allows for the conversion of audio to text and audio to Braille, facilitating communication for individuals with hearing and visual impairments. However, it falls short in addressing regional language requirements like Kannada and lacks a wearable, on-the-go solution. This invention aims to provide not just language-specific accessibility but also portability for real-time communication.
[006] US Patent 11,354,511 B2 explores a system for translating communication between users who rely on different modalities, such as speech versus sign language. This invention utilizes context and intention recognition to deliver intuitive cross-communication solutions through wearable devices like glasses and wristbands. While highly advanced in its interpretation of gestures and speech, it does not specifically address the tactile needs of blind users. There is also a lack of Braille support, which this current invention aims to address by providing a tactile, speech-to-Braille communication method for real-time usage.
[007] The system described in US Patent 11,393,361 B1 utilizes deep learning to interpret Braille from printed materials and convert it into audio output. This innovation helps blind individuals read printed Braille through speech synthesis. However, its application is limited to static documents and does not provide a dynamic solution for real-time communication. This invention takes a step forward by focusing on converting live speech into Braille, making it more suitable for interactive communication, particularly for Kannada speakers.
[008] In US 2016/0148538 A1, a system is introduced to convert Portable Document Format (PDF) files into Braille using a refreshable Braille board. This invention focuses on making digital text documents accessible to Braille readers through an electrically driven dynamic board. Although it is valuable for document conversion, the system lacks real-time speech recognition, making it unsuitable for interactive communication scenarios. This invention bridges this gap by offering live speech conversion into Braille, specifically tailored for use in everyday conversations.
[009] US Patent 11,472,197 B2 presents a high-speed Braille printer that uses strike pins to emboss Braille on paper. This system enables the production of Braille documents quickly and efficiently, but its application is confined to the physical production of Braille on paper. It lacks dynamic, real-time interactivity and is not a portable solution. The present invention takes a different approach by offering a refreshable Braille display that can be integrated into a wearable device, allowing users to access Braille information from speech inputs instantly and without the need for printed materials.
[010] A Japanese patent introduces a Braille learning device and method that uses voice input and a refreshable Braille board to teach users how to read and write Braille. While this device is beneficial for educational purposes, it focuses primarily on Braille literacy and is not intended for real-time communication. Moreover, the system does not offer portability or speech recognition features, which are critical for interactive communication. In contrast, this invention is designed for dynamic speech-to-Braille conversion, making it more practical for everyday use by individuals with visual impairments.
[011] US 2016/0148538 A1 also discusses a method of converting PDF files into Braille via an interactive dynamic board. Like similar document conversion systems, this invention provides an efficient way to make digital documents accessible to visually impaired individuals. However, it is not designed for real-time communication or speech-to-Braille conversion. This invention addresses this need by providing a portable, real-time speech-to-Braille solution that is ideal for conversational use, particularly in the Kannada language.
[012] US Patent 20150302120 A1 describes a multi-modal communication system that aims to facilitate communication between users of different modalities, such as speech and Braille. While the system offers some level of versatility, it is primarily designed for written or pre-recorded communication. It does not support real-time speech-to-Braille conversion or provide a wearable, portable solution for users on the go. The current invention advances beyond this by providing real-time, dynamic communication through a wearable device, making it more suitable for day-to-day use.
[013] Finally, US Patent 20200296002 A1 discusses a system that utilizes haptic feedback for visually impaired users but lacks a comprehensive speech-to-Braille system. While the system integrates feedback mechanisms to aid in interaction, it is limited to a pre-programmed set of responses and does not facilitate open-ended, real-time communication. This invention overcomes this limitation by using a more flexible and dynamic approach to communication, converting speech into tactile Braille for real-time conversations, particularly in the Kannada language.
[014] The proposed wearable device is readily available to individuals with visual and hearing impairments, enabling real-time conversion of spoken Kannada into Braille script. With its compact, user-friendly design, the system ensures accessibility at any time, improving communication for the visually and hearing impaired.
SUMMARY OF THE INVENTION
[015] The present disclosure details a novel system and method for a wearable device designed to convert spoken Kannada into Braille dynamically. This system includes a hardware device in the form of a compact, portable unit, integrating a range of components to deliver a seamless user experience for individuals with visual and hearing impairments.
[016] The system includes an nRF5340 module as the primary control unit, managing Bluetooth communication and interfacing with other components. The Syntiant NDP120 is employed for offline speech-to-text processing, specifically tuned for Kannada language support. The system is powered by a rechargeable battery, ensuring longevity and portability for the wearable design. A microphone connected to a Raspberry Pi captures audio input, which is processed by the Syntiant NDP120 to convert spoken Kannada into text.
[017] The device features a Braille display controlled by the nRF5340 module. This display utilizes piezoelectric technology to raise and lower the Braille pins, translating the converted text into readable Braille. While the use of piezoelectric technology for Braille displays is not new, the present system includes specific adaptations to meet the unique needs of the device's design and functionality.
[018] The present system is a novel approach to wearable communication devices, providing offline functionality that sets it apart from existing cloud-based solutions. It offers an accessible and convenient means of communication, delivering real-time translation of spoken Kannada into Braille without the need for an internet connection. This integration of advanced components ensures immediate and effective communication, enhancing interaction and accessibility for users.
BRIEF DESCRIPTION OF ACCOMPANYING DRAWING
[019] The present disclosure, an innovative Kannada Speech-to-Braille translation system for individuals with visual and hearing impairments, will now be described with the help of the accompanying drawings, in which:
Figure 1 illustrates a block diagram of the Kannada Speech-to-Braille Translation System, in accordance with an embodiment of the present disclosure.
Figure 2 illustrates the overall workflow of the system, in accordance with an embodiment of the present disclosure.
This diagram shows the steps involved in converting spoken Kannada into Braille.
Figure 3 illustrates the overall method, in accordance with an embodiment of the present disclosure.
This flowchart shows the process of converting spoken Kannada into Braille.
Figure 4 illustrates the schematic diagram of the proposed wearable communication device for the deaf-blind.
The figure shows the connections between the components of the device.
Figure 5 illustrates the Bharathi Braille character mapping for the alphabets in Kannada.
Figure 6 illustrates the transliteration of words in Kannada to Bharathi Braille.
DETAILED DESCRIPTION OF THE INVENTION
[020] Embodiments are provided so as to thoroughly and fully convey the scope of the present disclosure to the person skilled in the art. Numerous details are set forth, relating to specific components, and methods, to provide a complete understanding of embodiments of the present disclosure. It will be apparent to the person skilled in the art that the details provided in the embodiments should not be construed to limit the scope of the present disclosure. In some embodiments, well-known processes, well-known apparatus structures, and well-known techniques are not described in detail.
[021] The terminology used in the present disclosure is only for the purpose of explaining particular embodiments, and such terminology shall not be considered to limit the scope of the present disclosure. As used in the present disclosure, the forms "a," "an," and "the" may be intended to include the plural forms as well, unless the context clearly suggests otherwise. The terms "comprises," "comprising," "including," and "having," are open-ended transitional phrases and therefore specify the presence of stated features, elements, modules, units and/or components, but do not forbid the presence or addition of one or more other features, elements, components, and/or groups thereof. The particular order of steps disclosed in the method and process of the present disclosure is not to be construed as necessarily requiring their performance as described or illustrated. It is also to be understood that additional or alternative steps may be employed.
[022] The foregoing description of the specific embodiments so fully reveals the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the embodiments as described herein.
[023] Throughout this specification the word "comprise", or variations such as "comprises" or "comprising", will be understood to imply the inclusion of a stated element, step, or group of elements, steps, but not the exclusion of any other element, step, or group of elements, or steps.
[024] While considerable emphasis has been placed herein on the components and component parts of the preferred embodiments, it will be appreciated that many embodiments can be made and that many changes can be made in the preferred embodiments without departing from the principles of the disclosure. These and other changes in the preferred embodiment as well as other embodiments of the disclosure will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the disclosure and not as a limitation.
[025] The present disclosure envisages an Innovative Kannada Speech-to-Braille Translation System for Individuals with Visual and Hearing Impairments.
Figure 1 depicts the system diagram outlining the comprehensive functionality of the proposed system.
[026] It describes the steps involved in transforming spoken Kannada into Braille. The process starts with capturing Kannada audio and generating a text transcription. Then, the Kannada script is transcribed into Braille, and finally, the Braille characters are displayed. This process could be useful for people who are blind or visually impaired, enabling them to access and understand information spoken in Kannada.
1. Transforming voice input from a microphone into text format: This step uses speech recognition technology to convert spoken words into written text.
2. Capturing Kannada audio and generating its text transcription: This step specifically captures audio in the Kannada language and converts it into Kannada text.
3. Transcribing Kannada script into Braille: This step translates the Kannada text into Braille, which is a tactile reading and writing system for people who are blind or visually impaired.
4. Displaying Braille characters: This step displays the Braille characters, either on a Braille display or by printing them on Braille paper.
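For illustration only, the four steps above could be chained as in the Python sketch below. The stage functions are injected as callables because none of their concrete implementations are specified in this disclosure; all names and signatures here are assumptions.

```python
from typing import Callable, Iterable, List

def speech_to_braille_pipeline(
    audio: bytes,
    recognize: Callable[[bytes], str],          # Steps 1-2: Kannada speech -> Kannada text
    to_braille: Callable[[str], List[str]],     # Step 3: Kannada text -> Braille cells
    display: Callable[[Iterable[str]], None],   # Step 4: drive a Braille display or embosser
) -> List[str]:
    """Chain the four stages described above; every stage is a pluggable callable."""
    text = recognize(audio)
    cells = to_braille(text)
    display(cells)
    return cells
```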
[027] This process could be used to create a system that enables people who are blind or visually impaired to access information that is originally presented in spoken Kannada. For example, such a system could be used to:
● Provide real-time transcription of Kannada speeches and presentations: This would allow blind or visually impaired people to follow along with the spoken content.
● Create Braille versions of books and other written materials: This would allow blind or visually impaired people to access a wider range of written content.
● Develop educational materials in Braille for Kannada speakers: This would help to ensure that blind or visually impaired Kannada speakers have access to quality education.
Figure 2 illustrates the overall workflow of the system, in accordance with an embodiment of the present disclosure.
[028] The diagram represents a system designed to convert spoken Kannada language into Braille, thereby enhancing accessibility for visually impaired individuals. This process integrates advanced technologies like speech recognition, natural language processing (NLP), and Braille transcription to create a seamless experience. Below is a detailed breakdown of the workflow and the critical aspects of each stage.
Step 1: Transforming Voice Input from a Microphone into Text Format
[029] In the first stage, the system captures the user's voice input through a microphone. This voice input is in the Kannada language, and the goal is to convert it into text. This transformation involves the following processes:
● Speech Recognition:
○ Technology: Speech recognition technology is employed to convert spoken words into text. For Kannada, this process must be equipped with a language model trained on Kannada phonetics and vocabulary. Technologies like Google's Speech-to-Text API or open-source tools such as Mozilla's DeepSpeech can be tailored to recognize Kannada speech.
○ Challenges: Recognizing speech in Kannada presents specific challenges, including the diversity of dialects, pronunciation variations, and the presence of unique phonemes not found in other languages. Thus, an accurate speech recognition system for Kannada requires a comprehensive dataset representing these variations to enhance accuracy.
● Noise Filtering and Preprocessing:
○ Before converting the audio to text, the system preprocesses the audio input. This involves filtering out background noise and normalizing audio levels to ensure clarity and improve recognition accuracy. Techniques such as spectral subtraction and adaptive filtering are employed to enhance the quality of the captured audio.
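As a minimal sketch of this preprocessing stage, the snippet below assumes mono PCM samples already loaded into a NumPy array and applies peak normalization followed by a crude, whole-signal spectral subtraction against a recorded noise profile. The single-pass formulation is an illustrative simplification, not the method specified by this disclosure.

```python
import numpy as np

def preprocess_audio(pcm: np.ndarray, noise_profile: np.ndarray) -> np.ndarray:
    """Peak-normalize the signal and apply a crude whole-signal spectral subtraction."""
    pcm = pcm.astype(np.float32)
    pcm /= (np.max(np.abs(pcm)) + 1e-9)          # normalize level to avoid clipping

    spectrum = np.fft.rfft(pcm)                   # spectrum of the noisy signal
    noise_mag = np.abs(np.fft.rfft(noise_profile.astype(np.float32), n=len(pcm)))
    cleaned_mag = np.maximum(np.abs(spectrum) - noise_mag, 0.0)   # subtract the noise floor
    cleaned = np.fft.irfft(cleaned_mag * np.exp(1j * np.angle(spectrum)), n=len(pcm))
    return cleaned.astype(np.float32)
```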
Step 2: Capturing Kannada Audio and Generating Its Text Transcription
[030] The second stage focuses on handling the audio input specifically in Kannada and transcribing it in text format. This stage builds on the speech recognition process, tailored for the nuances of the Kannada language:
● Language-Specific Speech Recognition:
○ Language Model: The speech recognition system must include a Kannada language model that understands the syntax, semantics, and phonetic aspects of Kannada. Unlike English or other widely spoken languages, Kannada has its own script and phonetic structure, requiring a dedicated language model to transcribe audio accurately.
○ Acoustic Model: The acoustic model should be trained on diverse Kannada audio samples, including various dialects and speaking speeds. This ensures the system can accurately transcribe speech even in the presence of accent variations and regional pronunciations.
● Post-Processing of Text:
○ After transcription, the raw text is processed to correct any recognition errors. This involves applying natural language processing (NLP) techniques such as spell-checking and grammar correction specific to the Kannada language. This step enhances the readability and accuracy of the transcribed text.
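A toy version of this post-processing pass, assuming a small in-memory lexicon and fuzzy string matching (both placeholders for a full Kannada NLP pipeline), might look like the following.

```python
from difflib import get_close_matches
from typing import List

# Tiny illustrative lexicon; a real system would load a full Kannada dictionary.
KANNADA_LEXICON = ["ನಮಸ್ಕಾರ", "ಧನ್ಯವಾದ", "ಕನ್ನಡ", "ಶಾಲೆ"]

def correct_transcript(words: List[str], cutoff: float = 0.8) -> List[str]:
    """Replace out-of-lexicon words with their closest in-lexicon match, if one exists."""
    corrected = []
    for word in words:
        if word in KANNADA_LEXICON:
            corrected.append(word)
            continue
        match = get_close_matches(word, KANNADA_LEXICON, n=1, cutoff=cutoff)
        corrected.append(match[0] if match else word)   # keep the word if nothing is close
    return corrected
```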
Step 3: Transcribing Kannada Script into Braille
[031] The third stage involves translating the transcribed Kannada text into Braille, a tactile writing system used by visually impaired individuals. This step is crucial as it transforms the text into a format that can be read through touch:
● Kannada to Braille Mapping:
○ Braille System: Braille is a system of raised dots representing letters, punctuation, and symbols. For Kannada, the system must map Kannada characters to their corresponding Braille representations. Kannada Braille follows a specific set of rules and mappings to represent Kannada alphabets, vowels, consonants, and symbols in a tactile format.
○ Transcription Process: The system employs a mapping algorithm that converts each Kannada character into its Braille equivalent. This process must consider the linguistic and grammatical rules of the Kannada script to ensure that the Braille output accurately represents the text's meaning and structure (a minimal mapping sketch follows this list).
● Handling Complex Characters:
○ Kannada, like many Indian languages, has complex characters, including compound consonants and vowel signs. The system must effectively handle these complexities to provide an accurate Braille translation. For instance, a compound consonant in Kannada must be represented in Braille using a combination of dots that accurately reflect its pronunciation and meaning.
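To make the mapping step concrete, the sketch below keeps a small character-to-dot-pattern dictionary and converts text one character at a time. The dot patterns shown are illustrative examples only; the authoritative Kannada-to-Bharathi-Braille mapping is the one given in Figure 5. A complete transcriber would also need to handle vowel signs and conjunct consonants, which span multiple Unicode codepoints, rather than mapping characters one-to-one as this sketch does.

```python
from typing import FrozenSet, List

# Illustrative (not exhaustive) character-to-dot mapping: each entry lists the raised
# dots (1-6) of a six-dot cell. These patterns are placeholders for sketching purposes;
# the authoritative mapping is the one shown in Figure 5.
KANNADA_TO_DOTS = {
    "ಅ": {1},           # a
    "ಇ": {2, 4},        # i
    "ಉ": {1, 3, 6},     # u
    "ಕ": {1, 3},        # ka
    "ಗ": {1, 2, 4, 5},  # ga
    "ಚ": {1, 4},        # ca
}

def kannada_text_to_braille(text: str) -> List[FrozenSet[int]]:
    """Map each Kannada character to its Braille dot pattern; unknown characters are skipped."""
    cells = []
    for ch in text:
        if ch in KANNADA_TO_DOTS:
            cells.append(frozenset(KANNADA_TO_DOTS[ch]))
        elif ch.isspace():
            cells.append(frozenset())   # blank cell marks a word boundary
    return cells
```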
Step 4: Displaying Braille Characters
[032] The final step involves presenting the Braille text to the user. This can be done in various ways depending on the user's needs and available resources:
● Braille Display:
○ Refreshable Braille Display: For real-time interaction, a refreshable Braille display can be used. This electronic device has pins that move up and down to form Braille characters. As the Kannada text is converted to Braille, the display dynamically updates, allowing the user to read the text through touch.
○ Printing Braille: In cases where a permanent record is needed, the Braille text can be printed on paper using a Braille embosser. This device punches dots onto the paper to form the Braille characters, providing a tangible output that can be read anytime.
● Accessibility and Usability:
○ The system must ensure that the Braille output is accurate and easily readable. This involves considerations such as dot spacing, tactile clarity, and the layout of Braille characters on the display or printed page. The system should also allow for user adjustments, such as modifying the speed of text presentation on a refreshable Braille display to match the user's reading pace.
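How the resulting cells might be pushed to a refreshable display can be sketched as follows; the one-byte-per-cell encoding and the write_byte interface are assumptions standing in for the actual piezoelectric driver electronics, which are not detailed here.

```python
from typing import Callable, FrozenSet, List

def dots_to_bitmask(dots: FrozenSet[int]) -> int:
    """Pack a six-dot cell into one byte (bit 0 = dot 1 ... bit 5 = dot 6)."""
    mask = 0
    for dot in dots:
        mask |= 1 << (dot - 1)
    return mask

def refresh_display(cells: List[FrozenSet[int]], write_byte: Callable[[int], None]) -> None:
    """Send one byte per cell to the display controller through the supplied write_byte()."""
    for cell in cells:
        write_byte(dots_to_bitmask(cell))
```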
Figure 3 illustrates the overall method, in accordance with an embodiment of the present disclosure.
[033] This diagram depicts a system that converts spoken words into Braille output. It illustrates the flow of information through the different components of the system.
1. Microphone: This is the input device that captures the spoken words.
2. Syntiant NDP120: This component processes the audio input and converts it into text. It's essentially a speech-to-text engine.
3. nRF5340 Module: This module acts as the brain of the system. It processes the text received from the Syntiant chip and controls the Braille display.
4. Braille Display: This is the output device that displays the text in Braille format, enabling visually impaired individuals to read it.
5. Rechargeable Battery: This provides the power necessary to operate the entire system.
The arrows indicate the direction of data flow, showing how the audio input is converted into Braille output.
Figure 4 demonstrates how technology can be used to bridge communication gaps for people with visual impairments. It combines speech recognition, text processing, and Braille output to create a functional and accessible tool.
[034] This device is a wearable assistive technology designed for individuals who are blind or visually impaired, enabling them to access information in Kannada through the tactile sense of braille. The device utilizes speech recognition technology to convert spoken Kannada words into text, which is then translated into Bharathi Braille for Kannada. This braille representation is then displayed on a refreshable braille board.
[035] The device's functionality begins with the microphone capturing the user's spoken Kannada input. This audio signal is then processed by the Syntiant NDP120 chip, a low-power speech-to-text processor specifically designed for edge computing applications. The chip converts the audio into text, which is then sent to the nRF5340 module for further processing.
[036] The nRF5340 module, acting as the central processing unit, houses the software responsible for translating the Kannada text into Bharathi Braille. This translation involves converting each Kannada character into its corresponding braille pattern, which is then transmitted to the refreshable braille display. The display, equipped with a grid of pins that can be raised or lowered, physically represents the braille dots, enabling the user to read the text by touch. The user can then interact with the device by pressing control buttons for actions like navigating through the text or performing other desired functions.
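At the firmware level, the interaction between these components could be approximated by the polling loop below. The read_transcript, translate, and refresh callables are hypothetical stand-ins for the NDP120 speech-to-text front end, the nRF5340-side translation software, and the Braille display driver respectively; the actual firmware is not disclosed here.

```python
import time
from typing import Callable, FrozenSet, List, Optional

def main_loop(
    read_transcript: Callable[[], Optional[str]],        # new Kannada text from the speech front end
    translate: Callable[[str], List[FrozenSet[int]]],    # Kannada text -> Braille cells
    refresh: Callable[[List[FrozenSet[int]]], None],     # raise/lower the display pins
    poll_interval_s: float = 0.1,
) -> None:
    """Poll for newly recognized speech and push each new utterance to the Braille display."""
    while True:
        text = read_transcript()
        if text:
            refresh(translate(text))
        time.sleep(poll_interval_s)
```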
Figure 5 illustrates the transliteration of Kannada alphabets to Bharathi Braille Kannada, which is a specialized adaptation of the standard Braille system tailored for the Kannada language, one of the major Dravidian languages spoken in Karnataka, India. This system was developed to enhance the literacy and accessibility of Kannada for visually impaired individuals by mapping Kannada characters to the Braille script. Bharathi Braille Kannada utilizes a unique set of Braille symbols to represent the various Kannada alphabets and consonants, ensuring that users can read and write in their native language effectively.
[037] In Bharathi Braille Kannada, each Kannada character is assigned a specific Braille cell configuration. The Kannada alphabet consists of vowels (swaras) and consonants (vyanjanas), each with its corresponding Braille symbol. For instance, the Braille representation of Kannada vowels like 'ಅ' (a), 'ಇ' (i), and 'ಉ' (u) are distinct, allowing visually impaired readers to differentiate them easily. Similarly, consonants such as 'ಕ' (ka), 'ಗ' (ga), and 'ಚ' (cha) have their unique Braille patterns. The mapping process ensures that the characters are represented accurately, preserving the phonetic and linguistic integrity of Kannada. This tailored approach facilitates smoother learning and reading experiences for users who rely on Braille for communication and education in Kannada.
[038] Figure 6 illustrates a table mapping Kannada words to their corresponding Braille representations, offering a clear visual guide to how Kannada can be transcribed into Braille. In this table, the "Kannada Word" column lists specific Kannada words, while the "Braille Representation" column depicts how each word is converted into Braille using the standard six-dot cell system. Each Braille cell is organized in a two-by-three grid, where various combinations of raised dots represent different Kannada characters, including vowels, consonants, and other linguistic symbols.
[039] This visual mapping is especially useful for visually impaired individuals learning to read and write in Kannada. By providing an accessible reference, the table helps them understand how the Kannada script corresponds to the tactile Braille system. Each dot pattern in the Braille cell aligns with a specific Kannada character, ensuring that the phonetics and structure of the language are preserved. Tools like this can also contribute to the development of resources for teaching Kannada Braille, as well as creating educational materials, Braille readers, and writing tools for visually impaired individuals.
[040] This system of transcribing Kannada into Braille is an essential educational resource for improving literacy among visually impaired Kannada speakers. It empowers them to access written content in their native language, offering independence in reading and writing. The table serves as an introductory tool that bridges the gap between the Kannada language and Braille, fostering better communication and accessibility for the visually impaired.
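For documenting or testing mappings like those in Figures 5 and 6, a dot pattern can be previewed as a character from the Unicode Braille Patterns block (U+2800), in which bits 0 through 5 correspond to dots 1 through 6. The small utility below is illustrative only and is not part of the disclosed device.

```python
def dots_to_unicode(dots: set) -> str:
    """Render a six-dot cell as a character from the Unicode Braille Patterns block (U+2800)."""
    offset = 0
    for dot in dots:
        offset |= 1 << (dot - 1)    # Unicode assigns bit 0 to dot 1, bit 1 to dot 2, and so on
    return chr(0x2800 + offset)

print(dots_to_unicode({1, 3}))      # dots 1 and 3 -> "⠅"
```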
[041] This system demonstrates how technology can be used to bridge communication gaps for people with visual impairments. It combines speech recognition, text processing, and Braille output to create a functional and accessible tool.
OBJECTS OF THE INVENTION
[042] Some of the objects of the present disclosure, which at least one embodiment herein satisfies, are as follows:
[043] It is an object of the present disclosure to ameliorate one or more problems of the prior art or to at least provide a useful alternative.
[044] An object of the present disclosure is to provide a system that enables individuals with visual and hearing impairments to communicate effectively by dynamically converting spoken Kannada into Braille script through a wearable device, offering offline functionality, supporting localized language needs, and utilizing piezoelectric actuators for precise Braille display.
[045] The foremost aim is to provide an alternative solution to existing cloud-based communication tools by offering a wearable device that converts spoken Kannada into Braille script offline, ensuring accessibility and ease of use for individuals with visual and hearing impairments.
[046] Another object of the present disclosure is to enhance communication accessibility for individuals with multiple disabilities by integrating dynamic Braille display technology that allows real-time conversion of speech to Braille, supporting seamless interaction across different modes of communication.
[047] Another object of the present disclosure is to develop a cost-effective and portable wearable device that ensures reliable offline functionality, catering specifically to users who require immediate and constant access to Braille output from spoken Kannada, without reliance on internet connectivity.
[048] Yet another object of the present disclosure is to provide a versatile communication system that supports seamless interaction between visually impaired users and others, by converting spoken Kannada into Braille, thus addressing communication barriers for those who are both visually and hearing impaired.
[049] Another object of the present disclosure is to ensure the system's functionality in offline environments, distinguishing it from existing cloud-based solutions and making it accessible in various settings without reliance on internet connectivity.
[050] Another object of the present disclosure is to integrate seamless translation between spoken Kannada and Braille, enhancing communication accessibility for users with visual impairments in their native language.
Claims:
1. A wearable device for hearing and visually impaired individuals, comprising: an audio input module, a processing unit, and a refreshable Braille display, wherein the device dynamically converts spoken Kannada into Braille in real-time using piezoelectric actuators for pin movement.
2. The device of claim 1, wherein the processing unit translates received audio into digital text format, converting it into Braille output using piezoelectric actuators to raise and lower Braille pins for tactile feedback.
3. The device further comprises an offline mode to ensure operation without internet connectivity, distinguishing it from cloud-based systems, enabling seamless usage in various environments.
4. The device of claim 1, wherein the audio input module includes a microphone that captures speech, and a speech recognition processor that filters background noise, ensuring clear input in diverse surroundings.
5. The refreshable Braille display of the device is designed with 2x6 Braille cells that are raised and lowered using piezoelectric actuators, facilitating real-time Braille output with continuous, dynamic translation.
6. The device is further configured to operate in a portable, user-friendly wearable format, providing extended battery life, ensuring accessibility for individuals who require on-the-go speech-to-Braille conversion, and supporting seamless interaction between users.

Documents

Name | Date
202441088793-COMPLETE SPECIFICATION [16-11-2024(online)].pdf | 16/11/2024
202441088793-DRAWINGS [16-11-2024(online)].pdf | 16/11/2024
202441088793-FORM 1 [16-11-2024(online)].pdf | 16/11/2024
202441088793-FORM 18 [16-11-2024(online)].pdf | 16/11/2024
202441088793-FORM-9 [16-11-2024(online)].pdf | 16/11/2024
202441088793-REQUEST FOR EARLY PUBLICATION(FORM-9) [16-11-2024(online)].pdf | 16/11/2024
202441088793-REQUEST FOR EXAMINATION (FORM-18) [16-11-2024(online)].pdf | 16/11/2024
