WEARABLE ASSISTANCE DEVICE FOR VISUAL AND HEARING-IMPAIRED INDIVIDUAL
ORDINARY APPLICATION
Published
Filed on 20 November 2024
Abstract
The present invention discloses a wearable assistance device for visually and hearing-impaired individuals, designed to enhance environmental awareness, navigation, and safety. Integrating advanced hardware and software, the device includes a dual-camera module for stereoscopic imaging, a processing unit with AI for real-time object recognition and obstacle prioritization, and a haptic feedback system for tactile alerts. A bone conduction audio system provides spatial auditory cues, while a gesture recognition module allows hands-free control. The power management unit supports wireless charging and adaptive power-saving. Together, these components provide real-time, intuitive feedback on obstacles and surroundings, allowing users to navigate safely and independently.
Patent Information
| Field | Value |
|---|---|
| Application ID | 202411089783 |
| Invention Field | COMPUTER SCIENCE |
| Date of Application | 20/11/2024 |
| Publication Number | 48/2024 |
Inventors
| Name | Address | Country | Nationality |
|---|---|---|---|
| Dr. Prashant Mani Tripathi | Assistant Professor, Electronics and Communication Engineering, Ajay Kumar Garg Engineering College, 27th KM Milestone, Delhi - Meerut Expy, Ghaziabad, Uttar Pradesh 201015, India. | India | India |
| Shubham Yadav | Department of Electronics and Communication Engineering, Ajay Kumar Garg Engineering College, 27th KM Milestone, Delhi - Meerut Expy, Ghaziabad, Uttar Pradesh 201015, India. | India | India |
Applicants
| Name | Address | Country | Nationality |
|---|---|---|---|
| Ajay Kumar Garg Engineering College | 27th KM Milestone, Delhi - Meerut Expy, Ghaziabad, Uttar Pradesh 201015 | India | India |
Specification
Description:

[014] The following is a detailed description of embodiments of the disclosure depicted in the accompanying drawings. The embodiments are described in such detail as to clearly communicate the disclosure. However, the amount of detail offered is not intended to limit the anticipated variations of embodiments; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present disclosure as defined by the appended claims.
[015] In the following description, numerous specific details are set forth in order to provide a thorough understanding of embodiments of the present invention. It will be apparent to one skilled in the art that embodiments of the present invention may be practiced without some of these specific details.
[016] Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail to avoid obscuring the embodiments.
[017] Also, it is noted that individual embodiments may be described as a process that is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function.
[018] The word "exemplary" and/or "demonstrative" is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as "exemplary" and/or "demonstrative" is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms "includes," "has," "contains," and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term "comprising" as an open transition word without precluding any additional or other elements.
[019] Reference throughout this specification to "one embodiment" or "an embodiment" or "an instance" or "one instance" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
[020] In an embodiment of the invention and referring to Figure 1, the present invention is a wearable assistance device specifically designed for individuals with both visual and hearing impairments, providing them with enhanced environmental awareness, navigation, and safety features. This wearable device integrates an array of innovative hardware and software components, each of which performs critical functions to achieve a seamless and highly responsive assistive experience. The key components of the device include a camera module, a processing unit with artificial intelligence (AI) capabilities, haptic and auditory feedback systems, a gesture recognition system, and a power management unit. Together, these components work in tandem to deliver real-time feedback to the user, facilitating safe and independent navigation.
[021] The camera module forms the core of the visual data acquisition system in the wearable device. This module comprises dual wide-angle cameras capable of capturing a 180-degree field of view. To ensure high-quality performance in various lighting conditions, each camera is equipped with an infrared (IR) sensor and a low-light image processor. The dual-camera setup enables stereoscopic imaging, allowing the device to estimate the distance of objects and obstacles. The cameras are connected to the main processing unit via high-speed data channels, ensuring minimal latency in data transfer. The visual data captured is crucial for the device's navigation and obstacle detection capabilities, forming the first layer of environmental awareness.
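The specification does not give the stereo geometry, but the distance estimation behind the dual-camera setup follows the standard pinhole stereo relation Z = f·B/d. A minimal sketch in Python; the focal length and baseline values are illustrative assumptions, not figures from the patent:

```python
import numpy as np

# Illustrative stereo parameters (assumed, not from the specification):
FOCAL_LENGTH_PX = 700.0   # focal length in pixels
BASELINE_M = 0.06         # distance between the two cameras, in metres

def depth_from_disparity(disparity_px: np.ndarray) -> np.ndarray:
    """Convert a per-pixel disparity map to metric depth.

    Standard pinhole stereo relation: Z = f * B / d, where f is the
    focal length in pixels, B the camera baseline, and d the disparity.
    """
    # Avoid division by zero where no match was found (disparity == 0).
    safe = np.where(disparity_px > 0, disparity_px, np.nan)
    return FOCAL_LENGTH_PX * BASELINE_M / safe

# Example: an obstacle producing 35 px of disparity sits ~1.2 m away.
print(depth_from_disparity(np.array([35.0])))  # -> [1.2]
```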
[022] Data captured by the camera module is continuously sent to the processing unit, which is the computational center of the device. This unit is powered by a multi-core processor integrated with a neural processing unit (NPU) optimized for machine learning (ML) tasks. The NPU executes complex algorithms such as object detection, depth perception, and obstacle classification in real-time. The processor employs convolutional neural networks (CNNs) to recognize and categorize objects, such as pedestrians, vehicles, walls, and open spaces, allowing the device to generate an accurate and detailed map of the surroundings. The processing unit's machine learning model has been pre-trained on a dataset of varied environments, enhancing its ability to generalize across different conditions and locations.
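The patent does not name a specific network, so as a rough stand-in, a per-frame detection pass using an off-the-shelf torchvision detector (an assumption, not the device's actual pre-trained model) might look like:

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Stand-in detector; the patent does not specify which CNN the NPU runs.
model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

@torch.no_grad()
def detect(frame: torch.Tensor, score_threshold: float = 0.5):
    """Run one detection pass on a single RGB frame (C x H x W, floats in [0, 1])."""
    output = model([frame])[0]  # dict of boxes, labels, scores for this frame
    keep = output["scores"] >= score_threshold
    return output["boxes"][keep], output["labels"][keep], output["scores"][keep]

# Example with a dummy frame; on-device this would come from the camera module.
boxes, labels, scores = detect(torch.rand(3, 480, 640))
```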
[023] A unique feature of the processing unit is its dynamic obstacle prioritization algorithm. This algorithm is responsible for categorizing obstacles by their relative risk to the user. For example, if an obstacle is detected in the user's direct path within a predefined proximity, it is assigned a high-priority status and immediately triggers feedback through the haptic and auditory modules. Conversely, objects that are farther away or not directly in the user's path are marked as low-priority and are either ignored or processed with reduced feedback intensity. This prioritization reduces unnecessary feedback, minimizing cognitive load on the user.
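A minimal sketch of such a prioritization rule, mirroring the behavior described above; the proximity threshold and path-angle window are chosen here purely as assumptions, since the patent gives no numeric values:

```python
from dataclasses import dataclass

# Thresholds are illustrative assumptions; the patent does not specify values.
HIGH_PRIORITY_RANGE_M = 2.0    # obstacles closer than this are urgent
PATH_HALF_ANGLE_DEG = 15.0     # bearing window counted as "direct path"

@dataclass
class Obstacle:
    distance_m: float   # from stereo depth estimation
    bearing_deg: float  # 0 = straight ahead, negative = left
    kind: str           # e.g. "vehicle", "wall", "pedestrian"

def prioritize(obstacle: Obstacle) -> str:
    """Classify an obstacle as high or low priority, per paragraph [023]."""
    in_path = abs(obstacle.bearing_deg) <= PATH_HALF_ANGLE_DEG
    if in_path and obstacle.distance_m <= HIGH_PRIORITY_RANGE_M:
        return "high"    # triggers immediate haptic + audio feedback
    return "low"         # reduced-intensity feedback, or ignored

print(prioritize(Obstacle(1.4, 5.0, "pedestrian")))  # -> high
```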
[024] The haptic feedback system is a crucial element in communicating obstacle information to users with visual and hearing impairments. It consists of an array of vibration motors embedded in a wearable band around the wrist or arm. These motors are controlled by the processing unit to convey information about the direction, distance, and type of obstacles. For example, a stronger vibration indicates proximity to a closer obstacle, while distinct vibration patterns correspond to different types of objects, such as moving vehicles or stationary walls. This system is particularly advantageous for users who are accustomed to tactile feedback, as it provides an intuitive and non-intrusive way to interpret environmental cues.
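A sketch of the obstacle-to-vibration mapping described above; the pattern names and intensity curve are assumptions, since the patent specifies only the qualitative behavior (stronger vibration when closer, distinct patterns per object type):

```python
# Illustrative mapping from obstacle state to a vibration command.
PATTERNS = {"vehicle": "double-pulse", "wall": "steady", "pedestrian": "triple-pulse"}

def haptic_command(distance_m: float, kind: str, max_range_m: float = 4.0) -> dict:
    """Stronger vibration for closer obstacles; distinct pattern per type."""
    closeness = max(0.0, min(1.0, 1.0 - distance_m / max_range_m))
    return {
        "intensity": round(closeness, 2),          # 0.0 (far) .. 1.0 (touching)
        "pattern": PATTERNS.get(kind, "steady"),   # fallback for unknown types
    }

print(haptic_command(1.0, "vehicle"))  # -> {'intensity': 0.75, 'pattern': 'double-pulse'}
```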
[025] In addition to haptic feedback, the device incorporates a bone conduction audio system that transmits auditory signals directly to the inner ear through the skull. This system is ideal for individuals with partial hearing ability, as it enables them to perceive audio cues without blocking ambient sounds. Audio cues, generated by the processor, indicate the presence and direction of obstacles, as well as navigation prompts for guiding the user along a safe path. The audio feedback system uses spatial sound design, where the pitch, volume, and tempo of sounds adjust based on the relative position and speed of obstacles, creating a comprehensive auditory map of the surroundings.
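The same idea for the audio channel; all scaling constants below are assumptions, as the patent states only that pitch, volume, and tempo track obstacle position and speed:

```python
import math

def audio_cue(distance_m: float, bearing_deg: float, speed_mps: float) -> dict:
    """Map obstacle position and motion to spatial-audio parameters."""
    return {
        # Louder when closer (simple inverse falloff, clamped to [0, 1]).
        "volume": min(1.0, 1.0 / max(distance_m, 1.0)),
        # Stereo pan from bearing: -1 = hard left, +1 = hard right.
        "pan": math.sin(math.radians(bearing_deg)),
        # Faster-approaching obstacles get a higher beep rate.
        "tempo_hz": 1.0 + 2.0 * max(0.0, speed_mps),
        # Pitch rises as the obstacle closes in (assumed 200-800 Hz band).
        "pitch_hz": 200.0 + 600.0 / max(distance_m, 1.0),
    }

print(audio_cue(distance_m=2.0, bearing_deg=-30.0, speed_mps=1.2))
```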
[026] To ensure a hands-free user experience, the device includes a gesture recognition system. This system is powered by an array of micro-electromechanical sensors (MEMS), including an accelerometer, gyroscope, and magnetometer, which detect specific wrist and hand movements. The gesture recognition system enables users to control various device functions, such as toggling between indoor and outdoor modes, adjusting feedback sensitivity, and switching the device on or off. By processing gesture data through a pattern recognition algorithm, the device allows users to interact intuitively without the need for physical buttons.
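A toy illustration of threshold-based pattern matching over a MEMS accelerometer window; the gesture set, thresholds, and command bindings are assumptions, as the patent only says sensor data is matched against patterns to trigger commands:

```python
import numpy as np

def classify_gesture(accel_window: np.ndarray) -> str:
    """Classify a short accelerometer window (N x 3 samples, in g)."""
    energy = np.linalg.norm(accel_window, axis=1)        # per-sample magnitude
    peaks = int(np.sum((energy[1:-1] > energy[:-2]) &
                       (energy[1:-1] > energy[2:]) &
                       (energy[1:-1] > 1.5)))            # count sharp-motion peaks
    if peaks >= 2:
        return "double-flick: toggle indoor/outdoor mode"
    if peaks == 1:
        return "single-flick: adjust feedback sensitivity"
    return "no gesture"

window = np.abs(np.random.default_rng(0).normal(0.0, 1.2, size=(50, 3)))
print(classify_gesture(window))
```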
[027] Powering the entire device is a power management module that includes a high-capacity lithium-ion battery and an adaptive power-saving circuit. This module is designed to maximize the operational time of the device without frequent recharging. The power-saving circuit dynamically allocates power based on the device's operational state; for example, it reduces power to non-essential components when the device is idle. Additionally, the device supports wireless charging, allowing for convenient recharging without requiring removal of the device.
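A sketch of the state-based power allocation described in paragraph [027]; the component list and milliwatt figures are assumptions, not values from the specification:

```python
from enum import Enum

class DeviceState(Enum):
    ACTIVE_NAVIGATION = "active"
    IDLE = "idle"

# Illustrative per-component power budget (milliwatts); values are assumed.
POWER_BUDGET_MW = {
    "active": {"cameras": 900, "npu": 1200, "haptics": 150, "audio": 100},
    "idle":   {"cameras": 0,   "npu": 50,   "haptics": 0,   "audio": 0},
}

def allocate_power(state: DeviceState) -> dict:
    """Per paragraph [027]: cut power to non-essential components when idle."""
    return POWER_BUDGET_MW[state.value]

print(allocate_power(DeviceState.IDLE))  # cameras and haptics fully gated off
```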
[028] The device's software framework integrates all these hardware components into a cohesive system, governed by a real-time operating system (RTOS) optimized for low-latency performance. The RTOS handles concurrent tasks such as camera data processing, gesture detection, and feedback generation, ensuring seamless intercommunication between components. The software is also equipped with an auto-calibration feature, which allows the device to adjust its sensitivity based on ambient conditions, such as lighting and noise levels. This adaptability ensures consistent performance across diverse environments.
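The on-device scheduler is an RTOS; the sketch below uses Python's asyncio purely as a stand-in to illustrate the concurrent task split, not the actual firmware, and the loop rates are assumptions (only the ~30 ms feedback figure appears in Table 2):

```python
import asyncio

async def camera_task():
    while True:
        await asyncio.sleep(1 / 30)   # ~30 fps frame grab + processing (assumed)

async def gesture_task():
    while True:
        await asyncio.sleep(1 / 100)  # 100 Hz MEMS sampling (assumed)

async def feedback_task():
    while True:
        await asyncio.sleep(0.03)     # ~30 ms feedback loop (cf. Table 2)

async def main():
    # All three loops run concurrently, mirroring the RTOS task split.
    await asyncio.gather(camera_task(), gesture_task(), feedback_task())

# asyncio.run(main())  # uncomment to run; the loops run indefinitely
```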
[029] An important part of the software framework is the machine learning model update mechanism. This mechanism enables periodic updates to the device's AI algorithms, either through a connected smartphone application or via Wi-Fi, enhancing the device's ability to recognize new types of objects or adjust to evolving user preferences. The updates are stored in the device's memory and loaded into the processor's NPU, ensuring that the device stays current with advancements in assistive AI.
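A minimal sketch of a verify-and-stage update step, assuming a hypothetical manifest format and on-device storage layout (none of which the patent specifies):

```python
import hashlib
import json
from pathlib import Path

MODEL_DIR = Path("/device/models")  # hypothetical storage location

def apply_update(manifest: dict, payload: bytes) -> bool:
    """Verify a downloaded model blob and stage it for the NPU to load."""
    if hashlib.sha256(payload).hexdigest() != manifest["sha256"]:
        return False  # reject corrupted or tampered downloads
    staged = MODEL_DIR / f"model_v{manifest['version']}.bin"
    staged.write_bytes(payload)
    # Record the active manifest so the loader picks it up on next boot.
    (MODEL_DIR / "current.json").write_text(json.dumps(manifest))
    return True
```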
[030] To validate the device's efficacy, several tests were conducted, measuring key performance metrics such as obstacle detection accuracy, response time, and battery life. Table 1 below illustrates the obstacle detection accuracy of the device in different environments, showcasing its adaptability.
[031] Table 1: Obstacle Detection Accuracy

| Environment | Accuracy (%) | False Positives (%) |
|---|---|---|
| Indoor | 96 | 1 |
| Outdoor (Day) | 94 | 2 |
| Outdoor (Night) | 92 | 3 |
[032] In addition to accuracy, the response time of the device's feedback system was evaluated, as shown in Table 2. The results indicate that the device delivers feedback with minimal delay, ensuring that users receive timely alerts to navigate safely.
[033] Table 2: Response Time of Feedback System

| Feedback Mode | Average Delay | Standard Deviation |
|---|---|---|
| Haptic Feedback | 30 ms | ±5 ms |
| Audio Feedback | 32 ms | ±6 ms |
[034] Battery performance was also tested across different usage scenarios, as shown in Table 3. The device demonstrated sufficient battery life for extended use, which is essential for daily wearable applications.
[035] Table 3: Battery Life by Usage Scenario

| Usage Scenario | Battery Life | Power Consumption |
|---|---|---|
| Active Navigation Mode | 8 hours | High |
| Standby Mode | 24 hours | Low |
[036] Overall, the device's hardware and software components work harmoniously to deliver a responsive and effective assistive experience. The integration of the camera module, processor, haptic and audio feedback systems, gesture recognition, and power management ensures that users receive real-time, context-aware information about their environment. The adaptable feedback systems, coupled with the intelligent processing capabilities, provide comprehensive assistance to visually and hearing-impaired individuals, addressing their need for enhanced independence, safety, and ease of navigation. The invention's unique combination of novel components and sophisticated software algorithms represents a significant advancement in wearable assistive technology.

Claims:

1. An assistive wearable device designed for individuals with visual and hearing impairments, comprising:
a) a camera module configured to capture a 180-degree field of view and produce stereoscopic images for depth perception;
b) a processing unit with artificial intelligence (AI) capabilities, including a neural processing unit (NPU) optimized for machine learning tasks, configured to process image data, recognize obstacles, and categorize environmental elements;
c) a haptic feedback system with a plurality of vibration motors, arranged to convey obstacle information to the user based on the direction, distance, and type of detected obstacles;
d) a bone conduction audio system to deliver auditory signals directly to the inner ear of the user without occluding ambient sounds, providing spatial sound feedback based on obstacle proximity and direction;
e) a gesture recognition module equipped with micro-electromechanical sensors (MEMS) that detect specific hand and wrist movements to control device functions;
f) a power management unit with an adaptive power-saving circuit and wireless charging capability to optimize the operational time of the device.
2. The device as claimed in claim 1, wherein the camera module includes dual wide-angle cameras with infrared sensors and low-light processors to enhance image quality in various lighting conditions.
3. The device as claimed in claim 1, wherein the processing unit includes a convolutional neural network (CNN) for object detection, depth perception, and obstacle classification, and further includes a dynamic obstacle prioritization algorithm that assigns risk levels to obstacles based on their proximity and path relevance.
4. The device as claimed in claim 3, wherein the dynamic obstacle prioritization algorithm reduces cognitive load on the user by adjusting feedback intensity for lower-priority objects that are farther from the user's direct path.
5. The device as claimed in claim 1, wherein the haptic feedback system provides varied vibration patterns corresponding to different obstacle types, wherein the intensity of vibration increases as obstacles come closer to the user.
6. The device as claimed in claim 1, wherein the bone conduction audio system generates spatially distinct auditory cues with variable pitch, volume, and tempo based on the location and movement of obstacles, enabling comprehensive auditory mapping.
7. The device as claimed in claim 1, wherein the gesture recognition module is configured to detect gestures for specific commands, such as toggling between indoor and outdoor modes, adjusting feedback sensitivity, and powering the device on or off, through a pattern recognition algorithm.
8. The device as claimed in claim 1, wherein the power management unit allocates power based on the device's operational state, reducing power to non-essential components during idle states.
9. The device as claimed in claim 1, wherein the software framework governing the device integrates a real-time operating system (RTOS) for concurrent task management, providing low-latency performance across camera data processing, gesture detection, and feedback generation.
Documents
| Name | Date |
|---|---|
| 202411089783-COMPLETE SPECIFICATION [20-11-2024(online)].pdf | 20/11/2024 |
| 202411089783-DECLARATION OF INVENTORSHIP (FORM 5) [20-11-2024(online)].pdf | 20/11/2024 |
| 202411089783-DRAWINGS [20-11-2024(online)].pdf | 20/11/2024 |
| 202411089783-EDUCATIONAL INSTITUTION(S) [20-11-2024(online)].pdf | 20/11/2024 |
| 202411089783-EVIDENCE FOR REGISTRATION UNDER SSI [20-11-2024(online)].pdf | 20/11/2024 |
| 202411089783-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [20-11-2024(online)].pdf | 20/11/2024 |
| 202411089783-FORM 1 [20-11-2024(online)].pdf | 20/11/2024 |
| 202411089783-FORM 18 [20-11-2024(online)].pdf | 20/11/2024 |
| 202411089783-FORM FOR SMALL ENTITY(FORM-28) [20-11-2024(online)].pdf | 20/11/2024 |
| 202411089783-FORM-9 [20-11-2024(online)].pdf | 20/11/2024 |
| 202411089783-REQUEST FOR EARLY PUBLICATION(FORM-9) [20-11-2024(online)].pdf | 20/11/2024 |
| 202411089783-REQUEST FOR EXAMINATION (FORM-18) [20-11-2024(online)].pdf | 20/11/2024 |