AI-DRIVEN VOICE-CONTROLLED MOBILITY ASSISTANCE DEVICE FOR VISUALLY IMPAIRED INDIVIDUALS WITH OBSTACLE DETECTION

ORDINARY APPLICATION

Published

Filed on 15 November 2024

Abstract

AI-Driven Voice-Controlled Mobility Assistance Device for Visually Impaired Individuals with Obstacle Detection

This invention describes an AI-driven voice-controlled mobility assistance device for visually impaired individuals for navigation and obstacle detection. The device features a voice-command interface, enabling hands-free interaction, and uses voice recognition software to process real-time commands. Obstacle detection is achieved via ultrasonic sensors, which detect objects, and a camera with AI-powered object recognition, distinguishing between stationary and moving hazards. The GPS module provides real-time location tracking and navigational guidance, delivering auditory feedback regarding directions and nearby landmarks. The device incorporates a haptic feedback system that generates specific vibration patterns to alert users to obstacles, particularly in noisy environments where auditory alerts may be insufficient. A processing unit integrates data from all components to provide seamless real-time feedback, ensuring user safety. Lightweight and ergonomically designed, the device is wearable and powered by a rechargeable battery, offering hours of continuous operation. The system enhances independence and mobility for visually impaired users by providing multi-sensory feedback in diverse environments.

Patent Information

Application ID: 202421088434
Invention Field: BIO-MEDICAL ENGINEERING
Date of Application: 15/11/2024
Publication Number: 49/2024

Inventors

Name | Address | Country | Nationality
Mrs. Jayamala D. Pakhare | DKTE Society’s Textile and Engineering Institute, Ichalkaranji, India 416115 | India | India
Mr. Shrenik R. Patil | DKTE Society’s Textile and Engineering Institute, Ichalkaranji, India 416115 | India | India
Miss. Samruddhi S. Desai | DKTE Society’s Textile and Engineering Institute, Ichalkaranji, India 416115 | India | India
Miss. Sakshi S. Chougule | DKTE Society’s Textile and Engineering Institute, Ichalkaranji, India 416115 | India | India
Miss. Maithili V. Chougule | DKTE Society’s Textile and Engineering Institute, Ichalkaranji, India 416115 | India | India

Applicants

Name | Address | Country | Nationality
DKTE Society’s Textile and Engineering Institute | Rajwada, Ichalkaranji, Maharashtra 416115 | India | India
Mrs. Jayamala D. Pakhare | DKTE Society’s Textile and Engineering Institute, Ichalkaranji, India 416115 | India | India
Mr. Shrenik R. Patil | DKTE Society’s Textile and Engineering Institute, Ichalkaranji, India 416115 | India | India
Miss. Samruddhi S. Desai | DKTE Society’s Textile and Engineering Institute, Ichalkaranji, India 416115 | India | India
Miss. Sakshi S. Chougule | DKTE Society’s Textile and Engineering Institute, Ichalkaranji, India 416115 | India | India
Miss. Maithili V. Chougule | DKTE Society’s Textile and Engineering Institute, Ichalkaranji, India 416115 | India | India

Specification

Description: [0001] This invention relates to the field of computer science and electronics engineering, and more particularly to an AI-driven mobility assistance device designed specifically for visually impaired individuals. The invention integrates advanced technologies such as voice recognition, ultrasonic sensors, AI-powered object recognition, and haptic feedback to provide real-time navigation, obstacle detection, and situational awareness. The device enables users to interact hands-free through voice commands and offers auditory, tactile, and visual feedback to enhance safety and mobility in various environments. It is particularly useful for navigating both indoor and outdoor spaces, ensuring independence and accessibility for individuals with visual impairments.

PRIOR ART AND PROBLEM TO BE SOLVED

[0002] For visually impaired individuals, mobility poses significant challenges, especially in navigating unfamiliar or complex environments. The most common mobility aid, the long white cane, provides tactile feedback to help users detect obstacles in their immediate path. However, its effective range is limited to a few feet, and it cannot identify hazards at head or chest level, such as low-hanging branches or overhangs. This forces users to rely heavily on memory and spatial awareness, leaving them vulnerable to obstacles that may pose a threat beyond the cane's reach.

[0003] In recent years, GPS-based mobility aids have also been developed to assist with navigation. While they offer turn-by-turn guidance, these systems are typically designed for sighted individuals and are not optimized for the specific needs of the visually impaired. GPS devices tend to struggle in real-time situations, unable to adapt quickly to sudden changes in the environment like new construction or moving vehicles. Moreover, they often lack precision indoors, where GPS signals are weak or unavailable, and provide minimal information beyond basic route directions. A significant gap in current mobility aids is the lack of adaptive, voice-controlled navigation systems. Devices today generally require manual input or attention, such as the need to touch a screen or press buttons, which diverts the user's focus away from their surroundings. Without hands-free and real-time navigation capabilities, users are left vulnerable to unpredictable obstacles, shifting terrain, or the movement of other people and vehicles, significantly limiting their ability to navigate safely and confidently.
[0004] The limitations of conventional mobility aids are numerous. Traditional canes can only detect obstacles within a short range and are ineffective at sensing hazards above waist level, which could lead to accidents. GPS-based systems, while helpful in outdoor navigation, lack the ability to provide real-time updates, leaving users vulnerable to unexpected changes in their environment, such as road closures or temporary obstructions. Moreover, these devices often require manual input or constant attention, which is not ideal for visually impaired individuals who need to maintain focus on their surroundings. Current GPS technology also struggles in environments where satellite signals are weak or obstructed, such as indoor spaces or dense urban areas. This imprecision makes it difficult for users to navigate complex environments like shopping malls or airports. Furthermore, none of these solutions adequately address dynamic factors like moving vehicles, pedestrians, or environmental hazards that could significantly impact the safety of the user.

[0005] Smartphone-based GPS applications have also been developed to assist visually impaired individuals with navigation. These apps offer voice-guided turn-by-turn directions and sometimes integrate with public transport systems. However, smartphone apps have significant limitations. GPS accuracy can be poor, especially in indoor environments, and the small touchscreen interfaces of most smartphones can be challenging for visually impaired users to operate. Additionally, these apps lack real-time information about immediate obstacles in the user's path, leaving them vulnerable to accidents. Voice-activated assistants like those found in smartphones or smart home devices can offer some hands-free assistance. These systems allow users to ask for directions or information through voice commands. However, they still rely on the same flawed GPS data and do not provide the real-time, adaptive guidance needed for safe navigation in dynamic environments. Furthermore, voice recognition may not function effectively in noisy or crowded settings, limiting its usefulness in public areas.

[0006] To resolve the above-mentioned problems, an AI-driven voice-controlled mobility assistant is designed to help visually impaired individuals navigate their surroundings independently and safely. The device integrates voice recognition and ultrasonic sensors for real-time guidance and obstacle detection. Using GPS for precise location tracking and a camera for object recognition, the system provides auditory feedback on the user's surroundings, including directions, street names, and nearby landmarks. The lightweight and portable device is designed to be carried in a pocket or worn like a crossbody bag, allowing for ease of use. Its rechargeable battery ensures several hours of continuous operation. With AI algorithms for obstacle detection, the system delivers real-time auditory cues for enhanced navigation. The voice-controlled design enables users to interact seamlessly with the device, asking for directions or receiving updates on the environment without the need for manual input.

THE OBJECTIVES OF THE INVENTION:

[0007] Electronic smart canes have already been proposed, incorporating advanced sensors, such as ultrasonic or infrared detectors, to help detect obstacles beyond the range of a traditional cane. These devices often provide feedback through vibrations or sound cues, alerting the user to nearby hazards. While these canes extend the detection range, they still fall short in several ways. Smart canes are typically expensive, and their reliance on batteries or complex electronics means they can fail unexpectedly. The alerts they provide are often generalized, making it difficult for users to understand the nature of the obstacle. Another solution comes in the form of wearable devices, such as sensor-equipped glasses or vests. These wearables detect obstacles and provide feedback through vibrations or sound, leaving the user's hands free. Although this approach shows promise, these devices often suffer from issues related to comfort, complexity, and social acceptance. Wearing such technology for extended periods can be uncomfortable, and users may feel self-conscious about their appearance. Additionally, interpreting the feedback from these devices can be confusing, especially in environments with many obstacles.

[0008] The principal objective of the invention is to provide an AI-driven, voice-controlled mobility assistance device for visually impaired individuals. This system integrates advanced navigation and obstacle detection features, utilizing a combination of GPS, ultrasonic sensors, and object recognition software to ensure real-time situational awareness. The primary objective is to enable seamless, hands-free operation, providing the user with continuous auditory feedback, such as directional cues, street names, and nearby landmarks, while ensuring safe and reliable mobility in both indoor and outdoor environments.

[0009] Another objective of the invention is to incorporate ultrasonic sensors within the device to detect nearby obstacles and avoid collisions. These sensors will continuously scan the environment, feeding data into AI algorithms, which will analyze and respond with real-time voice feedback to alert users of potential hazards or obstructions in their path.

[0010] A further objective of the invention is to integrate GPS technology for accurate location tracking and navigation. The GPS module will provide users with real-time information about their geographical position, offering auditory updates about street names, intersections, and other significant landmarks, facilitating smooth travel through unfamiliar surroundings.

[0011] A further objective of the invention is to include an object recognition system utilizing a built-in camera that works with AI-powered software to identify and categorize objects or hazards, such as vehicles, pedestrians, or obstacles. This system will further enhance the user's awareness of their environment by providing detailed feedback on what lies ahead.

[0012] A further objective of the invention is to ensure the device offers a fully hands-free, voice-controlled interface that allows users to issue commands, request directions, and receive updates about their surroundings without needing manual input. This feature emphasizes accessibility and ease of use, ensuring that visually impaired individuals can operate the device effortlessly.

[0013] A further objective of the invention is to develop a portable and ergonomic design that allows users to carry the device like a crossbody bag or store it in a pocket. The lightweight nature of the system will ensure it is practical for daily use, while its durable construction will support extended periods of operation.

[0014] A further objective of the invention is to build the device on a Raspberry Pi platform, enabling continuous software updates and future enhancements. This will ensure the system remains current, capable of integrating new features and improving its performance over time.

[0015] A further objective of the invention is to power the system with a rechargeable battery that provides several hours of operation, ensuring that the device is both energy-efficient and capable of supporting extended periods of mobility assistance without frequent recharging.

SUMMARY OF THE INVENTION

[0016] In response to the limitations of existing solutions, researchers and developers have tried to create more sophisticated systems to assist visually impaired individuals with navigation. One method has been to enhance traditional canes by incorporating advanced sensors, such as LIDAR or radar, capable of detecting both stationary and moving obstacles. These systems aim to give users better awareness of their surroundings by providing detailed feedback through vibrations or auditory signals. However, while these enhancements improve obstacle detection, they still fail to address the need for real-time situational awareness in dynamic environments. The feedback provided by these devices is often delayed or generalized, leaving the user to interpret the situation themselves.

[0017] Another approach has been the development of multi-sensor wearables, such as haptic belts or shoes equipped with obstacle detection technology. These wearables aim to provide continuous feedback on the user's surroundings, integrating inputs from GPS, accelerometers, and obstacle sensors. While these devices offer the potential for a more comprehensive navigation solution, they face significant challenges. Synchronizing the input from multiple sensors and delivering meaningful feedback in real-time remains a technical hurdle. Furthermore, these wearables tend to be expensive and uncomfortable for prolonged use. AI-driven navigation systems have also been proposed as a way to enhance situational awareness. By leveraging artificial intelligence and machine learning, these systems could potentially recognize complex environments, adapt to changes in real time, and provide voice-guided navigation based on an understanding of the user's immediate surroundings. While this technology holds promise, it remains in its early stages and faces several challenges. AI systems require access to consistent data and robust connectivity to function effectively, both of which can be difficult to maintain in real-world scenarios. Additionally, there are concerns about the reliability of AI in unpredictable environments and the privacy implications of the data required for its use. Despite these efforts, there remains a crucial need for a navigation solution that provides real-time, hands-free assistance, adaptive guidance, and enhanced situational awareness specifically designed to meet the needs of visually impaired individuals.

[0018] In this invention, the AI-driven voice-controlled mobility assistance device integrates advanced technologies to aid visually impaired individuals in navigating their surroundings safely and independently. With ultrasonic sensors embedded in a compact body, the device detects nearby obstacles, while AI processes this data to provide auditory alerts. It utilizes GPS for precise location tracking and delivers auditory navigation assistance, including street names and landmarks. The device features an integrated camera for real-time object recognition, identifying potential hazards. Users can control the system using voice commands for hands-free interaction, making navigation seamless and reducing dependence on manual controls. The device includes a small speaker and microphone for communication, powered by a rechargeable battery for hours of continuous operation. The lightweight design makes it portable, and its Raspberry Pi-based platform allows for continuous software improvements, ensuring adaptability and enhanced functionality over time.


DETAILED DESCRIPTION OF THE INVENTION

[0019] While the present invention is described herein by way of example, using various embodiments and illustrative drawings, those skilled in the art will recognize that the invention is neither intended to be limited to the embodiment or drawings described nor intended to represent the scale of the various components. Further, some features that may form a part of the invention may not be illustrated in specific figures for ease of illustration. Such omissions do not limit the embodiments outlined in any way. The drawings and detailed description are not intended to restrict the invention to the form disclosed; on the contrary, the invention covers all modifications, equivalents, and alternatives falling within the spirit and scope of the present invention as defined by the appended claims. The headings are used for organizational purposes only and are not meant to limit the scope of the description or the claims. As used throughout this specification, the word "may" is used in a permissive sense (that is, meaning having the potential to) rather than the mandatory sense (that is, meaning must).
[0020] Further, the words "an" or "a" mean "at least one" and the word "plurality" means one or more unless otherwise mentioned. Furthermore, the terminology and phraseology used herein is solely used for descriptive purposes and should not be construed as limiting in scope. Language such as "including," "comprising," "having," "containing," or "involving," and variations thereof, is intended to be broad and to encompass the subject matter listed thereafter, equivalents, and any additional subject matter not recited, and is not intended to exclude any other additives, components, integers or steps. Likewise, the term "comprising" is considered synonymous with the terms "including" or "containing" for applicable legal purposes. Any discussion of documents, acts, materials, devices, articles and the like is included in the specification solely to provide a context for the present invention.
[0021] In this disclosure, whenever an element or a group of elements is preceded by the transitional phrase "comprising", it is also understood that the same element or group of elements is contemplated with the transitional phrases "consisting essentially of", "consisting of", "selected from the group comprising", "including", or "is" preceding the recitation of the element or group of elements, and vice versa.
[0022] Before explaining at least one embodiment of the invention in detail, it is to be understood that the present invention is not limited in its application to the details outlined in the following description or exemplified by the examples. The invention is capable of other embodiments or of being practiced or carried out in various ways. Also, it is to be understood that the phraseology and terminology employed herein is for description and should not be regarded as limiting.
[0023] Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the invention belongs. Besides, the descriptions, materials, methods, and examples are illustrative only and not intended to be limiting. Methods and materials similar or equivalent to those described herein can be used in the practice or testing of the present invention.
[0024] The present invention is an AI-driven, voice-controlled mobility assistance device designed to empower visually impaired individuals by providing a reliable, hands-free solution for navigating their surroundings. The primary purpose of the device is to enhance mobility and safety, enabling users to move independently through both indoor and outdoor environments. Through the integration of advanced AI technology, the device offers real-time assistance, ensuring that users can confidently traverse a wide range of settings without relying on external support.
[0025] The device focuses on providing seamless navigation and obstacle detection. It does this by continuously analyzing the environment and offering immediate auditory feedback. As users move through their surroundings, the system relays essential information, including directional cues, street names, and nearby landmarks. This real-time guidance ensures that users are constantly aware of their location and any potential hazards, enhancing their confidence and independence. Whether navigating a busy city street or an unfamiliar indoor space, the device ensures that users receive constant support to avoid obstacles and stay on the correct path.
[0026] Ease of use is a core feature of the device. Designed to operate via voice commands, it allows for hands-free interaction, making it accessible to all users, regardless of their familiarity with technology. Users can ask for directions, request updates on their surroundings, and issue other commands entirely through voice, providing a seamless, intuitive experience. The immediate auditory feedback ensures that users receive timely information without the need for manual inputs.
[0027] Additionally, the device is designed with portability and practicality in mind. Lightweight and compact, it is easy to carry and can be worn comfortably or tucked into a pocket. Its ergonomic design ensures that it fits into the user's daily life without causing inconvenience. This portability, combined with the hands-free functionality, makes the device an indispensable tool for users who seek independence and enhanced mobility.
[0028] By focusing on real-time situational awareness, ease of use, and portability, the AI-driven mobility assistance device significantly improves the quality of life for visually impaired individuals. Its advanced features offer a practical and efficient solution to everyday mobility challenges, ensuring that users can navigate their environments safely and confidently. From an external perspective, at the first glance, the device resembles a small, sophisticated personal accessory, akin to a crossbody bag or compact gadget pouch. Its smooth, durable outer casing is composed of high-quality, impact-resistant materials, designed to withstand the wear and tear of daily use while maintaining an understated, polished aesthetic. The finish of the device is minimalistic, available in neutral tones that complement a wide range of personal styles, ensuring that it fits effortlessly into the user's daily routine. The device's shape is ergonomic, featuring rounded edges that prevent discomfort when worn close to the body or carried in a pocket. It is designed to be portable and easily wearable, with an adjustable strap for users who prefer to wear it as a crossbody or shoulder bag. The strap itself is made from durable, soft material, ensuring that it is comfortable for prolonged use without causing irritation or strain. The lightweight nature of the device further enhances its portability, ensuring that it doesn't become a burden for users throughout the day.
[0029] The device's front-facing interface is clean and streamlined. There are no protruding buttons or switches, which could disrupt its smooth surface. Instead, the control mechanisms are discreetly integrated into the body, with a single, tactile control button located on the side. This button is ergonomically positioned, allowing users to access it easily by touch. Its texture is distinct enough for visually impaired users to locate it without difficulty, ensuring effortless control when necessary. The design emphasizes minimal physical interaction, encouraging users to rely primarily on voice commands. The top portion of the device houses a small speaker and microphone system, both expertly embedded to maintain the sleek appearance of the device. The speaker is designed to project clear, precise audio feedback to the user without overwhelming their auditory senses, while the microphone is sensitive enough to capture voice commands even in noisy environments. Both components are subtly integrated into the casing to avoid any bulky protrusions, preserving the minimalist design. Embedded sensors and technology are hidden within the casing, ensuring that the advanced features do not detract from the external simplicity of the device. These sensors are seamlessly integrated into the body, invisible to the naked eye, so the device appears as a simple, elegant accessory rather than a piece of complex technology. The overall look emphasizes both style and subtlety, ensuring that the device remains unobtrusive while providing essential assistance to the user.
[0030] The AI-driven voice-controlled mobility assistance device for visually impaired individuals is a sophisticated system that relies on the seamless integration of multiple advanced components, each playing a critical role in delivering real-time navigation and obstacle detection. The core components of this system include ultrasonic sensors, GPS technology, a camera with object recognition software, a voice-command interface, a Raspberry Pi platform for processing, and a rechargeable battery for powering the entire unit. Each of these components is designed to work together harmoniously, creating a robust system that enhances the user's mobility and safety.
[0031] At the heart of the device's obstacle detection system are the ultrasonic sensors. These sensors, strategically embedded within the body of the device, continuously emit ultrasonic waves that bounce back upon hitting nearby objects. By calculating the time it takes for the sound waves to return, the device determines the distance between the user and the obstacle. This data is then sent to the processing unit for further analysis. The ultrasonic sensors are essential for the system's real-time obstacle detection feature, enabling the device to detect even subtle changes in the environment, such as the presence of low obstacles or moving pedestrians.
[0032] The ultrasonic sensors function by emitting high-frequency sound waves that are beyond the range of human hearing. These sound waves travel through the air and bounce back when they strike an object. The sensors then measure the time it takes for the waves to return, known as the echo time. Using this echo time, the device calculates the distance between the user and the detected object. This process occurs continuously and in real time, allowing the device to constantly assess the user's proximity to obstacles in their environment. The system is sensitive enough to detect a range of obstacles, from large, static objects such as walls or furniture to smaller, more dynamic entities like pedestrians or moving vehicles.
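As a minimal, illustrative sketch of this echo-time calculation (assuming an HC-SR04-style trigger/echo sensor driven from a Raspberry Pi; the pin numbers, the RPi.GPIO wiring, and the get_distance_from_sensor name are assumptions for illustration rather than part of the original disclosure, and the same name appears as a mock in the listings further below):
import time
import RPi.GPIO as GPIO  # assumed GPIO library for a Raspberry Pi build

TRIG_PIN = 23  # hypothetical trigger pin
ECHO_PIN = 24  # hypothetical echo pin
SPEED_OF_SOUND_CM_PER_S = 34300  # approximate speed of sound in air

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG_PIN, GPIO.OUT)
GPIO.setup(ECHO_PIN, GPIO.IN)

def get_distance_from_sensor():
    # Send a 10-microsecond trigger pulse to start a measurement
    GPIO.output(TRIG_PIN, True)
    time.sleep(0.00001)
    GPIO.output(TRIG_PIN, False)

    # Time how long the echo pin stays high (the echo time)
    pulse_start = time.time()
    while GPIO.input(ECHO_PIN) == 0:
        pulse_start = time.time()
    pulse_end = time.time()
    while GPIO.input(ECHO_PIN) == 1:
        pulse_end = time.time()

    echo_time = pulse_end - pulse_start
    # The sound wave travels to the obstacle and back, so divide by two
    return (echo_time * SPEED_OF_SOUND_CM_PER_S) / 2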
[0033] The ultrasonic sensors are positioned in such a way that they provide a wide detection range, covering the user's immediate surroundings in multiple directions. The placement of the sensors ensures that the device can detect obstacles from various angles, offering comprehensive coverage. This wide detection area is critical for providing real-time awareness in dynamic environments, such as busy streets or crowded indoor spaces. The sensors' ability to detect subtle changes in the environment is particularly useful when navigating through areas where obstacles may be constantly shifting, such as moving pedestrians, low obstacles like curbs, or even small objects that might pose a tripping hazard.
[0034] Once the ultrasonic sensors detect an obstacle, the data is immediately sent to the processing unit of the device. This processing unit, powered by an AI-driven algorithm, analyzes the information received from the sensors. The AI system interprets the distance and location of the obstacle relative to the user and determines the appropriate response. For instance, if an obstacle is detected directly in the user's path, the system will generate an immediate auditory alert, informing the user to change direction or be cautious. If the obstacle is at a safe distance, the system may simply notify the user of its presence without requiring immediate action. This level of real-time processing is crucial for ensuring that the user is always aware of their surroundings, even when obstacles appear suddenly.
import time
import random  # used only by the placeholder sensor reading below

# Threshold values (in centimeters)
SAFE_DISTANCE = 100    # Safe distance, no immediate action needed
WARNING_DISTANCE = 50  # User should be cautious, change direction
ALERT_DISTANCE = 20    # Immediate danger, stop and change direction

def get_distance_from_sensor():
    # Placeholder returning a simulated value; a hardware-timed version is
    # sketched after paragraph [0032] above.
    return random.uniform(5, 200)

def ultrasonic_sensor_read():
    # This function simulates reading data from the ultrasonic sensor.
    # In an actual system, the reading would come from the hardware sensor.
    distance = get_distance_from_sensor()
    return distance

def process_obstacle_data():
    while True:
        # Read distance from the ultrasonic sensor
        distance = ultrasonic_sensor_read()

        # Determine the response based on the distance
        if distance <= ALERT_DISTANCE:
            # Immediate danger: the obstacle is too close
            alert_user("Immediate danger! Obstacle very close, stop or change direction.")
        elif distance <= WARNING_DISTANCE:
            # Warning: obstacle is in the user's path
            alert_user("Warning! Obstacle detected ahead, consider changing direction.")
        elif distance <= SAFE_DISTANCE:
            # Obstacle detected but at a safe distance
            notify_user("Obstacle detected but at a safe distance.")
        else:
            # No obstacle detected within range
            notify_user("Path is clear.")

        # Sleep for a short time to prevent overwhelming the processor
        time.sleep(0.5)

def alert_user(message):
    # This function would use the device's audio system to alert the user
    print("ALERT:", message)  # Mock print statement to simulate audio feedback

def notify_user(message):
    # This function provides regular updates without urgent action
    print("INFO:", message)  # Mock print statement to simulate audio feedback

[0035] Here ultrasonic sensors detect obstacles and relay this data to the processing unit. The ultrasonic sensor continuously measures the distance to nearby obstacles by emitting sound waves and calculating the time it takes for the echo to return. Based on the distance measured, the system determines the appropriate response. The algorithm checks whether the detected obstacle is within specific threshold values, which are predefined to classify the urgency of the situation. If the obstacle is within the "ALERT" threshold (less than or equal to 20 cm), the system immediately generates an alert, warning the user of imminent danger and advising them to stop or change direction. This threshold is set very low because, at such close proximity, the user needs to take immediate action to avoid a collision.
[0036] For obstacles detected at a distance of less than or equal to 50 cm (but greater than 20 cm), the algorithm triggers a warning. This means that the user should start being cautious and potentially adjust their path to avoid the obstacle. The system is lenient here, providing a buffer zone so the user can react before the obstacle becomes an imminent threat. For distances greater than 50 cm but less than or equal to 100 cm, the system provides a notification about an obstacle, but no immediate action is required. The user is informed about the presence of an obstacle but at a safe distance, allowing them to proceed without concern. If no obstacle is detected within 100 cm, the system informs the user that the path is clear. This continuous process ensures that the user receives real-time feedback on their surroundings, enhancing their ability to navigate safely.
[0037] The threshold values used in this system (100 cm, 50 cm, and 20 cm) are designed to balance reaction time with safety. The 100 cm threshold allows for early awareness of obstacles, providing the user ample time to adjust their path or posture. The 50 cm threshold introduces caution when the obstacle is approaching closer, encouraging a change in direction if necessary. The 20 cm threshold acts as a critical limit where the obstacle is too close, prompting an immediate stop or avoidance maneuver. These thresholds ensure that the user has enough time to process the information and act before the obstacle poses a danger. By categorizing the distance into these levels, the system offers a graduated response: non-urgent notifications when an obstacle is far, warnings when the obstacle is approaching, and critical alerts when immediate action is needed. This allows the user to navigate with confidence and safety, without being overwhelmed by unnecessary alerts.
[0038] The interaction between the ultrasonic sensors and the processing unit is highly efficient and instantaneous. The sensors continuously feed data to the processor, which interprets it within milliseconds, allowing the system to respond almost immediately. The processing unit is equipped with advanced algorithms that not only calculate the distance to obstacles but also predict their movement in real-time. This means that if a pedestrian or vehicle is approaching the user, the system can anticipate the movement and provide warnings accordingly. This predictive capability adds an extra layer of safety, ensuring that the user is not caught off guard by rapidly moving obstacles.
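A hedged sketch of how this predictive behaviour could be approximated from successive ultrasonic readings alone is given below; the sampling interval, the approach-speed threshold, and the estimate_approach_speed helper are illustrative assumptions, and ultrasonic_sensor_read and alert_user refer to the earlier listing:
import time

SAMPLE_INTERVAL_S = 0.5      # assumed time between readings
APPROACH_SPEED_ALERT = 50.0  # assumed closing speed (cm/s) treated as fast-approaching

def estimate_approach_speed(previous_cm, current_cm, interval_s=SAMPLE_INTERVAL_S):
    # A positive value means the obstacle is getting closer to the user
    return (previous_cm - current_cm) / interval_s

def monitor_with_prediction():
    previous = ultrasonic_sensor_read()  # helper from the earlier listing
    while True:
        time.sleep(SAMPLE_INTERVAL_S)
        current = ultrasonic_sensor_read()
        speed = estimate_approach_speed(previous, current)

        if speed > APPROACH_SPEED_ALERT:
            # The obstacle is closing quickly, e.g. an approaching pedestrian or vehicle
            alert_user("Fast-approaching obstacle detected. Please stop or step aside.")
        previous = current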
[0039] One of the most valuable features of the ultrasonic-based obstacle detection unit is its ability to detect obstacles that might not be immediately visible or noticeable to the user. Low obstacles, such as curbs or steps, can often pose a significant risk to visually impaired individuals. However, the ultrasonic sensors are adept at detecting these smaller, less obvious hazards and can provide timely alerts, allowing the user to adjust their course. Additionally, the system can distinguish between stationary and moving obstacles, providing more detailed information about the nature of the hazard. For instance, a moving vehicle would trigger a different response from the system compared to a stationary object like a bench.
[0040] Another key aspect of the ultrasonic sensors is their ability to function effectively in various environmental conditions. Unlike optical sensors, which can be affected by lighting or weather conditions, ultrasonic sensors are not influenced by changes in light or darkness, making them reliable in a wide range of settings, from bright outdoor environments to dimly lit indoor areas. This ensures that the obstacle detection unit operates consistently, providing uninterrupted assistance regardless of the user's surroundings.
import time
import random  # used only by the placeholder motion stub below

# Threshold values (in centimeters)
LOW_OBSTACLE_THRESHOLD = 15        # Curb or small obstacle detection
STATIONARY_OBJECT_THRESHOLD = 100  # Distance for stationary objects
MOVING_OBJECT_THRESHOLD = 150      # Distance for moving objects (e.g., vehicles)

def is_object_moving():
    # Placeholder for motion detection; in a real system this would combine
    # successive readings or an additional sensor such as Doppler radar.
    return random.choice([True, False])

def ultrasonic_sensor_read():
    # Simulates reading data from the ultrasonic sensor; in an actual system
    # the reading would come from the hardware sensor.
    distance = get_distance_from_sensor()  # mock defined in the previous listing
    return distance

def detect_movement():
    # Determines whether the detected object is moving or stationary
    moving = is_object_moving()
    return moving

def process_obstacle_data():
    while True:
        # Read distance from the ultrasonic sensor
        distance = ultrasonic_sensor_read()

        # Determine if the detected obstacle is moving
        moving = detect_movement()

        # Analyze the obstacle and determine the response
        if distance <= LOW_OBSTACLE_THRESHOLD:
            # Low obstacle like a curb or step detected
            alert_user("Low obstacle detected. Step cautiously.")
        elif moving and distance <= MOVING_OBJECT_THRESHOLD:
            # Moving object detected, such as a vehicle
            alert_user("Moving obstacle detected. Be aware of traffic.")
        elif not moving and distance <= STATIONARY_OBJECT_THRESHOLD:
            # Stationary object detected, like a bench or a wall
            notify_user("Stationary obstacle detected ahead.")
        else:
            # No significant obstacle detected
            notify_user("Path is clear.")

        # Sleep briefly between readings
        time.sleep(0.5)

[0041] Using the above process, the ultrasonic sensors detect a range of obstacles, including low obstacles like curbs or steps, as well as both stationary and moving objects. The ultrasonic sensors continuously emit sound waves and measure the time it takes for the echoes to return, calculating the distance to any detected object. The data is analyzed by the system to determine the nature of the obstacle and how the user should be alerted. When a low obstacle is detected (e.g., less than 15 cm away), the system triggers an immediate alert to warn the user about potential tripping hazards like curbs or steps. The low threshold value is set to 15 cm because obstacles at this height are typically small yet dangerous for visually impaired individuals, requiring cautious movement or a change in direction.
[0042] The system also distinguishes between stationary objects (such as benches or walls) and moving objects (such as vehicles or pedestrians). If a moving object is detected within 150 cm, the system prioritizes this detection, alerting the user about potential hazards that may require immediate action. Moving objects present a higher risk, as their motion can quickly bring them into the user's path, necessitating faster reactions. In contrast, stationary objects detected within 100 cm prompt a less urgent notification, as the user has more time to respond. The distance thresholds are chosen to give users ample time to react to both types of obstacles while maintaining a clear sense of urgency for moving objects compared to stationary ones. The obstacle detection unit operates continuously while the device is powered on. As the user moves, the ultrasonic sensors continuously emit sound waves and analyze the returning echoes to create a real-time map of the user's environment. This constant scanning ensures that the user is alerted to obstacles as soon as they are detected. The data processed by the unit is then translated into auditory feedback, which is delivered through the device's speaker. The feedback is typically verbal, providing clear, concise information about the obstacle's location and distance, ensuring that the user can respond appropriately without needing to interpret complex signals.
[0043] Working alongside the ultrasonic sensors is the GPS module, which is responsible for tracking the user's geographical position. The GPS provides constant updates on the user's location, allowing the system to give precise auditory feedback regarding street names, intersections, and significant landmarks. The GPS interacts with the processing unit to ensure that the user receives real-time guidance. As the user moves through different environments, the GPS continuously communicates with the system to keep track of their location, ensuring that the auditory navigation instructions are always accurate and timely. This feature is particularly crucial for outdoor navigation, where the user may need to know their exact position and the surrounding landmarks for orientation.
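As an illustrative sketch of this GPS-to-speech loop (the read_gps_fix and lookup_nearby helpers, the announcement interval, and the example street and landmark strings are all assumptions made for illustration; notify_user is the auditory feedback helper from the earlier listings):
import time

ANNOUNCE_INTERVAL_S = 10  # assumed interval between spoken location updates

def read_gps_fix():
    # Hypothetical placeholder: in the device this would read latitude and longitude
    # from the GPS module, for example via a serial NMEA stream or gpsd.
    return 16.7050, 74.4600  # example coordinates only

def lookup_nearby(latitude, longitude):
    # Hypothetical placeholder: in the device this would query an onboard or online
    # map database for the current street, intersections, and nearby landmarks.
    return {"street": "Main Road", "landmark": "bus stand"}

def announce_location_loop():
    while True:
        latitude, longitude = read_gps_fix()
        place = lookup_nearby(latitude, longitude)
        notify_user(f"You are on {place['street']}, near the {place['landmark']}.")
        time.sleep(ANNOUNCE_INTERVAL_S)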
[0044] The camera, embedded discreetly within the device, plays a crucial role in the object recognition feature. The camera continuously captures images of the user's surroundings, which are then processed by an AI-driven object recognition software. This software is capable of identifying various objects, such as vehicles, pedestrians, street signs, or potential hazards, and distinguishing between different types of obstacles. Once an object is identified, the information is relayed to the processing unit, which generates the appropriate auditory feedback. For instance, if a vehicle is approaching, the system will alert the user, allowing them to adjust their path accordingly. The integration of the camera and object recognition software is vital for providing detailed information about the user's environment, enhancing both safety and awareness.
[0045] This component serves as the primary visual sensor, which feeds data into the device's object recognition system, allowing the device to perform its vital function of distinguishing between various types of objects and obstacles in real time. The camera operates through a series of high-frequency image captures, designed to work seamlessly in a variety of environmental conditions, including different lighting scenarios. The continuous image stream generated by the camera is processed by an AI-driven object recognition software that is integrated into the device's central processing unit. This software utilizes advanced machine learning algorithms to interpret the visual data. By analyzing image patterns, shapes, and movement, the system can accurately identify a range of objects, including stationary hazards, such as street signs, curbs, and benches, as well as moving obstacles, such as vehicles, bicycles, or pedestrians. The ability to distinguish between stationary and dynamic objects is essential, as it enables the device to provide specific warnings based on the nature of the identified hazard.
[0046] Once the camera captures an image, the data is relayed to the processing unit, where the object recognition software processes it in real-time. The object recognition software functions by referencing pre-trained models that have been designed to identify common obstacles in both indoor and outdoor environments. These models have been trained on vast datasets of images, enabling the system to quickly and accurately recognize objects, even in complex environments. For instance, when the camera captures the image of a pedestrian or vehicle, the software compares the visual input with its database and determines whether the object is a stationary obstacle or a moving hazard.
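A minimal sketch of this capture-and-classify step is shown below, assuming OpenCV for frame capture; run_object_detector, the class lists, and the confidence cut-off are placeholders standing in for the pre-trained recognition model described above, and alert_user and notify_user come from the earlier listings:
import cv2  # OpenCV, assumed here only for capturing frames from the embedded camera

MOVING_CLASSES = {"person", "car", "bicycle", "bus", "motorbike"}  # assumed labels
STATIONARY_CLASSES = {"bench", "street sign", "pole", "wall"}      # assumed labels
DETECTION_CONFIDENCE = 0.6  # assumed minimum confidence before acting on a detection

def run_object_detector(frame):
    # Hypothetical stand-in for the AI-powered recognition software; a real build
    # would run a pre-trained detection network and return (label, confidence) pairs.
    return [("person", 0.82), ("bench", 0.71)]

def classify_current_view():
    capture = cv2.VideoCapture(0)  # the device's embedded camera
    ok, frame = capture.read()
    capture.release()
    if not ok:
        return

    for label, confidence in run_object_detector(frame):
        if confidence < DETECTION_CONFIDENCE:
            continue
        if label in MOVING_CLASSES:
            alert_user(f"Moving hazard ahead: {label}.")
        elif label in STATIONARY_CLASSES:
            notify_user(f"Stationary obstacle ahead: {label}.")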
[0047] In the event that the system identifies a moving object, such as a vehicle approaching the user's path, the software immediately prioritizes this hazard due to the inherent risks associated with moving obstacles. The processing unit, upon receiving this information, triggers the auditory feedback system to issue an urgent warning to the user, advising them to adjust their movement or pause their progress. The feedback system provides detailed, context-specific information, informing the user not only of the presence of the vehicle but also of its relative direction and speed, thereby enabling the user to make an informed decision.
[0048] For stationary objects, the system's response is less urgent but still critical. When the camera identifies a fixed object, such as a street sign or a bench, the object recognition software analyzes the object's distance and size. This data is then relayed to the processing unit, which generates a cautionary alert, allowing the user to steer clear of the obstacle or adjust their course to avoid contact. The camera's ability to recognize the difference between a low curb and a high barrier, for example, ensures that the device provides nuanced feedback, allowing the user to navigate around obstacles of varying sizes and shapes.
[0049] Based on the object's classification, the system determines how to respond. If the object is a low obstacle such as a curb, and it is within the set threshold of 15 cm, the system alerts the user to step carefully. Curbs and low obstacles, although not large, pose significant risks to visually impaired individuals, so the threshold is set to 15 cm to capture such hazards early enough for the user to react. For moving objects like vehicles or pedestrians, the system uses a threshold of 150 cm, allowing the user time to receive a warning and make necessary adjustments. Moving objects represent a greater potential risk because of their unpredictable trajectories, so the system prioritizes them with more urgent auditory alerts. For stationary objects like benches or street signs, the threshold is set at 100 cm, giving the user enough space to safely navigate around these fixed hazards without creating unnecessary alarm. Stationary objects are less immediately dangerous, but still require awareness to avoid collision.
[0050] The integration of the camera and object recognition software is engineered to provide continuous, real-time updates to the user, offering comprehensive situational awareness. This system works in tandem with the device's other components, such as the ultrasonic sensors and GPS, to create a fully integrated mobility solution. While the ultrasonic sensors primarily detect objects based on distance, the camera adds another layer of functionality by visually identifying and classifying objects based on their appearance and movement. This dual-layer detection ensures that the user receives more detailed information about their surroundings, thereby enhancing the overall safety and efficiency of the device. Furthermore, the camera is designed to operate effectively under diverse environmental conditions, ensuring that it remains fully functional in both brightly lit outdoor environments and poorly lit indoor areas. This adaptability is crucial in maintaining uninterrupted assistance, regardless of the user's location or the time of day. Unlike traditional optical sensors, which may be influenced by fluctuating light conditions, the camera is equipped with advanced image processing capabilities that allow it to compensate for changes in brightness and shadow, thus ensuring consistent performance.
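One possible sketch of this dual-layer fusion, pairing the latest ultrasonic distance with the camera's most confident label, is given below; the pairing logic, the capture_frame helper, and the reuse of the thresholds from the earlier paragraphs are assumptions for illustration, not a definitive implementation:
import cv2  # assumed again for frame capture, as in the camera sketch above

def capture_frame():
    # Hypothetical frame grab from the device's embedded camera
    capture = cv2.VideoCapture(0)
    ok, frame = capture.read()
    capture.release()
    return frame if ok else None

def fused_obstacle_check():
    # Combine the ultrasonic distance reading with the camera's classification
    # to produce a single, context-rich alert; helpers come from earlier listings.
    distance = ultrasonic_sensor_read()
    frame = capture_frame()
    detections = run_object_detector(frame) if frame is not None else []
    label = detections[0][0] if detections else "obstacle"

    if distance <= 20:
        alert_user(f"{label} very close, about {int(distance)} centimeters away. Stop.")
    elif distance <= 150 and label in MOVING_CLASSES:
        alert_user(f"{label} approaching, roughly {int(distance)} centimeters away.")
    elif distance <= 100:
        notify_user(f"{label} ahead at about {int(distance)} centimeters.")
    else:
        notify_user("Path is clear.")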
[0051] The cautionary alert system utilizing vibration within the AI-driven voice-controlled mobility assistance device for visually impaired individuals constitutes a critical component designed to provide immediate, tactile feedback to the user. This system employs a vibration motor integrated into the device, engineered to generate specific vibration patterns that correspond to varying levels of urgency based on the nature and proximity of detected obstacles. The incorporation of haptic feedback serves as an essential complement to the auditory alerts, particularly in environments where auditory feedback may be less effective or when the user prefers silent alerts. The vibration system is programmed to activate when the obstacle detection unit identifies an object within predefined threshold distances. The device's vibration motor, a compact yet powerful actuator, is capable of generating distinct vibration patterns that vary in intensity and duration depending on the type of obstacle detected. These patterns provide the user with non-intrusive, clear, and immediate signals that correspond to the level of caution required. The differentiation between these patterns is vital, as it allows the user to intuitively understand the nature of the hazard without the need for visual or auditory cues.
[0052] For moving objects, such as vehicles or pedestrians, detected within a distance of 150 centimeters, the vibration pattern is sharp and continuous. The vibration motor engages in rapid pulses to signal the urgency of a moving hazard, which inherently poses a higher risk due to its dynamic nature. The continuous vibration prompts the user to act swiftly, adjust their movement, or pause to avoid the obstacle. The system's use of rapid vibration ensures that the user can differentiate a moving object from other types of obstacles, such as stationary objects or low curbs. This functionality is particularly useful in busy, urban environments where moving hazards are frequent and pose immediate danger. In the case of stationary objects, such as benches, street signs, or walls, detected within a range of 100 centimeters, the vibration pattern is notably different. The motor produces a slower, more intermittent vibration to indicate the presence of a fixed obstacle. This slower rhythm reflects the less urgent nature of stationary objects, providing the user with sufficient time to alter their course without immediate concern. The differentiation in vibration intensity ensures that the user receives appropriate feedback based on the severity of the detected hazard. The system is designed to generate these haptic signals in real time, ensuring that the user remains fully informed of their surroundings as they navigate through various environments.
[0053] When the system detects low obstacles, such as curbs or steps, within a distance of 15 centimeters, the vibration pattern is tailored to provide a short, firm pulse. This pattern is crucial for alerting the user to potential tripping hazards that are small but significant in terms of safety. The short and distinct pulse ensures that the user can immediately recognize the need to step carefully or adjust their movement to avoid the low obstacle. Given that curbs and steps can easily go unnoticed by visually impaired individuals, the precise use of this vibration alert is essential in preventing accidents and maintaining safe mobility. The interaction between the vibration motor and the obstacle detection unit is seamless, driven by the real-time data processed by the device's central processing unit. As the ultrasonic sensors or the camera detects an obstacle within the designated thresholds, the system calculates the nature and proximity of the hazard. This data is then relayed to the vibration motor, triggering the corresponding vibration pattern based on the obstacle type. The vibration motor's compact design allows it to operate silently and efficiently, without disrupting the user's movement or experience. Additionally, the system is designed to function consistently across different environments, ensuring that the user receives the appropriate haptic feedback regardless of ambient noise levels or external distractions.
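A brief sketch of how these three patterns might be driven from a Raspberry Pi GPIO pin is given below; the pin number, the pulse timings, and the RPi.GPIO usage are assumptions chosen to mirror the pattern descriptions above (rapid continuous pulses, slower intermittent pulses, and a single short firm pulse), not the exact timings of the device:
import time
import RPi.GPIO as GPIO  # assumed Raspberry Pi GPIO access

VIBRATION_PIN = 18  # hypothetical pin driving the vibration motor

GPIO.setmode(GPIO.BCM)
GPIO.setup(VIBRATION_PIN, GPIO.OUT)

def pulse(duration_s):
    # Run the vibration motor for a fixed duration
    GPIO.output(VIBRATION_PIN, True)
    time.sleep(duration_s)
    GPIO.output(VIBRATION_PIN, False)

def vibrate_moving_hazard():
    # Sharp, continuous rapid pulses for a moving object within 150 cm
    for _ in range(10):
        pulse(0.1)
        time.sleep(0.05)

def vibrate_stationary_hazard():
    # Slower, intermittent pulses for a stationary object within 100 cm
    for _ in range(3):
        pulse(0.3)
        time.sleep(0.4)

def vibrate_low_obstacle():
    # A single short, firm pulse for a curb or step within 15 cm
    pulse(0.5)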
[0054] This cautionary alert system plays a critical role in enhancing the device's overall functionality, providing a reliable, intuitive method of communication between the device and the user. By incorporating vibration alerts, the system ensures that visually impaired individuals receive clear, non-verbal cues about their surroundings, empowering them to navigate safely and confidently. This tactile feedback mechanism is particularly advantageous in situations where auditory alerts may be less effective, such as in loud environments or when the user requires discreet alerts. The use of distinct vibration patterns for different types of obstacles ensures that the user can easily interpret the urgency of each situation, further enhancing their independence and mobility.
[0055] The voice-command interface is another integral component of the system, designed to provide hands-free operation. A microphone, embedded near the top of the device, captures the user's voice commands, which are then processed by the system's voice recognition software. The user can ask for directions, inquire about their surroundings, or issue specific commands, such as requesting information about nearby obstacles or landmarks. The voice-command system is essential for ensuring that visually impaired users can interact with the device without needing to use their hands. This hands-free functionality enhances accessibility, allowing users to focus on their surroundings while receiving real-time feedback from the system.
[0056] Embedded discreetly near the top of the device is a microphone, which is sensitive enough to capture the user's voice commands even in challenging environments. The microphone, strategically placed to ensure optimal sound reception, serves as the entry point for user interaction, functioning as the primary sensor for receiving verbal instructions.
[0057] Once a voice command is captured, the audio signal is transmitted to the device's voice recognition software, which processes the input in real-time. The voice recognition software is driven by advanced algorithms capable of interpreting a wide range of commands with a high degree of accuracy. This software utilizes natural language processing (NLP) to parse the user's speech, converting verbal cues into actionable data that the device can understand and execute. The system's ability to recognize and process speech is highly adaptable, designed to function effectively across diverse accents, intonations, and languages. This adaptability ensures that the device can cater to a broad spectrum of users, providing consistent performance regardless of individual speech patterns.
[0058] The user, through the voice-command interface, is afforded the ability to interact with the device in multiple ways. The system is designed to respond to a variety of queries and instructions. For example, the user can request directions, inquire about nearby landmarks, or ask for updates on their current location. In addition to basic navigational assistance, the user can issue more specific commands, such as asking for information about obstacles in their immediate path or requesting an overview of the route ahead. This flexibility in voice interaction enhances the overall functionality of the device, making it a comprehensive tool for independent mobility.
import speech_recognition as sr

# Threshold values
COMMAND_CONFIDENCE_THRESHOLD = 0.75  # Confidence level required to act on a command
NOISE_THRESHOLD = 4000               # Energy threshold for filtering background noise

# Initialize the speech recognizer
recognizer = sr.Recognizer()

def capture_voice_command():
    # Capture voice input from the microphone
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source, duration=1)
        recognizer.energy_threshold = NOISE_THRESHOLD
        print("Listening for command...")
        audio = recognizer.listen(source)

    return audio

def process_voice_command(audio):
    try:
        # Recognize the speech using Google Speech Recognition
        # (can be replaced with any speech/NLP engine)
        result = recognizer.recognize_google(audio, show_all=True)
        if not result or "alternative" not in result:
            print("Sorry, I couldn't understand the command.")
            return None

        best = result["alternative"][0]
        command = best["transcript"]
        confidence = best.get("confidence", 1.0)

        if confidence >= COMMAND_CONFIDENCE_THRESHOLD:
            print(f"Command recognized: {command} with confidence {confidence}")
            return command.lower()  # Return the command as lowercase for further processing
        else:
            print("Confidence too low, unable to process command.")
            return None
    except sr.UnknownValueError:
        print("Sorry, I couldn't understand the command.")
        return None
    except sr.RequestError:
        print("There was an error processing the voice input.")
        return None

def execute_command(command):
    # Handle different commands
    if "direction" in command:
        provide_directions()
    elif "landmark" in command:
        provide_nearby_landmarks()
    elif "location" in command:
        provide_current_location()
    elif "obstacle" in command:
        provide_obstacle_info()
    else:
        print("Unknown command, please try again.")

def provide_directions():
    print("Providing directions based on current location...")
    # Code to interface with GPS and give directions

def provide_nearby_landmarks():
    print("Listing nearby landmarks...")
    # Code to interface with the location service and list landmarks

def provide_current_location():
    print("You are currently at...")
    # Code to get the current GPS location and announce it

def provide_obstacle_info():
    print("Obstacle detected ahead...")
    # Code to interact with the obstacle detection sensors

[0059] To ensure that the system operates reliably across different environments, the algorithm uses a confidence threshold. This threshold is set at 0.75, meaning that only voice commands recognized with a confidence level of 75% or higher will be processed and acted upon. This threshold is crucial for preventing errors or misinterpretations of voice commands, especially in noisy environments or when the user's speech is unclear. If the command's confidence score falls below the threshold, the system informs the user that it was unable to understand the command, asking them to repeat it.
[0060] The noise threshold, set at 4000, ensures that background noise is filtered out, allowing the system to accurately focus on the user's voice. This noise filtering is essential in environments with varying noise levels, such as busy streets or crowded public spaces. The system adjusts to ambient noise before capturing the user's voice, ensuring that extraneous sounds do not interfere with the recognition process.
[0061] Once the command is captured and processed, the algorithm matches the recognized command to a set of predefined actions. For example, if the user asks for directions, the system interfaces with the GPS module to provide real-time directions. Similarly, the user can ask for nearby landmarks, their current location, or information about obstacles in their path. The system's ability to recognize and process a wide range of commands gives the user full control over the device without needing to physically interact with it.
[0062] The COMMAND_CONFIDENCE_THRESHOLD of 0.75 ensures that only commands with a high level of certainty are acted upon, reducing the chance of misinterpretation. This threshold strikes a balance between allowing the system to act quickly and ensuring that only reliable commands are processed. In real-world conditions, background noise, varying accents, or unclear speech could lower the confidence score, so the system avoids acting on low-confidence commands to prevent errors.
[0063] The NOISE_THRESHOLD of 4000 is used to filter out background noise. The device adjusts its sensitivity to the user's environment by calculating the ambient noise level before capturing the voice command. This ensures that the system captures clean audio input, focusing on the user's voice and disregarding irrelevant sounds, which is essential for maintaining high command recognition accuracy in diverse settings. The voice-command interface allows visually impaired users to interact with the device naturally, through speech, which greatly enhances accessibility. This hands-free operation is invaluable for maintaining focus on surroundings and ensures that the user can navigate confidently.
[0064] In terms of operation, the voice-command interface operates continuously, ensuring that the user can interact with the device at any moment. The real-time processing capabilities of the voice recognition software allow the system to respond almost instantaneously to user input, generating auditory feedback without delay. This immediate feedback is critical in time-sensitive situations, such as when the user encounters an unexpected obstacle or needs urgent directional guidance. The auditory feedback system, which works in tandem with the voice-command interface, provides detailed, context-specific responses that guide the user safely through their surroundings.
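As an illustration of this continuous operation, a simple listening loop can tie the functions listed earlier to a spoken response. The sketch below is a minimal example; the pyttsx3 text-to-speech library is an assumption used only for demonstration, as this specification does not name a particular speech-synthesis engine.

# Illustrative continuous-listening loop with spoken responses.
# pyttsx3 is an example text-to-speech engine, not one named in this document.
import pyttsx3

tts_engine = pyttsx3.init()

def speak(message):
    """Convert a text message into audible speech through the device speaker."""
    print(message)            # also printed for debugging
    tts_engine.say(message)
    tts_engine.runAndWait()

def run_voice_interface():
    """Continuously capture, interpret, and act on voice commands."""
    while True:
        audio = capture_voice_command()
        command = process_voice_command(audio)
        if command is not None:
            execute_command(command)
        else:
            speak("Sorry, please repeat the command.")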
[0065] The interaction between the voice-command interface and the rest of the device's components is both dynamic and seamless. As the user issues a command, such as asking for directions, the GPS module is activated to retrieve the user's current location and calculate the appropriate route. If the user inquires about nearby obstacles, the ultrasonic sensors or camera system immediately engage to assess the environment, relaying real-time data back to the processing unit, which then conveys the relevant information to the user via the auditory feedback system. This integrated functionality ensures that the user can manage all aspects of the device through simple voice commands, without needing to manually manipulate any controls.
[0066] The hands-free nature of the voice-command interface is of paramount importance for visually impaired users, as it allows them to maintain their focus on navigating their environment rather than on operating the device itself. This accessibility feature ensures that users can remain fully engaged with their surroundings, enhancing both safety and convenience. The elimination of the need for physical interaction with the device significantly reduces cognitive load, enabling the user to make decisions more quickly and efficiently as they receive real-time updates on obstacles, directions, and other important information.
[0067] Moreover, the system's voice recognition capabilities are designed to function effectively in a wide range of environmental conditions, including areas with background noise or wind interference. The microphone is equipped with noise-cancellation technology that filters out ambient sounds, ensuring that the user's commands are accurately captured and processed, even in noisy urban environments. This robust audio capture system guarantees consistent performance, making the voice-command interface reliable across diverse settings.
[0068] Central to the device's functionality is the Raspberry Pi platform, which serves as the processing hub. All data from the ultrasonic sensors, GPS, camera, and voice-command interface is sent to the Raspberry Pi, where it is processed and analyzed in real-time. The Raspberry Pi runs complex AI algorithms that interpret the sensor data, determine the user's position, identify obstacles, and generate the appropriate auditory and haptic feedback. The system's ability to perform these tasks in real-time is critical for ensuring the user's safety and smooth navigation. The Raspberry Pi platform also enables continuous updates, allowing the device to evolve with new features and improvements over time, ensuring that the system remains state-of-the-art and adaptable to future advancements.
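As an illustration of this central role, the following sketch outlines the kind of polling loop the processing unit could run. The sensor readers and feedback helpers are placeholder stubs, and the cycle time and alert distance are illustrative values not taken from this specification.

# Simplified sketch of a central processing loop fusing sensor data and issuing feedback.
import time

OBSTACLE_ALERT_DISTANCE_CM = 150   # illustrative alert distance, not from this specification

def read_ultrasonic_distance_cm():
    """Placeholder stub standing in for the ultrasonic sensor interface."""
    return 200.0

def read_detected_objects():
    """Placeholder stub standing in for the camera's object recognition output."""
    return []   # e.g. ["pedestrian", "vehicle"]

def announce(message):
    """Placeholder stub: would route the message to the speaker via text-to-speech."""
    print(message)

def trigger_haptic_alert():
    """Placeholder stub: would drive the vibration motor."""
    print("[vibration alert]")

def processing_loop(cycles=10):
    """Poll the sensors, fuse their readings, and issue feedback each cycle."""
    for _ in range(cycles):              # bounded here; runs continuously on the device
        distance = read_ultrasonic_distance_cm()
        objects = read_detected_objects()
        if distance < OBSTACLE_ALERT_DISTANCE_CM or "vehicle" in objects:
            announce(f"Obstacle ahead, {distance / 100:.1f} meters.")
            trigger_haptic_alert()
        time.sleep(0.1)                  # short cycle keeps feedback close to real-time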
[0069] Powering the entire system is a rechargeable battery that is designed for extended use. The battery is integrated into the device's compact body, providing hours of uninterrupted operation. The power management system ensures that each component receives the necessary power while maximizing energy efficiency. The battery powers all aspects of the device, from the ultrasonic sensors and GPS to the voice-command system and Raspberry Pi platform. A critical feature of the battery system is its ability to provide consistent performance even during extended periods of use, ensuring that the device remains operational when the user needs it most.
[0070] The interaction between these components is what makes the device an effective mobility assistant. The ultrasonic sensors detect obstacles and relay data to the Raspberry Pi, which processes the information and generates auditory alerts through the speaker. The GPS provides continuous location tracking, ensuring that the system knows the user's exact position and can guide them with auditory feedback on street names and landmarks. The camera and object recognition software work in tandem to provide a more detailed analysis of the environment, allowing the user to navigate complex settings with confidence. The voice-command interface allows users to interact with the system seamlessly, issuing commands and receiving real-time updates without needing to handle the device manually. All of these components are powered efficiently by the rechargeable battery, ensuring that the system remains functional throughout the user's journey.
[0071] Working of the AI-Driven Voice-Controlled Mobility Assistance Device starts with the obstacle detection unit, which employs ultrasonic sensors to continuously scan the environment for nearby obstacles. These sensors emit ultrasonic waves, which bounce back upon striking objects, allowing the device to calculate the distance between the user and the detected obstacle. The camera, embedded within the device, complements the ultrasonic sensors by providing visual data. It captures images of the surroundings, which are processed by AI-powered object recognition software capable of identifying various objects, such as vehicles, pedestrians, and stationary hazards like curbs or street signs. Together, the ultrasonic sensors and camera ensure the device can detect both static and moving objects.
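For illustration, the time-of-flight calculation described above may be sketched as follows for an HC-SR04-style sensor wired to the Raspberry Pi; the sensor model, GPIO pin numbers, and use of the RPi.GPIO library are assumptions made for the example only.

# Illustrative distance measurement for an HC-SR04-style ultrasonic sensor.
import time
import RPi.GPIO as GPIO

TRIG_PIN = 23   # illustrative pin choices, not specified in this document
ECHO_PIN = 24
SPEED_OF_SOUND_CM_S = 34300  # approximate speed of sound at room temperature

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG_PIN, GPIO.OUT)
GPIO.setup(ECHO_PIN, GPIO.IN)

def measure_distance_cm():
    """Send a 10 microsecond trigger pulse and time the returning echo."""
    GPIO.output(TRIG_PIN, True)
    time.sleep(0.00001)
    GPIO.output(TRIG_PIN, False)

    pulse_start = time.time()
    while GPIO.input(ECHO_PIN) == 0:
        pulse_start = time.time()

    pulse_end = pulse_start
    while GPIO.input(ECHO_PIN) == 1:
        pulse_end = time.time()

    pulse_duration = pulse_end - pulse_start
    # The wave travels to the obstacle and back, so divide the round trip by two.
    return (pulse_duration * SPEED_OF_SOUND_CM_S) / 2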
[0072] The GPS module integrated into the device provides real-time location tracking and navigation. By constantly monitoring the user's position, the GPS enables the system to give directions, announce street names, and inform the user of nearby landmarks. This module is critical for outdoor navigation, ensuring that the user remains aware of their surroundings, even in unfamiliar environments. One of the key elements of the system is its voice-command interface, which allows the user to interact with the device hands-free. A microphone captures the user's voice commands, which are processed by voice recognition software powered by natural language processing (NLP). The user can request directions, inquire about nearby obstacles, or ask for updates on their location, receiving real-time auditory feedback through the system's speaker. The voice-command interface is designed to operate effectively across various speech patterns and accents, making the device accessible to a broad user base.
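As an illustrative sketch of how a location fix might be obtained, the following reads NMEA sentences from a serially connected GPS receiver; the serial port, baud rate, and the pyserial and pynmea2 libraries are assumptions, as this specification does not state how the GPS module is interfaced.

# Illustrative GPS fix from an NMEA stream over a serial connection.
import serial
import pynmea2

def read_gps_fix(port="/dev/ttyAMA0", baudrate=9600):
    """Return (latitude, longitude) from the first valid GGA sentence, or None."""
    with serial.Serial(port, baudrate, timeout=1) as gps:
        for _ in range(50):  # scan a limited number of NMEA sentences
            line = gps.readline().decode("ascii", errors="ignore").strip()
            if not (line.startswith("$GPGGA") or line.startswith("$GNGGA")):
                continue
            try:
                msg = pynmea2.parse(line)
            except pynmea2.ParseError:
                continue
            if msg.latitude and msg.longitude:
                return msg.latitude, msg.longitude
    return None

A fix obtained this way would then be handed to the navigation logic, which converts coordinates into the spoken directions, street names, and landmark announcements described above.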
[0073] In environments where auditory feedback may be insufficient-such as in noisy surroundings-the device incorporates a haptic feedback system. The device's vibration motor generates specific vibration patterns to alert the user about nearby obstacles or hazards. The intensity and frequency of the vibrations are adjusted depending on the nature of the obstacle. For instance, a rapidly approaching vehicle would trigger a sharp, continuous vibration, while a stationary object like a bench would result in a slower, intermittent vibration. The processing unit serves as the central hub, receiving data from all the system's components-ultrasonic sensors, the camera, GPS, and the voice-command interface. It analyzes this data in real-time, determining the appropriate feedback to provide to the user, whether through voice commands, auditory alerts, or haptic vibrations. The device is powered by a rechargeable battery, providing several hours of continuous operation. It is designed to be lightweight, portable, and ergonomic, allowing users to wear it comfortably throughout the day. The system's seamless integration of multiple feedback mechanisms-auditory, visual, and haptic-ensures that users can navigate with confidence in a wide range of environments.
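The two vibration patterns described above may be sketched as follows, assuming the vibration motor is driven from a PWM-capable Raspberry Pi pin; the pin number, PWM frequency, duty cycles, and timing values are illustrative assumptions rather than values taken from this specification.

# Illustrative vibration patterns for the haptic feedback system.
import time
import RPi.GPIO as GPIO

MOTOR_PIN = 18  # illustrative PWM-capable pin

GPIO.setmode(GPIO.BCM)
GPIO.setup(MOTOR_PIN, GPIO.OUT)
motor_pwm = GPIO.PWM(MOTOR_PIN, 200)  # 200 Hz PWM carrier, chosen for the example

def vibrate_moving_hazard(duration_s=2.0):
    """Sharp, continuous vibration for a moving hazard such as a vehicle."""
    motor_pwm.start(90)          # high duty cycle gives a strong vibration
    time.sleep(duration_s)
    motor_pwm.stop()

def vibrate_stationary_obstacle(pulses=4):
    """Slower, intermittent vibration for a stationary obstacle such as a bench."""
    for _ in range(pulses):
        motor_pwm.start(50)      # moderate duty cycle gives a gentler vibration
        time.sleep(0.3)
        motor_pwm.stop()
        time.sleep(0.5)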
[0074] Case Study: Suppose John, a visually impaired individual, is using the AI-driven voice-controlled mobility assistance device to travel to a public library. It is a busy afternoon, and John has to cross a street near a public square where a rally is taking place. The area is loud, with voices, music, and traffic noise filling the environment, making it difficult for John to rely solely on auditory feedback from the device. John activates the device using a voice command, "Start navigation to the public library." The device responds by confirming the destination and begins to provide directions using its GPS module. As John approaches the square, the device audibly informs him of the street names and nearby landmarks. As John gets closer to the rally, the device's ultrasonic sensors detect a nearby stationary obstacle-a bench-and the device sends an auditory alert: "Bench ahead, one meter." John adjusts his direction based on the feedback and safely navigates around the bench.
[0075] Suddenly, a group of pedestrians participating in the rally moves quickly across John's intended path. The device's camera detects the moving objects and immediately processes the data using the object recognition software. Recognizing that the objects are moving pedestrians, the device generates an auditory alert: "Pedestrians approaching from the right, three meters." However, due to the high noise levels from the rally, John does not hear the auditory alert. Recognizing this situation, the device automatically activates the haptic feedback system. The vibration motor begins to generate rapid, continuous vibrations, indicating the presence of a dynamic hazard: a moving crowd. John feels the vibrations through the device and understands that he must stop and wait for the pedestrians to pass. The vibrations stop once the pedestrians have moved out of John's path. The device then provides a new auditory alert: "Path clear. Proceed straight ahead." As John moves toward the street crossing, the device's ultrasonic sensors detect an approaching vehicle at a distance of 10 meters. The device switches to an urgent mode, where it combines both auditory and haptic feedback. The voice system announces, "Vehicle approaching, ten meters ahead," while simultaneously generating strong, sharp vibrations to emphasize the danger. John waits until the vibrations cease, indicating that the vehicle has passed. Once the rally noise subsides, the auditory alerts resume as the primary feedback mechanism. The device guides John safely through the remaining distance to the library, providing detailed directions as he approaches the final destination. Upon arrival, the device audibly confirms, "You have arrived at the public library."
[0076] In this case, the haptic feedback system played a crucial role in maintaining John's safety when auditory feedback was not sufficient. The use of vibration alerts, tailored to the type and proximity of obstacles, ensured that John was continuously aware of his surroundings, even in a high-noise environment where verbal cues might have been missed. This scenario demonstrates the robust capabilities of the AI-driven voice-controlled mobility assistance device, which integrates multiple sensory feedback mechanisms to adapt to varying environmental challenges, ensuring that visually impaired users can navigate safely and independently.
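The adaptive choice between auditory and haptic channels illustrated by this scenario can be summarised as a simple decision rule. The sketch below is a minimal example; the noise measurement, thresholds, and helper callables are hypothetical and serve only to make the rule explicit.

# Minimal sketch of feedback-mode selection, with hypothetical thresholds and helpers.
HIGH_NOISE_LEVEL = 4000       # ambient energy above which speech may go unheard
URGENT_DISTANCE_M = 10        # distance at which a moving hazard is treated as urgent

def deliver_alert(message, ambient_noise, hazard_moving, hazard_distance_m,
                  speak=print, vibrate=lambda urgent: print("[vibration]", "urgent" if urgent else "normal")):
    """Choose auditory, haptic, or combined feedback for an obstacle alert."""
    urgent = hazard_moving and hazard_distance_m <= URGENT_DISTANCE_M
    if urgent:
        speak(message)            # urgent hazards use both channels simultaneously
        vibrate(True)
    elif ambient_noise >= HIGH_NOISE_LEVEL:
        vibrate(False)            # speech is likely inaudible, so fall back to haptics
    else:
        speak(message)            # normal conditions: auditory feedback first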
[0077] While embodiments of the present invention have been illustrated and described, it will be understood by those of ordinary skill in the art that various changes, modifications, and substitutions may be made to these embodiments without departing from the principles and spirit of the present invention, the scope of which is indicated by the appended claims and their equivalents.

FIGURE DESCRIPTION

[0078] The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate an exemplary embodiment and, together with the description, explain the disclosed embodiment. The leftmost digit(s) of a reference number identifies the figure in which the reference number first appears. The same numbers are used throughout the figures to reference like features and components. Some embodiments of the system and methods of the present subject matter are now described, by way of example only, and with reference to the accompanying figures, in which:

[0079] Figure 1 illustrates the line diagram representing the AI-driven voice-controlled mobility assistance device. The microphone is positioned at the top of the device, allowing it to capture voice commands clearly from the user. This component is integral to the voice-command interface, enabling hands-free operation of the device. The microphone works in conjunction with the voice recognition software, which processes the user's commands in real-time, converting them into actions that control the navigation and obstacle detection features. On the front and sides of the device, ultrasonic sensors are embedded. These sensors continuously emit ultrasonic waves, which detect obstacles by measuring the time it takes for the waves to bounce back after hitting an object. This information is processed to provide real-time data on nearby obstacles, helping the user avoid potential hazards while navigating. The camera is mounted on the front of the device, capturing visual data about the user's surroundings. This data is analyzed by the AI-powered object recognition software, which identifies and classifies objects such as pedestrians, vehicles, and stationary obstacles like benches or curbs. This feature ensures that the user is aware of both moving and static hazards in their path. At the top front of the device, a speaker is installed to provide auditory feedback. This speaker delivers voice-based guidance, including directions, obstacle alerts, and updates on the user's location. The speaker plays a crucial role in ensuring that the user remains informed while navigating, particularly in quieter environments where auditory feedback is easily heard. Internally, the GPS module is located within the device, responsible for providing real-time location tracking. This module allows the device to offer precise navigational assistance, announcing street names, directional changes, and nearby landmarks to the user, ensuring they stay oriented in both familiar and unfamiliar areas. A vibration motor is embedded inside the body of the device, providing haptic feedback. This motor generates specific vibration patterns that vary in intensity and frequency, depending on the proximity and nature of detected obstacles. When auditory feedback is insufficient, such as in loud environments, this tactile alert system ensures that the user receives timely warnings about nearby hazards. At the core of the device is the processing unit, which integrates data from all components: the ultrasonic sensors, camera, GPS, and voice-command interface.

Claims:

1. An AI-driven voice-controlled mobility assistance device for visually impaired individuals with advanced navigation and obstacle detection features, the device comprising:
a voice-command interface configured to receive voice commands from a user, wherein the interface includes a microphone for capturing said commands;
a voice recognition software operatively connected to the voice-command interface, wherein the software processes the voice commands in real-time using natural language processing to interpret user instructions;
an obstacle detection unit comprising ultrasonic sensors, configured to emit ultrasonic waves, detect obstacles by receiving echoes of said waves, and calculate the distance between the user and the detected obstacle;
a camera integrated within the device, wherein the camera continuously captures images of the user's surroundings, and an AI-powered object recognition software processes said images to identify and classify objects, including stationary obstacles and moving hazards;
a GPS module, wherein the module provides real-time location tracking and navigational guidance by relaying the user's geographical position and offering auditory feedback regarding directions, street names, and nearby landmarks;
a haptic feedback system, comprising a vibration motor configured to generate distinct vibration patterns based on the proximity and nature of the detected obstacles;
and a processing unit, operatively connected to the ultrasonic sensors, the camera, the GPS module, and the voice-command interface, wherein the processing unit processes data from said components in real-time and provides user feedback through auditory alerts, haptic vibrations, or a combination thereof.
2. The AI-driven mobility assistance device as claimed in claim 1, wherein the voice recognition software is configured to adapt to diverse user accents, intonations, and languages, ensuring accurate interpretation of user commands across a broad spectrum of speech patterns.
3. The AI-driven mobility assistance device as claimed in claim 1, wherein the obstacle detection unit is further configured to differentiate between stationary objects, such as curbs, benches, and street signs, and moving objects, such as vehicles and pedestrians, providing distinct auditory or haptic feedback based on the nature of the detected obstacle.
4. The AI-driven mobility assistance device as claimed in claim 1, wherein the haptic feedback system generates a sharp, continuous vibration pattern when a moving object is detected within a predetermined distance, and a slower, intermittent vibration pattern for stationary obstacles, allowing the user to discern between different types of hazards based on tactile feedback alone.
5. The AI-driven mobility assistance device as claimed in claim 1, wherein the camera's object recognition software is configured to operate effectively under various environmental conditions, including changes in lighting, allowing the device to accurately identify objects in both brightly lit outdoor environments and dimly lit indoor settings.
6. The AI-driven mobility assistance device as claimed in claim 1, wherein the GPS module is further configured to provide navigational guidance that includes auditory feedback regarding nearby landmarks, the names of streets, and changes in route direction, ensuring the user is continuously informed of their geographical surroundings.
7. The AI-driven mobility assistance device as claimed in claim 1, wherein the device is equipped with a rechargeable battery, capable of providing several hours of continuous operation, and designed to be lightweight and ergonomically suited for being worn or carried by the user.
8. The AI-driven mobility assistance device as claimed in claim 1, wherein the device is configured to activate the haptic feedback system in situations where auditory feedback may be insufficient, such as in environments with high ambient noise levels, thereby ensuring that the user remains continuously aware of obstacles through tactile feedback.
9. The AI-driven mobility assistance device as claimed in claim 1, wherein the processing unit is further configured to combine data from the ultrasonic sensors, camera, GPS module, and voice-command interface to provide real-time, context-specific feedback, optimizing the user's navigational safety and independence.

Documents

Name | Date
Abstract.jpg | 03/12/2024
202421088434-FORM 18 [16-11-2024(online)].pdf | 16/11/2024
202421088434-FORM 3 [16-11-2024(online)].pdf | 16/11/2024
202421088434-FORM-5 [16-11-2024(online)].pdf | 16/11/2024
202421088434-FORM-9 [16-11-2024(online)].pdf | 16/11/2024
202421088434-COMPLETE SPECIFICATION [15-11-2024(online)].pdf | 15/11/2024
202421088434-DRAWINGS [15-11-2024(online)].pdf | 15/11/2024
