GESTURE AND VOICE-CONTROLLED SYSTEM FOR SAFER DRIVING
ORDINARY APPLICATION | Published | Filed on 29 October 2024
Abstract
A gesture and voice-controlled system for safer driving, comprising a cuboidal body 101 affixed to the inner cabin of the vehicle via multiple suction cups 102; an artificial intelligence-based imaging unit 103 mounted on a gear and gear train for 360-degree rotational movement on the body 101, paired with an IR sensor to monitor facial expressions and body posture; a speaker unit 104 comprising directional and ultrasonic speakers mounted on ball-and-socket joints 201 for dynamic sound direction adjustment; a gesture detection sensor detecting hand gestures made by the user; multiple motorized hinge joints 201 that adjust the airflow as per the user's requirements; a microcontroller capturing voice commands via an integrated microphone 106; an angle sensor monitoring the positioning of the speaker unit 104; and a piezoelectric unit generating vibrating alerts when signs of drowsiness or fatigue are detected.
Patent Information
Field | Value |
---|---|
Application ID | 202411082840 |
Invention Field | COMPUTER SCIENCE |
Date of Application | 29/10/2024 |
Publication Number | 46/2024 |
Inventors
Name | Address | Country | Nationality |
---|---|---|---|
Dr. Meenakshi Gupta | Associate Professor, Department of Electronics and Communication Engineering, Manav Rachna University, Sector-43, Aravali Hills Delhi Suraj Kund, Faridabad, Haryana, India, Pin – 121004. | India | India |
Ujjawal Arora | Department of Computer Science and Engineering, Manav Rachna University, Sector-43, Aravali Hills Delhi Suraj Kund, Faridabad, Haryana, India, Pin – 121004. | India | India |
Harshil Aron | Department of Computer Science and Engineering, Manav Rachna University, Sector-43, Aravali Hills Delhi Suraj Kund, Faridabad, Haryana, India, Pin – 121004. | India | India |
Applicants
Name | Address | Country | Nationality |
---|---|---|---|
Manav Rachna University | Sector-43, Aravali Hills Delhi Suraj Kund, Faridabad, Haryana, India, Pin – 121004. | India | India |
Specification
Description:
FIELD OF THE INVENTION
[0001] The present invention relates to a gesture and voice-controlled system for safer driving that enhances driver safety and comfort by monitoring the driver's physical and mental state and detecting signs of fatigue, distraction, and drowsiness. The system automatically adjusts the driving environment to keep the driver alert and allows hands-free control of vehicle functions, thereby providing a personalized driving experience, reducing accidents, and promoting comfort.
BACKGROUND OF THE INVENTION
[0002] The alarming rise in road accidents, with over 1.3 million fatalities annually, underscores the urgent need for innovative safety solutions. This staggering figure translates to approximately 3,700 deaths per day, with countless more injured or affected by these tragedies. Driver fatigue and distraction are leading causes of accidents, accounting for 20-30% of all crashes, and resulting in devastating consequences. In the US alone, drowsy driving claims over 800 lives each year, with tired or distracted drivers being 4-6 times more likely to be involved in a crash. Furthermore, road accidents are the leading cause of death among people aged 5-29, emphasizing the gravity of the situation and the need for effective countermeasures to prevent accidents and ensure road safety.
[0003] Traditionally, drivers have relied on methods such as taking regular breaks, consuming caffeine, and using alertness apps to stay awake. However, these methods have significant drawbacks: breaks might not always be feasible, caffeine's effects are temporary, and alertness apps can themselves be distracting. Additionally, reliance on driver self-reporting and honor systems often leads to inaccurate fatigue assessments. Existing in-vehicle solutions, such as lane departure warning systems and driver monitoring cameras, are limited by their rearview focus and inability to proactively prevent accidents.
[0004] US20130261871A1 discloses methods and apparatuses for gesture-based controls. In one aspect, a method is disclosed that includes maintaining a correlation between a plurality of predetermined gestures, in combination with a plurality of predetermined regions of a vehicle, and a plurality of functions. The method further includes recording three-dimensional images of an interior portion of the vehicle and, based on the three-dimensional images, detecting a given gesture in a given region of the vehicle, where the given gesture corresponds to one of the plurality of predetermined gestures and the given region corresponds to one of the plurality of predetermined regions. The method still further includes selecting, based on the correlation, a function associated with the given gesture in combination with the given region and initiating the function in the vehicle.
[0005] CN105501121A discloses an intelligent awakening method and system. The method comprises initializing image and in-car voice capture, monitoring the driver's voice, recognizing awakening voice information from it, performing gesture detection on the driver, recognizing triggering movement information from the gestures, and awakening the terminal if either the triggering movement information or the awakening voice information matches. Under an in-vehicle scenario, the method reduces manual manipulation and improves safety, and the intelligent vehicle-mounted terminal supports user-defined naming for a personalized usage mode. The system comprises a gesture monitoring and recognizing module, a voice monitoring and recognizing module, a key monitoring module, and an awakening module. By activating the vehicle-mounted terminal through voice or gestures, the system enables relatively safe and intelligent driving experiences.
[0006] Conventionally, there exist many systems capable of detecting the driver's concentration while driving; however, these existing systems fail to provide a means of analyzing gestures and expressions to evaluate the context of user actions without causing any accidents. In addition, these existing systems are incapable of preventing drowsy driving conditions by interpreting driver inputs and alerting the driver accordingly.
[0007] In order to overcome the aforementioned drawbacks, there exists a need in the art to develop a system capable of assisting a user in managing basic functions of a vehicle while driving, as well as monitoring the driver's physical and mental state. Furthermore, the developed system needs to be potent enough to reduce accidents by preventing drowsy driving.
OBJECTS OF THE INVENTION
[0008] The principal object of the present invention is to overcome the disadvantages of the prior art.
[0009] An object of the present invention is to develop a system that is capable of enhancing driving safety by monitoring the driver's state of alertness, concentration, and fatigue levels to prevent accidents.
[0010] Another object of the present invention is to develop a system that is capable of preventing drowsy driving by detecting signs of drowsiness and alerting the driver through sound, vibration, or changes in airflow, and of optimizing driver comfort by adjusting sound direction, airflow, and temperature based on the driver's preferences and real-time needs.
[0011] Another object of the present invention is to develop a system that is capable of streamlining vehicle control by analyzing hand gestures and facial expressions to determine the context of user actions for keeping the driver alert and engaged.
[0012] Yet another object of the present invention is to develop a system that is capable of minimizing accidental activations by requiring driver confirmation before executing certain functions, thereby ensuring safe operation.
[0013] The foregoing and other objects, features, and advantages of the present invention will become readily apparent upon further review of the following detailed description of the preferred embodiment as illustrated in the accompanying drawings.
SUMMARY OF THE INVENTION
[0014] The present invention relates to a gesture and voice-controlled system for safer driving that is capable of prioritizing driver well-being by continuously monitoring vital signs of fatigue, distraction, and drowsiness, intuitively adjusting the driving environment, and enabling seamless hands-free control, ensuring a safer and more personalized driving experience.
[0015] According to an embodiment of the present invention, a gesture and voice-controlled system for safer driving comprises a cuboidal body affixed to the inner cabin of the vehicle via multiple suction cups; an artificial intelligence-based imaging unit mounted on a gear and gear train for 360-degree rotational movement on the body, paired with a processor and an IR sensor to monitor facial expressions and body posture; a speaker unit arranged on the body, comprising directional and ultrasonic speakers mounted on ball-and-socket joints for dynamic sound direction adjustment to enhance wakefulness and concentration of the user; and a gesture detection sensor arranged on the body for detecting hand gestures made by the user for increasing or decreasing airflow inside the vehicle.
[0016] According to another embodiment of the present invention, the proposed device further comprises multiple motorized hinge joints integrated with AC vents, attached to the body and controlled via an IoT module interfacing with the microcontroller to adjust the airflow as per the user's requirements; a microcontroller embedded within the body, capturing voice commands via an integrated microphone; an angle sensor attached to the body that continuously monitors the positioning of the speaker unit to ensure optimal sound directionality; a piezoelectric unit integrated into the driving seat, generating vibrating alerts when signs of drowsiness or fatigue are detected; and a battery associated with the device to supply power to the electrically powered components employed herein.
[0017] While the invention has been described and shown with particular reference to the preferred embodiment, it will be apparent that variations might be possible that would fall within the scope of the present invention.
BRIEF DESCRIPTION OF THE DRAWINGS
[0018] These and other features, aspects, and advantages of the present invention will become better understood with regard to the following description, appended claims, and accompanying drawings where:
Figure 1 illustrates an isometric view of a cuboidal body associated with a gesture and voice-controlled system for safer driving; and
Figure 2 illustrates a perspective view of the body installed in an inner cabin of a vehicle associated with the proposed system.
DETAILED DESCRIPTION OF THE INVENTION
[0019] The following description includes the preferred best mode of one embodiment of the present invention. It will be clear from this description of the invention that the invention is not limited to these illustrated embodiments but that the invention also includes a variety of modifications and embodiments thereto. Therefore, the present description should be seen as illustrative and not limiting. While the invention is susceptible to various modifications and alternative constructions, it should be understood that there is no intention to limit the invention to the specific form disclosed; on the contrary, the invention is to cover all modifications, alternative constructions, and equivalents falling within the spirit and scope of the invention as defined in the claims.
[0020] In any embodiment described herein, the open-ended terms "comprising," "comprises," and the like (which are synonymous with "including," "having," and "characterized by") may be replaced by the respective partially closed phrases "consisting essentially of," "consists essentially of," and the like, or the respective closed phrases "consisting of," "consists of," and the like.
[0021] As used herein, the singular forms "a," "an," and "the" designate both the singular and the plural, unless expressly stated to designate the singular only.
[0022] The present invention relates to a gesture and voice-controlled system for safer driving that is capable of enhancing driver safety and comfort by monitoring the driver's physical and mental state, fatigue, and distraction, and responding with adaptive adjustments. Additionally, the proposed system is capable of providing hands-free control, real-time feedback, and personalized settings, reducing accidents and promoting a safe and secure driving experience.
[0023] Referring to Figures 1 and 2, an isometric view of a cuboidal body 101 associated with a gesture and voice-controlled system for safer driving and a perspective view of the body 101 installed in an inner cabin of a vehicle associated with the proposed system are illustrated, respectively. The system comprises a cuboidal body 101; multiple suction cups 102 arranged beneath the body 101; an artificial intelligence-based imaging unit 103 installed on the body 101 and mounted on a gear and gear train arrangement 105; a speaker unit 104 mounted on the body 101; a pair of motorized hinge joints 201 integrated with louvers of the AC (air conditioning) vents pre-installed inside the vehicle; and a microphone 106 mounted on the body 101.
[0024] The system disclosed herein comprises a cuboidal body 101, which is specially designed to be positioned inside a vehicle's cabin. The cuboidal shape of the body 101 allows for efficient use of space, making it easy to integrate into the vehicle's interior, while its design allows for easy installation and adjustment, accommodating various vehicle models and cabin layouts. Its compact size and strategic placement enable seamless integration into the vehicle's interior.
[0025] To secure the cuboidal body 101 in place, multiple suction cups 102 are arranged beneath it to provide a strong grip, attaching the body 101 to a fixed surface inside the cabin, such as the dashboard or center console. The suction cups 102 create a vacuum seal between the surface and the body 101. When the suction cups 102 are pressed against the surface, the initial contact creates a seal between the cups 102 and the surface, sealing off the area within them. The suction cups 102 are designed to maintain a relatively airtight seal.
[0026] The suction cups 102 create a reliable and stable connection, preventing the cuboidal body 101 from shifting or detaching while the vehicle is moving. This ensures that the body 101 remains securely in place, providing optimal performance and minimizing distractions for the driver.
[0027] After the body 101 is secured to a surface inside the cabin, an artificial intelligence-based imaging unit 103 integrated on the body 101 is actuated by an inbuilt microcontroller to monitor the driver's state of alertness and concentration. The artificial intelligence-based imaging unit 103 is constructed with a high-resolution camera lens and a processor, wherein the camera lens is adapted to capture a series of images of the user's surroundings, including their facial expressions and body posture. The imaging unit 103 utilizes artificial intelligence and machine learning protocols.
[0028] The processor carries out a sequence of image processing operations including pre-processing, feature extraction, and classification. The images captured by the imaging unit 103 are real-time images of the user's surroundings. The artificial intelligence-based imaging unit 103 is in communication with the microcontroller, wherein the microcontroller used herein is an Arduino Uno microcontroller. The artificial intelligence-based imaging unit 103 transmits the captured image signal in the form of digital bits to the microcontroller.
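By way of illustration, the pre-processing, feature extraction, and classification sequence described in [0028] can be sketched as a minimal pipeline. The feature names, the 0.4 threshold, and the helper functions below are hypothetical placeholders, since the specification does not disclose the actual machine learning protocols:

```python
# Minimal illustrative sketch of the pipeline in [0028]: pre-process a frame,
# extract simple features, and classify the driver's state. All names and
# thresholds here are hypothetical placeholders, not the patented protocol.

def preprocess(frame):
    """Normalize pixel intensities to [0, 1] (stand-in for real pre-processing)."""
    lo, hi = min(frame), max(frame)
    return [(p - lo) / (hi - lo or 1) for p in frame]

def extract_features(frame):
    """Reduce the frame to coarse features; a real system would use a CNN."""
    mean = sum(frame) / len(frame)
    return {"eye_openness": mean, "head_tilt": max(frame) - mean}

def classify_alertness(features, threshold=0.4):
    """Label the driver ALERT or DROWSY from the extracted features."""
    return "DROWSY" if features["eye_openness"] < threshold else "ALERT"

frame = [0.1, 0.2, 0.15, 0.3, 0.25]          # fake 1-D "image" for the demo
print(classify_alertness(extract_features(preprocess(frame))))  # ALERT
```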
[0029] The imaging unit 103 is mounted on a gear and gear train arrangement 105, enabling unparalleled flexibility and range of motion, which allows the imaging unit 103 to rotate a full 360 degrees, providing an unobstructed view of the driver and surrounding environment. The gear train arrangement 105 facilitates smooth and precise movement, enabling the imaging unit 103 to capture high-quality images from various angles. This comprehensive monitoring capability ensures that the system is able to detect even subtle changes in the driver's behavior, facial expressions, and body posture.
[0030] The core of the gear train arrangement 105 consists of toothed wheels called gears, which transmit rotational motion through a series of interconnected gears known as a gear train. The gears are connected by shafts, and bearings reduce friction to ensure smooth operation. When the imaging unit's 103 rotation is initiated by an electric motor or actuator, it turns the input shaft, which then connects to a gear. This gear meshes with adjacent gears in the gear train, transmitting rotational force, or torque, from one gear to the next. The gear ratio, determined by the tooth count of each gear, adjusts the rotational speed and torque.
[0031] As the gears mesh, they transmit rotational motion, providing mechanical advantage and amplifying or reducing rotational force. The output shaft, connected to the final gear in the train, ultimately rotates the imaging unit 103. This precise control and high torque enable the imaging unit 103 to rotate smoothly and maintain its position, even in the presence of vibrations or vehicle movements.
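The gear-ratio relationship described in [0030] and [0031] reduces to simple arithmetic: output speed falls and torque rises in proportion to the tooth-count ratio. A worked example with invented tooth counts:

```python
# Gear-train arithmetic from [0030]-[0031]: output speed scales down and
# torque scales up by the tooth-count ratio. Tooth counts are illustrative.

def gear_output(input_rpm, input_torque, teeth_in, teeth_out):
    ratio = teeth_out / teeth_in            # gear ratio from tooth counts
    return input_rpm / ratio, input_torque * ratio

rpm, torque = gear_output(input_rpm=120, input_torque=0.5,
                          teeth_in=12, teeth_out=48)
print(f"output: {rpm} rpm, {torque} N*m")    # 30.0 rpm, 2.0 N*m
```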
[0032] For instance, if the driver glances away from the road, the imaging unit 103 rotates to capture their line of sight, detecting potential distractions. Similarly, if a passenger enters the vehicle, the imaging unit 103 adjusts its focus to include the new occupant.
[0033] To enhance accuracy, the imaging unit 103 works in synchronization with an IR (infrared) sensor to detect subtle changes in the driver's facial expressions, even in low-light conditions, allowing the microcontroller to assess their emotional state. The IR sensor is basically a thermopile or pyroelectric detector used for detecting and measuring IR radiation. This sensor consists of multiple thermocouples or sensitive materials that generate a voltage or current when exposed to IR radiation.
[0034] To isolate the desired IR wavelength range, a filter is used to block unwanted ambient radiation and ensure that the IR sensor focuses only on the specific range of IR radiation emitted by the user's face. Any deformity, inconsistency, or defect on the user's face leads to variations in the emitted IR radiation. These variations are detected by the IR sensor as changes in the intensity of the IR signal. The output from the IR sensor is sent to the microcontroller, which processes the signals to determine the state of alertness and concentration of the user while driving.
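The filter-then-threshold behaviour of [0033] and [0034] might be organized as below; the long-wave IR band and the intensity-drop threshold are assumptions chosen for illustration, not values from the specification:

```python
# Sketch of [0033]-[0034]: keep only IR readings in the facial-emission band,
# then flag intensity drops as possible fatigue. Band and threshold invented.

FACIAL_IR_BAND = (8.0, 14.0)   # micrometres; typical long-wave IR, assumed

def filter_band(readings, band=FACIAL_IR_BAND):
    lo, hi = band
    return [(wl, inten) for wl, inten in readings if lo <= wl <= hi]

def intensity_change(filtered, baseline):
    avg = sum(i for _, i in filtered) / len(filtered)
    return avg - baseline                  # negative = intensity drop

readings = [(6.0, 0.9), (9.5, 0.6), (11.0, 0.55), (20.0, 0.8)]
delta = intensity_change(filter_band(readings), baseline=0.7)
print("possible fatigue" if delta < -0.1 else "normal")
```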
[0035] The acquired data from the imaging unit 103 and the IR sensor are transmitted to the microcontroller, which uses machine learning protocols to assess the driver's state of alertness and concentration. The microcontroller evaluates various factors, including eye movement and gaze direction; facial expressions, such as yawning or frowning; body posture, including slouching or leaning; and head position and movement. The artificial intelligence-based imaging unit 103 helps prevent accidents caused by driver fatigue, distraction, or lapses in attention by continuously monitoring the driver's state.
[0036] For example, a driver on a long highway drive may start feeling drowsy. The imaging unit 103 captures images of the driver's drooping eyelids, and the IR sensor detects changes in their facial muscles. The processor analyzes this data and sends alerts to the microcontroller, which then assesses the driver's state of alertness. If the driver's alertness level falls below a certain threshold, the system triggers alerts, such as sound or vibration, to rouse the driver and prevent potential accidents.
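One plausible way to fuse the factors listed in [0035] into a single alertness decision is a weighted score compared against a threshold, as in this sketch (the weights and the 0.5 threshold are invented for illustration):

```python
# Sketch of the multi-factor assessment in [0035]-[0036]: fuse the listed
# cues into one alertness score and trigger an alert below a threshold.
# Weights and the 0.5 threshold are illustrative assumptions.

WEIGHTS = {"eye_gaze": 0.4, "expression": 0.3, "posture": 0.2, "head": 0.1}

def alertness_score(cues):
    """Each cue is in [0, 1], 1 = fully alert; returns a weighted average."""
    return sum(WEIGHTS[name] * value for name, value in cues.items())

cues = {"eye_gaze": 0.3, "expression": 0.4, "posture": 0.8, "head": 0.9}
score = alertness_score(cues)
if score < 0.5:                             # below threshold: rouse the driver
    print(f"score {score:.2f}: trigger sound and vibration alerts")
```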
[0037] If the driver's alertness level falls below a certain threshold, the system triggers alerts, such as sound or vibration, with the help of a speaker unit 104 integrated into the body 101, which plays a crucial role in maintaining the driver's alertness and focus. The speaker unit 104 comprises directional and ultrasonic speakers mounted on ball-and-socket joints 201, enabling dynamic adjustment of the sound direction towards the driver based on their seating position and comfort preferences.
[0038] The directional speakers focus sound waves towards a specific area or listener. At the core of these units lies a traditional speaker driver, which converts electrical signals into sound waves. A specially designed waveguide shapes and directs these sound waves, while an acoustic lens focuses them, creating a narrow beam. This directional sound beam is then emitted towards the target area, ensuring that the sound is concentrated and intense.
[0039] The ultrasonic speakers, on the other hand, produce high-frequency sound waves beyond human hearing, typically above 20 kHz. These units employ a piezoelectric transducer to convert electrical signals into ultrasonic sound waves. A frequency generator produces the high-frequency electrical signals, which are then amplified by an amplifier. The amplified signals are sent to the transducer, which emits the ultrasonic sound waves through a radiating surface. As air molecules vibrate, pressure waves propagate, creating the desired ultrasonic effect.
[0040] Combining directional and ultrasonic speakers creates a speaker unit 104 that focuses high-frequency sound waves. This is achieved through a phased array of multiple ultrasonic transducers, arranged to create a directional beam. A beamforming algorithm controls the transducer phases, steering the sound beam towards the target area, which results in increased sound intensity and reduced sound spillage, providing improved audio privacy and directionality.
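The beamforming described in [0040] is, at its core, the textbook phased-array delay computation: each transducer is delayed in proportion to its position so the wavefronts add up in the steering direction. A sketch with illustrative element spacing and steering angle follows:

```python
# Textbook phased-array delay calculation behind the beamforming in [0040]:
# steer the beam by delaying each transducer proportionally to its position.
# Spacing, element count, and angle are illustrative values.
import math

SPEED_OF_SOUND = 343.0                       # m/s in air

def steering_delays(n_elements, spacing_m, angle_deg):
    """Per-element delays (seconds) to steer a linear array by angle_deg."""
    dt = spacing_m * math.sin(math.radians(angle_deg)) / SPEED_OF_SOUND
    return [i * dt for i in range(n_elements)]

for i, d in enumerate(steering_delays(n_elements=4, spacing_m=0.005,
                                      angle_deg=20)):
    print(f"transducer {i}: delay {d * 1e6:.2f} us")
```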
[0041] In the system, the speaker unit 104 is specifically designed to emit sound frequencies around 500 Hz to awaken drivers from light sleep and to produce louder sounds with a tempo of 100-120 beats per minute to rouse drivers from deeper sleep states, which ensures that drivers receive the appropriate level of auditory stimulation to regain alertness and focus. Simultaneously, the ball-and-socket joints 201 allow the speaker unit 104 to pivot and rotate, ensuring optimal sound direction and coverage. This feature enables the system to adapt sound output to accommodate different driving positions, compensate for variations in cabin acoustics, and provide a personalized audio experience for each driver.
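The two alert modes of [0041], a roughly 500 Hz tone for light sleep and a louder 100-120 beats-per-minute pulse train for deeper sleep, amount to a small scheduling decision. The sketch below only derives the parameters; the depth labels and volume levels are assumptions:

```python
# Sketch of [0041]'s alert scheduling: choose tone frequency and beat tempo
# from the assessed sleep depth. Depth labels and volumes are assumptions.

def alert_plan(sleep_depth):
    if sleep_depth == "light":
        return {"freq_hz": 500, "bpm": None, "volume": 0.5}
    beat_interval_s = 60 / 110               # 110 BPM, middle of 100-120
    return {"freq_hz": 500, "bpm": 110, "volume": 0.9,
            "beat_interval_s": round(beat_interval_s, 3)}

print(alert_plan("light"))
print(alert_plan("deep"))                    # ~0.545 s between beats
```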
[0042] The device features an angle sensor that works in synchronization with the imaging unit 103 to ensure optimal sound directionality by monitoring the positioning of the speaker unit 104 in relation to the user's real-time location and posture. To achieve this, the angle sensor syncs with the imaging unit 103 to gather data on the user's location and posture. The imaging unit's 103 high-resolution camera captures detailed images of the user, which are then analyzed by the microcontroller. The imaging unit 103 uses machine learning protocols to detect subtle changes in the user's posture, such as leaning forward or backward, and adjusts the speaker unit 104 direction accordingly.
[0043] This real-time monitoring enables the system to maintain optimal sound directionality. The angle sensor used herein is preferably an optical angle sensor that uses light beams and optical detectors to measure changes in light reflection or transmission caused by the angle of the speaker unit 104. As the angle changes, the amount of light reflected or transmitted varies, allowing the sensor to calculate the angle. The angle sensor provides an output signal to the microcontroller that represents the detected angle of the speaker unit 104.
[0044] For instance, if the user adjusts their seat or leans forward, the angle sensor detects the change and adjusts the speaker unit 104 direction to maintain optimal sound directionality. This ensures that the user receives the intended audio cues, such as alerts or warnings, without distraction or compromise.
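The angle-sensor loop of [0042] through [0044] behaves like a simple proportional controller: measure the speaker angle, compare it with the user's bearing reported by the imaging unit, and nudge the ball-and-socket mount toward it. A minimal sketch with an invented gain:

```python
# Sketch of the closed loop in [0042]-[0044]: compare the speaker's measured
# angle with the user's bearing from the imaging unit and nudge the
# ball-and-socket mount toward it. Gain and signal values are illustrative.

def correction_step(speaker_angle_deg, user_bearing_deg, gain=0.5):
    error = user_bearing_deg - speaker_angle_deg
    return speaker_angle_deg + gain * error   # proportional adjustment

angle = 0.0
for _ in range(5):                            # user leaned to 30 degrees
    angle = correction_step(angle, user_bearing_deg=30.0)
print(f"speaker converges to {angle:.1f} deg")  # ~29.1 deg after 5 steps
```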
[0045] A gesture detection sensor configured with the body 101 works in harmony with the imaging unit 103 to detect hand gestures made by the user, enabling the user to control airflow inside the vehicle with simple hand movements. The gesture detection sensor uses machine learning and artificial intelligence protocols to track the user's hand movements. This synchronization ensures that the gestures are accurately detected for effective analysis. The microcontroller receives data from the gesture detection sensor and the imaging unit 103 and processes the information. The microcontroller compares the user-made gestures with the pre-saved gestures stored in a database linked with the microcontroller. The database contains a library of gestures that the microcontroller recognizes and responds to.
[0046] When the user makes a specific hand gesture, the gesture detection sensor captures it and sends it to the microcontroller. According to the user-made gesture, the microcontroller sends a signal to regulate the working of a pair of motorized hinge joints 201 integrated with the louvers of the air conditioning vents. These motorized joints 201 adjust the louver positions, allowing dynamic adjustment of airflow inside the vehicle. The motorized hinge joints 201 typically involve the use of an electric motor to control the movement of the hinge and the connected component.
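The gesture-to-louver path of [0045] and [0046] can be pictured as a lookup against the pre-saved gesture database, falling back to the confirmation prompt of [0053] when the gesture is uncertain. The gesture names and commands below are hypothetical:

```python
# Sketch of [0045]-[0046]: match a detected gesture against the pre-saved
# library and translate it into a louver command for the motorized hinges.
# Gesture names and commands are hypothetical placeholders.

GESTURE_DB = {"palm_up": "louver_open", "palm_down": "louver_close"}

def handle_gesture(detected):
    command = GESTURE_DB.get(detected)
    if command is None:                       # uncertain gesture: ask first
        return "prompt_user_confirmation"     # via the speaker unit, per [0053]
    return command

print(handle_gesture("palm_up"))              # louver_open
print(handle_gesture("wave"))                 # prompt_user_confirmation
```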
[0047] The hinge joints 201 provide the pivot point around which the movement occurs. The motor is the core component responsible for generating the rotational motion. It converts electrical energy into mechanical energy, producing the necessary torque that drives the hinge joints 201. As the motor rotates, the motorized hinge joints 201 adjust the airflow.
[0048] For instance, a user makes a specific hand gesture to increase airflow on a hot day or decrease it on a cold day. The gesture detection sensor recognizes the gesture and adjusts the louvers accordingly, ensuring optimal airflow and comfort.
[0049] The motorized hinge joints 201 are controlled via an advanced IoT (Internet of Things) module that seamlessly interfaces with the microcontroller. This integration enables automated adjustments of the airflow vents based on detected user fatigue and concentration levels. The IoT module connects to the microcontroller, receiving real-time data on the user's physical and mental state. This data is gathered through various sensors, which are mentioned above. The microcontroller processes this data, using machine learning protocols to detect early signs of user fatigue and decreased concentration.
[0050] When these conditions are identified, the microcontroller sends signals to the IoT module, which controls the motorized hinge joints 201, adjusting the airflow vents to optimize air circulation and refresh the user. This automated response helps to reduce driver fatigue and drowsiness, improve concentration and alertness, and enhance overall driving safety and performance.
[0051] For example, if decreased user concentration is detected, the system automatically adjusts the airflow vents to increase oxygen flow and refresh the user. Conversely, if the user appears relaxed and alert, it adjusts the vents to maintain a comfortable environment.
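The automated adjustment of [0049] through [0051] is essentially a rule mapping the assessed concentration level onto a vent opening. The level boundaries and opening fractions in this sketch are illustrative assumptions:

```python
# Sketch of the automated rule in [0049]-[0051]: map the fused fatigue /
# concentration estimate onto a vent opening. Levels and fractions invented.

def vent_opening(concentration):
    """Return louver opening in [0, 1] from a concentration score in [0, 1]."""
    if concentration < 0.4:
        return 1.0        # drowsy: full airflow to refresh the driver
    if concentration < 0.7:
        return 0.6        # borderline: moderate boost
    return 0.3            # alert: comfortable baseline airflow

for c in (0.2, 0.5, 0.9):
    print(f"concentration {c}: vents {vent_opening(c):.0%} open")
```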
[0052] The imaging unit 103 plays a vital role in monitoring the user's hand gestures and facial expressions, enabling the body 101 to determine the context of user actions and enhance vehicle control and safety. The imaging unit 103 uses artificial intelligence protocols to track the user's hand movements and facial expressions in real-time. For instance, if the user makes a hand gesture to adjust the temperature, the imaging unit 103 captures the gesture and sends the data to the microcontroller. The microcontroller analyzes the gesture in conjunction with the user's facial expressions to assess the urgency and appropriateness of the action.
[0053] If the user appears frustrated or distracted, the microcontroller prioritizes the gesture, adjusting the temperature promptly. Conversely, if the user seems relaxed, the microcontroller adjusts the temperature gradually. Upon detecting an uncertain hand gesture, the microcontroller generates an auditory prompt through the speaker unit 104 to solicit confirmation from the user.
[0054] The microcontroller enables the user to confirm or deny the execution of vehicle functions using voice commands. This functionality is facilitated by a microphone 106 integrated with the body 101, which captures the user's voice inputs and sends them to the microcontroller for processing. The microphone 106 plays a crucial role by converting spoken words or commands into electrical signals, which are then processed and analyzed to trigger specific actions. When the user speaks to confirm or deny the execution of vehicle functions, their vocal cords vibrate, creating sound waves.
[0055] These sound waves travel through the air as variations in air pressure. The microphone 106 mentioned herein is a transducer that converts these variations in air pressure into electrical signals. The analog electrical signal is converted into digital form by an analog-to-digital converter (ADC). The digital signal is then subjected to various signal processing techniques to enhance voice quality and eliminate noise. The microcontroller uses sophisticated voice recognition protocols to interpret the user's voice commands, allowing for seamless interaction with the vehicle's systems. This enables a multi-modal interface, where the user combines hand gestures with voice confirmation to execute various vehicle functions.
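After digitization and noise removal, the confirm/deny step of [0054] and [0055] reduces to matching the recognized transcript against a small confirmation vocabulary before any pending gesture-triggered function runs. The vocabulary below is an assumption:

```python
# Sketch of the confirm/deny step in [0054]-[0055]: the digitized, denoised
# command is matched against confirmation vocabulary before any pending
# gesture-triggered function runs. Vocabulary and names are assumptions.

CONFIRM = {"yes", "confirm", "ok", "go ahead"}
DENY = {"no", "cancel", "stop"}

def resolve_pending(transcript, pending_function):
    word = transcript.strip().lower()
    if word in CONFIRM:
        return f"execute:{pending_function}"
    if word in DENY:
        return "abort"
    return "reprompt"                          # unrecognized: ask again

print(resolve_pending("Confirm", "increase_airflow"))  # execute:increase_airflow
print(resolve_pending("cancel", "increase_airflow"))   # abort
```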
[0056] The body 101 incorporates a feedback mechanism that works in tandem with the imaging unit 103 to detect signs of drowsiness in the user. When the imaging unit 103 identifies early indicators of fatigue, such as drooping eyelids, yawning, or decreased pupil activity, it sends a signal to the microcontroller. The microcontroller instantly responds by adjusting the cooling output and direction of airflow to stimulate the user and promote alertness. This sudden change in environmental conditions helps to rouse the user from their drowsy state, reducing the risk of accidents caused by driver fatigue.
[0057] To maximize effectiveness, the microcontroller utilizes various strategies to stimulate the user, including sudden changes in airflow direction, increased airflow velocity, cooling or heating bursts, and alternating airflow patterns. The benefits of this feedback mechanism are numerous: it enhances driver safety, reduces the risk of accidents caused by fatigue, improves user alertness and attention, and provides a personalized response to individual needs.
[0058] A piezoelectric unit is integrated into the driving seat of the vehicle and is designed to generate a subtle vibrating effect to alert the user when signs of drowsiness or fatigue are detected. The piezoelectric unit works in tandem with the imaging unit 103 and microcontroller to provide a multi-sensory warning. When the imaging unit 103 detects early indicators of fatigue, such as drooping eyelids, yawning, or decreased pupil activity, it sends a signal to the microcontroller.
[0059] The microcontroller then activates the piezoelectric unit, which generates a gentle yet noticeable vibrating effect through the seat. This tactile stimulation helps to rouse the driver from their drowsy state, reducing the risk of accidents caused by driver fatigue. The piezoelectric unit's vibrating effect is carefully calibrated for maximum effectiveness. The vibration pattern, intensity, and duration are adjustable to accommodate individual user preferences. For instance, the piezoelectric unit employs gentle, pulsing vibrations to alert the driver and increases the vibration intensity as drowsiness persists.
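The calibrated, escalating seat alert of [0058] and [0059] can be modelled as intensity ramping with the number of consecutive drowsy readings; every number in this sketch is illustrative rather than a calibrated value from the specification:

```python
# Sketch of the calibrated seat alert in [0058]-[0059]: start with gentle
# pulses and escalate intensity while drowsiness persists. All numbers are
# illustrative, not calibrated values from the specification.

def vibration_pattern(consecutive_drowsy_readings):
    intensity = min(0.2 + 0.2 * consecutive_drowsy_readings, 1.0)
    pulse_ms = 200 if intensity < 0.6 else 400   # longer pulses when escalated
    return {"intensity": round(intensity, 1), "pulse_ms": pulse_ms}

for n in range(4):
    print(n, vibration_pattern(n))
# intensity ramps 0.2 -> 0.4 -> 0.6 -> 0.8 as drowsiness persists
```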
[0060] A battery is associated with the device to supply power to the electrically powered components employed herein. The battery comprises a pair of electrodes, namely a cathode and an anode. The battery uses an oxidation/reduction chemical reaction to do work on charge and produce a voltage between its anode and cathode, thereby producing the electrical energy used to do work in the system.
[0061] The present invention works best in the following manner. The body 101, as disclosed in the invention, is developed to be positioned in the inner cabin and operates by capturing multiple images of the driver's surroundings through its imaging unit 103, monitoring facial expressions and body posture via infrared sensing. The microcontroller processes this data, assessing the driver's state of alertness and concentration. If signs of drowsiness or distraction are detected, the microcontroller activates the speaker unit 104 to generate sound outputs, enhancing wakefulness and concentration. Simultaneously, the gesture detection sensor monitors hand gestures, allowing the driver to adjust airflow inside the vehicle. The microcontroller compares detected gestures with pre-saved ones, regulating the motorized hinge joints 201 to dynamically adjust airflow. If uncertain hand gestures are detected, the microcontroller generates auditory prompts through the speaker unit 104, soliciting confirmation from the driver. Continuous monitoring of the driver's state enables adjustments to airflow and sound outputs. Additionally, a piezoelectric unit generates vibrating alerts to rouse the driver when necessary, while voice commands are captured through an integrated microphone 106 for confirming or denying vehicle functions. An IoT module interfaces with the microcontroller for automated adjustments based on detected fatigue and concentration levels, ensuring proactive enhancement of driving safety through multi-sensory warnings and adjustments, and a battery supplies power to the electrically powered components employed herein.
[0062] Although the field of the invention has been described herein with limited reference to specific embodiments, this description is not meant to be construed in a limiting sense. Various modifications of the disclosed embodiments, as well as alternate embodiments of the invention, will become apparent to persons skilled in the art upon reference to the description of the invention.
Claims:
1) A gesture and voice-controlled system for safer driving, comprising:
i) a cuboidal body 101 developed to be positioned in inner cabin of a vehicle, wherein multiple suction cups 102 are arranged beneath said body 101 for affixing said body 101 with a fixed surface inside said cabin;
ii) an artificial intelligence-based imaging unit 103 installed on said body 101 and paired with a processor for capturing and processing multiple images of surroundings, respectively, in sync with an IR (Infrared) sensor for monitoring facial expressions and body posture, based on which an inbuilt microcontroller assesses the state of alertness and concentration of said user while driving;
iii) a speaker unit 104 mounted on said body 101 that is actuated by said microcontroller to dynamically generate sound outputs with specific frequencies and volumes to enhance wakefulness and concentration of said user, only in case said microcontroller detects an unusual body posture or facial expression indicative of drowsiness or distraction;
iv) a gesture detection sensor arranged on said body 101 that works in synchronization with said imaging unit 103 for detecting hand gestures made by said user for increasing/decreasing airflow inside said vehicle, wherein said microcontroller compares said user-made gestures with pre-saved gestures stored in a database linked with said microcontroller, in accordance with which said microcontroller regulates working of a pair of motorized hinge joints 201 integrated with louvers of the AC (air conditioning) vents pre-installed inside said vehicle, enabling dynamic adjustment of airflow; and
v) said imaging unit 103 is configured to monitor user's hand gestures and facial expressions, determining context of user actions to enhance vehicle control and safety, wherein said microcontroller is pre-fed to analyze real-time data, including user's facial expressions, in conjunction with detected hand gestures to assess urgency and appropriateness of the gesture, and upon detection of an uncertain hand gesture, said microcontroller generates an auditory prompt through said speaker unit 104 to solicit confirmation from user before executing any vehicle functions.
2) The system as claimed in claim 1, wherein said imaging unit 103 is mounted on a gear and gear train arrangement 105 that enables 360-degree rotational movement, allowing comprehensive monitoring of user and surrounding environment.
3) The system as claimed in claim 1, wherein said speaker unit 104 comprises directional and ultrasonic speakers mounted on ball-and-socket joints 201, enabling dynamic adjustment of sound direction towards said user based on seating position and comfort preferences.
4) The system as claimed in claim 1, wherein said speaker unit 104 is pre-fed to emit sound frequencies around 500 Hz to awaken user from light sleep, and to produce louder sounds with a tempo of 100-120 beats per minute to rouse user from deeper sleep states, enhancing overall driving safety.
5) The system as claimed in claim 1, wherein said body 101 comprises an angle sensor synced with said imaging unit 103, which continuously monitors positioning of said speaker unit 104 to ensure optimal sound directionality based on user's real-time location and posture.
6) The system as claimed in claim 1, wherein said hinge joints 201 are controlled via an IoT (Internet of Things) module that interfaces with said microcontroller, allowing automated adjustments based on detected user fatigue and concentration levels.
7) The system as claimed in claim 1, wherein said microcontroller is further integrated with a feedback mechanism that alters cooling output and direction of airflow when said imaging unit 103 detects signs of drowsiness in said user, promoting alertness through unexpected environmental changes.
8) The system as claimed in claim 1, wherein said microcontroller captures voice commands via an integrated microphone 106, allowing said user to confirm or deny execution of vehicle functions based on previously detected hand gestures.
9) The system as claimed in claim 1, wherein a piezoelectric unit is integrated into driving seat of said vehicle, configured to generate a vibrating effect to alert said user when signs of drowsiness or fatigue are detected.
10) The system as claimed in claim 1, wherein a battery is associated with said system for powering up electrical and electronically operated components associated with said system.
Documents
Name | Date |
---|---|
202411082840-COMPLETE SPECIFICATION [29-10-2024(online)].pdf | 29/10/2024 |
202411082840-DECLARATION OF INVENTORSHIP (FORM 5) [29-10-2024(online)].pdf | 29/10/2024 |
202411082840-DRAWINGS [29-10-2024(online)].pdf | 29/10/2024 |
202411082840-EDUCATIONAL INSTITUTION(S) [29-10-2024(online)].pdf | 29/10/2024 |
202411082840-EVIDENCE FOR REGISTRATION UNDER SSI [29-10-2024(online)].pdf | 29/10/2024 |
202411082840-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [29-10-2024(online)].pdf | 29/10/2024 |
202411082840-FIGURE OF ABSTRACT [29-10-2024(online)].pdf | 29/10/2024 |
202411082840-FORM 1 [29-10-2024(online)].pdf | 29/10/2024 |
202411082840-FORM FOR SMALL ENTITY(FORM-28) [29-10-2024(online)].pdf | 29/10/2024 |
202411082840-FORM-9 [29-10-2024(online)].pdf | 29/10/2024 |
202411082840-POWER OF AUTHORITY [29-10-2024(online)].pdf | 29/10/2024 |
202411082840-PROOF OF RIGHT [29-10-2024(online)].pdf | 29/10/2024 |
202411082840-REQUEST FOR EARLY PUBLICATION(FORM-9) [29-10-2024(online)].pdf | 29/10/2024 |