DRIVER DROWSINESS DETECTION AND ALERTING SYSTEM

ORDINARY APPLICATION

Status: Published

Filed on 5 November 2024

Abstract

DRIVER DROWSINESS DETECTION AND ALERTING SYSTEM. ABSTRACT: A driver drowsiness detection and alerting system (100) is disclosed. The system (100) comprises: an image capturing unit (102) to capture a real-time video and an image processing unit (104) to receive the captured real-time video for identifying and marking facial landmarks on a face in the video. A controller unit (106) is configured to: receive the marked facial landmarks; calculate an Eye Aspect Ratio (EAR); compare the calculated Eye Aspect Ratio (EAR) with a first threshold value; calculate a Mouth Opening Ratio (MOR) and a Nose to Lip Ratio (NLR) when the calculated Eye Aspect Ratio (EAR) is less than the first threshold value; determine a drowsiness condition when the calculated Mouth Opening Ratio (MOR) and the Nose to Lip Ratio (NLR) are less than a second threshold value and a third threshold value, respectively; and generate an alert. The system (100) eliminates physical sensors and relies on camera-based monitoring. Claims: 10, Figures: 3

Patent Information

Application ID: 202441084632
Invention Field: ELECTRONICS
Date of Application: 05/11/2024
Publication Number: 46/2024

Inventors

Name | Address | Country | Nationality
Dr. Ch. Rajendra Prasad | SR University, Ananthasagar, Hasanparthy (PO), Warangal, Telangana-506371, India | India | India
Ravishetty. Sai Vishwanth | SR University, Ananthasagar, Hasanparthy (PO), Warangal, Telangana-506371, India | India | India
Haripuri. Hiranmayee | SR University, Ananthasagar, Hasanparthy (PO), Warangal, Telangana-506371, India | India | India
Mandala Vinay Kumar Yadav | SR University, Ananthasagar, Hasanparthy (PO), Warangal, Telangana-506371, India | India | India
Bathini Sirisha | SR University, Ananthasagar, Hasanparthy (PO), Warangal, Telangana-506371, India | India | India

Applicants

Name | Address | Country | Nationality
SR University | SR University, Ananthasagar, Warangal, Telangana, India 506371; patent@sru.edu.in; 08702818333 | India | India

Specification

Description:
BACKGROUND
Field of Invention
Embodiments of the present invention generally relate to safety accessories for automobile driving, and particularly to a driver drowsiness detection and alerting system.
Description of Related Art
Driver fatigue is a significant issue contributing to road accidents globally, leading to severe injuries and fatalities. Various efforts have been made to mitigate this problem by developing systems aimed at detecting and alerting drivers when signs of drowsiness are detected. These efforts are driven by the increasing number of accidents associated with drowsy driving, which often occur due to prolonged hours on the road, especially among professional drivers and long-distance travelers.
Historically, fatigue detection systems have been based on intermittent checks or self-reporting mechanisms. These methods lack real-time precision and are not fully reliable, as they rely heavily on subjective input or non-continuous monitoring. Early commercial solutions, such as dashboard warning lights and auditory alarms, typically trigger based on vehicle behavior, like sudden lane changes or erratic steering. However, these systems often fail to capture early signs of driver drowsiness, as they react only after the driver's performance has already deteriorated.
Recent advances in technology have led to the development of more sophisticated solutions, such as systems that monitor the driver's physiological state, including eye movement, head position, and facial expressions. However, these systems often require additional hardware, such as external sensors and cameras, to capture the necessary data. While this approach has improved detection accuracy, the reliance on physical sensors and integration with the vehicle's existing systems creates additional costs and complications for implementation.
Despite these advancements, existing driver drowsiness detection systems still face significant challenges. False positives remain a problem, where normal behavior, such as checking mirrors or briefly closing eyes, is interpreted as signs of drowsiness. Conversely, some systems fail to recognize genuine fatigue indicators, leading to delayed or missed alerts. The lack of continuous monitoring and adaptability to changing environmental conditions also limits the effectiveness of these solutions in preventing accidents.
There is thus a need for an improved and advanced driver drowsiness detection and alerting system that addresses the aforementioned limitations in a more efficient manner.
SUMMARY
Embodiments in accordance with the present invention provide a driver drowsiness detection and alerting system. The system comprises: an image capturing unit, arranged in a visual proximity of a driver of a vehicle, and adapted to capture a real-time video of the driver. The system further comprises: an image processing unit, adapted to receive the captured real-time video of the driver. The image processing unit is configured to: identify a presence of a face in a frame of the received real-time video; and mark facial landmarks on the face identified in the frame. A controller unit is communicatively connected to the image processing unit. The controller unit is configured to: calculate an Eye Aspect Ratio (EAR) from the received facial landmarks, wherein the Eye Aspect Ratio (EAR) measures a ratio of a width to a height of eyes; compare the calculated Eye Aspect Ratio (EAR) with a first threshold value; calculate a Mouth Opening Ratio (MOR) and a Nose to Lip Ratio (NLR) when the calculated Eye Aspect Ratio (EAR) is less than the first threshold value; determine a drowsiness condition when the calculated Mouth Opening Ratio (MOR) and the Nose to Lip Ratio (NLR) are less than a second threshold value and a third threshold value, respectively; and generate an alert upon detecting the drowsiness condition.
Embodiments in accordance with the present invention further provide a method for detecting drowsiness and alerting a driver using a driver drowsiness detection and alerting system. The method comprises the steps of: receiving facial landmarks marked on a face identified in a frame from an image processing unit; calculating an Eye Aspect Ratio (EAR) from the received facial landmarks, wherein the Eye Aspect Ratio (EAR) measures a ratio of a width to a height of eyes; comparing the calculated Eye Aspect Ratio (EAR) with a first threshold value; calculating a Mouth Opening Ratio (MOR) and a Nose to Lip Ratio (NLR) when the calculated Eye Aspect Ratio (EAR) is less than the first threshold value; determining a drowsiness condition when the calculated Mouth Opening Ratio (MOR) and the Nose to Lip Ratio (NLR) are less than a second threshold value and a third threshold value, respectively; and generating an alert upon detecting the drowsiness condition.
Embodiments of the present invention may provide a number of advantages depending on their particular configuration. First, embodiments of the present application may provide a driver drowsiness detection and alerting system.
Next, embodiments of the present application may provide a drowsiness detection system that continuously monitors the driver's alertness, providing more accurate and timely detection of drowsiness.
Next, embodiments of the present application may provide a drowsiness detection system that uses AI-driven deep learning algorithms to analyze eye movements, facial expressions, and head positioning, eliminating the need for physical sensors attached to the driver, and making it more comfortable and user-friendly.
Next, embodiments of the present application may provide a drowsiness detection system that uses advanced algorithms to assess multiple indicators of drowsiness (such as the Eye Aspect Ratio, the Mouth Opening Ratio, and the head position), reducing false positives and negatives and offering a higher level of detection precision.
Next, embodiments of the present application may provide a drowsiness detection system that dynamically adjusts its sensitivity based on the individual driver's fatigue profile and external conditions like the time of day or driving duration, providing personalized monitoring.
Next, embodiments of the present application may provide a drowsiness detection system that stores and analyzes data in a cloud storage, allowing continuous learning and improvements in the system's predictive accuracy. This further enables fleet operators to track and analyze fatigue trends across multiple drivers over time.
Next, embodiments of the present application may provide a drowsiness detection system that eliminates physical sensors in favour of camera-based monitoring, lowering hardware costs and simplifying installation, making it easier to integrate into a variety of vehicles without major modifications.
Next, embodiments of the present application may provide a drowsiness detection system that can provide suggestions for nearby rest stops if fatigue levels become critical, further enhancing driver safety.
Next, embodiments of the present application may provide a drowsiness detection system that can factor in environmental conditions like lighting and weather, adjusting its fatigue detection models for more accurate assessment in varying driving scenarios.
Next, embodiments of the present application may provide a drowsiness detection system that delivers alerts through auditory, visual, and haptic feedback, ensuring that the driver is effectively informed of their fatigue status and reducing the risk of accidents.
Next, embodiments of the present application may provide a drowsiness detection system that is highly scalable and can be improved over time with updates to the software, without requiring significant hardware changes.
These and other advantages will be apparent from the present application of the embodiments described herein.
The preceding is a simplified summary to provide an understanding of some embodiments of the present invention. This summary is neither an extensive nor exhaustive overview of the present invention and its various embodiments. The summary presents selected concepts of the embodiments of the present invention in a simplified form as an introduction to the more detailed description presented below. As will be appreciated, other embodiments of the present invention are possible utilizing, alone or in combination, one or more of the features set forth above or described in detail below.
BRIEF DESCRIPTION OF THE DRAWINGS
The above and still further features and advantages of embodiments of the present invention will become apparent upon consideration of the following detailed description of embodiments thereof, especially when taken in conjunction with the accompanying drawings, and wherein:
FIG. 1 illustrates a driver drowsiness detection and alerting system, according to an embodiment of the present invention;
FIG. 2 illustrates a block diagram of a controller unit of the driver drowsiness detection and alerting system, according to an embodiment of the present invention; and
FIG. 3 depicts a flowchart of a method for detecting drowsiness and alerting a driver using the driver drowsiness detection and alerting system, according to an embodiment of the present invention.
The headings used herein are for organizational purposes only and are not meant to be used to limit the scope of the description or the claims. As used throughout this application, the word "may" is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). Similarly, the words "include", "including", and "includes" mean including but not limited to. To facilitate understanding, like reference numerals have been used, where possible, to designate like elements common to the figures. Optional portions of the figures may be illustrated using dashed or dotted lines, unless the context of usage indicates otherwise.
DETAILED DESCRIPTION
The following description includes the preferred best mode of one embodiment of the present invention. It will be clear from this description of the invention that the invention is not limited to these illustrated embodiments but that the invention also includes a variety of modifications and embodiments thereto. Therefore, the present description should be seen as illustrative and not limiting. While the invention is susceptible to various modifications and alternative constructions, it should be understood that there is no intention to limit the invention to the specific form disclosed; on the contrary, the invention is to cover all modifications, alternative constructions, and equivalents falling within the scope of the invention as defined in the claims.
In any embodiment described herein, the open-ended terms "comprising", "comprises", and the like (which are synonymous with "including", "having", and "characterized by") may be replaced by the respective partially closed phrases "consisting essentially of", "consists essentially of", and the like, or the respective closed phrases "consisting of", "consists of", and the like.
As used herein, the singular forms "a", "an", and "the" designate both the singular and the plural, unless expressly stated to designate the singular only.
FIG. 1 illustrates a driver drowsiness detection and alerting system 100 (hereinafter referred to as the system 100), according to an embodiment of the present invention. In an embodiment of the present invention, the system 100 may be installed and/or retrofitted in a vehicle. Further, the system 100 may be adapted to monitor actions and facial expressions of a driver driving the corresponding vehicle, in an embodiment of the present invention. In an embodiment of the present invention, the system 100 may be adapted to alert the driver when indications of drowsiness, sleepiness, tiredness, and so forth may be detected from the monitored actions and the facial expressions.
According to embodiments of the present invention, the vehicle may be, but not limited to, a passenger vehicle, a private vehicle, a freight carrier, a locomotive, an aerial vehicle, and so forth. Embodiments of the present invention are intended to include or otherwise cover any type of the vehicle in which the system 100 may be installed and/or retrofitted, including known, related art, and/or later developed technologies.
According to embodiments of the present invention, the system 100 may comprise non-limiting elements such as an image capturing unit 102, an image processing unit 104, a controller unit 106, and an alert unit 108.
In an embodiment of the present invention, the image capturing unit 102 may be installed in a cabin of the vehicle. The image capturing unit 102 may be installed in such a location, angle, and orientation that the driver may be in a field of view of the image capturing unit 102, in an embodiment of the present invention. In an embodiment of the present invention, the image capturing unit 102 may be configured to capture a real-time video of the driver. In another embodiment of the present invention, the image capturing unit 102 may be adapted to capture a plurality of images of the driver. The images may be captured with a pre-defined time delay between successive images, in an embodiment of the present invention.
According to other embodiments of the present invention, the image capturing unit 102 may be, but not limited to, a still camera, a video camera, a color balancer camera, a thermal camera, an infrared camera, a telephoto camera, a wide-angle camera, a macro camera, a Closed-Circuit Television (CCTV) camera, a web camera, and so forth. Embodiments of the present invention are intended to include or otherwise cover any type of the image capturing unit 102, including known, related art, and/or later developed technologies.
In an embodiment of the present invention, the captured real-time video of the driver may be transmitted to a central control room (not shown) of a vehicle service provider (not shown). The captured real-time video of the driver transmitted to the central control room may be manually monitored for drowsiness detection. In another embodiment of the present invention, the captured real-time video of the driver may be transmitted to the image processing unit 104.
In an embodiment of the present invention, the image processing unit 104 may be a physical peripheral that may be physically installed with the image capturing unit 102 and may be configured to communicate with the controller unit 106. In another embodiment of the present invention, the image processing unit 104 may be remotely installed and virtually configured on a cloud-based server (not shown). The virtual configuration of the image processing unit 104 may be achieved using means such as, but not limited to, Oracle VirtualBox, a sandbox environment, a VMware Horizon Client, and so forth. Embodiments of the present invention are intended to include or otherwise cover any means for achieving the virtual configuration of the image processing unit 104 over a cloud-based server.
In an embodiment of the present invention, the image processing unit 104 may be configured to receive the captured real-time video of the driver. The image processing unit 104 may further be configured to split the received real-time video into its constituent frames. Further, upon splitting, the image processing unit 104 may identify a presence of a face in one or more of the frames. According to embodiments of the present invention, the presence of the face in one or more of the frames may be identified using algorithms such as, but not limited to, an OpenCV Haar cascade, an OpenCV DNN (Deep Neural Network), a dlib algorithm, a Multi-Task Cascaded Convolutional Neural Network (MTCNN), a FaceNet algorithm, and so forth. Embodiments of the present invention are intended to include or otherwise cover any algorithm for identification of the presence of the face in one or more of the frames of the real-time video, including known, related art, and/or later developed technologies.
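By way of illustration only, the face-presence check described above may be sketched in Python with the stock OpenCV Haar cascade named in the specification; the camera index and cascade file are standard OpenCV defaults, not details fixed by the specification:

```python
# Illustrative only: face-presence check on one frame with OpenCV's stock
# frontal-face Haar cascade. Camera index 0 stands in for the in-cabin
# image capturing unit 102.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)
ret, frame = cap.read()
if ret:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    print(f"Faces found in frame: {len(faces)}")
cap.release()
```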
Further, upon identification of the presence of the face in one or more of the frames of the real-time video, the image processing unit 104 may be configured to mark facial landmarks on the face identified in one or more of the frames, in an embodiment of the present invention. In an embodiment of the present invention, the facial landmarks may be marked on predefined locations of the face identified in one or more of the frames. According to embodiments of the present invention, the locations on the face for marking the facial landmarks may be, but not limited to, eyes, lips, a mouth, a nose, a head, and so forth. Embodiments of the present invention are intended to include or otherwise cover any location on the face, identified in one or more of the frames, for marking the facial landmarks, including known, related art, and/or later developed technologies. According to embodiments of the present invention, the facial landmarks may be marked on the predefined locations of the face using algorithms such as, but not limited to, a FacemarkLBF model, the dlib library, a MediaPipe model, and so forth. Embodiments of the present invention are intended to include or otherwise cover any algorithm for marking the facial landmarks on the predefined locations of the face, including known, related art, and/or later developed technologies.
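A minimal sketch of landmark marking with the dlib library named above; the 68-point predictor file is dlib's standard distribution, an assumed choice rather than a detail of the specification:

```python
# Illustrative only: mark 68 facial landmarks on the first detected face.
# The .dat model file is dlib's standard shape predictor (an assumption).
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def mark_landmarks(gray_frame):
    """Return a list of 68 (x, y) landmark points, or None if no face."""
    faces = detector(gray_frame)
    if not faces:
        return None
    shape = predictor(gray_frame, faces[0])
    return [(shape.part(i).x, shape.part(i).y) for i in range(68)]
```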
Further, after marking the facial landmarks on the face identified in one or more of the frames, the image processing unit 104 may transmit the marked facial landmarks on the face to the controller unit 106.
In an embodiment of the present invention, the controller unit 106 may be communicatively connected to the image processing unit 104. The controller unit 106 may further be configured to execute computer-executable instructions to generate an output relating to the system 100. According to embodiments of the present invention, the controller unit 106 may be, but not limited to, a Programmable Logic Controller (PLC), a microprocessor, a development board, and so forth. Embodiments of the present invention are intended to include or otherwise cover any type of the controller unit 106, including known, related art, and/or later developed technologies. The controller unit 106 is further explained in conjunction with FIG. 2.
In an embodiment of the present invention, the alert unit 108 may be installed in the cabin of the vehicle, in an audio-visual proximity of the driver. The alert unit 108 may be adapted to alert the driver of the corresponding vehicle when the driver exhibits indications of drowsiness, sleepiness, tiredness, and so forth.
Further, the alert unit 108 may comprise a reset button (not shown). The reset button may be adapted to reset and/or deactivate the alert unit 108 after the alert unit 108 has been activated. The reset button may be pressed by the driver after the driver has regained alertness and is paying attention while driving the vehicle.
According to embodiments of the present invention, the alert unit 108 may be, but not limited to, a Light Emitting Diode (LED), a buzzer, a speaker, pneumatically activated vibrators, and so forth. Embodiments of the present invention are intended to include or otherwise cover any type of the alert unit 108, including known, related art, and/or later developed technologies.
FIG. 2 illustrates a block diagram of the controller unit 106 of the system 100, according to an embodiment of the present invention. The controller unit 106 may comprise the computer-executable instructions in the form of programming modules such as a data receiving module 200, a data calculation module 202, a data comparison module 204, a data determination module 206, and an alert module 208.
In an embodiment of the present invention, the data receiving module 200 may be configured to receive the facial landmarks marked on the face identified in the frame from the image processing unit 104. Further, upon receipt of the facial landmarks marked on the face, the data receiving module 200 may transmit the received facial landmarks to the data calculation module 202.
The data calculation module 202 may be activated upon receipt of the facial landmarks from the data receiving module 200. In an embodiment of the present invention, the data calculation module 202 may be configured to calculate an Eye Aspect Ratio (EAR) from the received facial landmarks. The Eye Aspect Ratio (EAR) may be a measure of a ratio of a width to a height of the eyes, and may be represented using equation 1:
Eye Aspect Ratio (EAR) = width of eyes : height of eyes --- (1)
Further, the Eye Aspect Ratio (EAR) may be calculated using equation 2:
Eye Aspect Ratio (EAR) = (width of eyes) / (height of eyes) --- (2)
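A short sketch of equation 2, assuming dlib's 68-point landmark numbering (the specification does not fix a numbering). Note that the widely cited EAR of Soukupová and Čech is the reciprocal form (height over width), which decreases as the eyes close; the code below follows the width-to-height wording of equation 2:

```python
# Sketch of equation 2 under dlib's 68-point numbering (an assumption):
# points 36-41 outline one eye; 36/39 are the corners, the rest the lids.
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def eye_aspect_ratio(pts):
    """EAR = width of eyes / height of eyes, per equation 2."""
    width = dist(pts[36], pts[39])                        # corner to corner
    height = (dist(pts[37], pts[41]) + dist(pts[38], pts[40])) / 2.0
    return width / max(height, 1e-6)                      # avoid divide-by-zero
```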
In an embodiment of the present invention, the data calculation module 202 may be configured to calculate a Mouth Opening Ratio (MOR) from the received facial landmarks. The Mouth Opening Ratio (MOR) may be a measure of how wide the mouth of the driver is opened. The calculation of the Mouth Opening Ratio (MOR) may enable a detection of a yawning action by the driver. The Mouth Opening Ratio (MOR) may be represented using equation 3:
Mouth Opening Ratio (MOR) = total width of mouth from end to end --- (3)
In an embodiment of the present invention, the data calculation module 202 may be configured to calculate a Nose to Lip Ratio (NLR) from the received facial landmarks. The Nose to Lip Ratio (NLR) may be a measure of a distance between the nose and the lips of the driver. The calculation of the Nose to Lip Ratio (NLR) may enable the detection of changes in the facial expressions of the driver. The Nose to Lip Ratio (NLR) may be represented using equation 4:
Nose to Lip Ratio (NLR) = average distance between the nose and the lips --- (4)
Further, the Nose to Lip Ratio (NLR) may be calculated using equation 5:
Nose to Lip Ratio (NLR) = (commissure height + philtrum height) / 2 --- (5)
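A hedged sketch of equations 3 and 5. All landmark indices (mouth corners 48/54, inner lips 62/66, nose base 33, upper-lip midpoint 51) are assumptions from dlib's 68-point convention, and the MOR below normalizes the vertical lip gap by the mouth width so that a yawn changes the value; the verbal definition in equation 3 leaves the exact measure open:

```python
# Sketch of equations 3 and 5; all landmark indices are assumed from
# dlib's 68-point convention, not fixed by the specification.
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def mouth_opening_ratio(pts):
    """MOR: vertical lip gap normalised by mouth width (one interpretation
    of equation 3) so that a yawn raises the value."""
    width = dist(pts[48], pts[54])        # mouth corner to corner
    opening = dist(pts[62], pts[66])      # inner-lip vertical gap
    return opening / max(width, 1e-6)

def nose_to_lip_ratio(pts):
    """NLR = (commissure height + philtrum height) / 2, per equation 5."""
    philtrum = dist(pts[33], pts[51])     # nose base to upper-lip midpoint
    commissure = (dist(pts[33], pts[48]) + dist(pts[33], pts[54])) / 2.0
    return (commissure + philtrum) / 2.0
```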
The data calculation module 202 may further be configured to transmit the Eye Aspect Ratio (EAR), the Mouth Opening Ratio (MOR), and the Nose to Lip Ratio (NLR) to the data comparison module 204.
The data comparison module 204 may be activated upon receipt of the Eye Aspect Ratio (EAR), the Mouth Opening Ratio (MOR), and the Nose to Lip Ratio (NLR) from the data calculation module 202. The data comparison module 204 may be configured to compare the Eye Aspect Ratio (EAR), the Mouth Opening Ratio (MOR), and the Nose to Lip Ratio (NLR) with a first threshold value, a second threshold value, and a third threshold value respectively, in an embodiment of the present invention.
In an embodiment of the present invention, the data comparison module 204 may be configured to calibrate the first threshold value, the second threshold value, and the third threshold value by accessing a set of 300 frames from the real-time video of the driver captured by the image capturing unit 102. The set of 300 frames may be assessed using advanced machine learning techniques. Upon assessment of the 300 frames, the first threshold value, the second threshold value, and the third threshold value are derived.
However, if the first threshold value, the second threshold value, and the third threshold value are unable to detect and flag a drowsiness condition in the driver, then the data comparison module 204 may be configured to recalibrate them by accessing more than 300 frames from the real-time video of the driver captured by the image capturing unit 102. The data comparison module 204 may continue calibrating the threshold values using progressively more frames until threshold values are derived that are able to detect and flag the drowsiness condition in the driver.
Further, the derived first threshold value, second threshold value, and third threshold value may be normalized. The normalization of the first threshold value, the second threshold value, and the third threshold value may enable the system to work for drivers exhibiting a wide variety of facial features and facial structures.
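The specification does not name the calibration algorithm beyond "advanced machine learning techniques"; one plausible, clearly assumed reading is a percentile rule over the ratios observed in the first 300 frames of the presumably alert driver:

```python
# Assumed calibration rule: take a low percentile of each ratio observed
# over >= 300 baseline frames as that ratio's threshold. The percentile
# choice is illustrative; the specification does not fix the technique.
import numpy as np

def calibrate_thresholds(ear_hist, mor_hist, nlr_hist, pct=10):
    """Each *_hist is a sequence of per-frame ratios from the baseline."""
    t1 = float(np.percentile(ear_hist, pct))   # first threshold (EAR)
    t2 = float(np.percentile(mor_hist, pct))   # second threshold (MOR)
    t3 = float(np.percentile(nlr_hist, pct))   # third threshold (NLR)
    return t1, t2, t3
```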
In an embodiment of the present invention, the data comparison module 204 may be configured to compare the calculated Eye Aspect Ratio (EAR) with a first threshold value. Upon comparison, if the calculated Eye Aspect Ratio (EAR) is greater than the first threshold value, then the data comparison module 204 may reactivate the data receiving module 200 to continue receiving the facial landmarks marked on the face identified in the frame from the image processing unit 104.
However, if the calculated Eye Aspect Ratio (EAR) is less than the first threshold value, then the data comparison module 204 may be configured to compare the Mouth Opening Ratio (MOR) to the second threshold value. Upon comparison, if the calculated Mouth Opening Ratio (MOR) is greater than the second threshold value, then the data comparison module 204 may reactivate the data receiving module 200 to continue receiving the facial landmarks marked on the face identified in the frame from the image processing unit 104.
However, if the calculated Mouth Opening Ratio (MOR) is less than the second threshold value, then the data comparison module 204 may be configured to compare the Nose to Lip Ratio (NLR) to the third threshold value. Upon comparison, if the calculated Nose to Lip Ratio (NLR) is greater than the third threshold value, then the data comparison module 204 may reactivate the data receiving module 200 to continue receiving the facial landmarks marked on the face identified in the frame from the image processing unit 104.
However, if the calculated Nose to Lip Ratio (NLR) is less than the third threshold value, then the data comparison module 204 may be configured to transmit an activation signal to the data determination module 206.
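The comparison cascade of the preceding four paragraphs reduces to a short gating function; returning False corresponds to reactivating the data receiving module 200, and returning True to transmitting the activation signal:

```python
# Gating logic of the data comparison module 204, as described above.
def compare(ear, mor, nlr, t1, t2, t3):
    """True when all three ratios fall below their thresholds, i.e. when
    the activation signal should be sent to the determination module."""
    if ear >= t1:    # eyes sufficiently open: keep receiving frames
        return False
    if mor >= t2:    # mouth ratio above threshold: keep receiving frames
        return False
    if nlr >= t3:    # nose-to-lip ratio above threshold: keep receiving
        return False
    return True
```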
The data determination module 206 may be activated upon receipt of the activation signal from the data comparison module 204. The data determination module 206 may be configured to detect and track an orientation of the head of the driver from the facial landmarks received from the data receiving module 200. Moreover, if the head of the driver tilts beyond a threshold angle, then the data determination module 206 may flag the drowsiness condition in the driver.
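The specification leaves the tilt measure and the threshold angle open; a simple assumed proxy is the roll angle of the line through the outer eye corners (dlib indices 36 and 45), with an illustrative 15-degree limit:

```python
# Assumed head-tilt proxy: roll angle of the line through the outer eye
# corners (dlib indices 36 and 45). The 15-degree limit is illustrative.
import math

def head_tilt_exceeds(pts, max_degrees=15.0):
    left, right = pts[36], pts[45]
    roll = math.degrees(math.atan2(right[1] - left[1], right[0] - left[0]))
    return abs(roll) > max_degrees
```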
Upon flagging the drowsiness condition in the driver, the data determination module 206 may be configured to generate and transmit an alert signal to the alert module 208.
The alert module 208 may be activated upon receipt of the alert signal from the data determination module 206. In an embodiment of the present invention, the alert module 208 may be configured to generate the alert. The alert generated by the alert module 208 may be passed to the alert unit 108, which in turn alerts the driver of the corresponding vehicle, in an embodiment of the present invention.
FIG. 3 depicts a flowchart of a method 300 for detecting drowsiness and alerting a driver using the system 100, according to an embodiment of the present invention.
At step 302, the system 100 may receive the facial landmarks marked on the face identified in the frame from the image processing unit 104.
At step 304, the system 100 may calculate the Eye Aspect Ratio (EAR) from the received facial landmarks.
At step 306, the system 100 may compare the calculated Eye Aspect Ratio (EAR) with the first threshold value. Upon comparison, if the calculated Eye Aspect Ratio (EAR) is less than the first threshold value, then the method 300 may proceed to the step 308. Else, the method 300 may revert to the step 302.
At step 308, the system 100 may calculate the Mouth Opening Ratio (MOR) from the received facial landmarks.
At step 310, the system 100 may compare the calculated Mouth Opening Ratio (MOR) with the second threshold value. Upon comparison, if the calculated Mouth Opening Ratio (MOR) is less than the second threshold value, then the method 300 may proceed to the step 312. Else, the method 300 may revert to the step 302.
At step 312, the system 100 may calculate the Nose to Lip Ratio (NLR) from the received facial landmarks.
At step 314, the system 100 may compare the calculated Nose to Lip Ratio (NLR) with the third threshold value. Upon comparison, if the calculated Nose to Lip Ratio (NLR) is less than the third threshold value, then the method 300 may proceed to the step 316. Else, the method 300 may revert to the step 302.
At step 316, the system 100 may determine the drowsiness condition of the driver.
At step 318, the system 100 may generate the alert.
At step 320, the system 100 may transmit the generated alert to the alert unit 108.
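For orientation only, the steps of method 300 can be composed into one loop using the hypothetical helpers sketched earlier (mark_landmarks, eye_aspect_ratio, mouth_opening_ratio, nose_to_lip_ratio, compare, head_tilt_exceeds); this is a sketch of the flow under those assumptions, not the patented implementation:

```python
# Sketch of method 300 as one loop, composing the assumed helpers from the
# earlier sketches. cap is an opened cv2.VideoCapture; t1, t2, t3 come from
# calibrate_thresholds(). Ratios are computed eagerly here for brevity,
# whereas the flowchart computes MOR and NLR only after the EAR check.
import cv2

def run(cap, t1, t2, t3):
    while True:
        ret, frame = cap.read()                        # step 302 (frames in)
        if not ret:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        pts = mark_landmarks(gray)
        if pts is None:
            continue
        ear = eye_aspect_ratio(pts)                    # step 304
        mor = mouth_opening_ratio(pts)                 # step 308
        nlr = nose_to_lip_ratio(pts)                   # step 312
        if compare(ear, mor, nlr, t1, t2, t3) and head_tilt_exceeds(pts):
            print("ALERT: drowsiness condition")       # steps 316-320
```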
While the invention has been described in connection with what is presently considered to be the most practical and various embodiments, it is to be understood that the invention is not to be limited to the disclosed embodiments, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims.
This written description uses examples to disclose the invention, including the best mode, and also to enable any person skilled in the art to practice the invention, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the invention is defined in the claims and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal language of the claims.
Claims:
CLAIMS
I/We Claim:
1. A driver drowsiness detection and alerting system (100), the system (100) comprising:
an image capturing unit (102), arranged in a visual proximity of a driver of a vehicle, and adapted to capture a real-time video of the driver;
an image processing unit (104), adapted to receive the captured real-time video of the driver, and configured to:
identify a presence of a face in a frame of the received real-time video; and
mark facial landmarks on the face identified in the frame; and
a controller unit (106) communicatively connected to the image processing unit (104), characterized in that the controller unit (106) is configured to:
receive the facial landmarks marked on the face identified in the frame from the image processing unit (104);
calculate an Eye Aspect Ratio (EAR) from the received facial landmarks, wherein the Eye Aspect Ratio (EAR) measures a ratio of a width to a height of eyes;
compare the calculated Eye Aspect Ratio (EAR) with a first threshold value;
calculate a Mouth Opening Ratio (MOR) and a Nose to Lip Ratio (NLR), when the calculated Eye Aspect Ratio (EAR) is less than the first threshold value;
determine a drowsiness condition when the calculated Mouth Opening Ratio (MOR) and the Nose to Lip Ratio (NLR) are less than a second threshold value and a third threshold value, respectively; and
generate an alert, upon detecting the drowsiness condition.
2. The system (100) as claimed in claim 1, wherein the controller unit (106) is configured to calibrate the first threshold value, the second threshold value, and the third threshold value by accessing a set of 300 frames of the real-time video captured by the image capturing unit (102) using advanced machine learning algorithms.
3. The system (100) as claimed in claim 1, wherein the controller unit (106) is configured to transmit the generated alert to an alert unit (108).
4. The system (100) as claimed in claim 1, wherein the controller unit (106) is configured to detect an orientation of the head of the driver to detect drowsiness of the driver.
5. The system (100) as claimed in claim 1, wherein an alert unit (108) is installed in a cabin of the vehicle in an audio-visual proximity of the driver.
6. The system (100) as claimed in claim 1, wherein an alert unit (108) is selected from a Light Emitting Diode (LED), a buzzer, a speaker, pneumatically activated vibrators, or a combination thereof.
7. The system (100) as claimed in claim 1, wherein the Mouth Opening Ratio (MOR) is a measure of how wide a mouth of the driver is opened, to detect a yawning action by the driver.
8. The system (100) as claimed in claim 1, wherein the Nose to Lip Ratio (NLR) is a measure of a distance between a nose and lips of the driver to detect changes in facial expressions of the driver.
9. A method (300) for detecting drowsiness and alerting a driver using a driver drowsiness detection and alerting system (100), the method (300) is characterized by steps of:
receiving facial landmarks marked on a face identified in a frame from an image processing unit (104);
calculating an Eye Aspect Ratio (EAR) from the received facial landmarks, wherein the Eye Aspect Ratio (EAR) measures a ratio of a width to a height of eyes;
comparing the calculated Eye Aspect Ratio (EAR) with a first threshold value;
calculating a Mouth Opening Ratio (MOR) and a Nose to Lip Ratio (NLR), when the calculated Eye Aspect Ratio (EAR) is less than the first threshold value;
determining a drowsiness condition when the calculated Mouth Opening Ratio (MOR) and the Nose to Lip Ratio (NLR) are less than a second threshold value and a third threshold value; and
generating an alert, upon detecting the drowsiness condition.
10. The method (300) as claimed in claim 9, comprising a step of transmitting the generated alert to an alert unit (108).
Date: November 4, 2024
Place: Noida

Nainsi Rastogi
Patent Agent (IN/PA-2372)
Agent for the Applicant

Documents

Name | Date
202441084632-DECLARATION OF INVENTORSHIP (FORM 5) [05-11-2024(online)].pdf | 05/11/2024
202441084632-DRAWINGS [05-11-2024(online)].pdf | 05/11/2024
202441084632-EDUCATIONAL INSTITUTION(S) [05-11-2024(online)].pdf | 05/11/2024
202441084632-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [05-11-2024(online)].pdf | 05/11/2024
202441084632-FORM 1 [05-11-2024(online)].pdf | 05/11/2024
202441084632-FORM FOR SMALL ENTITY(FORM-28) [05-11-2024(online)].pdf | 05/11/2024
202441084632-FORM-9 [05-11-2024(online)].pdf | 05/11/2024
202441084632-OTHERS [05-11-2024(online)].pdf | 05/11/2024
202441084632-POWER OF AUTHORITY [05-11-2024(online)].pdf | 05/11/2024
202441084632-REQUEST FOR EARLY PUBLICATION(FORM-9) [05-11-2024(online)].pdf | 05/11/2024
