SMART NIGHT VISION SYSTEM FOR PROVIDING DRIVING ASSISTANCE TO USERS AND THE METHOD THEREOF
ORDINARY APPLICATION
Published
Filed on 14 November 2024
Abstract
The present disclosure relates to a smart night vision system for providing driving assistance to users and the method thereof. The system (102) includes an image-capturing unit (108), a display unit (112), processors (104), and a memory (106). The memory (106) includes instructions for the processors (104) to receive a real-time video stream of the road surface and surroundings. Using deep learning models, the processors (104) extract image frames from the stream and compare them to a pre-stored dataset of various weather and lighting conditions. Based on these comparisons, the system (102) identifies and classifies the road condition such as low-light, foggy, dusty, or rainy. The processors (104) then apply an image processing model to enhance the frames according to the identified condition. The enhanced video stream is displayed in real-time, providing improved visibility of the road surface and surroundings, aiding driver navigation and safety during vehicle operation.
Patent Information
Field | Value |
---|---|
Application ID | 202441088222 |
Invention Field | COMPUTER SCIENCE |
Date of Application | 14/11/2024 |
Publication Number | 47/2024 |
Inventors
Name | Address | Country | Nationality |
---|---|---|---|
ARUN BALAJI M | Research Scholar, School of Mechanical Engineering (SMEC), Vellore Institute of Technology, Chennai, Vandalur - Kelambakkam Road, Chennai, Tamil Nadu - 600127, India. | India | India |
ANOOP P P | Research Scholar, School of Mechanical Engineering (SMEC), Vellore Institute of Technology, Chennai, Vandalur - Kelambakkam Road, Chennai, Tamil Nadu - 600127, India. | India | India |
DEIVANATHAN R | Professor, School of Mechanical Engineering (SMEC), Vellore Institute of Technology, Chennai, Vandalur - Kelambakkam Road, Chennai, Tamil Nadu - 600127, India. | India | India |
SUGUMARAN V | Professor, School of Mechanical Engineering (SMEC), Vellore Institute of Technology, Chennai, Vandalur - Kelambakkam Road, Chennai, Tamil Nadu - 600127, India. | India | India |
Applicants
Name | Address | Country | Nationality |
---|---|---|---|
VELLORE INSTITUTE OF TECHNOLOGY, CHENNAI | Vandalur - Kelambakkam Road, Chennai, Tamil Nadu - 600127, India. | India | India |
Specification
Description: TECHNICAL FIELD
[0001] The present disclosure relates to the field of deep learning-based image classification systems. More precisely, the present disclosure relates to a smart night vision system for providing driving assistance to users and the method thereof. The system enhances the visibility of road surfaces and surroundings by identifying road conditions using deep learning models.
BACKGROUND
[0002] The following description of the related art is intended to provide background information pertaining to the field of the present disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is used only to enhance the understanding of the reader with respect to the present disclosure, and not as an admission of prior art.
[0003] Maintaining clear visibility in adverse weather and low-light conditions remains a significant challenge for vehicle safety systems, as fog, rain, dust, and low-light scenarios obscure road surfaces and surroundings, complicating a driver's ability to navigate safely. Although advanced technologies like thermal imaging and LIDAR have been introduced to enhance visibility, their high costs and complex setups make them less accessible for the average consumer. While infrared (IR) and thermal imaging help detect heat signatures, they often lack the resolution needed to identify road features and obstacles accurately, which is crucial for precise driver assistance.
[0004] Additional visibility challenges arise from high-beam headlights from oncoming vehicles, creating glare that impairs driver vision. Although existing vehicle systems attempt to enhance image clarity, many lack real-time processing capabilities, resulting in delays that prevent drivers from reacting quickly to road hazards. Furthermore, these existing systems often operate independently without seamless integration with other vehicle components, reducing their effectiveness in supporting drivers and limiting situational awareness.
[0005] Various low-illumination image enhancement and vehicle recognition systems aim to improve visibility in challenging lighting conditions, yet they typically focus on specific tasks like vehicle detection or aerial imaging and may not support real-time performance in vehicle-mounted environments. For example, some deep learning-based solutions use techniques like Conditional Generative Adversarial Networks (CGAN) for low-light detection, but they often lack the panoramic views necessary for comprehensive awareness. While some systems enhance specific image types, such as grayscale or license plate images, they often rely on single-spectrum imaging, which does not fully exploit the benefits of dual-spectrum imaging for clearer, multi-dimensional visibility. Fusion imaging techniques that combine IR and low-light visible images do provide improved target detection but can encounter alignment and environmental challenges, impacting their effectiveness in real-world, multi-condition vehicle operations.
[0006] Additionally, many current solutions lack wide-angle or panoramic views, essential for a complete field of vision around the vehicle. Some technologies that aim to address this through wide-angle vision systems rely on head-mounted displays, which may not integrate well with vehicle display systems. Others use ambient luminance adjustments to enhance image display, yet depend on sensor accuracy, which can affect image quality in variable lighting conditions. These limitations underscore the need for a comprehensive, integrated solution that delivers real-time video processing through dehazing, low-light enhancement, and gamma correction to optimize visibility in adverse conditions. Such a solution would significantly improve situational awareness and safety across a range of challenging driving environments.
[0007] There is, therefore, a need in the art to provide a system and method that can overcome the shortcomings of the existing prior arts.
OBJECTS OF THE PRESENT DISCLOSURE
[0008] Some of the objects of the present disclosure, which at least one embodiment herein satisfies, are listed herein below.
[0009] It is an object of the present disclosure to provide a smart night vision system for providing driving assistance to users and the method thereof.
[00010] It is another object of the present disclosure to provide a smart night vision system for providing driving assistance to users and the method thereof, which improves visibility in adverse conditions using advanced image processing models such as dehazing, low-light enhancement, and gamma correction.
[00011] It is another object of the present disclosure to provide a smart night vision system for providing driving assistance to users and the method thereof, which facilitates an affordable solution by employing cost-effective components, like a Raspberry Pi and standard cameras, making it accessible to a wide range of vehicles.
[00012] It is another object of the present disclosure to provide a smart night vision system for providing driving assistance to users and the method thereof, which enhances image clarity with deep learning-based image classification models and advanced enhancement techniques, ensuring high-detail, clear visuals for drivers.
[00013] It is another object of the present disclosure to provide a smart night vision system for providing driving assistance to users and the method thereof, which mitigates glare from high-beam headlights while preserving clarity of road boundaries and other critical visual elements.
[00014] It is another object of the present disclosure to provide a smart night vision system for providing driving assistance to users and the method thereof, which processes video feeds in real-time, enabling immediate display of enhanced images and improving driver reaction times for increased safety.
SUMMARY
[00015] This summary is provided to introduce certain objects and aspects of the present disclosure in a simplified form that are further described below in the detailed description. This summary is not intended to identify the key features or the scope of the claimed subject matter.
[00016] An aspect of the present disclosure relates to a smart night vision system for providing driving assistance to users and the method thereof. The system can include an image-capturing unit embedded with infrared sensors, a display unit, processors, and a memory coupled to the processors, said memory having instructions executable by the processors to receive a real-time video stream captured by the image-capturing unit, where the real-time video stream can include an entire view of a road surface and surroundings. The processors can extract image frames from the real-time video stream using deep learning models. The processors can compare the extracted image frames with a pre-stored dataset stored in a database using the deep learning models. The pre-stored dataset can include pre-stored images of weather conditions and lighting conditions. The processors can identify a road condition based on the compared image frames and classify the compared image frames based on the identified road condition using the deep learning models. The road condition can include dark or low-light conditions, foggy or dusty conditions, or a rain condition. The processors can enhance the classified image frames based on the identified road condition using an image processing model. The processors can display an enhanced real-time video on the display unit based on the enhanced image frames, where the enhanced real-time video improves visibility of the road surface and the surroundings, thereby providing driving assistance to a user during a vehicle operation.
[00017] In an aspect, the present disclosure relates to a method for providing driving assistance to users using a smart night vision system. The method includes the steps of receiving a real-time video stream captured by at least one image-capturing unit embedded with a plurality of infrared sensors, wherein the real-time video stream comprises an entire view of a road surface and a plurality of surroundings. The method includes the steps of extracting one or more image frames from the real-time video stream using a plurality of deep learning models. The plurality of image frames refers to individual, static images that are sequentially captured to form a continuous video stream. The method includes the steps of comparing the one or more extracted image frames with a pre-stored dataset stored in a database, where the pre-stored dataset can include a plurality of pre-stored images of a plurality of weather conditions and lighting conditions. The method includes the steps of identifying at least one road condition based on the one or more compared image frames and classifying the one or more compared image frames based on the at least one identified road condition, where the at least one road condition can include at least one of dark or low-light conditions, foggy or dusty conditions, or a rain condition. The method includes the steps of enhancing the one or more classified image frames based on the at least one identified road condition using at least one image processing model. The method includes the steps of displaying an enhanced real-time video on a display unit based on the one or more enhanced image frames, where the enhanced real-time video improves visibility of the road surface and the plurality of surroundings, thereby providing driving assistance to at least one user during vehicle operation.
[00018] Various objects, features, aspects, and advantages of the present disclosure will become more apparent from the following detailed description of preferred embodiments, along with the accompanying drawing figures in which numerals represent like features.
[00019] Within the scope of this application, it is expressly envisaged that the various aspects, embodiments, examples, and alternatives set out in the preceding paragraphs, in the claims and/or in the following description and drawings, and in particular the individual features thereof, may be taken independently or in any combination. Features described in connection with one embodiment are applicable to all embodiments, unless such features are incompatible.
BRIEF DESCRIPTION OF THE DRAWINGS
[00020] In the figures, similar components and/or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label with a second label that distinguishes among the similar components. If only the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label.
[00021] FIG. 1 illustrates a block diagram of the proposed smart night vision system for providing driving assistance to users and the method thereof, in accordance with an embodiment of the present disclosure.
[00022] FIG. 2 illustrates an exemplary representation of the system, in accordance with an embodiment of the present disclosure.
[00023] FIGs. 3A-3B illustrate exemplary representations of the system (102) implemented or installed within a vehicle, in accordance with an embodiment of the present disclosure.
[00024] FIG. 4 illustrates a flow diagram illustrating a method for providing driving assistance to users using a smart night vision system, in accordance with an embodiment of the present disclosure.
DETAILED DESCRIPTION
[00025] The following is a detailed description of embodiments of the disclosure depicted in the accompanying drawings. The embodiments are in such detail as to clearly communicate the disclosure. However, the amount of detail offered is not intended to limit the anticipated variations of embodiments; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the scope of the present disclosure as defined by the appended claims.
[00026] In the following description, numerous specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent to one skilled in the art that embodiments of the present disclosure may be practiced without some of these specific details.
[00027] An aspect of the present disclosure relates to a smart night vision system for providing driving assistance to users and the method thereof. The system includes at least one image-capturing unit embedded with a plurality of infrared sensors, a display unit, one or more processors, at least one memory coupled to the one or more processors, said memory having instructions executable by the one or more processors to receive a real-time video stream captured by the at least one image-capturing unit embedded with the plurality of infrared sensors, where the real-time video stream includes an entire view of a road surface and a plurality of surroundings. The one or more processors can extract one or more image frames from the real-time video stream using a plurality of deep learning models. The one or more processors can compare the one or more extracted image frames with a pre-stored dataset stored in a database using the plurality of deep learning models. The pre-stored dataset can include a plurality of pre-stored images of a plurality of weather conditions and lighting conditions. The one or more processors can identify at least one road condition based on the one or more compared image frames and classify the one or more compared image frames based on the at least one identified road condition using the plurality of deep learning models, where the at least one road condition can include at least one of dark or low-light conditions, foggy or dusty conditions, or a rain condition. The one or more processors can enhance the one or more classified image frames based on the at least one identified road condition using at least one image processing model. The one or more processors can display an enhanced real-time video on the display unit based on the one or more enhanced image frames where the enhanced real-time video improves visibility of the road surface and the surroundings, thereby providing driving assistance to a user during a vehicle operation.
[00028] FIG. 1 illustrates a block diagram of the smart night vision system (102) for providing driving assistance to users and the method thereof, in accordance with an embodiment of the present disclosure.
[00029] In an embodiment, the system (102) can include one or more processors (104), a memory (106), at least one image capturing unit (108) embedded with a plurality of infrared sensors (110), and a display unit (112). The system (102) pertains to a night vision safety system or a vehicle visibility enhancement system that can be implemented or integrated within a vehicle to assist the at least one user during the vehicle operation. The vehicle may include, but is not limited to, a car, autonomous driving systems, and the like. The at least one user may include, but is not limited to, an individual, a driver, an occupant, and the like.
[00030] In an embodiment, the one or more processor(s) (104) may be implemented as one or more microprocessors, microcomputers, microcontrollers, edge or fog microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that process data based on operational instructions. Among other capabilities, one or more processor(s) (104) may be configured to fetch and execute computer-readable instructions stored in the memory (106) of the system (102). The memory (106) may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory (106) may include any non-transitory storage device including, for example, volatile memory such as Random Access Memory (RAM), or non-volatile memory such as Erasable Programmable Read-Only Memory (EPROM), flash memory, and the like.
[00031] In an embodiment, the one or more processor(s) (104) may be configured to receive a real-time video stream captured by the at least one image-capturing unit (108) embedded with the plurality of infrared sensors (110), where the real-time video stream can include an entire view of a road surface and a plurality of surroundings as captured through the at least one image-capturing unit (108) embedded with the plurality of infrared sensors (110). The road surface and the plurality of surroundings can include road boundaries, signs, potholes, or obstacles.
[00032] In an embodiment, the one or more processor(s) (104) may be configured to extract one or more image frames from the real-time video stream using a plurality of deep learning models. The plurality of deep learning models can include convolutional neural networks (CNNs) trained to classify the road condition in real-time using a pre-stored dataset. The one or more processor(s) (104) may be configured to compare the one or more extracted image frames with a pre-stored dataset stored in a database (refer FIG. 2) using the plurality of deep learning models, where the pre-stored dataset can include a plurality of pre-stored images of a plurality of weather conditions and lighting conditions.
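By way of illustration only, the following is a minimal sketch (not the disclosed implementation) of this frame-extraction and classification step: a frame is read from the video stream and passed to a CNN classifier. The TorchScript file name, the 224x224 input size, and the class labels are assumptions.

```python
import cv2
import torch
import torchvision.transforms as T

CLASSES = ["low_light", "foggy_or_dusty", "rainy", "clear"]   # assumed labels

model = torch.jit.load("road_condition_cnn.pt")  # hypothetical trained classifier
model.eval()

preprocess = T.Compose([
    T.ToPILImage(),
    T.Resize((224, 224)),
    T.ToTensor(),
])

def classify_frame(frame_bgr):
    """Return the predicted road condition for a single BGR frame."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    batch = preprocess(rgb).unsqueeze(0)
    with torch.no_grad():
        logits = model(batch)
    return CLASSES[int(logits.argmax(dim=1))]

# Usage: grab one frame from the image-capturing unit and classify it.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print(classify_frame(frame))
cap.release()
```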
[00033] In an embodiment, the one or more processor(s) (104) may be configured to identify at least one road condition based on the one or more compared image frames and classify the one or more compared image frames based on the at least one identified road condition using the plurality of deep learning models, where the at least one road condition can include at least one of dark or low-light conditions, foggy or dusty conditions, or a rain condition.
[00034] In an embodiment, the one or more processor(s) (104) may be configured to enhance the one or more classified image frames based on the at least one identified road condition using at least one image processing model. The one or more processors can be configured to identify the dark or low-light condition based on the one or more compared image frames using the plurality of deep learning models. The one or more processors (104) can be configured to classify the one or more compared image frames based on the identified dark or low-light conditions and implement a low-light image enhancement model to enhance at least one of brightness or contrast of the one or more classified image frames to generate the enhanced real-time video.
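As a purely illustrative sketch of such brightness and contrast enhancement, a classical approach (a brightness gain followed by CLAHE on the luminance channel) is shown below; the disclosure's low-light image enhancement model may differ, and the gain and CLAHE parameters are assumptions.

```python
import cv2

def enhance_low_light(frame_bgr, gain=1.6):
    """Boost brightness with a gain, then restore local contrast with CLAHE."""
    brightened = cv2.convertScaleAbs(frame_bgr, alpha=gain, beta=10)
    lab = cv2.cvtColor(brightened, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    l = clahe.apply(l)                      # equalize only the lightness channel
    return cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)
```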
[00035] In another embodiment, the one or more processors (104) can be configured to identify the dark or low-light condition based on the one or more compared image frames using the plurality of deep learning models. The one or more processors (104) can be configured to classify the one or more compared image frames based on the identified dark or low-light conditions and implement a gamma correction and high-intensity pixel mapping model to regulate gamma values and pixel intensity of the one or more classified image frames to generate the enhanced real-time video. The gamma correction and high-intensity pixel mapping model can be configured to reduce glare from headlights and improve the visibility of the road surface and the plurality of surroundings during the vehicle operation.
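A minimal sketch of gamma correction combined with a simple high-intensity pixel mapping is given below for illustration only; the gamma value, glare threshold, and cap are assumed parameters rather than values from the disclosure.

```python
import cv2
import numpy as np

def gamma_glare_correction(frame_bgr, gamma=1.8, glare_threshold=240, glare_cap=215):
    """Brighten midtones with a gamma curve, then compress near-saturated glare pixels."""
    # Lookup table implementing the gamma curve for 8-bit pixels.
    lut = np.array([(i / 255.0) ** (1.0 / gamma) * 255 for i in range(256)],
                   dtype=np.uint8)
    corrected = cv2.LUT(frame_bgr, lut)
    # Remap very bright pixels (typically oncoming headlights) down to a cap.
    corrected[corrected >= glare_threshold] = glare_cap
    return corrected
```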
[00036] In an embodiment, the one or more processors (104) can be configured to identify the foggy or dusty conditions based on the one or more compared image frames using the plurality of deep learning models. The one or more processors (104) can be configured to classify the one or more compared image frames based on the identified foggy or dusty conditions and implement a dehazing model to remove at least one of dust or fog from the one or more classified image frames to generate the enhanced real-time video.
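As an illustrative stand-in for the dehazing model, the sketch below uses the well-known dark channel prior for single-image dehazing; the patch size and other parameters are assumptions, and the actual model in the disclosure may be learning-based.

```python
import cv2
import numpy as np

def dehaze(frame_bgr, patch=15, omega=0.95, t_min=0.1):
    """Dark-channel-prior dehazing: estimate airlight and transmission, recover the scene."""
    img = frame_bgr.astype(np.float64) / 255.0
    kernel = np.ones((patch, patch), np.uint8)
    dark = cv2.erode(img.min(axis=2), kernel)                    # dark channel
    # Atmospheric light: per-channel max over the brightest dark-channel pixels.
    atmos = img.reshape(-1, 3)[dark.ravel().argsort()[-100:]].max(axis=0)
    transmission = 1.0 - omega * cv2.erode((img / atmos).min(axis=2), kernel)
    transmission = np.clip(transmission, t_min, 1.0)[..., None]
    recovered = (img - atmos) / transmission + atmos
    return (np.clip(recovered, 0.0, 1.0) * 255).astype(np.uint8)
```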
[00037] In an embodiment, the one or more processors (104) can be configured to identify the rain condition based on the one or more compared image frames using the plurality of deep learning models. The one or more processors can be configured to classify the one or more compared image frames based on the identified rain condition and implement a rain removal model that detects and minimizes rain streaks from the one or more classified image frames to generate the enhanced real-time video.
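The disclosure's rain removal model is described as detecting and minimizing rain streaks; as a purely illustrative classical stand-in, the sketch below suppresses thin, bright, roughly vertical streaks with a morphological opening and inpaints the detected streak pixels. The kernel length and threshold are assumptions.

```python
import cv2
import numpy as np

def remove_rain_streaks(frame_bgr, streak_len=9, thresh=30):
    """Detect thin vertical streaks via a horizontal opening and inpaint them."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (streak_len, 1))
    # Opening with a horizontal kernel removes bright structures narrower than
    # streak_len, so the difference image highlights thin vertical rain streaks.
    opened = cv2.morphologyEx(gray, cv2.MORPH_OPEN, kernel)
    streaks = cv2.subtract(gray, opened)
    mask = (streaks > thresh).astype(np.uint8)
    return cv2.inpaint(frame_bgr, mask, 3, cv2.INPAINT_TELEA)
```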
[00038] In an embodiment, the one or more processors (104) can be configured to utilize a U-Net architecture model to detect and segment road boundaries and edges from the one or more extracted image frames, thereby enabling precise identification of road features such as lane markings and road boundaries in real-time. The U-Net architecture model can be configured to highlight the detected road boundaries and edges with a green line and display the enhanced real-time video with the green line on the display unit (112).
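A minimal sketch of overlaying a U-Net road mask as a green boundary line is shown below; the TorchScript file name "unet_road.pt" and the 256x256 input size are hypothetical.

```python
import cv2
import numpy as np
import torch

unet = torch.jit.load("unet_road.pt")      # hypothetical trained U-Net
unet.eval()

def overlay_road_boundary(frame_bgr):
    """Segment the road, extract its boundary, and draw it in green on the frame."""
    inp = torch.from_numpy(cv2.resize(frame_bgr, (256, 256))).permute(2, 0, 1)
    inp = inp.unsqueeze(0).float() / 255.0
    with torch.no_grad():
        mask = (unet(inp).sigmoid() > 0.5).squeeze().numpy().astype(np.uint8)
    mask = cv2.resize(mask, (frame_bgr.shape[1], frame_bgr.shape[0]),
                      interpolation=cv2.INTER_NEAREST)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    out = frame_bgr.copy()
    cv2.drawContours(out, contours, -1, (0, 255, 0), 2)          # green guide line
    return out
```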
[00039] In an embodiment, the one or more processor(s) (104) may be configured to display an enhanced real-time video on the display unit (112) based on the one or more enhanced image frames, where the enhanced real-time video improves visibility of the road surface and the plurality of surroundings, thereby providing driving assistance to the at least one user during the vehicle operation.
[00040] FIG. 2 illustrates an exemplary representation of the system, in accordance with an embodiment of the present disclosure.
[00041] In an aspect, referring to FIG. 2, the system (102) may include one or more processor(s) (104). The one or more processor(s) (104) may be implemented as one or more microprocessors, microcomputers, microcontrollers, edge or fog microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that process data based on operational instructions. Among other capabilities, one or more processor(s) (104) may be configured to fetch and execute computer-readable instructions stored in the memory (106) of the system (102). The memory (106) may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory (106) may include any non-transitory storage device including, for example, volatile memory such as Random Access Memory (RAM), or non-volatile memory such as Erasable Programmable Read-Only Memory (EPROM), flash memory, and the like.
[00042] Referring to FIG. 2, the system (102) may include an interface(s) (206). The interface(s) (206) may include a variety of interfaces, for example, interfaces for data input and output devices, referred to as I/O devices, storage devices, and the like. The interface(s) (206) may facilitate communication to/from the system (102). The interface(s) (206) may also provide a communication pathway for one or more components of the system (102). Examples of such components include but are not limited to, processing unit/engine(s) (208) and a local database (210).
[00043] In an embodiment, the processing unit/engine(s) (208) may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing engine(s) (208). In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processing engine(s) (208) may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processing engine(s) (208) may include a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the processing engine(s) (208). In such examples, the system (102) may include the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the system (102) and the processing resource. In other examples, the processing engine(s) (208) may be implemented by electronic circuitry.
[00044] In an embodiment, the local database (210) may include data that may be either stored or generated as a result of functionalities implemented by any of the components of the processor (104) or the processing engines (208). In an embodiment, the local database (210) may be separate from the system (102).
[00045] In an exemplary embodiment, the processing engine (208) may include one or more engines selected from any of a receiving module (212), an extracting module (214), a comparing module (216), an identifying module (218), a classifying module (220), an enhancing module (222), a displaying module (224), and other modules (226) having functions that may include, but are not limited to, testing, storage, and peripheral functions, such as a wireless communication unit for remote operation, an audio unit for alerts, and the like.
[00046] In an embodiment, the system (102) can include the receiving module (212) which may be configured to receive a real-time video stream captured by the at least one image-capturing unit (108) embedded with the plurality of infrared sensors (110), where the real-time video stream can include an entire view of a road surface and a plurality of surroundings as captured through the at least one image-capturing unit (108) embedded with the plurality of infrared sensors (110). The road surface and the plurality of surroundings can include road boundaries, signs, potholes, or obstacles.
[00047] In an embodiment, the system (102) can include the extracting module (214) which may be configured to extract one or more image frames from the real-time video stream using a plurality of deep learning models. The plurality of deep learning models can include convolutional neural networks (CNNs) trained to classify the road condition in real-time using a pre-stored dataset.
[00048] In an embodiment, the system (102) can include the comparing module (216) which may be configured to compare the one or more extracted image frames with a pre-stored dataset stored in the database (210) using the plurality of deep learning models, where the pre-stored dataset can include a plurality of pre-stored images of a plurality of weather conditions and lighting conditions.
[00049] In an embodiment, the system (102) can include the identifying module (218) which may be configured to identify at least one road condition based on the one or more compared image frames.
[00050] In an embodiment, the system (102) can include the classifying module (220) which may be configured to classify the one or more compared image frames based on the at least one identified road condition using the plurality of deep learning models, where the at least one road condition can include at least one of dark or low-light conditions, foggy or dusty conditions, or a rain condition.
[00051] In an embodiment, the system (102) can include the enhancing module (222) which may be configured to enhance the one or more classified image frames based on the at least one identified road condition using at least one image processing model.
[00052] In an embodiment, the identifying module (218) can be configured to identify the dark or low-light condition based on the one or more compared image frames using the plurality of deep learning models. The classifying module (220) can be configured to classify the one or more compared image frames based on the identified dark or low-light conditions and implement a low-light image enhancement model to enhance at least one of brightness or contrast of the one or more classified image frames to generate the enhanced real-time video.
[00053] In another embodiment, the identifying module (218) can be configured to identify the dark or low-light condition based on the one or more compared image frames using the plurality of deep learning models. The classifying module (220) can be configured to classify the one or more compared image frames based on the identified dark or low-light conditions and implement a gamma correction and high-intensity pixel mapping model to regulate gamma values and pixel intensity of the one or more classified image frames to generate the enhanced real-time video. The gamma correction and high-intensity pixel mapping model can be configured to reduce glare from headlights and improve the visibility of the road surface and the plurality of surroundings during the vehicle operation.
[00054] In an embodiment, the identifying module (218) can be configured to identify the foggy or dusty conditions based on the one or more compared image frames using the plurality of deep learning models. The classifying module (220) can be configured to classify the one or more compared image frames based on the identified foggy or dusty conditions and implement a dehazing model to remove at least one of dust or fog from the one or more classified image frames to generate the enhanced real-time video.
[00055] In an embodiment, the identifying module (218) can be configured to identify the rain condition based on the one or more compared image frames using the plurality of deep learning models. The classifying module (220) can be configured to classify the one or more compared image frames based on the identified rain condition and implement a rain removal model that detects and minimizes rain streaks from the one or more classified image frames to generate the enhanced real-time video.
[00056] In an embodiment, the one or more processors (104) can be configured to utilize a U-Net architecture model to detect and segment road boundaries and edges from the one or more extracted image frames, thereby enabling precise identification of road features such as lane markings and road boundaries in real-time. The U-Net architecture model can be configured to highlight the detected road boundaries and edges with a green line and display the enhanced real-time video with the green line on the display unit (112).
[00057] In an embodiment, the one or more processor(s) (104) may be configured to display an enhanced real-time video on the display unit (112) based on the one or more enhanced image frames, where the enhanced real-time video improves visibility of the road surface and the plurality of surroundings, thereby providing driving assistance to the at least one user during a vehicle operation.
[00058] FIGs. 3A-3B illustrate exemplary representations (300a) and (300b) of the system (102) implemented or integrated within a vehicle, in accordance with an embodiment of the present disclosure.
[00059] In an embodiment, the system (102) is implemented or integrated within a vehicle (302). The system (102) may be configured to improve driver visibility and situational awareness in challenging environmental conditions through an integrated setup that combines hardware components like the one or more processors (for example, a Raspberry Pi) (104), a high-resolution camera (108), and a vehicle-mounted display (112) with advanced software-driven image processing techniques. In the following description, the one or more processors (104) are referred to as the Raspberry Pi, the image-capturing unit (108) as the high-resolution camera, and the display unit (112) as the vehicle-mounted display.
[00060] The Raspberry Pi serves as the central processing unit (CPU) of the system (102), handling image processing, classification, and enhancement tasks in real-time. This compact and cost-effective microcomputer is chosen for its compatibility with the system's components and ability to support deep learning algorithms. The Raspberry Pi processes incoming video feeds from the camera, applies necessary enhancements, and delivers the improved images to the display. The Raspberry Pi connects to the camera through a Camera Serial Interface (CSI) and to the display through HDMI, allowing seamless communication between hardware components.
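A minimal sketch of this capture-enhance-display loop on a Raspberry Pi-class board is shown below; the camera index, window name, and the placeholder enhance_frame function are assumptions, and the actual system would plug its classification and enhancement chain into that placeholder.

```python
import cv2

def enhance_frame(frame):
    # Placeholder for the classification + condition-specific enhancement chain.
    return frame

cap = cv2.VideoCapture(0)                          # CSI/USB camera feeding the system
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("night vision display", enhance_frame(frame))  # HDMI-connected display
    if cv2.waitKey(1) & 0xFF == ord("q"):          # press 'q' to stop the demo loop
        break

cap.release()
cv2.destroyAllWindows()
```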
[00061] The high-resolution camera captures real-time video feeds of the road ahead, even in low-light and adverse weather conditions. The camera, capable of capturing detailed images in low-light scenarios, continuously records the road environment, providing essential data for the system's image processing. In some configurations, the camera may include infrared capabilities for visibility in complete darkness, as well as infrared sensors for detecting environmental factors like fog and rain. The processed video feed is displayed on the vehicle-mounted display that is typically integrated into the dashboard or infotainment system. The display can show an enhanced view of the road, improving driver visibility by adjusting for adverse conditions in real-time. The display's placement ensures it is easily visible, allowing the driver to maintain situational awareness without diverting attention from the road.
[00062] In an embodiment, the system (102) employs deep learning-based image classification to detect specific weather and lighting scenarios, such as fog, rain, or low light. Using convolutional neural networks (CNNs) trained on a dataset of various road conditions, the system (102) classifies each frame and determines which enhancement techniques are required. This classification process operates in real-time on the Raspberry Pi, adapting dynamically to changing conditions. For visual clarity in adverse conditions, the system (102) uses image enhancement algorithms like dehazing for foggy or dusty situations, low-light image enhancement for nighttime driving, and gamma correction to balance image brightness and contrast. For example, the dehazing model removes fog or dust particles to provide a clear view of the road, while the low-light enhancement adjusts brightness to improve visibility in dark conditions. Gamma correction and pixel mapping are applied to mitigate glare from headlights, ensuring road boundaries and signage remain visible without being overly bright.
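A minimal sketch of how the classified condition could be dispatched to the matching enhancement is given below; it reuses the hypothetical classify_frame, enhance_low_light, gamma_glare_correction, dehaze, remove_rain_streaks, and overlay_road_boundary functions sketched earlier, with assumed condition labels.

```python
# Map each classified condition to its enhancement chain (assumed labels).
ENHANCERS = {
    "low_light": lambda f: gamma_glare_correction(enhance_low_light(f)),
    "foggy_or_dusty": dehaze,
    "rainy": remove_rain_streaks,
    "clear": lambda f: f,                          # no enhancement required
}

def process_frame(frame_bgr):
    """Classify the frame, apply the matching enhancement, then add the road guide line."""
    condition = classify_frame(frame_bgr)
    enhanced = ENHANCERS[condition](frame_bgr)
    return overlay_road_boundary(enhanced)
```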
[00063] In addition, a U-Net-based road detection model identifies and segments road boundaries. The convolutional neural network architecture detects road edges and highlights them with a green outline, making it easier for drivers to identify their path, particularly in low-visibility scenarios. The system (102) processes video feeds in real time, ensuring that drivers receive continuously updated, clear visuals without delay, which is crucial for rapid decision-making and enhanced safety. The system (102) offers an affordable, effective solution for enhancing driver visibility in diverse driving environments, including those with low-light, fog, rain, and other adverse conditions.
Example scenario 1:
[00064] In nighttime driving scenarios on dark, unlit roads, visibility is often minimal, posing a challenge for drivers. The system's camera captures a low-light video feed that is processed through the Raspberry Pi using advanced low-light enhancement models. These models brighten the image to improve overall clarity while also reducing glare from oncoming headlights, a common issue that often impairs driver vision at night. The processed feed, displayed on an in-vehicle screen, offers a clearer, brighter view of the road, aiding the driver in identifying road features and potential obstacles even in dark conditions.
Example scenario 2:
[00065] Dense fog significantly obstructs a driver's view, making it difficult to see objects, road signs, or boundaries. When driving in such conditions, the system's camera captures a hazy video feed, which is immediately processed through a dehazing model designed to cut through the fog. This processing step removes or reduces the fog in the image feed, delivering a clarified view to the driver. The display screen shows a sharper, more defined image that allows the driver to see through the fog, identify obstacles, and maintain awareness of road boundaries, enhancing both safety and confidence.
Example scenario 3:
[00066] Heavy rain presents a unique challenge due to rain streaks that can distort the camera's view and impair visibility. In this scenario, the camera captures a video feed that includes these rain streaks. The system then applies a rain removal model, which detects and minimizes the rain interference on the camera lens. This model clears the view displayed on the in-vehicle screen, providing the driver with a less obstructed image of the road ahead, and thus maintaining better situational awareness during rainy conditions.
Example scenario 4:
[00067] Dusty environments, such as rural or construction-heavy areas, can reduce visibility on the road. In such scenarios, the system's camera captures a video feed clouded with dust. The dehazing model is then applied to this feed, filtering out the dust particles from the image. The resulting video displayed on the screen gives the driver a clearer view of the road, mitigating the risk of accidents and ensuring the driver can navigate safely even on dusty terrain.
Example scenario 5:
[00068] In mixed environmental conditions, such as driving at dawn with a combination of low light and light fog, the system demonstrates its versatility by simultaneously addressing multiple visibility challenges. The camera captures the video feed affected by both low light and fog. The feed is processed by both the low-light enhancement and dehazing models concurrently, creating an output that is brightened and cleared of fog. The driver sees a well-lit and fog-free image on the display, allowing them to maintain clear vision on the road, even under mixed conditions, for a safer driving experience.
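For mixed conditions such as the dawn scenario above, the enhancements can simply be chained; a minimal sketch reusing the earlier hypothetical functions:

```python
def enhance_mixed_dawn(frame_bgr):
    """Dehaze the light fog first, then brighten the low-light result."""
    return enhance_low_light(dehaze(frame_bgr))
```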
[00069] The system is designed to support easy integration into a range of vehicle models, from legacy vehicles to newer models with advanced safety systems. For older vehicles, it can be implemented as an aftermarket add-on, while in new vehicles, it can be incorporated seamlessly during the manufacturing process. Its compatibility with existing driver assistance technologies allows it to function as an enhanced safety feature that complements both manual and autonomous driving systems.
[00070] FIG. 4 illustrates a flow diagram (400) of a method for providing driving assistance to users using a smart night vision system, in accordance with an embodiment of the present disclosure.
[00071] As illustrated, method (400) includes, at block (402), receiving a real-time video stream captured by at least one image-capturing unit embedded with a plurality of infrared sensors, wherein the real-time video stream comprises an entire view of a road surface and a plurality of surroundings.
[00072] Continuing further, method (400) includes, at block (404), extracting one or more image frames from the real-time video stream using a plurality of deep learning models. The plurality of image frames refers to individual, static images that are sequentially captured to form a continuous video stream.
[00073] Continuing further, method (400) includes, at block (406), comparing the one or more extracted image frames with a pre-stored dataset stored in a database, where the pre-stored dataset can include a plurality of pre-stored images of a plurality of weather conditions and lighting conditions.
[00074] Continuing further, method (400) includes, at block (408), identifying at least one road condition based on the one or more compared image frames and classifying the one or more compared image frames based on the at least one identified road condition, where the at least one road condition can include at least one of dark or low-light conditions, foggy or dusty conditions, or a rain condition.
[00075] Continuing further, method (400) includes, at block (410), enhancing the one or more classified image frames based on the at least one identified road condition using at least one image processing model.
[00076] Continuing further, method (400) includes, at block (412), displaying an enhanced real-time video on a display unit based on the one or more enhanced image frames, where the enhanced real-time video improves visibility of the road surface and the plurality of surroundings, thereby providing driving assistance to at least one user during a vehicle operation.
[00077] If the specification states a component or feature "may", "can", "could", or "might" be included or have a characteristic, that particular component or feature is not required to be included or have the characteristic.
[00078] As used in the description herein and throughout the claims that follow, the meaning of "a," "an," and "the" includes plural reference unless the context clearly dictates otherwise. Also, as used in the description herein, the meaning of "in" includes "in" and "on" unless the context clearly dictates otherwise.
[00079] Moreover, in interpreting the specification, all terms should be interpreted in the broadest possible manner consistent with the context. In particular, the terms "comprises" and "comprising" should be interpreted as referring to elements, components, or steps in a non-exclusive manner, indicating that the referenced elements, components, or steps may be present, or utilized, or combined with other elements, components, or steps that are not expressly referenced. Where the specification or claims refer to at least one of something selected from the group consisting of A, B, C, ..., and N, the text should be interpreted as requiring only one element from the group, not A plus N, or B plus N, etc.
[00080] While the foregoing describes various embodiments of the proposed disclosure, other and further embodiments of the proposed disclosure may be devised without departing from the basic scope thereof. The scope of the proposed disclosure is determined by the claims that follow. The proposed disclosure is not limited to the described embodiments, versions or examples, which are included to enable a person having ordinary skill in the art to make and use the invention when combined with information and knowledge available to the person having ordinary skill in the art.
ADVANTAGES OF THE PRESENT DISCLOSURE
[00081] The present disclosure provides a smart night vision system and method for enhancing visibility of road surfaces and surroundings by identifying road conditions.
[00082] The present disclosure provides a smart night vision system and method that adapts image enhancement techniques based on specific weather conditions (e.g., dehazing in fog, low-light enhancement at night) for improved visibility.
[00083] The present disclosure provides a smart night vision system and method that supports integration with various vehicle models, including those with autonomous driving capabilities, for enhanced situational awareness and driver assistance.
[00084] The present disclosure provides a smart night vision system and method that delivers a cost-effective yet high-performance solution by utilizing affordable hardware components.
[00085] The present disclosure provides a smart night vision system and method that provides actionable, real-time visual information, giving drivers immediate feedback to enhance overall safety and responsiveness on the road.
[00086] The present disclosure provides a smart night vision system and method that offers a comprehensive, integrative solution for modern driver assistance systems, including compatibility with autonomous driving technologies, supporting advanced vehicle safety needs.
Claims:
1. A smart night vision system for providing driving assistance to users and the method thereof, the system (102) comprising:
at least one image-capturing unit (108) embedded with a plurality of infrared sensors (110);
a display unit (112);
one or more processors (104);
at least one memory (106) coupled to the one or more processors (104), said memory (106) having instructions executable by the one or more processors (104) to:
receive a real-time video stream captured by the at least one image-capturing unit (108) embedded with the plurality of infrared sensors (110), wherein the real-time video stream comprises an entire view of a road surface and a plurality of surroundings;
extract one or more image frames from the real-time video stream using a plurality of deep learning models;
compare the one or more extracted image frames with a pre-stored dataset stored in a database (210) using the plurality of deep learning models, wherein the pre-stored dataset comprises a plurality of pre-stored images of a plurality of weather conditions and lighting conditions;
identify at least one road condition based on the one or more compared image frames and classify the one or more compared image frames based on the at least one identified road condition using the plurality of deep learning models, wherein the at least one road condition comprises at least one of dark or low-light conditions, foggy or dusty conditions, or a rain condition;
enhance the one or more classified image frames based on the at least one identified road condition using at least one image processing model; and
display an enhanced real-time video on the display unit (112) based on the one or more enhanced image frames, wherein the enhanced real-time video improves visibility of the road surface and the plurality of surroundings, thereby providing driving assistance to at least one user during a vehicle operation.
2. The system as claimed in claim 1, wherein the one or more processors (104) configured to identify the dark or low-light condition based on the one or more compared image frames using the plurality of deep learning models,
wherein the one or more processors (104) configured to classify the one or more compared image frames based on the identified dark or low-light conditions and implement a low-light image enhancement model to enhance at least one of brightness or contrast of the one or more classified image frames to generate the enhanced real-time video.
3. The system as claimed in claim 1, wherein the one or more processors (104) configured to identify the dark or low-light condition based on the one or more compared image frames using the plurality of deep learning models,
wherein the one or more processors (104) configured to classify the one or more compared image frames based on the identified dark or low-light conditions and implement a gamma correction and high-intensity pixel mapping model to regulate gamma values and pixel intensity of the one or more classified image frames to generate the enhanced real-time video,
wherein the gamma correction and high-intensity pixel mapping model configured to reduce glare from headlights and improve the visibility of the road surface and the plurality of surroundings during the vehicle operation,
wherein the road surface and the plurality of surroundings comprise road boundaries, signs, potholes, or obstacles.
4. The system as claimed in claim 1, wherein the one or more processors (104) configured to identify the foggy or dusty conditions based on the one or more compared image frames using the plurality of deep learning models,
wherein the one or more processors configured to classify the one or more compared image frames based on the identified foggy or dusty conditions and implement a dehazing model to remove at least one of dust or fog from the one or more classified image frames to generate the enhanced real-time video.
5. The system as claimed in claim 1, wherein the one or more processors (104) configured to identify the rain condition based on the one or more compared image frames using the plurality of deep learning models,
wherein the one or more processors configured to classify the one or more compared image frames based on the identified rain condition and implement a rain removal model that detects and minimizes rain streaks from the one or more classified image frames to generate the enhanced real-time video.
6. The system as claimed in claim 1, wherein the one or more processors (104) configured to utilize a U-Net architecture model to detect and segment road boundaries and edges from the one or more extracted image frames, thereby enabling precise identification of road features such as lane markings and road boundaries in real-time,
wherein the U-Net architecture model configured to highlight the detected road boundaries and edges with a guide line and display the enhanced real-time video with the guide line on the display unit (112).
7. The system as claimed in claim 1, wherein the plurality of deep learning models comprising convolutional neural networks (CNNs) trained to classify the at least one road condition in real-time using the pre-stored dataset.
8. The system as claimed in claim 1, wherein the system (102) pertains to a night vision safety system that can be implemented or integrated within a vehicle (302) to assist the at least one user during the vehicle operation.
9. The system as claimed in claim 1, wherein the display unit (112) is mounted on a dashboard of the vehicle (302) or integrated into the vehicle interface console,
the display unit (112) configured to provide the at least one user with a clear and enhanced view of the road surface and the plurality of surroundings to improve situational awareness and safety.
10. A method for providing driving assistance to users using a smart night vision system, the method (400) comprising:
receiving, by one or more processors (104), a real-time video stream captured using at least one image-capturing unit (108) embedded with a plurality of infrared sensors (110), wherein the real-time video stream comprises an entire view of a road surface and a plurality of surroundings;
extracting, by the one or more processors (104), one or more image frames from the real-time video stream using a plurality of deep learning models;
comparing, by the one or more processors (104), the one or more extracted image frames with a pre-stored dataset stored in a database (210), wherein the pre-stored dataset comprises a plurality of pre-stored images of a plurality of weather conditions and lighting conditions;
identifying, by the one or more processors (104), at least one road condition based on the one or more compared image frames and classifying the one or more compared image frames based on the at least one identified road condition, wherein the at least one road condition comprises at least one of dark or low-light conditions, foggy or dusty conditions, or a rain condition;
enhancing, by the one or more processors (104), the one or more classified image frames based on the at least one identified road condition using at least one image processing model; and
displaying, by the one or more processors (104), an enhanced real-time video on a display unit (112) based on the one or more enhanced image frames, wherein the enhanced real-time video improves visibility of the road surface and the plurality of surroundings, thereby providing driving assistance to at least one user during a vehicle operation.
Documents
Name | Date |
---|---|
202441088222-Proof of Right [06-12-2024(online)].pdf | 06/12/2024 |
202441088222-FORM-8 [18-11-2024(online)].pdf | 18/11/2024 |
202441088222-COMPLETE SPECIFICATION [14-11-2024(online)].pdf | 14/11/2024 |
202441088222-DECLARATION OF INVENTORSHIP (FORM 5) [14-11-2024(online)].pdf | 14/11/2024 |
202441088222-DRAWINGS [14-11-2024(online)].pdf | 14/11/2024 |
202441088222-EDUCATIONAL INSTITUTION(S) [14-11-2024(online)].pdf | 14/11/2024 |
202441088222-EVIDENCE FOR REGISTRATION UNDER SSI [14-11-2024(online)].pdf | 14/11/2024 |
202441088222-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [14-11-2024(online)].pdf | 14/11/2024 |
202441088222-FORM 1 [14-11-2024(online)].pdf | 14/11/2024 |
202441088222-FORM 18 [14-11-2024(online)].pdf | 14/11/2024 |
202441088222-FORM FOR SMALL ENTITY(FORM-28) [14-11-2024(online)].pdf | 14/11/2024 |
202441088222-FORM-9 [14-11-2024(online)].pdf | 14/11/2024 |
202441088222-POWER OF AUTHORITY [14-11-2024(online)].pdf | 14/11/2024 |
202441088222-REQUEST FOR EARLY PUBLICATION(FORM-9) [14-11-2024(online)].pdf | 14/11/2024 |
202441088222-REQUEST FOR EXAMINATION (FORM-18) [14-11-2024(online)].pdf | 14/11/2024 |