A DRIVER ASSISTANCE SYSTEM AND METHOD THEREOF
ORDINARY APPLICATION
Published
Filed on 11 November 2024
Abstract
The present invention provides a driver assistance system comprising a data acquisition unit (10) with a LiDAR sensor (20), a data processing unit (12) with a ToF sensor (22), and an RFID tag reader (18), wherein the system (100) is configured for real-time lane guidance, obstacle detection, and speed control in a continuous loop, providing steering adjustments or braking to maintain lane position and avoid obstacles. Additionally, the invention discloses a method of operation of the driver assistance system. Figure 8.
Patent Information
Application ID | 202441086674 |
Invention Field | COMPUTER SCIENCE |
Date of Application | 11/11/2024 |
Publication Number | 46/2024 |
Inventors
Name | Address | Country | Nationality |
---|---|---|---|
MD FAIYAZ ALAM | Dr. MGR Educational and Research Institute, Maduravoyal, Chennai, Tamil Nadu 600095 | India | India |
NISHANT KUMAR | Dr. MGR Educational and Research Institute, Maduravoyal, Chennai, Tamil Nadu 600095 | India | India |
ARUN | Dr. MGR Educational and Research Institute, Maduravoyal, Chennai, Tamil Nadu 600095 | India | India |
ABHISHEK KUMAR SINGH | Dr. MGR Educational and Research Institute, Maduravoyal, Chennai, Tamil Nadu 600095 | India | India |
V RAHUL | Dr. MGR Educational and Research Institute, Maduravoyal, Chennai, Tamil Nadu 600095 | India | India |
G SONIA PRIYATHARSHINI | Dr. MGR Educational and Research Institute, Maduravoyal, Chennai, Tamil Nadu 600095 | India | India |
A HEMAMALINIE | Dr. MGR Educational and Research Institute, Maduravoyal, Chennai, Tamil Nadu 600095 | India | India |
DEPAA RA B | Dr. MGR Educational and Research Institute, Maduravoyal, Chennai, Tamil Nadu 600095 | India | India |
Applicants
Name | Address | Country | Nationality |
---|---|---|---|
Dr. MGR Educational and Research Institute | Dr. MGR Educational and Research Institute, Maduravoyal, Chennai, Tamil Nadu 600095 | India | India |
Specification
Description: FIELD OF INVENTION
This invention relates generally to systems and methods for autonomous control of vehicles and vehicular sensors, actuators, or communications. More particularly, embodiments of this invention relate to systems and methods for obstacle avoidance, lane identification, real-time fog detection and visibility estimation based on image intensity distribution, and accurate and reliable lane detection and tracking.
BACKGROUND OF THE INVENTION
Numerous devices and systems have been provided in the prior art to assist in lane changes, detect side objects, and warn against lane departures. These devices and systems are designed to identify vehicles or objects that are located next to, in front of, or behind the equipped vehicle, as well as those in adjacent lanes. Typically, these systems employ statistical methods to analyze the images captured by a camera or sensor installed in the vehicle.
Driving in dense fog significantly increases the difficulty of maintaining lane position, so a sensor that can guide a vehicle to stay in its lane is needed. Foggy conditions also make it difficult to detect approaching obstacles and vehicles; a smart real-time information-sharing sensor is needed to counter this problem. Dense fog makes it difficult to judge distances accurately, often leading to unsafe speed choices; providing a precise distance to an object helps drivers choose a safe speed.
EP0640903A1 disclosed a video camera or equivalent sensor mounted on a vehicle and used to detect the lane markings on the road (usually the white painted lines). An associated signal processor (11) estimates the vehicle's lateral position in relation to the lane markings. An electric motor (4) coupled to the steering mechanism (1) is used to provide a torque input to the steering which may either assist or oppose the steering torque from the driver.
The processor is designed to assist the driver in maintaining the vehicle's lane position by holding the vehicle at a set-point using a biasing torque. This simulates the effect of the lane being cambered upwards towards the lane edges. However, the driver is able to override or cancel the effect if the driver-applied steering torque exceeds a prescribed torque threshold.
US20160078305A1 disclosed a driver assistance system for a vehicle that includes first and second cameras and a rear backup camera. A control processes image data captured by the first camera and determines that the first camera is misaligned when the first camera is disposed at the left side of the vehicle. The control, responsive to a determination of misalignment of the first camera, is operable to algorithmically at least partially compensate for misalignment of the first camera. At least in part responsive to processing of captured image data, a composite image is displayed that provides a view that approximates a view from a single virtual camera. Image data captured at least by the first camera is processed using an edge detection algorithm to detect edges of objects exterior of the vehicle. Responsive at least in part to processing of captured image data, an object of interest exterior of the vehicle is determined.
US8874300B2 provided systems and methods for obstacle avoidance. In some embodiments, a robotically controlled vehicle capable of operating in one or more modes may be provided. Examples of such modes include teleoperation, waypoint navigation, follow, and manual mode. The vehicle may include an obstacle detection and avoidance system capable of being implemented with one or more of the vehicle modes. A control system may be provided to operate and control the vehicle in the one or more modes. The control system may include a robotic control unit and a vehicle control unit.
SUMMARY OF THE INVENTION
It is a principal object of the present invention to provide a driver assistance system that enhances safety and visibility in foggy conditions, assisting drivers in staying on the correct path and detecting incoming traffic.
The primary objective is to significantly enhance road safety by providing real-time information about the right path and mitigating the risks associated with limited visibility caused by fog.
It is another object of the invention to provide a driver assistance system utilizing LiDAR, ToF, and potentially RFID sensors; this Driver Assistance System has the potential to significantly improve navigation in foggy conditions, contributing to safer and more confident driving experiences.
DRAWINGS
Figure 1: Process flow of the driver assistance system according to embodiments of the present invention.
Figure 2: Process flow of the LiDAR, ToF sensor, RFID of the driver assistance system according to embodiments of the present invention.
Figure 3: Depicts the beginning to end of the process steps according to embodiments of the present invention.
Figure 4: Depicts the detailed process steps executed by the LiDAR sensor of embodiments of the present invention.
Figure 5: Depicts the detailed process steps executed by the ToF sensor of embodiments of the present invention.
Figure 6: Depicts the detailed process steps executed by the RFID reader of embodiments of the present invention.
Figure 7: Depicts the detailed data processing steps executed by the driver assistance system of the present invention.
Figure 8: Depicts the key components of the driver assistance system of the present invention.
Figure 9: Depicts the power supply flow of the LiDAR, ToF sensor, RFID sensor and Raspberry Pi (Controller) with the ESC, Motor and Servo of the driver assistance system according to embodiments of the present invention.
Figure 10: Interface Box
Figure 11: Depicts the driver assistance system on a lane, according to an embodiment of the present invention.
DETAILED DESCRIPTION
This Driver Assistance System (100) assists drivers in navigating safely and confidently during foggy conditions by providing real-time lane guidance, obstacle detection, and speed control, utilizing LiDAR (20), ToF sensors (22), and an RFID tag reader. The system operates in a continuous loop, constantly collecting data, making decisions, and potentially taking actions (steering adjustments, braking) to maintain lane position and avoid obstacles.
In a second embodiment is provided a method of operation of the Driver Assistance System, comprising the steps of (Figure 12):
acquiring data by the data acquisition unit (10), comprising real-time scanning of the environment, generating a 3D point cloud of surrounding objects by the LiDAR sensor (20) and communicating it via the communication protocol (16) to the microcontroller (14);
actuating the ToF sensor (22) to provide precise distance measurements for nearby obstacles and lane markings, retrieving the data at a higher frequency of 50-100 Hz for precise real-time positioning;
scanning by the RFID reader for pre-programmed tags embedded in the road, detecting and transmitting the unique identifier or encoded data associated with the tag to the microcontroller; and
compiling the data by the microcontroller and actuating steering adjustment, to maintain lane position or avoid the obstacle, or actuating braking, slowing down or stopping the vehicle, and actuating the alerting unit in case of detected obstacles or lane deviation.
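As a rough illustration, the loop above can be sketched in Python; the braking and lane-edge thresholds, the tag format, and the decision rules below are invented for illustration and are not taken from the specification.

```python
# Minimal sketch of one iteration of the continuous sense-decide-act loop.
# The 1 m braking threshold, 0.5 m lane-edge margin, and tag format are
# illustrative assumptions, not values from the specification.

def decide(lidar_points, tof_distance_m, rfid_tag):
    """Return the list of actions for one loop iteration."""
    actions = []
    if tof_distance_m is not None and tof_distance_m < 1.0:
        actions.append("brake")          # obstacle too close: slow or stop
    if any(abs(x) < 0.5 for x, _, _ in lidar_points):
        actions.append("steer")          # point near the lane edge: adjust
    if rfid_tag is not None:
        actions.append(f"lane_change:{rfid_tag}")  # pre-programmed manoeuvre
    return actions

# One iteration with synthetic sensor readings:
print(decide([(0.3, 2.0, 0.0), (4.0, 10.0, 0.2)], 0.8, "LC-LEFT"))
```

In the real system these decisions would be re-evaluated on every loop iteration as fresh sensor data arrives.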
According to the first embodiment of the present invention, a Driver Assistance System (100) is disclosed, comprising (Figure 11):
Data Acquisition Unit (10): The LiDAR sensor (20) continuously scans the environment, generating a 3D point cloud of surrounding objects. This data represents the distance and location of objects in the vehicle's path. Figure 4 describes the process flow from the actuation of the LiDAR sensor (20). The sensor scans continuously and sends the distance and object-location data for the vehicle's path, comprising extraction of lane markings and road edges; thus it identifies potential obstacles. Sensor data from the LiDAR is fused with ToF and RFID data and processed by the processor on a computer-implementable device, which provides decision making for lane guidance and obstacle warning, in the form of alerts selected from audio or display alerts at the display unit.
Reference is drawn to Figure 5, 7-the ToF sensor (22) focuses on short-range areas, providing precise distance measurements for nearby obstacles and lane markings. This data is crucial for situations where accurate positioning is essential. The data acquired comprises of measurement of distance to lane boundaries, measurement of distance of obstacles. Further, step includes sensor data fusion, and decision making by the processor based on running of the algorithm. The processor provides control of lane keeping, speed adjustment. The alerts based on the results are displayed at user interface as audio or display alert.
Figure 6, 7 RFID reader/writer (24) scans for pre-installed tags embedded in the road infrastructure. These tags trigger specific lane change maneuvers. Further, step includes sensor data fusion, and decision making by the processor based on running of the algorithm. The decision making comprises specific lane guidance, obstacle warning. The processor provides control of lane keeping, speed adjustment. The alerts based on the results are displayed at user interface as audio or display alert.
GPS receiver: (26) to acquire GPS data.
Process of Data Acquisition: (Figure 10)
1. Sensor Selection and Configuration
LiDAR Sensor (20): OUSTER OS1-16
Configuration:
Scan Rate: A scan rate that balances performance and data processing requirements; a common range is 10-20 Hz.
Field of View: Setting the horizontal field of view to cover the relevant area in front of the vehicle (e.g., 120 degrees).
Output Data Format: Selecting a data format (e.g., point cloud) compatible with the software for obstacle detection.
ToF Sensor (22):
STMicroelectronics VL53L0X
Configuration: Measurement Range: Setting the range to focus on short-range obstacles and lane marking detection (e.g., 4 meters).
Integration Time: Adjusting the parameter to optimize accuracy and frame rate for the needs.
Output Data Format: Selecting a data format (e.g., distance readings) compatible with the software for lane marking and obstacle processing.
RFID Reader/Writer (24):
NXP Ucode 7 and ThingMagic M300
Configuration:
The RFID reader/writer will use a communication protocol (16) like USB or SPI to connect to Raspberry Pi.
The reader is configured according to its datasheet for proper operation and tag detection.
Tag Programming:
The NXP Ucode 7 tags are programmed (embedded in the road) with unique identifiers or encoded data that corresponds to the desired lane change behaviors in the system.
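As an illustration of how such tag programming could be consumed by the software, the sketch below maps tag identifiers to lane-change behaviours; the identifiers and behaviour names are invented for illustration and are not drawn from the NXP Ucode 7 datasheet or the patent.

```python
# Hypothetical mapping from tag identifiers (as might be written to the
# NXP Ucode 7 tags embedded in the road) to lane-change behaviours.
# Both the identifiers and the behaviour names are invented examples.
TAG_BEHAVIOURS = {
    "E200-0001": "merge_left",
    "E200-0002": "merge_right",
    "E200-0003": "keep_lane",
}

def behaviour_for(tag_id):
    """Look up the manoeuvre for a detected tag; unknown tags are ignored."""
    return TAG_BEHAVIOURS.get(tag_id, "ignore")
```

Treating unknown tags as "ignore" keeps a stray or corrupted read from triggering an unintended manoeuvre.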
2. Communication Protocol:
1. Sensor Communication Protocol (16):
1. LiDAR (Ouster OS1-16):
1. Using Ethernet for communication with the Raspberry Pi. This is a high-speed and reliable protocol (16) for transferring large amounts of LiDAR data (point cloud information).
2. ToF Sensor (STMicroelectronics VL53L0X): Utilizes I2C (Inter-Integrated Circuit) protocol (16) for communication with the Raspberry Pi.
3. RFID Sensor (NXP Ucode 7):
1. The NXP Ucode 7 tag itself is passive and doesn't have a communication protocol (16).
2. A separate UHF RFID reader/writer, the ThingMagic M300, is used, which likely uses a protocol (16) like USB or SPI to connect to the Raspberry Pi.
Communication Library Integration:
1. The Raspberry Pi software development environment includes libraries or functions compatible with each sensor's communication protocol (16):
1. Ethernet libraries for the LiDAR.
2. I2C libraries for the ToF sensor.
3. Specific libraries designed for chosen UHF RFID reader/writer (USB or SPI).
2. These libraries provide functions for establishing communication channels, sending commands, and receiving data from the respective sensors.
3. Continuous Data Acquisition: (Figure 11)
LiDAR Data Acquisition: The LiDAR sensor (20) continuously scans the environment, generating a 3D point cloud of surrounding objects. The software running on the Raspberry Pi retrieves this data at regular intervals (e.g.,10-20 times per second) through the established communication protocol (16).
ToF Data Acquisition: The software sends commands to the ToF sensor to trigger distance measurements and retrieves the data (distance to nearby objects and lane markings) at a higher frequency (e.g., 50-100 Hz) for precise real-time positioning.
RFID Data Acquisition: The RFID reader/writer continuously scans for pre-programmed tags embedded in the road. Upon detecting a tag, the reader transmits the unique identifier or encoded data associated with the tag to the software.
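The two acquisition rates above can be combined in one polling loop. The sketch below simulates a 100 Hz tick loop in which the ToF sensor is read every tick (100 Hz) and the LiDAR every tenth tick (10 Hz); the string placeholders stand in for real driver calls.

```python
# Multi-rate polling sketch: one tick = 10 ms (100 Hz loop). The ToF sensor
# is read on every tick, the LiDAR on every tenth tick. In the real system
# the placeholders would be replaced by driver calls over I2C and Ethernet.

def poll_schedule(total_ticks, lidar_every=10, tof_every=1):
    """Return (tick, sensor) pairs in the order reads would be issued."""
    reads = []
    for tick in range(total_ticks):
        if tick % tof_every == 0:
            reads.append((tick, "tof"))
        if tick % lidar_every == 0:
            reads.append((tick, "lidar"))
    return reads

schedule = poll_schedule(100)  # one simulated second at 100 Hz
```

Over one simulated second this yields 100 ToF reads and 10 LiDAR reads, matching the rates given above.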
In another embodiment of the present invention is provided GPS Receiver (26) for GPS positioning data.
The data acquired is communicated to the processing unit after fusion of the data; the specific algorithm provides the decision making on execution of the instructions in the processor, and control commands of the processor control the steering, braking and speed control. The commands and the alerts are displayed on the display screen of the computer-implementable device.
4. Data Filtering and Pre-processing:
1. LiDAR Data (Point Cloud):
Noise Filtering: LiDAR data can contain noise points caused by sensor limitations or environmental factors like rain or dust. Statistical outlier removal or nearest-neighbor averaging filtering techniques are used to help eliminate these noise points, improving the accuracy of object detection algorithms.
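A minimal statistical-outlier-removal pass over a point cloud might look like the following; the neighbour count and threshold ratio are illustrative defaults, not parameters from the specification.

```python
import math
import statistics

def remove_statistical_outliers(points, k=2, std_ratio=1.0):
    """Statistical outlier removal: drop points whose mean distance to their
    k nearest neighbours exceeds (mean + std_ratio * stddev) over the cloud."""
    mean_knn = []
    for p in points:
        dists = sorted(math.dist(p, q) for q in points if q is not p)
        mean_knn.append(sum(dists[:k]) / k)
    mu = statistics.mean(mean_knn)
    sigma = statistics.pstdev(mean_knn)
    threshold = mu + std_ratio * sigma
    return [p for p, d in zip(points, mean_knn) if d <= threshold]

# A tight cluster plus one spurious "rain" return far from the rest:
cloud = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (0.0, 0.1, 0.0),
         (0.1, 0.1, 0.0), (10.0, 10.0, 10.0)]
filtered = remove_statistical_outliers(cloud)
```

The O(n²) neighbour search is fine for a sketch; a production filter would use a spatial index (k-d tree) instead.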
2. ToF Sensor Data (Distance Measurements):
Outlier Detection and Correction: ToF sensor readings might be susceptible to outliers due to sensor noise or challenging environmental conditions. Algorithms identify outliers (e.g., sudden spikes or dips in distance readings) and replace them with corrected values based on surrounding measurements.
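A simple version of this spike correction, assuming a fixed jump threshold chosen for illustration, could be:

```python
def correct_spikes(readings, max_jump=0.5):
    """Replace a reading that jumps away from BOTH neighbours by more than
    max_jump (metres) with the average of those neighbours."""
    out = list(readings)
    for i in range(1, len(out) - 1):
        prev_r, next_r = out[i - 1], out[i + 1]
        if abs(out[i] - prev_r) > max_jump and abs(out[i] - next_r) > max_jump:
            out[i] = (prev_r + next_r) / 2
    return out

# The 9.0 m spike amid ~2 m readings is replaced by roughly 2.15 m:
cleaned = correct_spikes([2.0, 2.1, 9.0, 2.2, 2.3])
```

Requiring a jump relative to both neighbours avoids "correcting" a genuine step change, such as a new obstacle entering the field of view.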
3. RFID Data (Tag Identifiers or Encoded Data):
Error Correction: In rare cases, RFID tag readings might contain errors due to signal interference. Error correction techniques such as Hamming codes are used to correct the data. The software implements filtering techniques to remove noise or unwanted data points from the LiDAR data, especially for long-range scans.
For ToF data, outlier detection and correction algorithms are used to address potential sensor noise.
RFID data might require minimal pre-processing, depending on the complexity of the encoded information.
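As a concrete instance of the Hamming-code correction mentioned above, a Hamming(7,4) decoder that fixes any single flipped bit can be written as follows; the codeword layout is the standard one, not something specified by the patent.

```python
def hamming74_decode(bits):
    """Decode a Hamming(7,4) codeword laid out as [p1, p2, d1, p3, d2, d3, d4],
    correcting at most one flipped bit, and return the four data bits."""
    b = list(bits)
    # Each syndrome bit re-checks one parity group; together they give the
    # 1-based position of a single-bit error (0 means no error).
    s1 = b[0] ^ b[2] ^ b[4] ^ b[6]   # p1 covers positions 1, 3, 5, 7
    s2 = b[1] ^ b[2] ^ b[5] ^ b[6]   # p2 covers positions 2, 3, 6, 7
    s3 = b[3] ^ b[4] ^ b[5] ^ b[6]   # p3 covers positions 4, 5, 6, 7
    error_pos = s1 + 2 * s2 + 4 * s3
    if error_pos:
        b[error_pos - 1] ^= 1        # flip the corrupted bit back
    return [b[2], b[4], b[5], b[6]]

# Codeword for data 1011 with bit 5 flipped in transit is still decoded:
data = hamming74_decode([0, 1, 1, 0, 1, 1, 1])  # returns [1, 0, 1, 1]
```

Hamming(7,4) corrects one bit error per 7-bit block, which matches the "rare interference" scenario; burst errors would need a stronger code.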
5. Data Synchronization:
1. Software Time stamping:
Implements a mechanism within Raspberry Pi software to assign a timestamp to each data point received from any sensor. This timestamp represents the exact time the data point was acquired by the sensor.
Leverage libraries like time in Python, or similar functions in the chosen programming language, to generate timestamps with high precision (microsecond or nanosecond resolution).
2. Data Acquisition with Timestamps:
Modifying software's data acquisition routines for each sensor to include time stamping:
LiDAR (Ethernet): When retrieving point cloud data from the LiDAR, include a timestamp alongside the point cloud information. Libraries for the chosen communication protocol (16) (Ethernet) offer functions to capture timestamps during data transfer.
ToF Sensor (I2C): When reading distance measurements from the ToF sensor, incorporate a timestamp into the data structure received by software. I2C libraries provide functionalities for time stamping during data acquisition.
RFID Reader/Writer (USB/SPI): Upon detecting an RFID tag and retrieving its identifier or encoded data, include a timestamp within the received data structure.
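A lightweight way to attach such timestamps to every sensor read is a wrapper around the driver call; the ToF read function below is a hypothetical stand-in for the real VL53L0X driver.

```python
import time

def stamped(read_fn):
    """Wrap a sensor-read function so every sample carries a monotonic
    nanosecond timestamp, as the timestamping steps above describe."""
    def wrapper(*args, **kwargs):
        return {"t_ns": time.monotonic_ns(), "value": read_fn(*args, **kwargs)}
    return wrapper

@stamped
def read_tof():
    # Hypothetical stand-in for the real VL53L0X driver call.
    return 1.23  # metres

sample = read_tof()  # {'t_ns': <monotonic ns>, 'value': 1.23}
```

A monotonic clock is used deliberately: unlike wall-clock time, it never jumps backwards, so inter-sample intervals used for fusion stay meaningful.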
6. Data Delivery for Processing:
After acquisition, filtering, and synchronization, the software delivers the sensor data to the data processing stage. This data will be used to identify obstacles, lane markings, and potentially trigger lane changes (with RFID).
Data Processing and Feature Extraction:
1. Data Segmentation:
LiDAR: Segment the point cloud data to group points that likely belong to the same object. This is achieved using techniques like:
Voxel Grid Filtering: Dividing the space around the vehicle into small voxels (3D cubes). Points within a voxel are considered part of the same object.
Clustering Algorithms: Grouping points based on their proximity and spatial relationships.
ToF Sensor: Divide the distance readings into regions corresponding to different parts of the vehicle's surroundings (e.g., near-field for obstacles, far-field for lane markings).
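A minimal voxel-grid filter over (x, y, z) points, with an illustrative 0.5 m voxel size, could look like this:

```python
from collections import defaultdict

def voxel_downsample(points, voxel=0.5):
    """Voxel grid filtering: bucket points into voxel-sized cubes and keep
    one centroid per occupied cube."""
    grid = defaultdict(list)
    for x, y, z in points:
        key = (int(x // voxel), int(y // voxel), int(z // voxel))
        grid[key].append((x, y, z))
    # One representative (the centroid) per occupied voxel.
    return [tuple(sum(axis) / len(pts) for axis in zip(*pts))
            for pts in grid.values()]
```

The voxel size trades resolution against processing load: larger cubes merge nearby returns into one point, smaller cubes preserve detail.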
2. Feature Extraction: (Figure 12)
LiDAR:
Object features: After segmentation, extracts the features from each identified object in the point cloud, such as:
1. Centroid: The center point (x, y, z coordinates) of the object.
2. Dimensions (length, width, height): Estimated size of the object.
3. Bounding box: Minimum and maximum x, y, z coordinates defining a box that encloses the object.
4. Velocity: Calculate the object's relative motion if multiple scans are available.
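Features 1 to 3 above can be computed directly from a segmented cluster, as the sketch below shows; velocity (feature 4) additionally requires successive scans, so it is omitted here.

```python
def object_features(cluster):
    """Centroid, dimensions, and axis-aligned bounding box of one segmented
    point cluster of (x, y, z) tuples."""
    xs, ys, zs = zip(*cluster)
    lo = (min(xs), min(ys), min(zs))            # bounding-box minimum corner
    hi = (max(xs), max(ys), max(zs))            # bounding-box maximum corner
    centroid = tuple(sum(axis) / len(axis) for axis in (xs, ys, zs))
    dims = tuple(h - l for h, l in zip(hi, lo))  # length, width, height
    return {"centroid": centroid, "dimensions": dims, "bbox": (lo, hi)}
```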
ToF Sensor:
Obstacle features: From the near-field readings, extracts features for detected obstacles, like:
Distance: Distance of the obstacle from the sensor.
Size: Estimate the size of the obstacle based on multiple distance readings.
Lane marking features: Analyze the far-field readings to extract features relevant to lane markings, such as:
Lane line positions: Identify the positions of lane lines on the road.
Lane curvature: Estimate the curvature of the lane for advanced lane-following algorithms.
RFID:
For RFID lane-change tags, the "feature" extracted is the identifier or decoded data from the tag, which corresponds to the intended lane change behavior.
LiDAR Processing: Apply algorithms to the point cloud data for obstacle detection. This involves Identifying clusters of points representing objects in the environment. Estimating the size, position, and velocity of detected obstacles.
ToF Processing: Analyze distance readings to identify obstacles and lane markings based on predefined thresholds or segmentation techniques.
RFID Processing: Decode the identifier or data from the RFID tag to determine the intended lane change behavior.
4. Data Fusion and Decision Making:
Combine the processed data from all sensors (LiDAR, ToF, RFID) based on their timestamps. This allows correlating lane change triggers with the surrounding environment (obstacle positions, lane markings).
Algorithms make informed decisions based on the fused data.
This involves: Planning a safe lane change trajectory considering detected obstacles.
Adjusting vehicle steering and speed based on lane markings and the intended lane change direction.
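Fusing streams recorded at different rates first requires aligning them by timestamp. A minimal nearest-timestamp lookup, assuming each stream is a time-sorted list of (timestamp, value) pairs, might be:

```python
import bisect

def nearest_sample(samples, t):
    """From (timestamp, value) pairs sorted by time, return the value whose
    timestamp is closest to t -- the alignment step needed before fusing
    LiDAR, ToF, and RFID streams recorded at different rates."""
    times = [ts for ts, _ in samples]
    i = bisect.bisect_left(times, t)
    # Only the neighbours straddling t can be closest to it.
    candidates = samples[max(0, i - 1):i + 1]
    return min(candidates, key=lambda s: abs(s[0] - t))[1]
```

With this, a 10 Hz LiDAR frame can be paired with the 100 Hz ToF reading taken closest to it before the fused decision is made.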
Data Processing and Decision Making:
All sensor data is sent to the Raspberry Pi, the central processing unit.
The application software running on the Raspberry Pi interprets the raw sensor data.
The LiDAR data helps identify obstacles at various distances.
The ToF data provides detailed information about immediate surroundings and lane markings.
RFID data is used for specific lane change triggers or additional lane positioning information.
Decision Making and Control Signal Generation:
Based on the processed data and decision-making algorithms, the software determines the necessary actions.
These actions can include:
Steering Adjustments (Optional): If the vehicle is drifting out of its lane or an obstacle is detected near the lane edge, the software sends control signals to the servo motor. The servo motor adjusts the steering wheel accordingly to maintain lane position or avoid the obstacle.
Braking (Optional): If an obstacle poses an immediate threat, the software can send a signal to the ESC. The ESC controls the motor responsible for braking, slowing down or stopping the vehicle as needed.
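As an illustration of the control-signal step, a steering command can be mapped to a hobby-servo pulse width; the 1500 microsecond centre and 10 microseconds-per-degree scale below are assumptions for the sketch, not values from the patent.

```python
def steering_pulse_us(angle_deg, centre_us=1500, us_per_deg=10):
    """Map a steering angle to a hobby-servo pulse width, clamped to the
    common 1000-2000 microsecond range. Centre point and scale are
    illustrative assumptions."""
    pulse = centre_us + angle_deg * us_per_deg
    return max(1000, min(2000, pulse))  # never command outside the safe range
```

Clamping to the servo's safe range means that even an extreme correction computed by the decision logic cannot drive the steering actuator past its limits.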
Components Working:
1. Data Acquisition:
The LiDAR sensor continuously scans the environment, generating a 3D point cloud of surrounding objects.
The ToF sensor focuses on short-range areas, providing precise distance measurements for nearby obstacles and lane markings.
The optional RFID reader/writer can potentially detect pre-installed tags for specific lane change scenarios.
2. Data Processing and Decision Making:
All sensor data is sent to the Raspberry Pi, the central processing unit.
The application software running on the Raspberry Pi interprets the raw sensor data.
The LiDAR data helps identify obstacles at various distances.
The ToF data provides detailed information about immediate surroundings and lane markings.
RFID data (if used) can be used for specific lane change triggers or additional lane positioning information.
3. Control Signal Generation and Actuation:
Based on the processed data and decision-making algorithms, the software generates control signals.
These signals are sent to the servo motor for steering adjustments.
The servo motor adjusts the steering wheel based on the received signal, keeping the vehicle within the lane and navigating around obstacles.
If an obstacle is too close or poses an immediate threat, the software can send a signal to the ESC.
The ESC controls the motor responsible for braking, slowing down or stopping the vehicle as needed.
4. User Interaction:
Depending on the chosen UI, the system might provide audio alerts to warn the driver about detected obstacles.
These audio alerts are triggered by the software based on its interpretation of the sensor data.
Key Points for Unison:
Real-time Processing: All components work together in real-time. Sensor data is continuously acquired, processed, and used to generate control signals for immediate action.
Data Fusion: The software incorporates data from various sensors (LiDAR, ToF, and potentially RFID) to create a comprehensive picture of the environment.
Feedback Loop: The system can be considered a feedback loop. Sensor data informs decisions, and control signals influence the vehicle's behavior, which can be detected by sensors again. This continuous cycle allows for real-time adjustments.
EXAMPLE 1: Execution of the process of driver assistance according to an embodiment of the present invention.
Reference is drawn to Figure 1:
Explanation:
1. Start (A): The process begins.
2. LiDAR Scan (B): The LiDAR sensor continuously scans the environment, generating a 3D point cloud of surrounding objects.
3. ToF Measurement (C): The ToF sensor focuses on short-range areas, providing precise distance measurements for nearby obstacles and lane markings.
4. RFID Detection (D): The RFID reader/writer can detect pre-installed tags for specific lane change scenarios.
5. Data Fusion (E): Data from sensors that detect anything (positive decisions at B, C, or D) is combined and sent to the Raspberry Pi for processing.
Note: The flowchart intentionally avoids an explicit "No" decision at point E, as the lack of detected obstacles or lane markings simply means no control signal is generated at this stage.
6. Raspberry Pi Processing (F): The Raspberry Pi software processes the combined sensor data, including:
Obstacle identification (G): Utilizing LiDAR data to determine obstacles at various distances.
Lane marking detection (H): Using ToF data to identify lane markings.
RFID data processing (I): If applicable, incorporating RFID data into the decision-making process.
7. Control Signal Generation (J): Based on the processed data, the software generates control signals for the following:
Steering adjustments (J): If an obstacle is detected (positive decision at G) or lane departure occurs (positive decision at H), a control signal is sent to the servo motor for steering adjustments.
This control signal can also incorporate RFID data (I).
Immediate threat response (M): If an obstacle poses an immediate threat (positive decision at G), a control signal is sent to the Electronic Speed Controller (ESC) (M).
8. Steering Adjustment (K): The servo motor adjusts the steering wheel based on the received control signal (J) to keep the vehicle within the lane and navigate around obstacles.
9. Lane Position Maintenance (L): The system continues to maintain the vehicle's lane position.
10. Motor Control (N) (Optional): If an immediate threat is detected (positive decision at M), the ESC controls the motor for braking or stopping the vehicle as needed (N).
11. Vehicle Stop (O) (Optional): The vehicle stops if necessary (O).
12. Optional: Audio Alerts (P): Depending on the chosen user interface (UI), the system might provide audio alerts to warn the driver about detected obstacles based on sensor data interpretation (P).
13. Loop Back (A): The process continuously loops back to the start (A) for real-time monitoring and control.
Example 2: Figure 2
Discloses the user interaction flow chart: a visual representation of lane markings to provide a reference for the driver, and icons or highlighted areas depicting detected obstacles, allowing the driver to see their location relative to the vehicle. The system provides audio alerts to warn the driver about detected obstacles. The system display provides informative messages or warnings for critical actions (e.g., emergency stop confirmation).
Example 3 (Figure 3)
1. Installation: A qualified technician would install the OBC system components (LiDAR, ToF, Raspberry Pi, servo, ESC) within the vehicle, ensuring proper placement and connections.
2. Calibration: The technician would perform calibrations for the LiDAR, ToF sensors, and potentially the steering and braking systems for optimal performance.
3. Lane Change Tag Programming: Where the system utilizes RFID for lane changes, the technician might program specific tags embedded in the road infrastructure to trigger desired lane change maneuvers.
User Activation:
4. Vehicle Startup: When the user starts the vehicle, the OBC system automatically powers on along with other car systems.
5. Background Operation: The OBC system continuously operates in the background, collecting data from LiDAR, ToF (and potentially RFID) sensors.
6. User Interface: The system will have a simple user interface (UI) displayed on the car's dashboard or infotainment system. This UI will visually represent the detected lane markings and surrounding obstacles.
System Monitoring and Alerts:
7. Obstacle Detection: The OBC system constantly analyzes sensor data to detect obstacles in the environment.
8. Decision Making: Based on the analysis, the system decides on appropriate actions (steering adjustments, braking).
9. Automatic Control (Optional): The system automatically controls the steering wheel (through the servo motor) and potentially braking (through the ESC) to maintain lane position and avoid obstacles.
10. User Alerts: If the system detects a critical obstacle or requires driver intervention, it might trigger audio or visual alerts on the dashboard UI.
Pre-Use Setup - Customizable Lane Change Management:
Uniqueness: The system allows user or technician programming of RFID tags embedded in the road infrastructure for specific lane change maneuvers; this could be a unique aspect. Existing OBC systems might focus on obstacle detection and lane keeping without user-configurable lane change functionality. This allows the user to change lanes automatically, even in low visibility, without hesitation.
HARDWARE COMPONENTS (FIGURE 8)
SENSORS:
LiDAR (Light Detection and Ranging): (Specific model: Ouster OS1-16) This sensor uses lasers to measure distances by reflecting light off objects. It can be used for various purposes in an OBC system, such as obstacle detection, terrain mapping, or object recognition.
Time-of-Flight (ToF) sensor: (Specific model: STMicroelectronics VL53L0X) This sensor measures the time it takes for light to travel to an object and back, determining its distance. It can be used for short-range obstacle detection or object proximity sensing.
RFID reader/writer: (Specific model: NXP Ucode 7) This device can read and write data stored on RFID tags, potentially used for identification or tracking purposes in OBC system.
Microcontroller: This is the central processing unit (CPU) of an OBC system. It controls the other components, processes sensor data, and executes control algorithms.
Claims: WE CLAIM:
1. A driver assistance system, comprising:
a data acquisition unit (10);
a data processing unit (12);
a processor or microcontroller (14) configured to execute computer implementable instructions;
a communication protocol (16);
a RFID tag reader (18);
a network (20); and
one or more alert units (22),
wherein the data acquisition unit (10) comprises a LIDAR sensor (20) configured to provide real-time obstacle detection on the road, a ToF sensor (22) to provide precise distance measurement for obstacles and lane markings, and the RFID reader (18) configured to read the data on the pre-installed tags on the road, the communication protocol (16) communicates the data acquired by the data acquisition unit (10) and transmits it to the processor (14) over a network (20), and wherein the system is configured for real-time lane guidance, obstacle detection, and speed control in a continuous loop, to provide steering adjustments or braking to maintain lane position and avoid obstacles.
2. The driver assistance system as claimed in claim 1, wherein the LiDAR sensor (20) is configured to continuously scan the environment and generate a 3D point cloud representing the distance and location of surrounding objects in the vehicle's path, at a scan rate in the range of 10-20 Hz and a horizontal field of view of over 120 degrees.
3. The driver assistance system as claimed in claim 1, wherein the LiDAR sensor identifies obstacles at various distances.
4. The driver assistance system as claimed in claim 1, wherein the ToF sensor is configured for distance measurements for nearby obstacles of the immediate surroundings and lane markings.
5. The driver assistance system as claimed in claim 1, wherein the RFID reader is connected via a USB or SPI interface.
6. The driver assistance system as claimed in claim 1, wherein the microcontroller is a Raspberry Pi.
7. The driver assistance system as claimed in claim 1, wherein the microcontroller is configured to assign a timestamp to each data point received from the sensors, representing the exact time the data point was acquired.
8. The driver assistance system as claimed in claim 1, wherein the alert is an audio alert notification to the driver.
9. A method of operation of the Driver Assistance System as claimed in claim 1, comprising the steps of:
acquiring data by the data acquisition unit, comprising real-time scanning of the environment, generating a 3D point cloud of surrounding objects by the LiDAR sensor, and communicating it via the communication protocol (16) to the microcontroller;
actuating the ToF sensor to provide precise distance measurements for nearby obstacles and lane markings, retrieving the data at a higher frequency of 50-100 Hz for precise real-time positioning;
scanning by the RFID reader for pre-programmed tags embedded in the road, detecting and transmitting the unique identifier or encoded data associated with the tag to the microcontroller; and
compiling the data by the microcontroller and actuating steering adjustment, to maintain lane position or avoid the obstacle, or actuating braking, slowing down or stopping the vehicle, and actuating the alerting unit in case of detected obstacles or lane deviation.
Documents
Name | Date |
---|---|
202441086674-COMPLETE SPECIFICATION [11-11-2024(online)].pdf | 11/11/2024 |
202441086674-DECLARATION OF INVENTORSHIP (FORM 5) [11-11-2024(online)].pdf | 11/11/2024 |
202441086674-DRAWINGS [11-11-2024(online)].pdf | 11/11/2024 |
202441086674-FORM 1 [11-11-2024(online)].pdf | 11/11/2024 |
202441086674-FORM-9 [11-11-2024(online)].pdf | 11/11/2024 |
202441086674-REQUEST FOR EARLY PUBLICATION(FORM-9) [11-11-2024(online)].pdf | 11/11/2024 |