METHODS AND SYSTEMS FOR ASSISTING RIDER SAFETY WHILE RIDING THE VEHICLE


ORDINARY APPLICATION

Published

Filed on 28 October 2024

Abstract

The embodiments herein disclose methods and systems (300) for assisting rider and road user safety while riding an electric vehicle (EV), the method comprising: processing at least one received media content from a media acquisition unit configured on the EV, wherein the media acquisition unit captures real-time media content of road conditions, lane boundaries, and vehicle surroundings; analyzing the at least one media content using an artificial intelligence (AI) module to identify potential obstacles, potholes, and lane boundaries; generating a real-time alert to the rider based on the analysis, by determining whether a detected obstacle poses a risk to the rider; and providing a lane change recommendation on identifying that the current lane is obstructed, or if a vehicle is approaching from the rear.

Patent Information

Application ID: 202441082077
Invention Field: ELECTRONICS
Date of Application: 28/10/2024
Publication Number: 44/2024

Inventors

Name: Sanjeev Nadeson Ponnusamy
Address: B23, Ajmera Villows, Sy No 91/1, Begur Hobli, Doddathogur, Electronic city, Phase 1, Bangalore 560 100, Karnataka, India
Country: India
Nationality: India

Applicants

Name: E3 TECHNOLOGIES PRIVATE LIMITED
Address: B23, Ajmera Villows, Sy No 91/1, Begur Hobli, Doddathogur, Electronic city, Phase 1, Bangalore 560 100, Karnataka, India
Country: India
Nationality: India

Specification

Description

TECHNICAL FIELD
[001] The present disclosure relates to an electric vehicle, and more particularly to assisting riders and ensuring road user safety by identifying obstacles and potholes while riding the vehicle using a learning module.

BACKGROUND
[002] The increasing adoption of electric vehicles (EVs) in recent years has transformed the transportation landscape, emphasizing the need for innovative safety measures tailored to these advanced technologies. As cities become more congested and the number of road users continues to rise, ensuring the safety of riders, pedestrians, and other vehicles has never been more critical.
[003] Electric vehicles significantly reduce the carbon footprint compared to traditional gasoline-powered vehicles. However, the transition to electric mobility also presents unique challenges. While EVs offer numerous benefits, including reduced emissions and operational costs, they also require sophisticated systems to ensure the safety of the rider and other road users.
[004] One of the primary concerns in urban mobility is the risk of accidents. Studies have shown that a significant percentage of road accidents are attributed to human error, which highlights the need for assistance systems that can enhance rider awareness and decision-making.
[005] As the demand for enhanced safety features in transportation continues to grow, the integration of artificial intelligence (AI) in rider and road user safety has emerged as a crucial area of development. Electric vehicles (EVs) and motorcycles, which are becoming increasingly popular, present unique challenges regarding the safety of riders and surrounding road users. The need for innovative solutions that leverage technology to mitigate risks on the road is more pressing than ever. Also, as urban environments become increasingly congested, ensuring the safety of riders while navigating areas with potholes and obstacles is paramount.
[006] In existing mechanisms, among the various risks faced by riders, potholes and obstacles on the road present significant hazards that can lead to accidents, injuries, and even fatalities. Riders of lightweight electric vehicles may have less stability and control when encountering unexpected obstacles, making them more susceptible to accidents. Potholes can cause a sudden loss of traction or control, leading to falls or collisions.
[007] Many urban areas suffer from poor road maintenance, resulting in potholes, debris, and other obstacles that pose risks to riders. The prevalence of these hazards makes it essential for riders to remain vigilant and be prepared to react quickly. Riders may have limited time to react to sudden obstacles or potholes, especially in heavy traffic. The inability to stop or maneuver safely can lead to severe accidents.
[008] Currently, reversing the EV can be challenging for riders due to limited visibility, which may result in blind spots that make it difficult for riders to see obstacles, pedestrians, or other vehicles while reversing. Electric scooters and motorcycles are more agile than traditional vehicles, which can lead to difficulties in judging distances and angles when reversing. This increased maneuverability can make it easier for riders to misjudge their surroundings. As a result, accidents that occur during reversing maneuvers can lead to more serious injuries.
[009] Hence, there is a need in the art for solutions which will overcome the above-mentioned drawback(s), among others.

OBJECTS
[0010] The principal object of the embodiments herein is to disclose methods and systems for assisting rider and road user safety by implementing a detection mechanism for identifying obstacles and providing timely warnings to riders to prevent accidents.
[0011] Another object of the embodiments herein is to disclose methods and systems for enhancing rider and road user safety by utilizing artificial intelligence (AI) to provide an integrated assistance system for the EVs.
[0012] Another object of the embodiments herein is to disclose methods and systems for providing lane change alerts to notify the riders of the vehicles passing on the sides, along with the appropriate instructions to perform lane change.
[0013] Another object of the embodiments herein is to disclose methods and systems for enabling the vehicle to stop automatically or activate handlebar vibration on identifying obstacles on the path.
[0014] Another object of the embodiments herein is to disclose methods and systems for utilizing the rider's electronic device by enabling it to act as a cluster, providing assistance and safety, and creating a frugal, safe, and affordable system that promotes accessibility.
[0015] Another object of the embodiments herein is to disclose methods and systems by facilitating reverse parking assistance using AI, and other components to enhance the safety of riders when maneuvering in reverse.
[0016] Another object of the embodiments herein is to disclose methods and systems for enabling the vehicle to interface effectively with the electronic device and sensors, leveraging edge computing capabilities to enhance the overall functionality and safety of the riding experience.
[0017] These and other aspects of the embodiments herein will be better appreciated and understood when considered in conjunction with the following description and the accompanying drawings. It should be understood, however, that the following descriptions, while indicating at least one embodiment and numerous specific details thereof, are given by way of illustration and not of limitation. Many changes and modifications may be made within the scope of the embodiments herein without departing from the spirit thereof, and the embodiments herein include all such modifications.

BRIEF DESCRIPTION OF FIGURES
[0018] Embodiments herein are illustrated in the accompanying drawings, throughout which reference letters indicate corresponding parts in the various figures. The embodiments herein will be better understood from the following description with reference to the drawings, in which:
[0019] FIG. 1 illustrates an environment for assisting rider and road user safety while riding an electric vehicle (EV), according to embodiments as disclosed herein;
[0020] FIG. 2 depicts a block diagram illustrating various units of the electronic device, which is used to assist the rider while riding the EV, according to embodiments as disclosed herein;
[0021] FIG. 3 depicts a block diagram illustrating various units of a rider safety assisting system based on the identification of obstacles, lane change, and reverse parking assistance using a learning module, according to embodiments as disclosed herein.
[0022] FIG. 4 is an example diagram illustrating the identification of obstacles using the electronic device based on at least one received media content from a capturing unit/media acquisition unit, according to embodiments as disclosed herein;
[0023] FIG. 5 is an example diagram illustrating the capturing of the obstacles using the capturing unit/media acquisition unit of the electronic device, and providing the warning to the rider, according to embodiments as disclosed herein;
[0024] FIG. 6 is an example diagram illustrating the identification of lane change and providing the instructions based on the destination and speed of the rider, according to embodiments as disclosed herein;
[0025] FIG. 7 is an example diagram illustrating the usage of the electronic device as a cluster for processing the operations required to drive the EV, according to embodiments as disclosed herein;
[0026] FIG. 8 is an example diagram for assisting the EV with the reverse parking based on the identified obstacles using Artificial Intelligence (AI), and other components, according to embodiments as disclosed herein; and
[0027] FIG. 9 is a flow diagram illustrating a method for assisting rider and road user safety by identifying obstacles and potholes while riding the vehicle, according to embodiments as disclosed herein.


DETAILED DESCRIPTION
[0028] The example embodiments herein and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The description herein is intended merely to facilitate an understanding of ways in which the example embodiments herein can be practiced and to further enable those of skill in the art to practice the example embodiments herein. Accordingly, this disclosure should not be construed as limiting the scope of the example embodiments herein.
[0029] The embodiments herein disclose methods and systems for assisting rider and road user safety by identifying the obstacles, and pot holes while riding the vehicle. Referring to the drawings, and more particularly to FIGS. 1 through 9, where similar reference characters denote corresponding features consistently throughout the figures, there are shown example embodiments.
[0030] FIG. 1 illustrates an environment for assisting rider and road user safety while riding an electric vehicle (EV). As illustrated in FIG. 1, the environment 100 comprises the EV 104 and the electronic device 102, connected to a cloud server 110 through a communication network 106. The electronic device 102 may be connected to the communication network 106, which is connected to the cloud server 110. The electronic device 102 may be connected to the cloud server 110 through the communication network 106 and/or at least one other communication network (not shown).
[0031] As illustrated in FIG. 1, the EV 104 may be connected to the cloud server 110 using a Controller Area Network (CAN) protocol, through the communication network 106. Connecting the EV 104 to the cloud server 110 using CAN enables features which may include, but are not limited to, real-time diagnostics, remote monitoring, over-the-air updates, and enhanced data analytics. This integration is a significant step toward building intelligent, connected vehicles that can provide valuable insights to manufacturers, fleet operators, and users. In the connected EV, the CAN serves as the backbone for onboard communication, while an additional gateway module interfaces the CAN bus with external networks, such as Wi-Fi, LTE, or 5G, to connect to the cloud. The CAN can transmit critical vehicle data and sensor information, which may include, but is not limited to, battery voltage, current, temperature, state-of-charge (SoC), motor torque, and speed.
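CAN data fields such as those named above are small, fixed-width payloads. The sketch below is a non-authoritative illustration of how battery telemetry might be packed into and recovered from an 8-byte CAN data field; the arbitration ID, field layout, and scaling factors are invented for the example and are not taken from the disclosure.

```python
import struct

# Hypothetical layout for an 8-byte CAN data field carrying battery telemetry:
# voltage (uint16, 0.1 V/bit), current (int16, 0.1 A/bit),
# temperature (int8, 1 degC/bit), state-of-charge (uint8, 0.5 %/bit), 2 spare bytes.
BMS_FRAME_ID = 0x101  # illustrative arbitration ID, not from the disclosure

def encode_bms_frame(voltage_v, current_a, temp_c, soc_pct):
    """Pack physical values into a raw 8-byte CAN payload (big-endian)."""
    return struct.pack(
        ">HhbBxx",
        int(round(voltage_v * 10)),
        int(round(current_a * 10)),
        int(round(temp_c)),
        int(round(soc_pct * 2)),
    )

def decode_bms_frame(payload):
    """Unpack a raw 8-byte payload back into physical values."""
    raw_v, raw_i, raw_t, raw_soc = struct.unpack(">HhbBxx", payload)
    return {
        "voltage_v": raw_v / 10.0,
        "current_a": raw_i / 10.0,
        "temperature_c": raw_t,
        "soc_pct": raw_soc / 2.0,
    }

payload = encode_bms_frame(52.1, -3.4, 31, 87.5)
assert len(payload) == 8
print(decode_bms_frame(payload))
```

Fixed-point scaling of this kind is common on CAN buses because it keeps each signal within a few bytes while preserving the resolution the gateway needs before forwarding data to the cloud.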
[0032] The communication network 106 may include at least one of, but is not limited to, a wired network, a value-added network, a wireless network, a satellite network, or a combination thereof. Examples of the wired network may be but are not limited to, a Local Area Network (LAN), a Wide Area Network (WAN), the CAN, an Ethernet, and so on. Examples of the wireless network may be, but are not limited to, a cellular network, a wireless LAN (Wi-Fi), Bluetooth, Bluetooth low energy, Zigbee, Wi-Fi Direct (WFD), Ultra-wideband (UWB), infrared data association (IrDA), near field communication (NFC), and so on.
[0033] In an example, the EV 104 is configured with a Telematics Control Unit (TCU), which provides capabilities that may include, but are not limited to, vehicle-to-cloud communication, remote monitoring, and control. The TCU acts as the central gateway for telematics functions, integrating data from various Electronic Control Units (ECUs) and transmitting it to external servers over cellular networks. TCUs play a pivotal role in enabling connected vehicle features such as real-time diagnostics, over-the-air updates, and remote vehicle management.
[0034] The TCU configured on the EVs may perform several functions, which may include, but are not limited to, data aggregation and transmission, real-time monitoring and diagnostics, remote control and command execution, Over-the-Air (OTA) updates, connectivity management, and so on.
[0035] The TCU collects data from various ECUs (e.g., Battery Management System (BMS), Motor Control Unit (MCU), Inverter Control, etc.) through the CAN or other in-vehicle communication networks. The TCU aggregates and pre-processes this data before transmitting it to cloud servers for analysis and storage using protocols such as MQTT, HTTP, or CoAP. The TCU monitors the status of critical vehicle systems such as battery health, motor temperature, charging status, and more. It provides real-time fault diagnostics, generating alerts and fault codes that can be used for predictive maintenance.
[0036] Various ECUs in the EV transmit data through the CAN bus, including battery status, motor performance, and diagnostic information. The TCU aggregates data from all the ECUs and applies pre-processing (e.g., filtering, compression) to reduce data size. The data is transmitted to the cloud using secure cellular networks. The cloud servers can then analyze the data for real-time monitoring, diagnostics, and predictive analytics.
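As one hedged illustration of the aggregation and pre-processing step described above, raw ECU samples could be summarized per transmission window and compressed before uplink. The signal names, JSON schema, and MQTT topic string below are assumptions made for the example, not details from the disclosure.

```python
import json
import time
import zlib

def aggregate_ecu_data(samples):
    """Summarize raw ECU readings: keep the latest value per signal
    plus min/max over the window (a simple filtering step)."""
    summary = {}
    for name, value in samples:
        s = summary.setdefault(name, {"last": value, "min": value, "max": value})
        s["last"] = value
        s["min"] = min(s["min"], value)
        s["max"] = max(s["max"], value)
    return summary

def build_uplink(vehicle_id, samples):
    """Serialize and compress the aggregated window for transmission."""
    body = {"vehicle": vehicle_id, "ts": int(time.time()),
            "signals": aggregate_ecu_data(samples)}
    raw = json.dumps(body, separators=(",", ":")).encode()
    return zlib.compress(raw)  # smaller payload for the cellular link

samples = [("soc_pct", 81.0), ("motor_temp_c", 64.2),
           ("soc_pct", 80.9), ("motor_temp_c", 65.0)]
packet = build_uplink("EV-104", samples)
# An actual TCU would now transmit the packet, e.g. over MQTT:
# client.publish("ev/EV-104/telemetry", packet)
print(len(packet), "bytes")
```

Summarizing to last/min/max per signal is one plausible way to reduce data size before the cloud performs the real-time monitoring and predictive analytics described above.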
[0037] Hence, the TCU in the EV is a sophisticated component that integrates real-time monitoring, cloud connectivity, and control functionalities. It serves as the brain of the connected vehicle, enabling features such as remote diagnostics, OTA updates, and predictive maintenance. A well-configured TCU ensures the safety, reliability, and efficiency of modern electric vehicles.
[0038] In another example, EV 104, the electronic device 102, and the databases may be connected with each other directly and/or indirectly (for example, via direct communication, via an access point, and so on). In another example, the electronic device 102, and the databases may be connected with each other via a relay, a hub, and a gateway. It is understood that the electronic device 102, and the databases may be connected to each other in any of various manners (including those described above) and may be connected to each other in two or more of various manners (including those described above) at the same time.
[0039] The electronic device 102 referred to herein may be a device that assists the rider's safety while riding the vehicle by identifying obstacles or potholes. The electronic device 102 may also be a user device that is used by the user to connect to, and/or interact with, and/or control the operations of a plurality of EVs. Examples of the electronic device 102 may be, but are not limited to, a smartphone, a mobile phone, a video phone, a computer, a tablet personal computer (PC), a laptop, a wearable device, a personal digital assistant (PDA), an IoT device, or any other device that may be portable.
[0040] The electronic device 102 can be configured to capture media content which may include, but is not limited to, the path, lane, and road/route travelled by the rider. The media content of the electronic device 102 referred to herein may be, but is not limited to, audio, video, image, or any media content of the path or road travelled by the rider. Embodiments herein use terms such as "media content", "image", and so on, interchangeably to refer to the road or path captured by the electronic device 102.
[0041] The capturing unit/input unit of the electronic device 102 referred to herein can be any kind of device used to capture media. The capturing unit/input unit can be, but is not limited to, a digital camera, web camera, single-lens reflex (SLR) camera, Digital SLR (DSLR) camera, mirrorless camera, compact camera, video recorder, digital video recorder, and the like. The media content referred to herein can be, but is not limited to, video, image, audio, and the like. Embodiments herein use terms such as "capturing unit", "input unit", and so on, interchangeably to refer to the device/unit used to capture the path, lane, and road/route travelled by the rider.
[0042] In an embodiment, the EV 104 may be equipped with various sensors, actuators, and control modules that work in synchronization with the electronic device to enhance rider safety. It includes sensors and mechanisms for monitoring various vehicle parameters, such as speed, braking status, battery health, and environmental conditions. The EV sends sensor data to the electronic device, enabling it to analyze the road situation and provide the rider with relevant warnings and suggestions.
[0043] The EV may be equipped with sensors, which may include, but are not limited to, a pot-hole and obstacle detection sensor, a lane change sensor, a rearview sensor, and speed and position sensors. The pot-hole and obstacle detection sensor can detect road irregularities and obstacles in the vehicle's path. Lane change sensors can monitor surrounding vehicles and provide lane change alerts. Rearview sensors can assist in reverse parking, identifying obstacles from the rear view and monitoring vehicles approaching from behind. Speed and position sensors can track the vehicle's speed, acceleration, and position to assist with braking and stability control.
[0044] In an embodiment, the capturing unit of the electronic device 102 can capture media content related to road conditions, path, lane, and route. The captured media is then processed by the electronic device 102, which can assist the rider with a corresponding alert/warning based on an obstacle or pothole identified in the captured media.
[0045] The sensors can be configured to identify the speed of the vehicle. The vehicles passing on the side of the road can be captured using a sensor configured on the rear mirror of the vehicle. The captured media of the road conditions, along with the vehicle speed, can be processed to determine the lane to be used by the rider. The electronic device 102 can provide a lane change alert based on the current riding speed and the vehicles passing on the side, using the sensor on the rear view of the vehicle.
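The lane change logic described above could be sketched as a simple rule set combining lane obstruction, rear-approach speed, and side-lane availability. The thresholds and advisory labels below are illustrative assumptions, not values from the disclosure.

```python
def lane_change_advice(lane_obstructed, rear_vehicle_speed_kmh,
                       own_speed_kmh, side_lane_clear):
    """Return a lane advisory from simple, illustrative rules."""
    if lane_obstructed:
        # Current lane blocked: move over if a side lane is free, else slow down
        return "CHANGE_LANE" if side_lane_clear else "SLOW_DOWN"
    # A vehicle closing fast from the rear: suggest yielding the lane
    if rear_vehicle_speed_kmh - own_speed_kmh > 15 and side_lane_clear:
        return "YIELD_LANE"
    return "KEEP_LANE"

print(lane_change_advice(True, 0, 40, side_lane_clear=True))  # -> CHANGE_LANE
```

In a deployed system, the boolean inputs would come from the AI analysis of the captured media and the rear-view sensor rather than being supplied directly.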
[0046] In an embodiment, the electronic device 102 can process the captured media content of the road conditions using an Artificial Intelligence (AI) module on the cloud server. The AI can train the electronic device 102 to interpret and understand the obstacles on the road. The AI enhances rider and road safety by processing real-time data, interpreting road conditions, and providing actionable recommendations for riders. The system utilizes AI models, specifically Machine Learning (ML) and Deep Learning (DL), to analyze complex scenarios and enable intelligent decision-making.
[0047] The embodiments can stop the vehicle or activate handlebar vibration on identifying an obstacle. The embodiments can prevent collisions and enhance rider safety by detecting obstacles in real time and taking appropriate actions based on the distance and speed of approach.
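One plausible way to choose between stopping the vehicle and activating handlebar vibration, based on the distance and speed of approach mentioned above, is a time-to-collision test. The thresholds below are invented for illustration and do not appear in the disclosure.

```python
def obstacle_action(distance_m, closing_speed_mps):
    """Pick an intervention based on time-to-collision (TTC).
    Thresholds (3 s alert, 1.5 s vibrate, 0.8 s stop) are illustrative."""
    if closing_speed_mps <= 0:
        return "NONE"                    # not approaching the obstacle
    ttc = distance_m / closing_speed_mps
    if ttc < 0.8:
        return "AUTO_STOP"               # too close: stop the vehicle
    if ttc < 1.5:
        return "HANDLEBAR_VIBRATION"     # haptic warning to the rider
    if ttc < 3.0:
        return "VISUAL_ALERT"            # early on-screen warning
    return "NONE"

print(obstacle_action(5.0, 5.0))   # TTC = 1 s -> HANDLEBAR_VIBRATION
```

Grading the intervention by TTC rather than raw distance keeps the behavior consistent across riding speeds: the same 5 m gap is benign at walking pace but critical at highway speed.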
[0048] FIG. 2 depicts a block diagram illustrating various units of the electronic device, which is used to assist the rider while riding the EV. The electronic device 102 includes a memory 202, a communication interface 204, an input unit 206, an output unit 208, a sensor unit 214, a controller/processor 210, and a database 212.
[0049] The memory 202 referred to herein may include at least one type of storage medium from among a flash memory type storage medium, a hard disk type storage medium, a multi-media card micro type storage medium, a card type memory (for example, an SD or an XD memory), random-access memory (RAM), static RAM (SRAM), read-only memory (ROM), electrically erasable programmable ROM (EEPROM), programmable ROM (PROM), a magnetic memory, a magnetic disk, or an optical disk.
[0050] The memory 202 may store at least one of, but is not limited to, rider behavior, road condition history, obstacle detection, and lane identification. The memory 202 can store incoming data received from the sensor, obstacle/pot-hole detection, notification/alerts providing to the user, results and decision taken by the rider.
[0051] The memory 202 may also store the learning module, neural network, and obstacle identification module 304. The learning module of the neural network can be processed by the controller 210 to obtain the input, i.e., the media content captured by the capturing unit of the electronic device 102. The learning module can be provided with the nature of obstacles, pot-holes, lane information, the rider's speed, and previous alerts/notifications provided to the user.
[0052] The memory 202 stores pre-trained AI models that are periodically updated through cloud-based learning. This includes model parameters, neural network weights, and other elements essential for accurate decision-making.
[0053] Some edge AI models, which operate directly on the electronic device, are stored locally to enable edge computing capabilities. This setup ensures that the device can perform basic AI operations even without cloud connectivity.
[0054] The learning module of the neural network can be processed by controller 210 to obtain the obstacles/ pot-holes based on the road conditions.
[0055] Examples of the neural network may be, but are not limited to, an Artificial Intelligence (AI) model, a multi-class Support Vector Machine (SVM) model, a Convolutional Neural Network (CNN) model, a deep neural network (DNN), a recurrent neural network (RNN), a restricted Boltzmann Machine (RBM), a deep belief network (DBN), a bidirectional recurrent deep neural network (BRDNN), generative adversarial networks (GAN), a regression-based neural network, a deep reinforcement model (with ReLU activation), a deep Q-network, and so on. The neural network may include a plurality of nodes, which may be arranged in layers. Examples of the layers may be, but are not limited to, a convolutional layer, an activation layer, an average pool layer, a max pool layer, a concatenated layer, a dropout layer, a fully connected layer, a SoftMax layer, and so on. Each layer has a plurality of weight values and performs a layer operation through calculation of a previous layer and an operation of a plurality of weights/coefficients. A topology of the layers of the neural network may vary based on the type of the respective network. In an example, the neural network may include an input layer, an output layer, and a hidden layer. The input layer receives a layer input and forwards the received layer input to the hidden layer. The hidden layer transforms the layer input received from the input layer into a representation, which may be used for generating the output in the output layer. The hidden layers extract useful/low-level features from the input, introduce non-linearity in the network, and reduce a feature dimension to make the features equivalent to scale and translation. The nodes of the layers may be fully connected via edges to the nodes in adjacent layers.
The input received at the nodes of the input layer may be propagated to the nodes of the output layer via an activation function that calculates the states of the nodes of each successive layer in the network based on coefficients/weights respectively associated with each of the edges connecting the layers.
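The input-to-output propagation described above can be sketched as a minimal fully connected forward pass. The layer sizes and the choice of ReLU as the activation function are illustrative, not specified by the disclosure.

```python
import numpy as np

def relu(x):
    # Activation function applied at each layer's nodes
    return np.maximum(0.0, x)

def forward(x, weights, biases):
    """Propagate an input through successive fully connected layers:
    each layer output is activation(W @ previous_layer + b),
    the weights/coefficients being associated with the connecting edges."""
    h = x
    for W, b in zip(weights, biases):
        h = relu(W @ h + b)
    return h

rng = np.random.default_rng(0)
# 4 input features -> 8 hidden units -> 2 outputs (shapes are illustrative)
weights = [rng.standard_normal((8, 4)), rng.standard_normal((2, 8))]
biases = [np.zeros(8), np.zeros(2)]
out = forward(rng.standard_normal(4), weights, biases)
print(out.shape)  # -> (2,)
```

In a real detector the weights would come from training rather than random initialization, and the final layer would typically use a task-specific activation such as SoftMax instead of ReLU.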
[0056] The obstacle identifying module 304, lane change identifying module 306, and reverse park assistance module 308 may be trained using at least one learning method. Examples of the learning method may be, but are not limited to, supervised learning, unsupervised learning, semi-supervised learning, reinforcement learning, regression-based learning, and so on. The modules may be neural network models in which several layers, a sequence for processing the layers, and parameters related to each layer may be known and fixed for performing the intended functions. Examples of the parameters related to each layer may be, but are not limited to, activation functions, biases, input weights, output weights, and so on, related to the layers. A function associated with the learning method may be performed through the non-volatile memory, the volatile memory, and/or the controller 210. The controller 210 may include one or a plurality of processors. At this time, the one or a plurality of processors may be a general-purpose processor, such as a central processing unit (CPU), an application processor (AP), or the like, a graphics-only processing unit such as a graphics processing unit (GPU), a visual processing unit (VPU), and/or an Artificial Intelligence (AI)-dedicated processor such as a neural processing unit (NPU).
[0057] Here, being provided through learning means that, by applying the learning method to a plurality of learning data, a predefined operating rule or the neural network of the desired characteristic is made. Functions of the neural network and the modules may be performed in the electronic device 102 itself, in which the learning according to an embodiment is performed, and/or may be implemented through a separate server/system.
[0058] The communication interface 204 may include one or more components, which enable the electronic device 102 to communicate with another device (for example, the EV 104) using the communication methods that have been supported by the communication network 106. The communication interface 204 may include the components such as a wired communicator, a short-range communicator, a mobile/wireless communicator, and a broadcasting receiver.
The wired communicator may enable the electronic device 102 to communicate with the other devices using communication methods such as, but not limited to, wired LAN, Ethernet, and so on. The short-range communicator may enable the electronic device 102 to communicate with the other devices using communication methods such as, but not limited to, Bluetooth low energy (BLE), near field communication (NFC), WLAN (or Wi-Fi), Zigbee, infrared data association (IrDA), Wi-Fi Direct (WFD), UWB communication, Ant+ (interoperable wireless transfer capability) communication, shared wireless access protocol (SWAP), wireless broadband internet (Wibro), wireless gigabit alliance (WiGig), and so on. The mobile communicator may transmit/receive wireless signals with at least one of a base station, an external terminal, or a server on a mobile communication network/cellular network. For example, the wireless signal may include a speech call signal, a video telephone call signal, or various types of data, according to transmitting/receiving of text/multimedia messages. The broadcasting receiver may receive a broadcasting signal and/or broadcasting-related information from the outside through broadcasting channels. The broadcasting channels may include satellite channels and ground wave channels. In an embodiment, the electronic device 102 may or may not include the broadcasting receiver.
[0060] The input unit 206 may be configured to enable the user to interact with the electronic device 102. The input unit 206 can be a capturing unit configured to capture media content, which may include, but is not limited to, the road condition, the sides of the lane, the rear view of the vehicle, and so on. The capturing unit/input unit referred to herein can be any kind of device used to capture inputs (video input, image input, or any media input) from the environment in which the rider is riding the vehicle.
[0061] The input unit 206 referred to herein can be any kind of device used to capture media. The input unit 206 can be, but is not limited to, a digital camera, media capturing device, web camera, single-lens reflex (SLR) camera, Digital SLR (DSLR) camera, mirrorless camera, compact camera, video recorder, digital video recorder, and the like. The media referred to herein can be, but is not limited to, video, image, and the like.
[0062] The output unit 208 may be configured to assist the rider with the obstacles/ pot-holes identified on the road (in-front) and also on the rear side of the vehicle, lane change notification, and so on. The output unit 208 may include at least one of, for example, but is not limited to, a display, a User Interface (UI) module, a light-emitting device, and so on. The UI module may provide a specialized UI or graphical user interface (GUI), or the like, synchronized to the electronic device 102, according to the applications.
[0063] The sensor unit 214 may include sensors for managing the battery, motor and powertrain, environment and safety, chassis and suspension, the charging system, interior and driver assistance, and so on. The battery managing sensors comprise a voltage sensor, current sensor, temperature sensor, coolant flow sensor, and so on. The motor and powertrain sensors comprise a position sensor, speed sensor, torque sensor, temperature sensor, and so on. The environmental and safety sensors comprise an ultrasonic sensor and a Light Detection and Ranging (LiDAR) sensor.
[0064] The controller 210 may include one or a plurality of processors. The one or a plurality of processors may be a general-purpose processor, such as a central processing unit (CPU), an application processor (AP), or the like, a graphics-only processing unit such as a graphics processing unit (GPU), a visual processing unit (VPU), and/or an Artificial Intelligence (AI)-dedicated processor such as a neural processing unit (NPU).
[0065] In another example, the electronic device 102 and the database 212 may be connected with each other directly (for example, via direct communication, via an access point, and so on). In another example, the electronic device 102 and the database 212 may be connected with each other via a relay, a hub, and a gateway. It is understood that the electronic device 102 and the database 212 may be connected to each other in any of various manners (including those described above) and may be connected to each other in two or more of various manners (including those described above) at the same time.
[0066] FIG. 3 depicts a block diagram illustrating various units of a rider safety assisting system based on the identification of obstacles, lane change, and reverse parking assistance using a learning module. As depicted in FIG. 3, the rider safety assisting system 300 comprises a media capturing module 302, an obstacle identifying and notification module 304, a lane change identifying and notification module 306, a reverse park assistance module 308, and a learning module 310.
[0067] The media capturing module 302 can be configured for capturing visual data from the surroundings using electronic device 104 mounted on the vehicle, such as front, rear, or side cameras. The captured media serves as the main input for the subsequent modules, enabling real-time processing and analysis. This module supports advanced image processing techniques such as frame extraction, object recognition, and segmentation to interpret the visual feed and identify key elements such as road conditions, pot-hole identification, road debris, lane markings, traffic signs, and road boundaries. The captured visual content is critical for features like lane departure warnings, blind spot monitoring, and reverse parking assistance. In some configurations, the module may incorporate edge AI models for pre-processing and filtering the visual data before it is sent to the main processing unit. It can also implement image stabilization and noise reduction techniques to improve accuracy, particularly in challenging driving conditions.
[0068] The media capturing module 302 referred to herein can be any kind of device that can capture media. The media content referred to herein can be, but is not limited to, video, image, audio, and the like. Embodiments herein use terms such as "media capturing module", "capturing unit", "input unit", and so on, interchangeably to refer to the module/device/unit used to capture media for identifying obstacles and providing rider safety.
[0069] The obstacle identifying and notification module 304 can be configured to process the media contents from the media capturing module 302 and other sensors, such as ultrasonic or radar, to detect and classify obstacles on the road. It employs AI-based object detection algorithms such as YOLO (You Only Look Once), SSD (Single Shot Detector), and Faster R-CNN (Region-Based Convolutional Neural Network) to recognize various objects, including vehicles, pedestrians, potholes, road debris, and other static or dynamic entities. This module categorizes obstacles as either static (e.g., road barriers, parked vehicles) or dynamic (e.g., moving cars, pedestrians), and performs risk analysis to determine the severity of the detected obstacles based on their size, proximity, and speed. If an obstacle poses an immediate threat, the system generates alerts for the rider through visual, auditory, or haptic means. In advanced configurations, this module may trigger automatic braking or steering adjustments to prevent collisions. The use of sensor fusion techniques allows the module to integrate data from multiple sensors, creating a unified view of the surroundings and improving detection accuracy.
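The severity grading described above (size, proximity, speed) can be sketched as a simple time-to-collision check. This is a minimal illustration only; the threshold values, field names, and grading labels are assumptions for the sketch, not part of the specification:

```python
from dataclasses import dataclass

@dataclass
class Obstacle:
    distance_m: float    # range to the obstacle, in metres
    closing_mps: float   # closing speed, m/s (positive = approaching the rider)
    size_m2: float       # apparent size derived from the detector's bounding box

def risk_level(obs: Obstacle, ttc_limit_s: float = 3.0) -> str:
    """Grade a detected obstacle by time-to-collision (TTC) and size.

    Thresholds are illustrative assumptions, not taken from the specification.
    """
    if obs.closing_mps <= 0:
        return "LOW"                      # obstacle is not closing on the rider
    ttc = obs.distance_m / obs.closing_mps
    if ttc < ttc_limit_s and obs.size_m2 > 0.5:
        return "HIGH"                     # large object, imminent: alert immediately
    if ttc < 2 * ttc_limit_s:
        return "MEDIUM"                   # worth a visual/auditory warning
    return "LOW"
```

A "HIGH" result would map to the immediate visual/auditory/haptic alert path; "MEDIUM" to an advisory warning.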
[0070] The lane change identifying and notification module (306) ensures that lane changes are performed safely by continuously monitoring the position and movement of surrounding vehicles using inputs from rear-view cameras and side sensors. It uses lane detection algorithms such as Hough Transform and Convolutional Neural Networks (CNNs) to identify lane boundaries and markers, while rear-view and side cameras capture images to detect vehicles that may be in the rider's blind spot. The module analyzes vehicle speed and trajectory to assess whether a lane change is safe. If a vehicle is detected in the blind spot or approaching at high speed, it provides real-time alerts to the rider. This module can override rider commands if a lane change is deemed unsafe, issuing an override warning or vibrating the handlebar to discourage the maneuver. In addition to safety alerts, it can suggest optimal lane changes based on the current speed, traffic density, and road conditions.
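The lane-change safety decision described above (blind spot occupancy, rear gap, and closing speed) can be sketched as a short rule check. The gap and speed limits below are illustrative assumptions; the real module would derive its inputs from the rear-view camera and side sensors:

```python
def lane_change_safe(blind_spot_occupied: bool,
                     rear_gap_m: float,
                     rear_closing_mps: float,
                     min_gap_m: float = 15.0,
                     max_closing_mps: float = 5.0) -> bool:
    """Decide whether a lane change should be recommended to the rider.

    Threshold values are illustrative assumptions, not from the specification.
    """
    if blind_spot_occupied:
        return False                        # a vehicle is already alongside
    if rear_gap_m < min_gap_m:
        return False                        # not enough room behind
    if rear_closing_mps > max_closing_mps:
        return False                        # rear vehicle approaching too fast
    return True
```

A `False` result corresponds to the override warning or handlebar vibration discouraging the maneuver.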
[0071] The reverse park assistance module (308) provides guidance during reverse parking maneuvers. It uses rear-view cameras and ultrasonic sensors to monitor the area behind the vehicle, generating a live feed with overlaid trajectory lines to show the safest path for parking. The system calculates the expected path based on the vehicle's steering angle and the detected obstacles. Visual guidance and auditory cues indicate proximity to obstacles, alerting the rider when the vehicle is too close to a stationary or moving object. This module is effective in complex or tight parking scenarios, where rear visibility is limited. It uses image recognition and pattern analysis to differentiate between static objects like curbs and walls and moving objects like pedestrians or animals. Advanced implementations may include AI-based path planning algorithms that suggest the best approach for parking, reducing the effort required from the rider.
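The trajectory-line calculation from the steering angle can be sketched with a kinematic bicycle model. The wheelbase, path length, and coordinate convention below are assumptions for illustration only; the actual module would project these points onto the live rear-camera feed:

```python
import math

def rear_trajectory(steer_deg: float, wheelbase_m: float = 1.4,
                    path_len_m: float = 3.0, n: int = 10):
    """Predict (x, y) points of the rear path for overlaying trajectory lines.

    Kinematic bicycle model; wheelbase and path length are example values.
    x is the lateral offset; y is distance travelled backwards (negative).
    """
    delta = math.radians(steer_deg)
    if abs(delta) < 1e-9:                       # wheels straight: reverse in a line
        return [(0.0, -path_len_m * i / n) for i in range(1, n + 1)]
    radius = wheelbase_m / math.tan(delta)      # turning radius at the rear axle
    pts = []
    for i in range(1, n + 1):
        s = path_len_m * i / n                  # arc length covered so far
        theta = s / radius                      # heading change along the arc
        pts.append((radius * (1 - math.cos(theta)), -radius * math.sin(theta)))
    return pts
```

With zero steering angle the points lie on a straight line behind the vehicle; a non-zero angle bends the overlay toward the turn.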
[0072] The learning module (310) is configured for continuous system improvement through machine learning and deep learning techniques. It records historical data from various driving scenarios, including detected obstacles, lane change patterns, and rider interactions. This data is used to train new models and update existing ones, allowing the system to learn from real-world experiences and refine its performance. The learning module personalizes the system's responses based on rider-specific patterns and behaviors, such as preferred speed ranges, typical routes, and habitual lane change tendencies. It can receive over-the-air (OTA) updates to incorporate new AI models and safety algorithms, ensuring that the system stays up to date with the latest advancements. By aggregating data from multiple riders, the learning module enables the system to generalize to new patterns and improve its overall capabilities for all users.
[0073] The system 300 begins with the media capturing module by gathering visual data/media contents from the surroundings. The received data, combined with sensor inputs, is processed by the obstacles identifying and lane change modules to detect potential hazards and evaluate safe lane change options. If a hazard is detected, the system generates real-time alerts through visual, auditory, or haptic feedback mechanisms. During reverse maneuvers, the reverse park assistance module provides real-time guidance using visual aids and audio cues. The learning module continuously updates its internal models based on these interactions, improving the system's decision-making capabilities over time. This modular architecture ensures that the rider safety assisting system can provide comprehensive safety assistance, covering obstacle detection, lane change suggestions, and parking assistance, all while adapting to rider preferences and real-world driving conditions.
[0074] FIG. 4 is an example diagram illustrating the identification of obstacles using the electronic device based on at least one received media content from a capturing unit/media acquisition unit. As illustrated in FIG. 4, it involves mounting the electronic device 102 on the vehicle's front holder, which is specifically designed to secure the device in an accessible position. The holder is equipped with a weather protection cover to ensure that the electronic device 102 is shielded from environmental factors such as rain, dust, and extreme temperatures, ensuring continuous operation in diverse riding conditions. The mounting is done in a manner that provides a clear and unobstructed view of the display screen, allowing the rider to easily see visual alerts, navigation instructions, and safety notifications without taking their eyes off the road. Once mounted, the electronic device 102 is locked into place to prevent accidental dislodgement during high-speed maneuvers or uneven terrain.
[0075] After the device is securely mounted, the next step is to connect it to a power source. This can be done using either a wired charger or an induction charging pad, depending on the rider's preference and the vehicle's configuration. The charger is necessary to keep the electronic device 102 powered throughout the ride, as the continuous use of GPS navigation, image processing, and data analysis can rapidly drain the battery. Connecting the electronic device 102 to a charger ensures that the rider assist system remains fully operational for long-distance rides, where uninterrupted functionality is crucial for safety.
[0076] Once the device is powered and ready, the rider must launch the dedicated application and set it to "Rider Assist Mode." This mode is a specialized configuration within the electronic device 102's application that locks the interface into a safety-focused setup, ensuring that only critical information related to riding safety and navigation is displayed. When in Rider Assist Mode, the app suppresses incoming notifications such as messages or calls to minimize distractions, allowing the rider to focus solely on the road and the safety alerts generated by the system. This mode prioritizes safety data from the vehicle's sensors, displaying only relevant information such as speed, lane change alerts, obstacle warnings, and road conditions.
[0077] The next step is for the rider to wear a Bluetooth Low Energy (BLE)-enabled helmet that is paired with the electronic device 102. The BLE helmet serves as an essential component of the rider safety system by enabling the rider to receive real-time audio alerts and navigation instructions directly through the helmet's built-in speakers. This setup allows the rider to stay informed about potential hazards, lane change suggestions, and other safety warnings without needing to look at the display screen, enhancing situational awareness and reducing cognitive load. The helmet can also support two-way communication, allowing the rider to send voice commands to the electronic device 102, such as requesting navigation updates or activating specific safety features.
[0078] The final step in the setup involves using the video images captured by the electronic device 102 for further analysis and action. The system leverages the device's capturing unit/camera to capture real-time video feeds of the road conditions, which are then processed using AI algorithms within the application. The AI analyzes these video feeds to identify obstacles, lane markings, potholes, and other road features. Based on this analysis, the system triggers relevant alerts and actions, such as obstacle warnings, lane change alerts, or speed adjustment suggestions. This step is critical for enabling real-time image-based safety assistance, where the electronic device 102 not only acts as a display unit but also serves as the primary processing hub for detecting and responding to road conditions.
[0079] FIG. 5 is an example diagram illustrating the capturing of the obstacles using the capturing unit/media acquisition unit of the electronic device, and providing the warning to the rider. As illustrated in FIG. 5, using AI-based image recognition, the processor identifies road surface irregularities such as potholes and bumps. The system can differentiate between minor and severe potholes, assigning a risk level to each. Upon detection, the electronic device 102 provides an alert on the display and, if paired with a BLE-enabled helmet, sends an audio warning to the rider. This ensures that the rider has enough time to avoid the hazard, preventing potential accidents or loss of balance.
[0080] FIG. 6 is an example diagram illustrating the identification of a lane change and providing the instructions based on the destination and speed of the rider. As illustrated in FIG. 6, the lane change instruction feature uses data from side and rear cameras to analyze adjacent lanes and detect approaching vehicles. When a lane change is deemed safe, the electronic device 102 provides a visual or audio instruction to the rider, guiding them to switch lanes smoothly. Conversely, if a vehicle is detected in the blind spot, the electronic device 102 issues a lane change alert, warning the rider not to proceed with the maneuver. This system enhances the rider's awareness of nearby traffic and prevents risky lane changes.
[0081] FIG. 7 is an example diagram illustrating the usage of the electronic device as a cluster for processing the operations required to drive the EV. As illustrated in FIG. 7, the video-on-mobile cluster aspect turns the electronic device 102 into a digital instrument cluster that provides a visual overview of the rider's environment. The electronic device 102 displays the live video feed of the road, augmented with visual indicators for detected hazards, lane boundaries, and suggested paths. This feature is crucial for real-time navigation and situational awareness, as it allows the rider to make informed decisions based on the current road conditions. The mobile-as-cluster is a simplified digital dashboard or display interface designed to provide fundamental ride information. The cluster may include, but is not limited to, a speedometer, battery status indicator, odometer/trip meter, current riding mode, time and date, remaining range estimate, speed limit warning, and so on. The speedometer displays the current speed of the vehicle; the battery status indicator provides the state of charge (SoC); the odometer displays the total distance traveled and the distance covered in the current trip; the current riding mode indicates the selected riding mode (e.g., Eco, Normal, Sport, etc.).
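The remaining range estimate shown on the cluster can be sketched as a simple computation from the state of charge. The capacity and consumption figures below are illustrative assumptions, not values from the specification:

```python
def remaining_range_km(soc_percent: float,
                       battery_capacity_wh: float = 2000.0,
                       consumption_wh_per_km: float = 25.0) -> float:
    """Estimate remaining range from the battery state of charge (SoC).

    Capacity and per-km consumption are example figures; a production
    system would use live values reported by the battery sensors.
    """
    usable_wh = battery_capacity_wh * soc_percent / 100.0
    return usable_wh / consumption_wh_per_km
```

For example, at 50% SoC with the assumed 2000 Wh pack and 25 Wh/km consumption, the cluster would display a 40 km remaining range.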
[0082] The mobile-as-cluster uses the rider's electronic device as the primary digital dashboard and control unit for displaying real-time information about the vehicle's status, navigation, and safety alerts. This leverages the powerful processing capabilities, high-resolution screens, and connectivity features inherent in modern smartphones to create a cost-effective, customizable, and multifunctional rider interface.
[0083] Therefore, in the rider safety assisting system, the mobile as cluster transforms the mobile device into a smart instrument cluster, integrating multiple functionalities such as vehicle data monitoring, AI-based safety alerts, real-time navigation, and media display. This innovative approach not only reduces the need for expensive built-in hardware but also makes the system adaptable and user-friendly.
[0084] FIG. 8 is an example diagram for assisting the EV with reverse parking based on the identified obstacles using Artificial Intelligence (AI) and other components. As illustrated in FIG. 8, the reverse parking assistance module 308 integrates multiple elements, including rear-facing cameras, ultrasonic sensors, and AI processing units of the learning module 310 within the electronic device 102, to provide a comprehensive view of the environment behind the vehicle. The capturing unit/media acquisition unit captures the real-time video feeds of the area behind the rider, and the ultrasonic sensors measure the distance to nearby objects. These inputs are then processed using AI to identify potential obstacles, determine the safest path for parking, and calculate trajectory lines based on the vehicle's position and steering angle.
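The mapping from ultrasonic distance readings to auditory cues during reversing can be sketched as follows. The cue names and the 30 cm/100 cm thresholds are assumptions for illustration, not values from the specification:

```python
def reverse_proximity_cue(distances_cm):
    """Map rear ultrasonic sensor readings to an auditory cue level.

    `distances_cm` holds one reading per rear sensor; the nearest
    reading drives the cue. Thresholds are illustrative assumptions.
    """
    nearest = min(distances_cm)
    if nearest < 30:
        return "STOP"      # e.g., continuous tone: obstacle immediately behind
    if nearest < 100:
        return "SLOW"      # e.g., fast beeps: obstacle getting close
    return "CLEAR"         # no cue needed
```

The same nearest-reading logic could also gate the on-screen trajectory overlay, switching it to a warning colour as the cue escalates.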
[0085] Hence, the reverse parking assistance module 308 analyzes the captured video and sensor data to provide visual guidance to the rider. It overlays trajectory lines on the video feed displayed on the screen of the electronic device 102, indicating the suggested path for reversing safely.
[0086] FIG. 9 is a flow diagram illustrating a method for assisting rider and road user safety by identifying obstacles and potholes while riding the vehicle.
[0087] At step 1002, the method includes, processing, by an electronic device (102), at least one received media content from a media acquisition unit configured on the EV, wherein the media acquisition unit captures real-time media content of road conditions, lane boundaries, and vehicle surroundings.
[0088] At step 1004, the method includes, analyzing, by the electronic device (102), the at least one media content using an artificial intelligence (AI) module to identify potential obstacles, potholes, and lane boundaries.
[0089] At step 1006, the method includes, generating, by the electronic device (102), a real-time alert to the rider based on the analysis, by determining whether a detected obstacle poses a risk to the rider; and
[0090] At step 1008, the method includes, providing, by the electronic device (102), a lane change recommendation on identifying that the current lane is obstructed, or if a vehicle is approaching from the rear.
[0091] The various actions, acts, blocks, steps, or the like in the method and the flow diagram 1000 may be performed in the order presented, in a different order or simultaneously. Further, in some embodiments, some of the actions, acts, blocks, steps, or the like may be omitted, added, modified, skipped, or the like without departing from the scope of the invention.
[0092] The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the embodiments as described herein.
Claims:
1. A method for assisting rider and road user safety while riding an electric vehicle (EV), the method comprising:
processing, by an electronic device (102), at least one received media content from a media acquisition unit configured on the EV, wherein the media acquisition unit captures real-time media content of road conditions, lane boundaries, and vehicle surroundings;
analyzing, by the electronic device (102), the at least one media content using an artificial intelligence (AI) module to identify potential obstacles, potholes, and lane boundaries;
generating, by the electronic device (102), a real-time alert to the rider based on the analysis, by determining whether a detected obstacle poses a risk to the rider; and
providing, by the electronic device (102), a lane change recommendation on identifying that the current lane is obstructed, or if a vehicle is approaching from the rear, wherein the electronic device (102) is configured as a fixed or removable device such as a mobile phone.

2. The method as claimed in claim 1, wherein the media acquisition unit can be mounted on the front side of the vehicle for capturing real-time video feeds of the road ahead, and on the rear side of the vehicle for capturing reverse maneuvers.

3. The method as claimed in claim 1, wherein the AI module is configured to detect lane boundaries and to classify obstacles, vehicles, pedestrians, potholes, and road debris.

4. The method as claimed in claim 1, wherein the generated real-time alert comprises at least one of a visual alert, an audio alert, or a haptic alert provided to the rider through a Bluetooth Low Energy (BLE)-enabled helmet.

5. The method as claimed in claim 1, wherein the lane change recommendation is based on the speed, distance, and trajectory of nearby vehicles detected using side-mounted sensors and rear-view cameras.

6. The method as claimed in claim 1, further comprising updating the AI module in the electronic device (102) based on newly acquired media content and rider feedback, enabling the AI module to adapt to new road conditions and scenarios.

7. The method as claimed in claim 1, wherein the electronic device (102) is configured as a mobile cluster, displaying real-time vehicle information such as speed, battery status, trip information, and real-time alerts on the display screen of the electronic device (102), enabling the rider to monitor critical information.

8. A system (300) for assisting rider and road user safety while riding an electric vehicle (EV), the system comprising:
an electronic device (102);
a cloud server (110);
a hardware processor (210), wherein the hardware processor is configured to:
process at least one received media content from a media acquisition unit configured on the EV, wherein the media acquisition unit captures real-time media content of road conditions, lane boundaries, and vehicle surroundings;
analyze the at least one media content using an artificial intelligence (AI) module to identify potential obstacles, potholes, and lane boundaries;
generate a real-time alert to the rider based on the analysis, by determining whether a detected obstacle poses a risk to the rider; and
provide a lane change recommendation on identifying that the current lane is obstructed, or if a vehicle is approaching from the rear, wherein the electronic device (102) is configured as a fixed or removable device such as a mobile phone.

9. The system (300) as claimed in claim 8, wherein the media acquisition unit can be mounted on the front side of the vehicle for capturing real-time video feeds of the road ahead, and on the rear side of the vehicle for capturing reverse maneuvers.

10. The system (300) as claimed in claim 8, wherein the AI module is configured to detect lane boundaries and to classify obstacles, vehicles, pedestrians, potholes, and road debris.

11. The system (300) as claimed in claim 8, wherein the generated real-time alert comprises at least one of a visual alert, an audio alert, or a haptic alert provided to the rider through a Bluetooth Low Energy (BLE)-enabled helmet.

12. The system (300) as claimed in claim 8, wherein the lane change recommendation is based on the speed, distance, and trajectory of nearby vehicles detected using side-mounted sensors and rear-view cameras.

13. The system (300) as claimed in claim 8, further comprising updating the AI module in the electronic device (102) based on newly acquired media content and rider feedback, enabling the system to adapt to new road conditions and scenarios.

14. The system (300) as claimed in claim 8, wherein the electronic device (102) is configured as a mobile cluster, displaying real-time vehicle information such as speed, battery status, trip information, and real-time alerts on the display screen of the electronic device (102), enabling the rider to monitor critical information.

15. The system (300) as claimed in claim 8, wherein the electronic device (102) is mounted on the dashboard of the vehicle as a fixed or removable digital instrument cluster and configured to display real-time speed, trip information, and AI-based alerts to the rider.

Documents

Name — Date
202441082077-COMPLETE SPECIFICATION [28-10-2024(online)].pdf — 28/10/2024
202441082077-DRAWINGS [28-10-2024(online)].pdf — 28/10/2024
202441082077-EVIDENCE FOR REGISTRATION UNDER SSI [28-10-2024(online)].pdf — 28/10/2024
202441082077-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [28-10-2024(online)].pdf — 28/10/2024
202441082077-FORM 1 [28-10-2024(online)].pdf — 28/10/2024
202441082077-FORM 18A [28-10-2024(online)].pdf — 28/10/2024
202441082077-FORM 3 [28-10-2024(online)].pdf — 28/10/2024
202441082077-FORM FOR SMALL ENTITY(FORM-28) [28-10-2024(online)].pdf — 28/10/2024
202441082077-FORM FOR STARTUP [28-10-2024(online)].pdf — 28/10/2024
202441082077-FORM-5 [28-10-2024(online)].pdf — 28/10/2024
202441082077-FORM-9 [28-10-2024(online)].pdf — 28/10/2024
202441082077-FORM28 [28-10-2024(online)].pdf — 28/10/2024
202441082077-POWER OF AUTHORITY [28-10-2024(online)].pdf — 28/10/2024
202441082077-REQUEST FOR EARLY PUBLICATION(FORM-9) [28-10-2024(online)].pdf — 28/10/2024