Deep Learning-Powered Image Recognition and Path Planning System for Autonomous Vehicles in 5G-Enabled Smart Cities

Application Type: Ordinary Application
Status: Published
Filed on: 12 November 2024

Abstract

This invention presents a Deep Learning-Powered Image Recognition and Path Planning System for Autonomous Vehicles in 5G-Enabled Smart Cities, designed for efficient navigation in dense urban environments. The system combines image recognition and multi-sensor fusion to detect and navigate around obstacles, dynamically adapting routes with the support of reinforcement learning algorithms. Real-time 5G connectivity ensures fast data exchange with cloud resources, enabling the AV to make instant adjustments in complex urban environments. The system is well suited to applications requiring high precision, such as ride-hailing and deliveries in densely populated areas. Accompanying drawing: [FIG. 1]

Patent Information

Application ID: 202441087355
Invention Field: PHYSICS
Date of Application: 12/11/2024
Publication Number: 47/2024

Inventors

Name | Address | Country | Nationality
Dr. G. Sharada | Professor & HoD, Department of Information Technology, Malla Reddy College of Engineering & Technology (UGC-Autonomous), Maisammaguda, Dhulapally, Secunderabad, Telangana, India. Pin Code: 500100 | India | India
Ms. N. Prameela | Associate Professor, Department of Information Technology, Malla Reddy College of Engineering & Technology (UGC-Autonomous), Maisammaguda, Dhulapally, Secunderabad, Telangana, India. Pin Code: 500100 | India | India
Ms. B. Aruna Kumari | Associate Professor, Department of Information Technology, Malla Reddy College of Engineering & Technology (UGC-Autonomous), Maisammaguda, Dhulapally, Secunderabad, Telangana, India. Pin Code: 500100 | India | India
Ms. T. Shilpa | Assistant Professor, Department of Information Technology, Malla Reddy College of Engineering & Technology (UGC-Autonomous), Maisammaguda, Dhulapally, Secunderabad, Telangana, India. Pin Code: 500100 | India | India
Ms. P. Swetha | Assistant Professor, Department of Information Technology, Malla Reddy College of Engineering & Technology (UGC-Autonomous), Maisammaguda, Dhulapally, Secunderabad, Telangana, India. Pin Code: 500100 | India | India
Mr. P. V. Naresh | Assistant Professor, Department of Information Technology, Malla Reddy College of Engineering & Technology (UGC-Autonomous), Maisammaguda, Dhulapally, Secunderabad, Telangana, India. Pin Code: 500100 | India | India
Mr. T. Srinidhi | Assistant Professor, Department of Information Technology, Malla Reddy College of Engineering & Technology (UGC-Autonomous), Maisammaguda, Dhulapally, Secunderabad, Telangana, India. Pin Code: 500100 | India | India
Ms. V. Sudha Rani | Assistant Professor, Department of Information Technology, Malla Reddy College of Engineering & Technology (UGC-Autonomous), Maisammaguda, Dhulapally, Secunderabad, Telangana, India. Pin Code: 500100 | India | India
Ms. G. Likitha Reddy | Assistant Professor, Department of Information Technology, Malla Reddy College of Engineering & Technology (UGC-Autonomous), Maisammaguda, Dhulapally, Secunderabad, Telangana, India. Pin Code: 500100 | India | India

Applicants

Name | Address | Country | Nationality
Malla Reddy College of Engineering & Technology | Department of Information Technology, Malla Reddy College of Engineering & Technology (UGC-Autonomous), Maisammaguda, Dhulapally, Secunderabad, Telangana, India. Pin Code: 500100 | India | India

Specification

Description:
[001] This invention pertains to the field of autonomous vehicles (AVs), specifically addressing advanced navigation systems enabled by deep learning for image recognition and real-time processing within 5G-enabled smart cities. The system uses deep learning, multi-sensor fusion, and reinforcement learning to facilitate dynamic object detection, localization, and route planning. By leveraging 5G connectivity, the system operates with low-latency data processing, making it adaptable and precise for high-density urban environments.
BACKGROUND OF THE INVENTION
[002] The following description provides information that may be useful in understanding the present invention. It is not an admission that any of the information provided herein is prior art or relevant to the presently claimed invention, or that any publication specifically or implicitly referenced is prior art.
[003] Autonomous vehicles have the potential to revolutionize urban transportation by providing safer, more efficient alternatives to human-driven vehicles. However, autonomous navigation in dense, unpredictable urban environments presents unique challenges. Traditional GPS navigation is often unreliable in urban areas due to signal interference from buildings and infrastructure. Furthermore, AVs need to accurately detect, identify, and localize objects in real-time, especially in crowded areas where obstacles like pedestrians, bicycles, and other vehicles frequently obstruct paths.
[004] While 5G technology provides the necessary bandwidth and latency improvements for AV communication, many AV systems lack the integration of advanced deep learning models to leverage these network capabilities fully. Current AV technologies struggle to process and act on vast amounts of sensory data rapidly, leading to delayed responses that can compromise safety and efficiency.
[005] The present invention bridges this gap by combining deep learning-powered image recognition with high-speed data processing over 5G networks, ensuring that AVs can accurately detect, localize, and plan routes around dynamic obstacles in real time. This innovation is critical for deploying autonomous vehicles in 5G-enabled smart cities, where fast and precise navigation is essential.
[006] Accordingly, to overcome the aforesaid limitations of the prior art, the present invention provides a Deep Learning-Powered Image Recognition and Path Planning System for Autonomous Vehicles in 5G-Enabled Smart Cities. It would therefore be useful and desirable to have a system, method and apparatus that meets the above-mentioned needs.

SUMMARY OF THE PRESENT INVENTION
[007] This invention discloses a deep learning-based route planning system optimized for 5G-enabled smart cities, enabling autonomous vehicles to navigate densely populated environments efficiently. The system integrates several key components:
I. Image Acquisition and Deep Learning for Object Detection: The system captures high-resolution video data from multiple onboard cameras, processed by deep learning models such as convolutional neural networks (CNNs) and transformers. These models identify target objects, like pedestrians or landmarks, and ensure accurate real-time recognition.
II. 5G Network Connectivity for Low-Latency Data Exchange: Leveraging 5G technology, the system enables rapid data transfer between the AV and cloud computing resources, allowing the AV to make real-time navigation adjustments. This setup supports continuous, uninterrupted communication, essential for processing the dense data streams required in urban environments.
III. Multi-Sensor Fusion for Enhanced Localization: Data from multiple sensors, including GPS, LiDAR, radar, and visual cameras, are fused using algorithms like Kalman filtering and SLAM. This fusion provides robust, accurate positioning, allowing the AV to navigate with precision, even in GPS-denied areas.
IV. Adaptive Path Planning Using Reinforcement Learning: The system employs reinforcement learning algorithms and traditional graph-based algorithms like A* and Dijkstra to dynamically adjust routes based on real-time obstacle detection and environmental changes.
V. Autonomous Control Interface for Vehicle Navigation: The system communicates navigation instructions to the AV's control systems, enabling the vehicle to adjust speed, proximity, and direction in response to its environment.
[008] These combined technologies allow autonomous vehicles to operate safely and efficiently in high-density, high-traffic urban environments, supporting applications such as passenger pickup in crowded areas and precision deliveries in city centers.
[009] In this respect, before explaining at least one object of the invention in detail, it is to be understood that the invention is not limited in its application to the details of the set of rules and the arrangements of the various models set forth in the following description or illustrated in the drawings. The invention is capable of other objects and of being practiced and carried out in various ways, according to the needs of the industry. Also, it is to be understood that the phraseology and terminology employed herein are for the purpose of description and should not be regarded as limiting.
[010] These together with other objects of the invention, along with the various features of novelty which characterize the invention, are pointed out with particularity in the disclosure. For a better understanding of the invention, its operating advantages and the specific objects attained by its uses, reference should be made to the accompanying drawings and descriptive matter in which there are illustrated preferred embodiments of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
[011] The invention will be better understood and objects other than those set forth above will become apparent when consideration is given to the following detailed description thereof. Such description makes reference to the annexed drawings wherein:
[012] Figure 1: System Architecture Diagram
[013] A schematic of the entire system architecture, illustrating the integration of various components, including the AV, sensors, 5G network, cloud computing resources, and deep learning models.
[014] Figure 2: Route Planning Process Flowchart
[015] A detailed flowchart showing the process of capturing, processing, and acting on image data, from initial image acquisition to the final execution of navigation commands by the AV.
[016] Figure 3: Block Diagram of System Modules
[017] A block diagram depicting each system module, including the image acquisition module, object recognition module, multi-sensor fusion module, path planning module, and control interface module.
[018] Figure 4: Urban Navigation Scenario
[019] An illustrative example of the AV navigating in a crowded urban area, highlighting how the system processes dynamic obstacles and calculates safe, efficient routes to the target.
[020] Figure 5: Sensor Fusion Process Diagram
[021] A diagram explaining the fusion of data from LiDAR, radar, and camera sensors, detailing the algorithms used for refining object localization and producing an accurate environmental map, in accordance with an embodiment of the present invention.
DETAILED DESCRIPTION OF THE INVENTION
[022] While the present invention is described herein by way of example using embodiments and illustrative drawings, those skilled in the art will recognize that the invention is not limited to the embodiments of drawing or drawings described and are not intended to represent the scale of the various components. Further, some components that may form a part of the invention may not be illustrated in certain figures, for ease of illustration, and such omissions do not limit the embodiments outlined in any way. It should be understood that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the invention is to cover all modifications, equivalents, and alternatives falling within the scope of the present invention as defined by the appended claims. As used throughout this description, the word "may" is used in a permissive sense (i.e. meaning having the potential to), rather than the mandatory sense, (i.e. meaning must). Further, the words "a" or "an" mean "at least one" and the word "plurality" means "one or more" unless otherwise mentioned. Furthermore, the terminology and phraseology used herein is solely used for descriptive purposes and should not be construed as limiting in scope. Language such as "including," "comprising," "having," "containing," or "involving," and variations thereof, is intended to be broad and encompass the subject matter listed thereafter, equivalents, and additional subject matter not recited, and is not intended to exclude other additives, components, integers or steps. Likewise, the term "comprising" is considered synonymous with the terms "including" or "containing" for applicable legal purposes. Any discussion of documents, acts, materials, devices, articles and the like is included in the specification solely for the purpose of providing a context for the present invention. It is not suggested or represented that any or all of these matters form part of the prior art base or are common general knowledge in the field relevant to the present invention.
[023] In this disclosure, whenever a composition or an element or a group of elements is preceded with the transitional phrase "comprising", it is understood that we also contemplate the same composition, element or group of elements with the transitional phrases "consisting of", "consisting", "selected from the group consisting of", "including", or "is" preceding the recitation of the composition, element or group of elements, and vice versa.
[024] The present invention is described hereinafter by various embodiments with reference to the accompanying drawings, wherein reference numerals used in the accompanying drawing correspond to the like elements throughout the description. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiment set forth herein. Rather, the embodiment is provided so that this disclosure will be thorough and complete and will fully convey the scope of the invention to those skilled in the art. In the following detailed description, numeric values and ranges are provided for various aspects of the implementations described. These values and ranges are to be treated as examples only and are not intended to limit the scope of the claims. In addition, a number of materials are identified as suitable for various facets of the implementations. These materials are to be treated as exemplary and are not intended to limit the scope of the invention.
System Overview
[025] The system is designed to facilitate real-time object recognition, localization, and navigation in urban settings. It incorporates several advanced components, each contributing to accurate and responsive vehicle movement.
[026] Image Acquisition Module:
The AV is equipped with high-definition cameras positioned around the vehicle, providing a complete 360-degree view. These cameras capture high-resolution images, while LiDAR and radar sensors supplement the visual data. Thermal cameras are used for night-time or low-visibility conditions, enhancing safety by identifying pedestrians or other vehicles. The image acquisition module preprocesses data to reduce noise, normalize lighting, and optimize the image input for deep learning analysis.
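The preprocessing stage can be pictured with a short sketch. The snippet below is a minimal example, assuming OpenCV-compatible BGR camera frames; the blur kernel, CLAHE parameters, and target resolution are illustrative choices, not values taken from the specification.

```python
# Minimal preprocessing sketch for the image acquisition module (illustrative parameters).
import cv2
import numpy as np

def preprocess_frame(frame: np.ndarray, size=(640, 640)) -> np.ndarray:
    """Denoise, normalize lighting, and resize a camera frame for the detector."""
    # Reduce sensor noise with a light Gaussian blur.
    denoised = cv2.GaussianBlur(frame, (3, 3), 0)
    # Normalize lighting by equalizing the luminance channel with CLAHE.
    lab = cv2.cvtColor(denoised, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    lab = cv2.merge((clahe.apply(l), a, b))
    normalized = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
    # Resize to the input resolution expected by the deep learning model.
    return cv2.resize(normalized, size)
```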
[027] Deep Learning-Based Object Recognition:
The image data is processed by state-of-the-art deep learning models, including CNNs like YOLOv5 for rapid, high-accuracy object detection and transformer-based models for contextual understanding. These models are trained on urban datasets to identify common city objects such as traffic signs, pedestrians, vehicles, and static obstacles. In crowded areas, transformer-based models capture relationships between multiple objects, which allows the AV to predict potential movements and respond accordingly.
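As an illustration of the detection stage, the following hedged sketch runs a pretrained YOLOv5 model via PyTorch Hub on a preprocessed frame; the choice of the small `yolov5s` variant and the confidence threshold are assumptions made for the example, not parameters from the disclosure.

```python
# Illustrative YOLOv5 inference sketch (model variant and threshold are assumed).
import torch

model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)
model.conf = 0.4  # assumed confidence threshold for urban object detection

def detect_objects(frame):
    """Return (label, confidence, xyxy box) tuples for detected urban objects."""
    results = model(frame)                  # inference on a single image
    detections = results.pandas().xyxy[0]   # detections as a pandas DataFrame
    return [
        (row['name'], float(row['confidence']),
         (row['xmin'], row['ymin'], row['xmax'], row['ymax']))
        for _, row in detections.iterrows()
    ]
```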
[028] 5G Network Connectivity:
The system utilizes 5G connectivity to transfer data to and from cloud resources. The edge computing infrastructure processes data, reducing computational load on the AV. This real-time data transfer is essential for scenarios where rapid response to changing environments is required, such as avoiding pedestrians or navigating through busy intersections.
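A minimal sketch of the offload path is shown below, assuming the edge node exposes an HTTP detection endpoint; the endpoint URL, timeout, and response format are hypothetical and not part of the disclosure.

```python
# Hedged sketch of offloading a camera frame to an edge node over the 5G link.
import cv2
import requests

EDGE_ENDPOINT = "http://edge-node.local:8080/detect"  # hypothetical edge service

def offload_detection(frame, timeout_s=0.05):
    """Send a JPEG-encoded frame to the edge node; return None on failure."""
    ok, jpeg = cv2.imencode('.jpg', frame)
    if not ok:
        return None
    try:
        resp = requests.post(EDGE_ENDPOINT, data=jpeg.tobytes(),
                             headers={'Content-Type': 'image/jpeg'},
                             timeout=timeout_s)
        return resp.json()  # e.g. detections computed on the edge node (assumed schema)
    except requests.RequestException:
        return None  # fall back to onboard processing if the link degrades
```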
[029] Multi-Sensor Fusion for Enhanced Localization:
To accurately localize the AV within complex cityscapes, the system integrates GPS with visual, LiDAR, and radar data. Algorithms like Kalman filtering and SLAM are employed to merge data from these sources, providing accurate positioning and orientation data. This multi-sensor approach ensures consistent localization, even in areas where GPS signals are obstructed or unreliable.
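The fusion step can be sketched with a basic linear Kalman filter over a constant-velocity motion model; the state layout and the noise covariances below are illustrative assumptions rather than parameters from the specification.

```python
# Minimal Kalman filter sketch fusing noisy 2-D position fixes into a smoothed estimate.
import numpy as np

class PositionKalmanFilter:
    def __init__(self, dt=0.1):
        # State vector: [x, y, vx, vy]; constant-velocity motion model.
        self.x = np.zeros(4)
        self.P = np.eye(4)
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1,  0],
                           [0, 0, 0,  1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)  # position-only measurement
        self.Q = 0.01 * np.eye(4)   # process noise (assumed)
        self.R = 1.0 * np.eye(2)    # measurement noise (assumed)

    def step(self, z):
        """Predict with the motion model, then correct with measurement z = [x, y]."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        y = np.asarray(z, dtype=float) - self.H @ self.x     # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)              # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]  # fused position estimate
```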
[030] Path Planning Module:
Using reinforcement learning, the path planning module evaluates multiple potential routes to identify the safest and most efficient path to the target. Reinforcement learning allows the system to adapt based on previous experiences, optimizing for both safety and efficiency. Graph-based algorithms, like A* and Dijkstra, are also used for specific route calculations when the AV needs to navigate around known obstacles or reach a destination efficiently.
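For the graph-based component, a compact A* sketch over a binary occupancy grid is given below; the grid representation, 4-connected moves, and Manhattan heuristic are simplifying assumptions made for illustration.

```python
# A* grid-search sketch for the graph-based route calculation (0 = free cell, 1 = blocked).
import heapq

def a_star(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None if unreachable."""
    def h(cell):  # Manhattan-distance heuristic
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_set = [(h(start), start)]
    came_from, g_cost, closed = {}, {start: 0}, set()
    while open_set:
        _, current = heapq.heappop(open_set)
        if current in closed:
            continue
        closed.add(current)
        if current == goal:                      # reconstruct the route
            path = [current]
            while current in came_from:
                current = came_from[current]
                path.append(current)
            return path[::-1]
        r, c = current
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                ng = g_cost[current] + 1
                if ng < g_cost.get((nr, nc), float('inf')):
                    g_cost[(nr, nc)] = ng
                    came_from[(nr, nc)] = current
                    heapq.heappush(open_set, (ng + h((nr, nc)), (nr, nc)))
    return None
```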
[031] Autonomous Control Interface:
The control interface translates the route calculated by the path planning module into precise instructions for the AV's steering, acceleration, and braking systems. This interface enables the vehicle to respond dynamically to immediate obstacles, adjust its speed according to traffic conditions, and navigate through intersections or congested areas.
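A simplified sketch of this interface is shown below, turning the next waypoint of the planned route into steering and speed commands; the proportional-steering geometry and the `set_steering`/`set_speed` actuator calls are hypothetical names introduced only for the example.

```python
# Simplified control-interface sketch (actuator API and gains are assumptions).
import math

def follow_waypoint(pose, waypoint, vehicle, cruise_speed=5.0, obstacle_ahead=False):
    """pose = (x, y, heading_rad); waypoint = (x, y); vehicle exposes hypothetical actuators."""
    dx, dy = waypoint[0] - pose[0], waypoint[1] - pose[1]
    target_heading = math.atan2(dy, dx)
    # Heading error wrapped to [-pi, pi], used as a proportional steering command.
    error = (target_heading - pose[2] + math.pi) % (2 * math.pi) - math.pi
    vehicle.set_steering(max(-1.0, min(1.0, 1.5 * error)))   # hypothetical actuator call
    # Slow down or stop when an obstacle is detected on the planned path.
    vehicle.set_speed(0.0 if obstacle_ahead else cruise_speed)
```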
Operational Workflow
[032] Target Identification:
When a target object (e.g., a passenger waiting at a designated pickup spot) is identified, the system captures and processes images to match the detected object with pre-stored data. The deep learning models analyze specific features, such as human body contours or unique markers like clothing color, ensuring accurate identification even in crowded or low-visibility scenarios.
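One way to picture this matching step is a feature-embedding comparison, sketched below; the source of the embeddings (e.g. the recognition model's backbone) and the similarity threshold are assumptions made for illustration only.

```python
# Illustrative target-matching sketch using cosine similarity of feature embeddings.
import numpy as np

def is_target(candidate_embedding: np.ndarray,
              reference_embedding: np.ndarray,
              threshold: float = 0.8) -> bool:
    """Return True when the detected object matches the pre-stored target profile."""
    a = candidate_embedding / np.linalg.norm(candidate_embedding)
    b = reference_embedding / np.linalg.norm(reference_embedding)
    return float(np.dot(a, b)) >= threshold  # threshold is an assumed value
```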
[033] Dynamic Route Calculation:
After identifying the target, the path planning module calculates the optimal route to approach the target. It considers factors such as real-time traffic, pedestrian density, and any temporary obstacles. By continuously monitoring the environment and recalculating the path as needed, the system ensures a safe, efficient route.
[034] Navigation Execution:
As the AV follows the calculated route, the system continuously refines its position and updates the route in response to detected obstacles. If a pedestrian suddenly enters the path, for example, the system reroutes or stops the vehicle to ensure safety.
[035] Adaptation in Crowded Environments:
In dense urban environments, the system adapts by choosing alternate routes or adjusting speed to accommodate surrounding objects and people. Reinforcement learning algorithms help improve these choices by learning from past navigation experiences.
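A tabular Q-learning sketch, given below, illustrates how such learning from past navigation outcomes might look; the discrete action set, reward scheme, and hyperparameters are assumptions for the example only and are not specified by the disclosure.

```python
# Tabular Q-learning sketch for route adaptation (states, actions, rewards are assumed).
import random
from collections import defaultdict

class RouteQLearner:
    def __init__(self, actions, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.q = defaultdict(float)       # (state, action) -> estimated value
        self.actions = actions            # e.g. ['keep_route', 'detour', 'slow_down']
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state):
        if random.random() < self.epsilon:                # explore occasionally
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])  # exploit best known

    def update(self, state, action, reward, next_state):
        """Standard Q-learning update after observing one navigation outcome."""
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])
```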
[036] Communication with External Infrastructure:
Through 5G connectivity, the AV communicates with smart city infrastructure, receiving data on traffic signals, pedestrian crossings, and traffic updates. This external information improves the AV's situational awareness, supporting safer and more efficient navigation.
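The infrastructure feed could be consumed as a publish/subscribe stream; the hedged sketch below uses MQTT in the paho-mqtt 1.x style, and the broker address, topic names, and payload schema are hypothetical rather than part of the disclosed system.

```python
# Hedged sketch of receiving smart-city infrastructure updates over MQTT.
import json
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    update = json.loads(msg.payload)
    # e.g. {"intersection": "12A", "signal": "red", "pedestrians_waiting": 3} (assumed schema)
    print("infrastructure update:", update)   # the path planner would consume this

client = mqtt.Client()                         # paho-mqtt 1.x style constructor (assumed)
client.on_message = on_message
client.connect("city-broker.local", 1883)      # hypothetical broker reachable over 5G
client.subscribe("smartcity/traffic/#")        # hypothetical topic hierarchy
client.loop_forever()                          # dispatch incoming infrastructure messages
```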
[037] It is to be understood that the above description is intended to be illustrative, and not restrictive. For example, the above-discussed embodiments may be used in combination with each other. Many other embodiments will be apparent to those of skill in the art upon reviewing the above description.
[038] The benefits and advantages which may be provided by the present invention have been described above with regard to specific embodiments. These benefits and advantages, and any elements or limitations that may cause them to occur or to become more pronounced are not to be construed as critical, required, or essential features of any or all of the embodiments.
[039] While the present invention has been described with reference to particular embodiments, it should be understood that the embodiments are illustrative and that the scope of the invention is not limited to these embodiments. Many variations, modifications, additions and improvements to the embodiments described above are possible. It is contemplated that these variations, modifications, additions and improvements fall within the scope of the invention.
Claims:
1. A route planning system for autonomous vehicles, comprising:
an image acquisition module that captures high-resolution video and image data of the environment surrounding the vehicle;
a deep learning-based object recognition module configured to identify and localize target objects within the captured image data using convolutional neural networks (CNNs) and transformer-based models;
a 5G-enabled communication module that facilitates real-time data exchange between the vehicle and edge computing resources for low-latency processing;
a multi-sensor fusion module that integrates data from GPS, LiDAR, radar, and visual sensors to enhance object localization and provide accurate positioning data for the vehicle;
a path planning module that calculates an optimal route to the target object based on real-time environmental data, using reinforcement learning algorithms and graph-based pathfinding algorithms;
and an autonomous control interface that converts the planned route into navigation commands for the vehicle's control systems.
2. The system of claim 1, wherein the deep learning-based object recognition module is configured to:
process the captured image data using pretrained and fine-tuned CNN models, including models from the YOLO and EfficientNet families, to detect and identify dynamic objects such as pedestrians, vehicles, and urban obstacles;
and use transformer-based models to analyze the spatial and contextual relationships between objects for enhanced object recognition in high-density environments.
3. The system of claim 1, wherein the 5G-enabled communication module is further configured to:
transmit real-time image and sensor data to edge computing nodes for processing, and
receive processed data from the edge nodes with low latency, enabling the system to make instantaneous route adjustments based on environmental changes.
4. The system of claim 1, wherein the multi-sensor fusion module includes:
a data integration component configured to merge positioning data from GPS, LiDAR, radar, and camera sensors using algorithms selected from the group consisting of Kalman filtering and simultaneous localization and mapping (SLAM) algorithms, to provide accurate positioning in GPS-denied environments.
5. The system of claim 1, wherein the path planning module is configured to:
dynamically adjust the route of the vehicle using reinforcement learning algorithms that learn from past navigation data to optimize route selection in response to detected obstacles;
and employ graph-based algorithms, selected from the group consisting of A* and Dijkstra's algorithms, to calculate the most efficient route to the target object under varying urban conditions.
6. The system of claim 1, further comprising:
an external communication interface configured to interact with smart city infrastructure, including traffic signals and pedestrian indicators, to enhance situational awareness and optimize the vehicle's navigation efficiency in urban environments.
7. A method for real-time object recognition and route planning in autonomous vehicles within a 5G-enabled smart city, the method comprising:
capturing high-resolution video and image data of the vehicle's surroundings using an array of visual and spatial sensors;
processing the image data using deep learning models, including convolutional neural networks and transformer-based architectures, to identify target objects and obstacles within the vehicle's vicinity;
transmitting the processed image data to edge computing nodes over a 5G network for additional processing and receiving updated navigation information from the edge nodes with low latency;
integrating data from GPS, LiDAR, radar, and visual sensors using a multi-sensor fusion module to provide accurate positioning data;
dynamically calculating an optimal route to the target object using reinforcement learning and graph-based algorithms, and adjusting the route in response to real-time environmental data;
and executing navigation commands through the vehicle's autonomous control interface to safely guide the vehicle towards the target object.
8. The method of claim 7, wherein the multi-sensor fusion step comprises:
merging positioning and object data from multiple sensors using Kalman filtering or SLAM algorithms to maintain accurate localization, even in environments where GPS signals are obstructed.
9. The method of claim 7, wherein the reinforcement learning algorithm is configured to:
continuously learn from previous navigation scenarios to improve future route selections and minimize the response time to dynamic changes in the environment.
10. The method of claim 7, further comprising:
communicating with external smart city infrastructure, including traffic lights and pedestrian crossings, to receive real-time updates on traffic flow and pedestrian activity, which enhances the vehicle's ability to adjust its route dynamically.

Documents

Name | Date
202441087355-COMPLETE SPECIFICATION [12-11-2024(online)].pdf | 12/11/2024
202441087355-DECLARATION OF INVENTORSHIP (FORM 5) [12-11-2024(online)].pdf | 12/11/2024
202441087355-DRAWINGS [12-11-2024(online)].pdf | 12/11/2024
202441087355-FORM 1 [12-11-2024(online)].pdf | 12/11/2024
202441087355-FORM-9 [12-11-2024(online)].pdf | 12/11/2024
202441087355-REQUEST FOR EARLY PUBLICATION(FORM-9) [12-11-2024(online)].pdf | 12/11/2024
