A Deep Learning based Adaptive Image Processing System for Autonomous Object Recognition and Classification
Ordinary Application | Published | Filed on 15 November 2024
Abstract
This invention presents a deep learning-based adaptive image processing system for real-time object recognition and classification, designed for autonomous applications. The system includes a dynamic preprocessing module for adjusting image quality based on environmental conditions, a multi-scale feature extraction module that uses a convolutional neural network (CNN) to capture essential features, and an adaptive classification module with region proposal networks (RPNs) and attention mechanisms for accurate and efficient classification. The system dynamically adjusts processing parameters based on available computational resources, making it suitable for real-time use in embedded and edge devices, such as in autonomous vehicles, drones, and robotic systems. Accompanying drawing: FIG. 1.
Patent Information
Field | Value |
---|---|
Application ID | 202441088597 |
Invention Field | COMPUTER SCIENCE |
Date of Application | 15/11/2024 |
Publication Number | 47/2024 |
Inventors
Name | Address | Country | Nationality |
---|---|---|---|
Dr. D.Sujatha | Professor & Dean, Department of Computer Science and Engineering, Malla Reddy College of Engineering & Technology (UGC-Autonomous), Maisammaguda, Dhulapally, Secunderabad, Telangana, India. Pin Code:500100 | India | India |
Dr. Balasani Venkata Ramudu | Associate Professor, Department of Computer Science and Engineering, Malla Reddy College of Engineering & Technology (UGC-Autonomous), Maisammaguda, Dhulapally, Secunderabad, Telangana, India. Pin Code:500100 | India | India |
Dr. B.Priyanka | Associate Professor, Department of Computer Science and Engineering, Malla Reddy College of Engineering & Technology (UGC-Autonomous), Maisammaguda, Dhulapally, Secunderabad, Telangana, India. Pin Code:500100 | India | India |
Dr. Laiphangbam Melinda | Associate Professor, Department of Computer Science and Engineering, Malla Reddy College of Engineering & Technology (UGC-Autonomous), Maisammaguda, Dhulapally, Secunderabad, Telangana, India. Pin Code:500100 | India | India |
Dr. P.Harikrishan | Associate Professor, Department of Computer Science and Engineering, Malla Reddy College of Engineering & Technology (UGC-Autonomous), Maisammaguda, Dhulapally, Secunderabad, Telangana, India. Pin Code:500100 | India | India |
Dr. Vonteru. L.Padam Latha | Associate Professor, Department of Computer Science and Engineering, Malla Reddy College of Engineering & Technology (UGC-Autonomous), Maisammaguda, Dhulapally, Secunderabad, Telangana, India. Pin Code:500100 | India | India |
Mr. D. Chandra Sekhar Reddy | Associate Professor, Department of Computer Science and Engineering, Malla Reddy College of Engineering & Technology (UGC-Autonomous), Maisammaguda, Dhulapally, Secunderabad, Telangana, India. Pin Code:500100 | India | India |
Ms. B.Jyothi | Assistant Professor, Department of Computer Science and Engineering, Malla Reddy College of Engineering & Technology (UGC-Autonomous), Maisammaguda, Dhulapally, Secunderabad, Telangana, India. Pin Code:500100 | India | India |
Mr. U.Rakesh | Assistant Professor, Department of Computer Science and Engineering, Malla Reddy College of Engineering & Technology (UGC-Autonomous), Maisammaguda, Dhulapally, Secunderabad, Telangana, India. Pin Code:500100 | India | India |
Applicants
Name | Address | Country | Nationality |
---|---|---|---|
Malla Reddy College of Engineering & Technology | Department of Computer Science and Engineering, Malla Reddy College of Engineering & Technology (UGC-Autonomous), Maisammaguda, Dhulapally, Secunderabad, Telangana, India. Pin Code:500100 | India | India |
Specification
Description:
[001] The present invention relates to the fields of artificial intelligence, machine learning, and image processing, specifically focused on deep learning-based adaptive image processing systems for autonomous object recognition and classification. This invention is particularly relevant in applications such as autonomous vehicles, drones, robotics, and security systems, where real-time and accurate object detection, recognition, and classification are crucial for operational safety and efficiency.
BACKGROUND OF THE INVENTION
[002] The following description provides information that may be useful in understanding the present invention. It is not an admission that any of the information provided herein is prior art or relevant to the presently claimed invention, or that any publication specifically or implicitly referenced is prior art.
[003] Object recognition and classification are fundamental capabilities in autonomous systems, enabling them to perceive and respond to their environment. In applications like self-driving cars, drones, and industrial robots, accurate and real-time recognition of objects such as pedestrians, vehicles, traffic signs, and obstacles is essential for making informed decisions and ensuring safe operation.
[004] Traditional image processing and machine learning techniques have limitations in terms of speed, adaptability, and accuracy, especially in complex, dynamic environments with varying lighting conditions, object occlusions, and motion blur. Recent advancements in deep learning, particularly with convolutional neural networks (CNNs), have significantly improved object recognition accuracy. However, deploying these models in real-time autonomous applications remains challenging due to constraints in processing power, especially in embedded or edge devices.
This invention addresses these challenges by introducing an adaptive deep learning-based system that dynamically adjusts its processing parameters based on environmental conditions and computational resources. The system provides high accuracy in object recognition and classification, even in resource-constrained environments, by using an adaptable architecture that optimizes processing speed, power consumption, and memory usage.
[005] Accordingly, to overcome the prior-art limitations arising from the aforesaid facts, the present invention provides a deep learning-based adaptive image processing system for autonomous object recognition and classification. Therefore, it would be useful and desirable to have a system, method, and apparatus that meets the above-mentioned needs.
SUMMARY OF THE PRESENT INVENTION
[006] This invention presents a deep learning-based adaptive image processing system for real-time object recognition and classification in autonomous systems. The system combines multiple deep learning techniques, including convolutional neural networks (CNNs), region proposal networks (RPNs), and attention mechanisms, to accurately detect, recognize, and classify objects in a dynamic environment.
[007] The system comprises three main modules: (1) a dynamic preprocessing module that adjusts image quality and parameters based on lighting and environmental conditions, (2) a multi-scale feature extraction module that captures essential features at different resolutions using a CNN-based architecture, and (3) an adaptive classification module that uses attention mechanisms and region proposal networks to classify and localize objects efficiently. The system is designed to operate on embedded and edge devices with limited computational resources, making it suitable for real-time applications in autonomous vehicles, drones, and robotic systems.
[008] By dynamically adapting to environmental conditions and available computational resources, the system delivers accurate object recognition and classification with low latency, enhancing the reliability and efficiency of autonomous systems operating in complex, real-world scenarios.
[009] In this respect, before explaining at least one object of the invention in detail, it is to be understood that the invention is not limited in its application to the details of the set of rules and to the arrangements of the various models set forth in the following description or illustrated in the drawings. The invention is capable of other objects and of being practiced and carried out in various ways, according to the needs of the industry. Also, it is to be understood that the phraseology and terminology employed herein are for the purpose of description and should not be regarded as limiting.
[010] These together with other objects of the invention, along with the various features of novelty which characterize the invention, are pointed out with particularity in the disclosure. For a better understanding of the invention, its operating advantages and the specific objects attained by its uses, reference should be made to the accompanying drawings and descriptive matter in which there are illustrated preferred embodiments of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
[011] The invention will be better understood and objects other than those set forth above will become apparent when consideration is given to the following detailed description thereof. Such description makes reference to the annexed drawings wherein:
FIG. 1: Block diagram of the adaptive image processing system for autonomous object recognition and classification.
FIG. 2: Flowchart of the dynamic preprocessing module, detailing real-time adjustments based on environmental conditions.
FIG. 3: Diagram of the multi-scale feature extraction module, illustrating the CNN-based architecture and feature extraction at different resolutions.
FIG. 4: Flowchart of the adaptive classification module, showing the use of region proposal networks and attention mechanisms for object localization and classification.
FIG. 5: Example output demonstrating real-time object recognition and classification in an autonomous vehicle scenario.
DETAILED DESCRIPTION OF THE INVENTION
[012] While the present invention is described herein by way of example using embodiments and illustrative drawings, those skilled in the art will recognize that the invention is not limited to the embodiments or drawings described, which are not intended to represent the scale of the various components. Further, some components that may form a part of the invention may not be illustrated in certain figures, for ease of illustration, and such omissions do not limit the embodiments outlined in any way. It should be understood that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed; on the contrary, the invention is to cover all modifications, equivalents, and alternatives falling within the scope of the present invention as defined by the appended claims. As used throughout this description, the word "may" is used in a permissive sense (i.e., meaning having the potential to) rather than the mandatory sense (i.e., meaning must). Further, the words "a" or "an" mean "at least one" and the word "plurality" means "one or more" unless otherwise mentioned. Furthermore, the terminology and phraseology used herein are solely for descriptive purposes and should not be construed as limiting in scope. Language such as "including," "comprising," "having," "containing," or "involving," and variations thereof, is intended to be broad and to encompass the subject matter listed thereafter, equivalents, and additional subject matter not recited, and is not intended to exclude other additives, components, integers, or steps. Likewise, the term "comprising" is considered synonymous with the terms "including" or "containing" for applicable legal purposes. Any discussion of documents, acts, materials, devices, articles, and the like is included in the specification solely for the purpose of providing a context for the present invention. It is not suggested or represented that any or all of these matters formed part of the prior art base or were common general knowledge in the field relevant to the present invention.
[013] In this disclosure, whenever a composition or an element or a group of elements is preceded by the transitional phrase "comprising", it is understood that we also contemplate the same composition, element, or group of elements with the transitional phrases "consisting of", "consisting", "selected from the group consisting of", "including", or "is" preceding the recitation of the composition, element, or group of elements, and vice versa.
[014] The present invention is described hereinafter by various embodiments with reference to the accompanying drawings, wherein reference numerals used in the accompanying drawing correspond to the like elements throughout the description. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiment set forth herein. Rather, the embodiment is provided so that this disclosure will be thorough and complete and will fully convey the scope of the invention to those skilled in the art. In the following detailed description, numeric values and ranges are provided for various aspects of the implementations described. These values and ranges are to be treated as examples only and are not intended to limit the scope of the claims. In addition, a number of materials are identified as suitable for various facets of the implementations. These materials are to be treated as exemplary and are not intended to limit the scope of the invention.
System Architecture (FIG. 1)
[015] The system architecture consists of three main modules: dynamic preprocessing, multi-scale feature extraction, and adaptive classification. Each module utilizes deep learning models optimized for real-time processing in embedded environments, enabling the system to perform object recognition and classification with high accuracy and low latency.
[016] Dynamic Preprocessing Module (FIG. 2): This module preprocesses input images by adjusting brightness, contrast, and resolution based on environmental conditions, such as lighting and motion. It uses a feedback loop to monitor these conditions and adapt the image quality accordingly. For instance, in low-light environments, the module enhances brightness and contrast to improve visibility. In high-motion scenarios, it may adjust the frame rate to capture clearer images.
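By way of illustration only (and not as part of the claimed subject matter), a minimal sketch of such condition-driven preprocessing is given below, assuming OpenCV and NumPy are available; the function name `preprocess`, the brightness threshold, and the gamma value are illustrative assumptions rather than parameters of the claimed system.

```python
import cv2
import numpy as np

def preprocess(frame: np.ndarray,
               low_light_thresh: float = 60.0,
               target_width: int = 640) -> np.ndarray:
    """Adjust brightness, contrast, and resolution from simple scene statistics."""
    # Estimate scene brightness from the luminance channel.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    mean_lum = float(gray.mean())

    if mean_lum < low_light_thresh:
        # Low light: gamma-brighten, then boost local contrast with CLAHE.
        gamma = 0.6
        lut = np.clip(((np.arange(256) / 255.0) ** gamma) * 255.0,
                      0, 255).astype(np.uint8)
        frame = cv2.LUT(frame, lut)
        lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB)
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        lab[..., 0] = clahe.apply(lab[..., 0])
        frame = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

    # Normalize resolution for the downstream network.
    h, w = frame.shape[:2]
    scale = target_width / w
    return cv2.resize(frame, (target_width, int(h * scale)))
```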
[017] Multi-Scale Feature Extraction Module (FIG. 3): This module processes the preprocessed image to extract essential features at multiple scales using a CNN-based architecture. Multi-scale feature extraction is crucial for capturing both fine and coarse details, which improves the accuracy of object recognition in complex environments.
The module includes a series of convolutional layers that operate at different resolutions, creating feature maps that capture high-level semantic information as well as low-level spatial details. This hierarchical representation of features allows the system to recognize objects of various sizes and shapes.
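A minimal sketch of one possible multi-scale backbone follows, written in PyTorch for illustration; the layer widths, strides, and the class name `MultiScaleBackbone` are assumptions, and the claimed system is not limited to this topology.

```python
import torch
import torch.nn as nn

class MultiScaleBackbone(nn.Module):
    """Tiny CNN that emits feature maps at three resolutions (strides 4, 8, 16)."""
    def __init__(self, in_ch: int = 3, width: int = 32):
        super().__init__()
        def block(cin, cout):
            return nn.Sequential(
                nn.Conv2d(cin, cout, 3, stride=2, padding=1),
                nn.BatchNorm2d(cout),
                nn.ReLU(inplace=True),
            )
        self.stem = block(in_ch, width)            # stride 2
        self.stage1 = block(width, width * 2)      # stride 4  -> fine spatial detail
        self.stage2 = block(width * 2, width * 4)  # stride 8
        self.stage3 = block(width * 4, width * 8)  # stride 16 -> coarse semantics

    def forward(self, x: torch.Tensor) -> dict:
        x = self.stem(x)
        c1 = self.stage1(x)
        c2 = self.stage2(c1)
        c3 = self.stage3(c2)
        return {"stride4": c1, "stride8": c2, "stride16": c3}

# Usage example: a 256x256 input yields 64x64, 32x32, and 16x16 feature maps.
feats = MultiScaleBackbone()(torch.randn(1, 3, 256, 256))
print({k: tuple(v.shape) for k, v in feats.items()})
```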
[018] Adaptive Classification Module (FIG. 4): The adaptive classification module uses region proposal networks (RPNs) to identify regions of interest (ROIs) in the image. These regions are then processed using an attention mechanism to focus on relevant features, which enhances classification accuracy.
[019] The RPN generates bounding boxes for objects, while the attention mechanism prioritizes the most important features within each box. This approach reduces computational load by concentrating processing power on relevant regions, optimizing both accuracy and speed.
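The sketch below illustrates, under simplifying assumptions, the two components named here: a stripped-down RPN head (anchor generation, box decoding, and non-maximum suppression are omitted) and a squeeze-and-excitation-style channel attention over ROI features. All names and sizes are illustrative.

```python
import torch
import torch.nn as nn

class RPNHead(nn.Module):
    """Minimal RPN head: objectness score and 4 box deltas per anchor location."""
    def __init__(self, in_ch: int = 256, num_anchors: int = 9):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, in_ch, 3, padding=1)
        self.cls = nn.Conv2d(in_ch, num_anchors, 1)      # objectness logits
        self.reg = nn.Conv2d(in_ch, num_anchors * 4, 1)  # box deltas

    def forward(self, feat: torch.Tensor):
        t = torch.relu(self.conv(feat))
        return self.cls(t), self.reg(t)

class ChannelAttention(nn.Module):
    """SE-style attention that reweights ROI feature channels before classification."""
    def __init__(self, ch: int = 256, r: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(ch, ch // r), nn.ReLU(inplace=True),
            nn.Linear(ch // r, ch), nn.Sigmoid(),
        )

    def forward(self, roi_feat: torch.Tensor) -> torch.Tensor:  # (N, C, H, W)
        w = self.fc(roi_feat.mean(dim=(2, 3)))  # global average pool -> channel weights
        return roi_feat * w[:, :, None, None]
```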
[020] Real-Time Processing and Adaptability: The system is designed to operate in real-time on edge and embedded devices by dynamically adjusting its processing parameters based on computational resources and environmental conditions. For example, if the system detects limited processing power, it can reduce the resolution or complexity of feature extraction without compromising recognition accuracy.
This adaptability enables the system to maintain high performance even in resource-constrained environments, making it suitable for applications in autonomous vehicles, drones, and robotics.
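One simple way such adaptation might be realized is a latency-driven controller that steps the input resolution down when the measured frame time exceeds a budget, and back up when headroom returns. The sketch below is an illustrative assumption, not the claimed control strategy; the budget and resolution ladder are made-up values.

```python
class AdaptiveController:
    """Lower the input resolution when measured latency exceeds the frame budget."""
    def __init__(self, budget_ms: float = 50.0,
                 widths: tuple = (960, 640, 480, 320)):
        self.budget_ms = budget_ms
        self.widths = widths
        self.level = 0  # index into widths; higher index = cheaper processing

    @property
    def input_width(self) -> int:
        return self.widths[self.level]

    def update(self, latency_ms: float) -> None:
        # Over budget: step down resolution. Comfortably under: step back up.
        if latency_ms > self.budget_ms and self.level < len(self.widths) - 1:
            self.level += 1
        elif latency_ms < 0.6 * self.budget_ms and self.level > 0:
            self.level -= 1
```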
[021] Output and Object Classification (FIG. 5): The final output is a set of classified objects, each with a bounding box and a confidence score, displayed in real time. The system's low-latency processing ensures that it can deliver results promptly, supporting autonomous decision-making in real-world scenarios.
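For illustration, a minimal OpenCV overlay of bounding boxes and confidence scores might look like the following; the layout of each detection tuple is an assumption for this sketch.

```python
import cv2
import numpy as np

def draw_detections(frame: np.ndarray, detections) -> np.ndarray:
    """Overlay boxes and scores; `detections` = [(x1, y1, x2, y2, label, score), ...]."""
    for x1, y1, x2, y2, label, score in detections:
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(frame, f"{label} {score:.2f}", (x1, max(y1 - 5, 12)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    return frame
```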
Workflow
[022] Image Acquisition and Preprocessing: The input image is acquired from an onboard camera or sensor. The dynamic preprocessing module adjusts the image quality based on environmental conditions, optimizing brightness, contrast, and resolution for subsequent processing.
[023] Multi-Scale Feature Extraction Process: The preprocessed image is passed to the multi-scale feature extraction module, which uses a CNN to create a series of feature maps at different resolutions. These maps capture high-level and low-level features essential for object recognition and classification.
[024] Region Proposal and Attention-Based Classification: The adaptive classification module applies a region proposal network (RPN) to identify potential object locations in the feature maps. Each region is then processed by an attention mechanism, which selectively focuses on relevant features within the region, improving classification accuracy and reducing computational requirements.
[025] Real-Time Output Generation: The recognized objects are classified and displayed with bounding boxes and confidence scores, providing real-time visual feedback for autonomous systems. This output enables the autonomous system to make informed decisions, such as navigation or obstacle avoidance, based on the recognized objects.
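Purely as an illustrative composition of the sketches above, an end-to-end capture-preprocess-detect-display loop might look like the following; `run_detector` is a hypothetical stand-in for the full feature-extraction, RPN, attention, and classification pass and is not defined here.

```python
import time
import cv2

def main(camera_index: int = 0) -> None:
    # Composes the illustrative sketches above: `preprocess`,
    # `AdaptiveController`, and `draw_detections`. `run_detector` is a
    # hypothetical placeholder returning (x1, y1, x2, y2, label, score) tuples.
    controller = AdaptiveController(budget_ms=50.0)
    cap = cv2.VideoCapture(camera_index)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        start = time.perf_counter()
        img = preprocess(frame, target_width=controller.input_width)
        detections = run_detector(img)  # hypothetical full model pass
        controller.update((time.perf_counter() - start) * 1000.0)
        cv2.imshow("detections", draw_detections(img, detections))
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
```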
[026] It is to be understood that the above description is intended to be illustrative, and not restrictive. For example, the above-discussed embodiments may be used in combination with each other. Many other embodiments will be apparent to those of skill in the art upon reviewing the above description.
[027] The benefits and advantages which may be provided by the present invention have been described above with regard to specific embodiments. These benefits and advantages, and any elements or limitations that may cause them to occur or to become more pronounced are not to be construed as critical, required, or essential features of any or all of the embodiments.
[028] While the present invention has been described with reference to particular embodiments, it should be understood that the embodiments are illustrative and that the scope of the invention is not limited to these embodiments. Many variations, modifications, additions and improvements to the embodiments described above are possible. It is contemplated that these variations, modifications, additions and improvements fall within the scope of the invention.
Claims:
1. A deep learning-based adaptive image processing system for autonomous object recognition and classification, comprising a dynamic preprocessing module, a multi-scale feature extraction module, and an adaptive classification module.
2. The system of claim 1, wherein the dynamic preprocessing module adjusts image brightness, contrast, and resolution based on environmental conditions such as lighting and motion.
3. The system of claim 1, wherein the multi-scale feature extraction module uses a convolutional neural network (CNN) architecture to create feature maps at different resolutions for capturing both high-level and low-level features.
4. The system of claim 1, wherein the adaptive classification module uses a region proposal network (RPN) to identify regions of interest (ROIs) in the image, which are processed for classification.
5. The system of claim 1, wherein the adaptive classification module incorporates an attention mechanism to focus on relevant features within each identified region, improving classification accuracy.
6. The system of claim 1, wherein the adaptive classification module dynamically adjusts its processing parameters based on available computational resources, ensuring real-time performance in resource-constrained environments.
7. The system of claim 1, further comprising a feedback loop that allows the dynamic preprocessing module to monitor environmental conditions and adjust preprocessing parameters in real time.
8. The system of claim 1, wherein the recognized objects are displayed in real time with bounding boxes and confidence scores, enabling prompt decision-making by autonomous systems.
Documents
Name | Date |
---|---|
202441088597-COMPLETE SPECIFICATION [15-11-2024(online)].pdf | 15/11/2024 |
202441088597-DECLARATION OF INVENTORSHIP (FORM 5) [15-11-2024(online)].pdf | 15/11/2024 |
202441088597-DRAWINGS [15-11-2024(online)].pdf | 15/11/2024 |
202441088597-FORM 1 [15-11-2024(online)].pdf | 15/11/2024 |
202441088597-FORM-9 [15-11-2024(online)].pdf | 15/11/2024 |
202441088597-REQUEST FOR EARLY PUBLICATION(FORM-9) [15-11-2024(online)].pdf | 15/11/2024 |