Novel Deep Learning-Based Method for Real-Time Image Restoration and Noise Reduction in Embedded Image Processing Systems
ORDINARY APPLICATION
Published
Filed on 14 November 2024
Abstract
This invention describes a deep learning-based system for real-time image restoration and noise reduction in embedded image processing systems. The system includes a noise estimation and filtering module, a feature extraction and enhancement module, and a restoration refinement module, all optimized for low-latency processing in embedded environments. The noise estimation and filtering module uses a convolutional neural network to detect and categorize noise, while the feature extraction and enhancement module applies lightweight neural networks with attention mechanisms to enhance essential details. The restoration refinement module further refines the image quality using a denoising autoencoder. This invention enables real-time, high-quality image restoration on resource-constrained devices, supporting applications in areas such as surveillance, automotive, and IoT. Accompanying Drawing [FIG. 1]
Patent Information
| Field | Value |
|---|---|
| Application ID | 202441088307 |
| Invention Field | COMPUTER SCIENCE |
| Date of Application | 14/11/2024 |
| Publication Number | 47/2024 |
Inventors
| Name | Address | Country | Nationality |
|---|---|---|---|
| Dr. K. Mallikarjuna Lingam | Professor & HoD, Department of Electronics & Communication Engineering, Malla Reddy College of Engineering & Technology (UGC-Autonomous), Maisammaguda, Dhulapally, Secunderabad, Telangana, India. Pin Code:500100 | India | India |
| Dr. S. Srinivasa Rao | Professor & Principal, Department of Electronics & Communication Engineering, Malla Reddy College of Engineering & Technology (UGC-Autonomous), Maisammaguda, Dhulapally, Secunderabad, Telangana, India. Pin Code:500100 | India | India |
| Dr. Sadanand Yadav | Associate Professor, Department of Electronics & Communication Engineering, Malla Reddy College of Engineering & Technology (UGC-Autonomous), Maisammaguda, Dhulapally, Secunderabad, Telangana, India. Pin Code:500100 | India | India |
| Mr. M. Ramanjaneyulu | Associate Professor, Department of Electronics & Communication Engineering, Malla Reddy College of Engineering & Technology (UGC-Autonomous), Maisammaguda, Dhulapally, Secunderabad, Telangana, India. Pin Code:500100 | India | India |
| Ms. P. Swetha | Associate Professor, Department of Electronics & Communication Engineering, Malla Reddy College of Engineering & Technology (UGC-Autonomous), Maisammaguda, Dhulapally, Secunderabad, Telangana, India. Pin Code:500100 | India | India |
| Mr. V. Shiva Raj Kumar | Assistant Professor, Department of Electronics & Communication Engineering, Malla Reddy College of Engineering & Technology (UGC-Autonomous), Maisammaguda, Dhulapally, Secunderabad, Telangana, India. Pin Code:500100 | India | India |
| Ms. Renju Panicker | Assistant Professor, Department of Electronics & Communication Engineering, Malla Reddy College of Engineering & Technology (UGC-Autonomous), Maisammaguda, Dhulapally, Secunderabad, Telangana, India. Pin Code:500100 | India | India |
| Ms. K. Bhavana | Assistant Professor, Department of Electronics & Communication Engineering, Malla Reddy College of Engineering & Technology (UGC-Autonomous), Maisammaguda, Dhulapally, Secunderabad, Telangana, India. Pin Code:500100 | India | India |
| Ms. Neha Thakur | Assistant Professor, Department of Electronics & Communication Engineering, Malla Reddy College of Engineering & Technology (UGC-Autonomous), Maisammaguda, Dhulapally, Secunderabad, Telangana, India. Pin Code:500100 | India | India |
| Mr. D Santhosh Kumar | Assistant Professor, Department of Electronics & Communication Engineering, Malla Reddy College of Engineering & Technology (UGC-Autonomous), Maisammaguda, Dhulapally, Secunderabad, Telangana, India. Pin Code:500100 | India | India |
Applicants
| Name | Address | Country | Nationality |
|---|---|---|---|
| Malla Reddy College of Engineering & Technology | Department of Electronics & Communication Engineering, Malla Reddy College of Engineering & Technology (UGC-Autonomous), Maisammaguda, Dhulapally, Secunderabad, Telangana, India. Pin Code:500100 | India | India |
Specification
Description:
[001] The present invention pertains to the fields of embedded systems, image processing, and deep learning. More specifically, it focuses on a method and system for real-time image restoration and noise reduction in embedded image processing systems, using deep learning techniques optimized for low-latency and low-power devices. This invention is relevant for applications in areas such as surveillance, autonomous vehicles, robotics, and IoT devices, where efficient and effective image quality enhancement is critical in resource-constrained environments.
BACKGROUND OF THE INVENTION
[002] The following description provides information that may be useful in understanding the present invention. It is not an admission that any of the information provided herein is prior art or relevant to the presently claimed invention, or that any publication specifically or implicitly referenced is prior art.
[003] Embedded image processing systems are widely used in applications requiring real-time visual data analysis, including automotive, industrial automation, and remote monitoring. These systems often face challenges such as low-light conditions, sensor noise, and limited computational resources, which degrade image quality and impact the performance of subsequent image-based analytics. Traditional methods for noise reduction and image restoration can be computationally intensive and may not meet the speed or power constraints of embedded systems.
[004] Recent advancements in deep learning have shown promising results in image restoration and noise reduction tasks. However, most deep learning-based solutions are designed for high-performance computing environments and are not optimized for the constraints of embedded systems, where processing power, memory, and energy consumption are limited. This invention addresses these challenges by introducing a deep learning-based framework specifically designed for real-time image restoration and noise reduction in embedded image processing systems, delivering high-quality outputs with minimal computational overhead.
[005] Accordingly, to overcome the aforesaid prior-art limitations, the present invention provides a novel deep learning-based method for real-time image restoration and noise reduction in embedded image processing systems. It would therefore be useful and desirable to have a system, method, and apparatus that meet the above-mentioned needs.
SUMMARY OF THE PRESENT INVENTION
[006] This invention provides a novel deep learning-based method for real-time image restoration and noise reduction tailored for embedded image processing systems. The system uses a lightweight neural network architecture, optimized for fast, efficient processing on low-power devices. The framework includes a multi-stage deep learning pipeline: (1) a noise estimation and filtering module, (2) a feature extraction and enhancement module, and (3) a restoration refinement module. These modules work together to restore images from noisy inputs in real time while minimizing memory and processing requirements.
[007] The noise estimation and filtering module initially analyzes the input image to detect and categorize noise types (e.g., Gaussian, salt-and-pepper, or sensor-specific noise). The feature extraction and enhancement module then identifies essential image details and enhances them using deep learning techniques that are computationally efficient. Finally, the restoration refinement module refines the image output by removing remaining artifacts and sharpening edges. This multi-stage approach allows the system to perform high-quality restoration while adhering to the strict latency and power constraints of embedded systems.
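The multi-stage pipeline described above can be sketched as a composition of stage functions. This is an illustrative skeleton only, not the patented models: each stage below is a simple placeholder (a 3x3 mean filter, an identity pass, and a clipping step) standing in for the learned modules.

```python
import numpy as np

def noise_estimation_and_filtering(img: np.ndarray) -> np.ndarray:
    # Placeholder: a 3x3 mean filter stands in for the learned,
    # noise-type-specific filtering described in the specification.
    h, w = img.shape
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros((h, w), dtype=float)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out += padded[dy:dy + h, dx:dx + w]
    return out / 9.0

def feature_extraction_and_enhancement(img: np.ndarray) -> np.ndarray:
    # Placeholder for the lightweight attention-based enhancement stage.
    return img

def restoration_refinement(img: np.ndarray) -> np.ndarray:
    # Placeholder for the denoising autoencoder: clamp to a valid range.
    return np.clip(img, 0.0, 1.0)

def restore(img: np.ndarray) -> np.ndarray:
    # The three stages run sequentially on each input frame.
    for stage in (noise_estimation_and_filtering,
                  feature_extraction_and_enhancement,
                  restoration_refinement):
        img = stage(img)
    return img
```

Each placeholder preserves the image shape, so the stages compose cleanly; the real modules would do the same while performing learned restoration.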
[008] In this respect, before explaining at least one object of the invention in detail, it is to be understood that the invention is not limited in its application to the details of set of rules and to the arrangements of the various models set forth in the following description or illustrated in the drawings. The invention is capable of other objects and of being practiced and carried out in various ways, according to the need of that industry. Also, it is to be understood that the phraseology and terminology employed herein are for the purpose of description and should not be regarded as limiting.
[009] These together with other objects of the invention, along with the various features of novelty which characterize the invention, are pointed out with particularity in the disclosure. For a better understanding of the invention, its operating advantages and the specific objects attained by its uses, reference should be made to the accompanying drawings and descriptive matter in which there are illustrated preferred embodiments of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
[010] The invention will be better understood and objects other than those set forth above will become apparent when consideration is given to the following detailed description thereof. Such description makes reference to the annexed drawings wherein:
FIG. 1: Block diagram illustrating the architecture of the deep learning-based image restoration and noise reduction system for embedded systems.
FIG. 2: Flowchart showing the steps involved in the noise estimation and filtering module.
FIG. 3: Diagram of the feature extraction and enhancement module, illustrating lightweight neural network layers optimized for embedded systems.
FIG. 4: Flowchart detailing the restoration refinement module's steps for final image enhancement.
DETAILED DESCRIPTION OF THE INVENTION
[011] While the present invention is described herein by way of example using embodiments and illustrative drawings, those skilled in the art will recognize that the invention is not limited to the embodiments or drawings described, which are not intended to represent the scale of the various components. Further, some components that may form a part of the invention may not be illustrated in certain figures, for ease of illustration, and such omissions do not limit the embodiments outlined in any way. It should be understood that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed; on the contrary, the invention is to cover all modifications, equivalents, and alternatives falling within the scope of the present invention as defined by the appended claims. As used throughout this description, the word "may" is used in a permissive sense (i.e., meaning having the potential to) rather than the mandatory sense (i.e., meaning must). Further, the words "a" or "an" mean "at least one" and the word "plurality" means "one or more" unless otherwise mentioned. Furthermore, the terminology and phraseology used herein is solely for descriptive purposes and should not be construed as limiting in scope. Language such as "including," "comprising," "having," "containing," or "involving," and variations thereof, is intended to be broad and to encompass the subject matter listed thereafter, equivalents, and additional subject matter not recited, and is not intended to exclude other additives, components, integers, or steps. Likewise, the term "comprising" is considered synonymous with the terms "including" or "containing" for applicable legal purposes. Any discussion of documents, acts, materials, devices, articles, and the like is included in the specification solely for the purpose of providing a context for the present invention. It is not suggested or represented that any or all of these matters formed part of the prior art base or were common general knowledge in the field relevant to the present invention.
[012] In this disclosure, whenever a composition, an element, or a group of elements is preceded by the transitional phrase "comprising", it is understood that we also contemplate the same composition, element, or group of elements with the transitional phrases "consisting of", "consisting", "selected from the group consisting of", "including", or "is" preceding the recitation of the composition, element, or group of elements, and vice versa.
[013] The present invention is described hereinafter by various embodiments with reference to the accompanying drawings, wherein reference numerals used in the accompanying drawing correspond to the like elements throughout the description. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiment set forth herein. Rather, the embodiment is provided so that this disclosure will be thorough and complete and will fully convey the scope of the invention to those skilled in the art. In the following detailed description, numeric values and ranges are provided for various aspects of the implementations described. These values and ranges are to be treated as examples only and are not intended to limit the scope of the claims. In addition, a number of materials are identified as suitable for various facets of the implementations. These materials are to be treated as exemplary and are not intended to limit the scope of the invention.
[014] This invention presents a deep learning-based system for real-time image restoration and noise reduction tailored to embedded platforms. The system combines a noise estimation and filtering module, a feature extraction and enhancement module, and a restoration refinement module into a low-latency pipeline. Lightweight convolutional networks with attention mechanisms enhance essential details, while a denoising autoencoder removes residual artifacts, delivering high-quality restored images on resource-constrained devices for applications such as surveillance, automotive systems, and IoT.
System Architecture (FIG. 1)
[015] The system architecture comprises three main modules: noise estimation and filtering, feature extraction and enhancement, and restoration refinement. Each module utilizes deep learning models that are optimized for the limited processing capabilities of embedded systems, ensuring real-time performance with minimal power consumption.
[016] Noise Estimation and Filtering Module (FIG. 2): This module uses a lightweight convolutional neural network (CNN) to detect and estimate noise levels in the input image. By categorizing the noise type, the module tailors its filtering approach to match the specific noise characteristics, enhancing the efficiency and effectiveness of noise reduction.
[017] The module applies an initial noise filtering step based on the identified noise type, reducing unnecessary data before further processing. For example, Gaussian noise is reduced using a learned Gaussian filter, while salt-and-pepper noise is removed using a median-based deep learning filter.
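A minimal sketch of the noise-type dispatch described in paragraphs [016] and [017], with assumptions labeled: the CNN classifier is replaced here by a simple heuristic (counting fully black/white pixels as evidence of impulse noise), and the learned filters by a classical 3x3 median and mean filter. The threshold `extreme_frac` is a hypothetical parameter.

```python
import numpy as np

def _neighborhoods3x3(img: np.ndarray) -> np.ndarray:
    # Stack the nine 3x3-shifted copies of the edge-padded image,
    # giving each pixel's neighborhood along axis 0.
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return np.stack([p[dy:dy + h, dx:dx + w]
                     for dy in range(3) for dx in range(3)])

def classify_noise(img: np.ndarray, extreme_frac: float = 0.05) -> str:
    # Heuristic stand-in for the CNN classifier: a large share of
    # saturated (fully black/white) pixels suggests salt-and-pepper noise.
    extremes = np.mean((img <= 0.0) | (img >= 1.0))
    return "salt_and_pepper" if extremes > extreme_frac else "gaussian"

def filter_image(img: np.ndarray) -> np.ndarray:
    # Dispatch on the detected noise type, as the module description states.
    stack = _neighborhoods3x3(img)
    if classify_noise(img) == "salt_and_pepper":
        return np.median(stack, axis=0)  # impulse noise: median filter
    return np.mean(stack, axis=0)        # additive noise: smoothing filter
```

The dispatch structure mirrors the specification; the patented system would substitute learned filters for the classical ones used here.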
[018] Feature Extraction and Enhancement Module (FIG. 3): After initial noise reduction, the feature extraction and enhancement module processes the image to identify and enhance essential features. This module employs a shallow but effective neural network architecture, combining lightweight convolutional layers with attention mechanisms that selectively focus on image details.
[019] The module enhances edges, textures, and contrast within the image, preserving crucial visual information. This process ensures that the restored image retains fine details, even in low-light or high-noise conditions, without overloading the embedded system's processing resources.
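One way the attention-based enhancement of paragraphs [018] and [019] could be sketched, under stated assumptions: the learned attention is replaced by a hand-crafted saliency mask that amplifies locations deviating from the image mean (a rough proxy for edges and textures), and `gain` is a hypothetical parameter, not one from the specification.

```python
import numpy as np

def sigmoid(x: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-x))

def attention_enhance(features: np.ndarray, gain: float = 4.0) -> np.ndarray:
    # Saliency: absolute deviation from the mean activation, so flat
    # regions score low and edges/textures score high.
    saliency = np.abs(features - features.mean())
    # Soft attention mask in (0, 1); `gain` sharpens the selection.
    mask = sigmoid(gain * (saliency - saliency.mean()))
    # Residual-style amplification of attended locations.
    return features * (1.0 + mask)
```

A learned module would produce the mask with convolutional layers, but the multiply-by-mask structure is the same.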
[020] Restoration Refinement Module (FIG. 4): The final stage, the restoration refinement module, uses a denoising autoencoder to remove any remaining artifacts and refine the image output. The autoencoder architecture is specifically designed for efficiency, utilizing depthwise separable convolutions to reduce computational complexity.
This module performs final adjustments to enhance sharpness, remove artifacts, and ensure a natural appearance. The restoration refinement module also applies adaptive adjustments based on the image's overall quality, fine-tuning contrast and brightness to produce a visually clear output.
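The parameter savings from depthwise separable convolutions, which motivate their use in the refinement module, can be checked with simple arithmetic (the channel counts below are illustrative, not taken from the specification):

```python
def conv_params(c_in: int, c_out: int, k: int) -> int:
    # Standard convolution: one k x k kernel per (input, output) channel pair.
    return c_in * c_out * k * k

def depthwise_separable_params(c_in: int, c_out: int, k: int) -> int:
    # Depthwise step: one k x k kernel per input channel;
    # pointwise step: a 1x1 convolution mixing channels.
    return c_in * k * k + c_in * c_out

# For a 3x3 layer mapping 32 -> 64 channels (illustrative numbers):
standard = conv_params(32, 64, 3)                  # 18432 parameters
separable = depthwise_separable_params(32, 64, 3)  # 2336 parameters
```

For these sizes the separable form uses roughly 8x fewer parameters (and proportionally fewer multiply-accumulates), which is why it suits embedded deployment.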
[021] Embedded System Compatibility: The entire framework is optimized for embedded systems, with each module designed to perform efficiently on low-power processors. The models are quantized and pruned, reducing memory usage and enabling deployment on resource-constrained devices like microcontrollers and edge computing units.
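The quantization and pruning mentioned above can be illustrated with a minimal numpy sketch; this shows the general techniques (magnitude pruning and symmetric int8 post-training quantization), not the specific procedure used by the patented system.

```python
import numpy as np

def prune_by_magnitude(weights: np.ndarray, sparsity: float) -> np.ndarray:
    # Zero out the smallest-magnitude fraction of weights; sparse
    # weights reduce storage and (with sparse kernels) compute.
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) < threshold, 0.0, weights)

def quantize_int8(weights: np.ndarray):
    # Symmetric post-training quantization: store int8 values plus
    # one float scale, cutting memory 4x versus float32.
    scale = np.abs(weights).max() / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale
```

Round-trip error is bounded by about half the scale per weight, which is typically acceptable for inference on microcontrollers and edge units.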
[022] Output and Real-Time Processing: The restored and denoised image is available for real-time viewing and analysis, suitable for immediate use in applications such as object detection, pattern recognition, or visual monitoring. The system's low-latency processing ensures continuous operation in dynamic environments, where real-time data is crucial.
Workflow
[023] Image Acquisition and Preprocessing: The input image is acquired from the sensor and undergoes basic preprocessing, such as resizing and normalization, to prepare it for the deep learning pipeline. These steps optimize the image data format for efficient neural network processing in subsequent modules.
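The acquisition-side preprocessing described above (resizing and normalization) can be sketched as follows; nearest-neighbour resizing and min-max normalization are assumptions chosen for simplicity, not details from the specification.

```python
import numpy as np

def preprocess(img: np.ndarray, size=(64, 64)) -> np.ndarray:
    # Nearest-neighbour resize: pick source rows/columns by integer ratio.
    h, w = img.shape
    rows = np.arange(size[0]) * h // size[0]
    cols = np.arange(size[1]) * w // size[1]
    resized = img[rows][:, cols].astype(np.float32)
    # Min-max normalization to [0, 1] for the neural network stages.
    lo, hi = resized.min(), resized.max()
    if hi > lo:
        return (resized - lo) / (hi - lo)
    return np.zeros_like(resized)  # degenerate constant image
```

In a real deployment the target size and normalization statistics would match whatever the trained models expect.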
[024] Noise Estimation and Filtering Process: The noise estimation and filtering module detects noise type and level within the image, using a CNN trained on various noise patterns commonly found in embedded image processing applications. After identifying the noise, the system applies an appropriate filtering approach to reduce noise without blurring essential features.
[025] Feature Extraction and Enhancement Process: The feature extraction and enhancement module processes the filtered image, extracting crucial details such as edges and textures. This module uses a lightweight CNN with an attention mechanism that selectively focuses on high-information areas, enhancing image details in a resource-efficient manner.
[026] Restoration Refinement Process: The restoration refinement module uses an autoencoder to remove any remaining noise artifacts and sharpen the image further. This step refines the restored image's quality, making it suitable for real-time viewing and further processing.
[027] Real-Time Output Generation: The final, restored image is outputted for immediate use in real-time applications. This real-time processing capability allows the system to deliver continuous, high-quality visual data for various embedded applications, supporting prompt decision-making.
[028] It is to be understood that the above description is intended to be illustrative, and not restrictive. For example, the above-discussed embodiments may be used in combination with each other. Many other embodiments will be apparent to those of skill in the art upon reviewing the above description.
[029] The benefits and advantages which may be provided by the present invention have been described above with regard to specific embodiments. These benefits and advantages, and any elements or limitations that may cause them to occur or to become more pronounced are not to be construed as critical, required, or essential features of any or all of the embodiments.
[030] While the present invention has been described with reference to particular embodiments, it should be understood that the embodiments are illustrative and that the scope of the invention is not limited to these embodiments. Many variations, modifications, additions and improvements to the embodiments described above are possible. It is contemplated that these variations, modifications, additions and improvements fall within the scope of the invention.
Claims:
1. A deep learning-based system for real-time image restoration and noise reduction in embedded image processing systems, comprising a noise estimation and filtering module, a feature extraction and enhancement module, and a restoration refinement module.
2. The system of claim 1, wherein the noise estimation and filtering module detects and categorizes noise type using a convolutional neural network to optimize the filtering process for each noise type.
3. The system of claim 1, wherein the feature extraction and enhancement module uses a lightweight neural network architecture with attention mechanisms to enhance edges, textures, and contrast in the image.
4. The system of claim 1, wherein the restoration refinement module uses a denoising autoencoder to remove artifacts and refine the image, producing a natural appearance with sharp detail.
5. The system of claim 1, wherein the neural networks used in each module are optimized for embedded systems by quantizing and pruning, reducing memory and processing requirements.
6. The system of claim 1, wherein the noise estimation and filtering module applies specific noise reduction techniques based on detected noise type, including Gaussian filtering for Gaussian noise and median filtering for salt-and-pepper noise.
7. The system of claim 1, further comprising an adaptive adjustment mechanism in the restoration refinement module that fine-tunes contrast and brightness based on the image's overall quality.
8. The system of claim 1, wherein the entire image restoration and noise reduction process operates in real time, providing continuous visual data suitable for embedded applications.
Documents
| Name | Date |
|---|---|
| 202441088307-COMPLETE SPECIFICATION [14-11-2024(online)].pdf | 14/11/2024 |
| 202441088307-DECLARATION OF INVENTORSHIP (FORM 5) [14-11-2024(online)].pdf | 14/11/2024 |
| 202441088307-DRAWINGS [14-11-2024(online)].pdf | 14/11/2024 |
| 202441088307-FORM 1 [14-11-2024(online)].pdf | 14/11/2024 |
| 202441088307-FORM-9 [14-11-2024(online)].pdf | 14/11/2024 |
| 202441088307-REQUEST FOR EARLY PUBLICATION(FORM-9) [14-11-2024(online)].pdf | 14/11/2024 |