Novel Machine Learning Approach for Real-Time Image Compression and Enhancement in Low-Power Devices
ORDINARY APPLICATION
Published
Filed on 12 November 2024
Abstract
This invention introduces a machine learning-powered system for real-time image compression and enhancement, specifically designed to operate on low-power devices such as mobile devices, IoT sensors, and edge computing units. The system includes a lightweight neural network with reduced parameters to minimize computational load, along with an adaptive quantization module that dynamically adjusts compression levels based on image complexity. An edge-aware enhancement module further improves image quality by focusing processing power on critical details. The system is optimized for real-time performance, employing latency-reduction techniques such as model quantization and weight sharing to enable continuous operation within constrained power and memory limits. This invention addresses the challenges of balancing image quality, computational efficiency, and resource limitations, making it suitable for applications requiring efficient, high-quality image processing on low-power devices. Accompanying Drawing [FIG. 1]
Patent Information
Field | Value |
---|---|
Application ID | 202441087354 |
Invention Field | ELECTRONICS |
Date of Application | 12/11/2024 |
Publication Number | 47/2024 |
Inventors
Name | Address | Country | Nationality |
---|---|---|---|
Dr. M. Sharanya | Professor & HoD, Department of Electrical & Electronics Engineering, Malla Reddy College of Engineering & Technology (UGC-Autonomous), Maisammaguda, Dhulapally, Secunderabad, Telangana, India. Pin Code:500100 | India | India |
Dr. Karimulla PSK | Associate Professor, Department of Electrical & Electronics Engineering, Malla Reddy College of Engineering & Technology (UGC-Autonomous), Maisammaguda, Dhulapally, Secunderabad, Telangana, India. Pin Code:500100 | India | India |
Dr. K.V.Ramana Reddy | Associate Professor, Department of Electrical & Electronics Engineering, Malla Reddy College of Engineering & Technology (UGC-Autonomous), Maisammaguda, Dhulapally, Secunderabad, Telangana, India. Pin Code:500100 | India | India |
Dr. G.Madhu Mohan | Associate Professor, Department of Electrical & Electronics Engineering, Malla Reddy College of Engineering & Technology (UGC-Autonomous), Maisammaguda, Dhulapally, Secunderabad, Telangana, India. Pin Code:500100 | India | India |
Dr. Ravi Bukya | Associate Professor, Department of Electrical & Electronics Engineering, Malla Reddy College of Engineering & Technology (UGC-Autonomous), Maisammaguda, Dhulapally, Secunderabad, Telangana, India. Pin Code:500100 | India | India |
Mr. N. Ramesh | Associate Professor, Department of Electrical & Electronics Engineering, Malla Reddy College of Engineering & Technology (UGC-Autonomous), Maisammaguda, Dhulapally, Secunderabad, Telangana, India. Pin Code:500100 | India | India |
Mr. T. Venkata Prasad | Associate Professor, Department of Electrical & Electronics Engineering, Malla Reddy College of Engineering & Technology (UGC-Autonomous), Maisammaguda, Dhulapally, Secunderabad, Telangana, India. Pin Code:500100 | India | India |
Mr. M. Kumaraswamy | Assistant Professor, Department of Electrical & Electronics Engineering, Malla Reddy College of Engineering & Technology (UGC-Autonomous), Maisammaguda, Dhulapally, Secunderabad, Telangana, India. Pin Code:500100 | India | India |
Mr. K. Aravinda Swamy | Assistant Professor, Department of Electrical & Electronics Engineering, Malla Reddy College of Engineering & Technology (UGC-Autonomous), Maisammaguda, Dhulapally, Secunderabad, Telangana, India. Pin Code:500100 | India | India |
Applicants
Name | Address | Country | Nationality |
---|---|---|---|
Malla Reddy College of Engineering & Technology | Department of Electrical & Electronics Engineering, Malla Reddy College of Engineering & Technology (UGC-Autonomous), Maisammaguda, Dhulapally, Secunderabad, Telangana, India. Pin Code:500100 | India | India |
Specification
Description:[001] The present invention pertains to the field of image processing, with a particular focus on a machine learning-based approach for real-time image compression and enhancement. This system is specifically designed for low-power devices, which often have limited computational and power resources. These devices include, but are not limited to, smartphones, Internet of Things (IoT) sensors, and edge computing units. The invention aims to deliver efficient image processing that preserves both quality and resource utilization, enabling real-time application of advanced image processing techniques on resource-constrained devices.
BACKGROUND OF THE INVENTION
[002] The following description provides information that may be useful in understanding the present invention. It is not an admission that any of the information provided herein is prior art or relevant to the presently claimed invention, or that any publication specifically or implicitly referenced is prior art.
[003] With the rapid expansion of mobile and IoT applications, the demand for image processing on low-power devices has surged. Applications such as surveillance, autonomous navigation, and remote medical diagnostics often require real-time image compression and enhancement capabilities. However, low-power devices face constraints in processing power, memory, and battery life, which challenge their ability to handle high-resolution image data efficiently. Traditional image processing methods and machine learning models are typically resource-intensive, which makes them impractical for deployment on such devices. Compression techniques are essential for reducing data size, which in turn lowers storage and transmission bandwidth requirements. Enhancement techniques are also needed to improve the quality of images captured under less-than-ideal conditions, such as low lighting or poor focus.
[004] The advent of neural networks and machine learning-based image processing has improved image compression and enhancement, offering solutions that can deliver high-quality results. However, many of these models are complex and computationally demanding, limiting their feasibility for real-time operation on low-power devices. To address these challenges, the present invention introduces a novel machine learning approach that balances image quality and computational efficiency. It leverages lightweight neural networks, adaptive quantization, and a training pipeline focused on efficiency, making it well-suited for low-power devices that need high-quality image processing without high energy consumption.
[005] Accordingly, to overcome the aforesaid limitations of the prior art, the present invention provides a novel machine learning approach for real-time image compression and enhancement in low-power devices. It would therefore be useful and desirable to have a system, method, and apparatus meeting the above-mentioned needs.
SUMMARY OF THE PRESENT INVENTION
[006] This invention provides a system and method for real-time image compression and enhancement optimized for low-power devices, combining machine learning-based techniques with resource-efficient architecture. The system's core elements include a lightweight neural network tailored for low computational load, an adaptive quantization and encoding module that compresses images based on their complexity, and an edge-aware enhancement module that preserves crucial details in the image. Designed for real-time operation, the system uses model quantization, weight sharing, and other latency-reduction techniques to ensure it can process images continuously on devices with limited power and memory.
[007] The lightweight neural network is structured with a reduced number of parameters and layers to minimize processing demands, enabling high-quality image compression and enhancement within the computational constraints of low-power devices. The adaptive quantization module dynamically adjusts the compression rate across different regions of an image based on the complexity of content, allowing highly detailed areas to retain quality while low-detail areas are compressed more heavily. This module works in conjunction with an encoding algorithm that further reduces data size based on device limitations. The training pipeline for the neural network is designed to balance image quality and efficiency, using techniques such as dynamic layer pruning and precision tuning to enhance computational performance without sacrificing output quality. The edge-aware enhancement module focuses processing power on high-information areas within an image, improving visual clarity, especially under challenging conditions like low light or high noise. The real-time operation framework supports the continuous processing of images, maintaining a high frame rate through efficient handling of computational resources. This invention offers a robust solution for applications in surveillance, mobile imaging, and remote diagnostics where high-quality, real-time image processing is essential but power resources are limited.
[008] In this respect, before explaining at least one object of the invention in detail, it is to be understood that the invention is not limited in its application to the details of set of rules and to the arrangements of the various models set forth in the following description or illustrated in the drawings. The invention is capable of other objects and of being practiced and carried out in various ways, according to the need of that industry. Also, it is to be understood that the phraseology and terminology employed herein are for the purpose of description and should not be regarded as limiting.
[009] These together with other objects of the invention, along with the various features of novelty which characterize the invention, are pointed out with particularity in the disclosure. For a better understanding of the invention, its operating advantages and the specific objects attained by its uses, reference should be made to the accompanying drawings and descriptive matter in which there are illustrated preferred embodiments of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
[010] The invention will be better understood and objects other than those set forth above will become apparent when consideration is given to the following detailed description thereof. Such description makes reference to the annexed drawings wherein:
[011] Figure 1: System Architecture Diagram
A schematic representation of the architecture, showing the lightweight neural network, adaptive quantization module, and edge-aware enhancement module.
[012] Figure 2: Data Flow for Real-Time Compression and Enhancement
Flowchart illustrating the steps in capturing, compressing, enhancing, and outputting an image in real-time.
[013] Figure 3: Neural Network Structure for Image Compression and Enhancement
Diagram showing the specific layers used in the neural network, including convolutional layers, attention mechanisms, and quantization layers.
[014] Figure 4: Adaptive Quantization Process
Visualization of the adaptive quantization module, demonstrating how compression rates vary based on image content complexity.
[015] Figure 5: Comparison of Image Quality and Compression Rates
Graphical comparison of output quality versus compression rates across different levels of available device power and computational resources.
DETAILED DESCRIPTION OF THE INVENTION
[016] While the present invention is described herein by way of example using embodiments and illustrative drawings, those skilled in the art will recognize that the invention is not limited to the embodiments or drawings described, and that the drawings are not intended to represent the scale of the various components. Further, some components that may form a part of the invention may not be illustrated in certain figures, for ease of illustration, and such omissions do not limit the embodiments outlined in any way. It should be understood that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed; on the contrary, the invention is to cover all modifications, equivalents, and alternatives falling within the scope of the present invention as defined by the appended claims. As used throughout this description, the word "may" is used in a permissive sense (i.e., having the potential to) rather than the mandatory sense (i.e., must). Further, the words "a" or "an" mean "at least one" and the word "plurality" means "one or more" unless otherwise mentioned. Furthermore, the terminology and phraseology used herein is solely for descriptive purposes and should not be construed as limiting in scope. Language such as "including," "comprising," "having," "containing," or "involving," and variations thereof, is intended to be broad and to encompass the subject matter listed thereafter, equivalents, and additional subject matter not recited, and is not intended to exclude other additives, components, integers or steps. Likewise, the term "comprising" is considered synonymous with the terms "including" or "containing" for applicable legal purposes. Any discussion of documents, acts, materials, devices, articles and the like is included in the specification solely for the purpose of providing a context for the present invention. It is not suggested or represented that any or all of these matters form part of the prior art base or are common general knowledge in the field relevant to the present invention.
[017] In this disclosure, whenever a composition, an element, or a group of elements is preceded by the transitional phrase "comprising", it is understood that the same composition, element, or group of elements is also contemplated with the transitional phrases "consisting of", "consisting", "selected from the group consisting of", "including", or "is" preceding the recitation, and vice versa.
[018] The present invention is described hereinafter by various embodiments with reference to the accompanying drawings, wherein reference numerals used in the accompanying drawing correspond to the like elements throughout the description. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiment set forth herein. Rather, the embodiment is provided so that this disclosure will be thorough and complete and will fully convey the scope of the invention to those skilled in the art. In the following detailed description, numeric values and ranges are provided for various aspects of the implementations described. These values and ranges are to be treated as examples only and are not intended to limit the scope of the claims. In addition, a number of materials are identified as suitable for various facets of the implementations. These materials are to be treated as exemplary and are not intended to limit the scope of the invention.
System Overview
[019] The system described in this invention is designed to perform real-time image compression and enhancement on low-power devices by employing a machine learning-based architecture tailored for efficiency. The invention comprises a lightweight neural network, an adaptive quantization and encoding module, an edge-aware enhancement module, and a real-time operation framework, each contributing to optimal image processing on resource-constrained hardware.
[020] The lightweight neural network, a core component of this invention, is constructed with a minimized number of parameters and layers. This design prioritizes low computational demands, allowing the neural network to process images on devices with limited processing power and memory. The network incorporates both compression and enhancement capabilities. The compression layers use autoencoders to reduce the image size, ensuring that data storage and transmission requirements are minimized. The enhancement layers apply learned transformations to improve visual clarity, particularly in scenarios where images are captured in suboptimal conditions.
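The compress-then-reconstruct behaviour of these layers can be sketched without a trained network. The following minimal Python stand-in (the `encode`/`decode` helpers are hypothetical; average pooling takes the place of learned autoencoder layers) illustrates how the compression path shrinks the data and the decoder reconstructs an image of the original size:

```python
def encode(img, factor=2):
    """Downsample by averaging factor x factor blocks
    (a fixed stand-in for learned compression layers)."""
    h, w = len(img), len(img[0])
    return [[sum(img[y + dy][x + dx] for dy in range(factor) for dx in range(factor)) / factor**2
             for x in range(0, w, factor)]
            for y in range(0, h, factor)]

def decode(latent, factor=2):
    """Upsample by nearest-neighbour replication
    (a fixed stand-in for learned decoder layers)."""
    return [[latent[y // factor][x // factor]
             for x in range(len(latent[0]) * factor)]
            for y in range(len(latent) * factor)]
```

A learned autoencoder would replace the fixed pooling with trainable weights, but the data flow (a latent representation a quarter the size of the input, then reconstruction) matches the paragraph above.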
[021] To further reduce data size, the adaptive quantization and encoding module assesses the complexity of different image regions. Regions with minimal detail are compressed more heavily, while high-detail regions are given lower compression to preserve quality. The adaptive quantization process allows the system to optimize image quality across regions without unnecessary data retention. The encoded data is tailored to the specific resource availability of the device, whether in terms of processing power, memory, or battery life. The encoding process can apply both lossy and lossless compression techniques, depending on the target application requirements.
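The region-dependent compression described above can be sketched as follows, assuming per-block variance as the complexity measure (the specification does not name a measure, and the function names and thresholds are illustrative):

```python
def block_variance(block):
    """Population variance of a flat list of pixel values."""
    n = len(block)
    mean = sum(block) / n
    return sum((v - mean) ** 2 for v in block) / n

def adaptive_quantize(blocks, fine_step=4, coarse_step=16, threshold=100.0):
    """Quantize each block of pixels: high-variance (detailed) blocks get a
    fine quantization step, flat blocks a coarse one."""
    out = []
    for block in blocks:
        step = fine_step if block_variance(block) > threshold else coarse_step
        out.append([round(v / step) * step for v in block])
    return out
```

A flat block collapses onto a few coarse levels while a detailed block is preserved almost exactly, which is the quality/size trade-off the module is described as making.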
[022] The model's training pipeline balances efficiency and image quality by employing multi-objective optimization. This process involves dynamic layer pruning, which reduces the number of active layers according to the device's available resources, thereby maintaining computational efficiency without sacrificing output quality. Precision tuning adjusts the model's floating-point calculations from higher to lower precision, which reduces computational load without significant degradation in image quality.
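The two efficiency techniques can be illustrated on a flat list of weights. Below, `prune_weights` uses magnitude pruning as a simplified stand-in for dynamic layer pruning, and `reduce_precision` rounds weights onto a coarser grid as a stand-in for float-precision tuning (both helpers are hypothetical, not from the specification):

```python
def prune_weights(weights, keep_ratio=0.5):
    """Keep the largest-magnitude weights, zero the rest (magnitude pruning)."""
    k = max(1, int(len(weights) * keep_ratio))
    cutoff = sorted((abs(w) for w in weights), reverse=True)[k - 1]
    return [w if abs(w) >= cutoff else 0.0 for w in weights]

def reduce_precision(weights, bits=8):
    """Round weights onto a 2**(bits-1)-level grid, mimicking a cast from
    high- to low-precision arithmetic."""
    levels = 2 ** (bits - 1)
    return [round(w * levels) / levels for w in weights]
```

Both transforms shrink compute and memory cost while leaving the dominant weights (and hence most of the output quality) intact, which is the balance the training pipeline targets.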
[023] The edge-aware enhancement module further improves image quality by focusing computational power on important image regions. Edge detection algorithms identify key areas within the image, and attention mechanisms prioritize processing on these regions. This design ensures that details in high-information areas are preserved, producing a clear image output while conserving resources in lower-priority areas.
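A minimal sketch of this selective processing, assuming a forward-difference gradient in place of a full edge detector and a simple contrast boost in place of learned enhancement (all names and parameters are illustrative):

```python
def gradient_magnitude(img, y, x):
    """Forward-difference gradient (a cheap stand-in for a Sobel edge detector)."""
    gx = img[y][x + 1] - img[y][x] if x + 1 < len(img[0]) else 0
    gy = img[y + 1][x] - img[y][x] if y + 1 < len(img) else 0
    return abs(gx) + abs(gy)

def edge_aware_enhance(img, threshold=10, gain=1.5):
    """Boost contrast about the global mean only at edge pixels;
    flat regions pass through untouched, conserving work."""
    h, w = len(img), len(img[0])
    mean = sum(sum(row) for row in img) / (h * w)
    return [[mean + (img[y][x] - mean) * gain
             if gradient_magnitude(img, y, x) > threshold else img[y][x]
             for x in range(w)]
            for y in range(h)]
```

Only pixels whose gradient exceeds the threshold are reprocessed, mirroring how the module concentrates computation on high-information regions.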
[024] For real-time performance, the system incorporates a latency-reduction framework, enabling continuous image processing on low-power devices. Techniques such as model quantization, weight sharing, and batch normalization improve inference speed and maintain high frame rates. Model quantization reduces the overall size of the model, allowing it to fit within constrained memory limits. Weight sharing minimizes memory use by reducing redundancy in the neural network's weights, and batch normalization stabilizes output distributions, enhancing processing speed.
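Weight sharing can be sketched as replacing each weight with a value from a small shared codebook. The bucketed averaging below is a simplified stand-in for k-means codebook clustering (the helper name and cluster count are illustrative):

```python
def share_weights(weights, n_clusters=2):
    """Replace each weight with the centroid of its value bucket, so only
    n_clusters distinct values (plus small indices) need to be stored."""
    lo, hi = min(weights), max(weights)
    width = (hi - lo) / n_clusters or 1.0
    buckets = {}
    for w in weights:
        idx = min(int((w - lo) / width), n_clusters - 1)
        buckets.setdefault(idx, []).append(w)
    centroids = {i: sum(ws) / len(ws) for i, ws in buckets.items()}
    return [centroids[min(int((w - lo) / width), n_clusters - 1)] for w in weights]
```

After sharing, the network stores a handful of centroid values instead of one float per weight, which is the memory reduction the paragraph attributes to weight sharing.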
The operational workflow begins with image capture and preprocessing. The image is captured by the device's onboard camera or sensor, then passed to the lightweight neural network for compression. After initial compression, the adaptive quantization and encoding module further reduces the data size, selectively compressing areas based on their content complexity. The image is then passed through the edge-aware enhancement module, which focuses processing on critical details and edges within the image. The processed image is outputted in real time, either displayed on the device screen or transmitted to a connected system. By employing latency-reduction techniques, the system can maintain high frame rates, enabling continuous operation in applications where real-time processing is crucial.
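The workflow above can be condensed into a toy pipeline. Every stage below is a deliberately simplified stand-in (a fixed frame instead of a camera read, average pooling instead of the neural network, uniform quantization and gain instead of the adaptive and edge-aware modules), shown only to make the capture → compress → quantize → enhance ordering concrete:

```python
def capture():
    """Stand-in for a camera read: a fixed 4x4 grayscale frame."""
    return [[0, 0, 128, 128], [0, 0, 128, 128],
            [64, 64, 255, 255], [64, 64, 255, 255]]

def compress(frame):
    """2x2 average pooling as a stand-in for the learned compression layers."""
    return [[(frame[y][x] + frame[y][x + 1] + frame[y + 1][x] + frame[y + 1][x + 1]) // 4
             for x in range(0, len(frame[0]), 2)]
            for y in range(0, len(frame), 2)]

def quantize(latent, step=16):
    """Uniform quantization as a stand-in for the adaptive module."""
    return [[(v // step) * step for v in row] for row in latent]

def enhance(latent, gain=1.1):
    """Uniform gain (clamped to 8-bit range) as a stand-in for
    edge-aware enhancement."""
    return [[min(255, int(v * gain)) for v in row] for row in latent]

frame = enhance(quantize(compress(capture())))
```

Each stage consumes the previous stage's output, matching the serial data flow of the operational workflow.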
[025] This system addresses the demand for high-quality, real-time image processing on low-power devices, particularly in applications such as mobile imaging, surveillance, and medical diagnostics, where power constraints are significant. By balancing efficiency and image quality, the invention supports high-quality visual output while operating within the limits of available resources.
[026] It is to be understood that the above description is intended to be illustrative, and not restrictive. For example, the above-discussed embodiments may be used in combination with each other. Many other embodiments will be apparent to those of skill in the art upon reviewing the above description.
[027] The benefits and advantages which may be provided by the present invention have been described above with regard to specific embodiments. These benefits and advantages, and any elements or limitations that may cause them to occur or to become more pronounced are not to be construed as critical, required, or essential features of any or all of the embodiments.
[028] While the present invention has been described with reference to particular embodiments, it should be understood that the embodiments are illustrative and that the scope of the invention is not limited to these embodiments. Many variations, modifications, additions and improvements to the embodiments described above are possible. It is contemplated that these variations, modifications, additions and improvements fall within the scope of the invention.
Claims:
1. A real-time image compression and enhancement system for low-power devices, comprising:
I. a lightweight neural network configured to perform image compression and enhancement with reduced parameters for low computational load;
II. an adaptive quantization module that dynamically adjusts compression rates based on image content complexity;
III. an encoding module that further compresses the image, tailored to the device's power and storage capabilities;
IV. an edge-aware enhancement module that focuses on critical image details, preserving quality while conserving computational resources; and
V. a real-time operation framework implementing latency-reduction techniques for continuous operation on low-power devices.
2. The system of claim 1, wherein the lightweight neural network includes convolutional layers for feature extraction, autoencoder layers for data reduction, and enhancement layers for visual clarity improvement.
3. The system of claim 1, wherein the adaptive quantization module uses an assessment of image complexity to allocate compression levels dynamically across different regions, preserving detail in high-information areas.
4. The system of claim 1, wherein the encoding module applies lossy and lossless compression techniques based on device resource availability and user-defined quality requirements.
5. The system of claim 1, wherein the edge-aware enhancement module includes an attention mechanism that prioritizes high-information areas within the image for enhanced clarity.
6. The system of claim 1, wherein the real-time operation framework includes model quantization, batch normalization, and weight sharing techniques to reduce processing time within low-power device constraints.
Documents
Name | Date |
---|---|
202441087354-COMPLETE SPECIFICATION [12-11-2024(online)].pdf | 12/11/2024 |
202441087354-DECLARATION OF INVENTORSHIP (FORM 5) [12-11-2024(online)].pdf | 12/11/2024 |
202441087354-DRAWINGS [12-11-2024(online)].pdf | 12/11/2024 |
202441087354-FORM 1 [12-11-2024(online)].pdf | 12/11/2024 |
202441087354-FORM-9 [12-11-2024(online)].pdf | 12/11/2024 |
202441087354-REQUEST FOR EARLY PUBLICATION(FORM-9) [12-11-2024(online)].pdf | 12/11/2024 |