Machine Learning-Enabled Dynamic Image Quality Enhancement for Augmented Reality Applications in 5G Networks
ORDINARY APPLICATION
Published
Filed on 14 November 2024
Abstract
This invention presents a machine learning-enabled system for dynamically enhancing image quality in augmented reality (AR) applications over 5G networks. The system features adaptive image resolution, context-aware scaling, and real-time augmentation tailored to user interactions and environmental conditions. By leveraging machine learning and 5G capabilities, the system provides seamless, high-quality AR experiences optimized across various scenarios and network conditions, applicable in fields such as healthcare, gaming, and training. Accompanying drawing: [FIG. 1]
Patent Information
Application ID | 202441088305 |
Invention Field | COMPUTER SCIENCE |
Date of Application | 14/11/2024 |
Publication Number | 47/2024 |
Inventors
Name | Address | Country | Nationality |
---|---|---|---|
Dr. M V Kamal | Professor & HoD, Department of Computer Science and Engineering, Malla Reddy College of Engineering & Technology (UGC-Autonomous), Maisammaguda, Dhulapally, Secunderabad, Telangana, India. Pin Code:500100 | India | India |
Dr. P Dileep | Professor, Department of Computer Science and Engineering, Malla Reddy College of Engineering & Technology (UGC-Autonomous), Maisammaguda, Dhulapally, Secunderabad, Telangana, India. Pin Code:500100 | India | India |
Dr. M Gayatri | Associate Professor, Department of Computer Science and Engineering, Malla Reddy College of Engineering & Technology (UGC-Autonomous), Maisammaguda, Dhulapally, Secunderabad, Telangana, India. Pin Code:500100 | India | India |
Mrs. V Bala | Associate Professor, Department of Computer Science and Engineering, Malla Reddy College of Engineering & Technology (UGC-Autonomous), Maisammaguda, Dhulapally, Secunderabad, Telangana, India. Pin Code:500100 | India | India |
Mr. P.Sreenivas | Associate Professor, Department of Computer Science and Engineering, Malla Reddy College of Engineering & Technology (UGC-Autonomous), Maisammaguda, Dhulapally, Secunderabad, Telangana, India. Pin Code:500100 | India | India |
Mrs. P.Satyavathi | Assistant Professor, Department of Computer Science and Engineering, Malla Reddy College of Engineering & Technology (UGC-Autonomous), Maisammaguda, Dhulapally, Secunderabad, Telangana, India. Pin Code:500100 | India | India |
Mr. Mahendar Jinukala | Assistant Professor, Department of Computer Science and Engineering, Malla Reddy College of Engineering & Technology (UGC-Autonomous), Maisammaguda, Dhulapally, Secunderabad, Telangana, India. Pin Code:500100 | India | India |
Mrs. P.Lavanya | Assistant Professor, Department of Computer Science and Engineering, Malla Reddy College of Engineering & Technology (UGC-Autonomous), Maisammaguda, Dhulapally, Secunderabad, Telangana, India. Pin Code:500100 | India | India |
Mrs. R Annapurna | Assistant Professor, Department of Computer Science and Engineering, Malla Reddy College of Engineering & Technology (UGC-Autonomous), Maisammaguda, Dhulapally, Secunderabad, Telangana, India. Pin Code:500100 | India | India |
Applicants
Name | Address | Country | Nationality |
---|---|---|---|
Malla Reddy College of Engineering & Technology | Department of Computer Science and Engineering, Malla Reddy College of Engineering & Technology (UGC-Autonomous), Maisammaguda, Dhulapally, Secunderabad, Telangana, India. Pin Code:500100 | India | India |
Specification
Description:
[001] The present invention pertains to the fields of augmented reality (AR), image processing, and machine learning, with a specific focus on image quality enhancement for augmented reality applications. This invention employs machine learning techniques to dynamically enhance image resolution, scale, and contextual relevance in real-time within augmented reality environments, leveraging the high-speed, low-latency capabilities of 5G networks.
BACKGROUND OF THE INVENTION
[002] The following description provides information that may be useful in understanding the present invention. It is not an admission that any of the information provided herein is prior art or relevant to the presently claimed invention, or that any publication specifically or implicitly referenced is prior art.
[003] Augmented reality (AR) applications have gained prominence in fields ranging from entertainment and gaming to education and healthcare. However, high-quality AR experiences are limited by image resolution, stability, and contextual relevance. Current AR systems typically operate within fixed parameters for image scaling, positioning, and quality, which can result in suboptimal visual experiences, especially in dynamic environments with changing light, user movement, and network conditions.
[004] With the deployment of 5G networks, which offer substantial improvements in bandwidth and latency, there is an opportunity to improve AR quality dynamically. Machine learning (ML) can play a critical role by providing adaptive image quality adjustments based on real-time environmental factors and user interactions, thus enhancing the AR experience. However, current solutions lack an integrated system that adapts dynamically across all these dimensions.
[005] Accordingly, to overcome the aforesaid limitations of the prior art, the present invention provides a machine learning-enabled system for dynamic image quality enhancement in augmented reality applications over 5G networks. It would therefore be useful and desirable to have a system, method, and apparatus that meet the above-mentioned needs.
SUMMARY OF THE PRESENT INVENTION
[006] This invention aims to provide an adaptive AR enhancement system that utilizes machine learning and 5G network capabilities to create a high-quality, immersive user experience. It combines ML-driven image adjustments with real-time responsiveness to user and environmental factors, enhancing image clarity, contextual relevance, and object scaling as needed. Key components of the system include:
(1) a dynamic resolution adjustment module that enhances AR image quality through deep learning, based on factors like user proximity and environmental lighting;
(2) a context-aware scaling module that adjusts the size and perspective of AR objects based on spatial factors, ensuring realism and immersion in any setting;
(3) an adaptive augmentation module that personalizes AR effects in response to user interactions, minimizing visual clutter and maintaining focus on relevant content; and
(4) a 5G network optimization layer that adapts image processing based on current network conditions to maintain a seamless experience under varying bandwidths and latencies.
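As a rough sketch of how these four components might interact on each frame, the following Python pseudocode wires simple stand-in stubs for components (1)-(4) into one pipeline. Every function name, data shape, and threshold below is a hypothetical illustration, not the patented implementation.

```python
def adjust_resolution(frame, distance_m, lux):
    """(1) Stub: scale resolution up for close, well-lit views."""
    factor = 2.0 if distance_m < 1.0 and lux > 100.0 else 1.0
    w, h = frame["resolution"]
    frame["resolution"] = (int(w * factor), int(h * factor))
    return frame

def scale_objects(frame, distance_m):
    """(2) Stub: perspective scaling of AR objects with viewing distance."""
    frame["object_scale"] = 1.0 / max(distance_m, 0.1)
    return frame

def prune_augmentations(frame, interacting):
    """(3) Stub: drop optional overlays unless the user is interacting."""
    if not interacting:
        frame["overlays"] = [o for o in frame["overlays"] if o["required"]]
    return frame

def fit_to_network(frame, bandwidth_mbps):
    """(4) Stub: cap resolution when the 5G link is constrained."""
    if bandwidth_mbps < 20.0:
        w, h = frame["resolution"]
        frame["resolution"] = (min(w, 854), min(h, 480))
    return frame

def per_frame_pipeline(frame, distance_m, lux, interacting, bandwidth_mbps):
    """Run one AR frame through components (1)-(4) in order."""
    frame = adjust_resolution(frame, distance_m, lux)
    frame = scale_objects(frame, distance_m)
    frame = prune_augmentations(frame, interacting)
    return fit_to_network(frame, bandwidth_mbps)
```

Under this sketch, a close-up frame on a congested link is first upscaled by the resolution module and then capped by the network layer, showing how the components can pull in opposite directions on the same frame.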
[007] The integration of these components enables the system to deliver a contextually aware, highly responsive AR experience. For example, in applications like healthcare training simulations, the system can adjust image resolution and augment specific anatomical areas based on trainee interactions. In outdoor gaming scenarios, it can dynamically scale virtual characters and scenery to match user movements and maintain immersion. This adaptability ensures that the AR experience remains smooth, engaging, and practical across a wide range of devices and environments.
[008] In this respect, before explaining at least one object of the invention in detail, it is to be understood that the invention is not limited in its application to the details of set of rules and to the arrangements of the various models set forth in the following description or illustrated in the drawings. The invention is capable of other objects and of being practiced and carried out in various ways, according to the need of that industry. Also, it is to be understood that the phraseology and terminology employed herein are for the purpose of description and should not be regarded as limiting.
[009] These together with other objects of the invention, along with the various features of novelty which characterize the invention, are pointed out with particularity in the disclosure. For a better understanding of the invention, its operating advantages and the specific objects attained by its uses, reference should be made to the accompanying drawings and descriptive matter in which there are illustrated preferred embodiments of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
[010] The invention will be better understood and objects other than those set forth above will become apparent when consideration is given to the following detailed description thereof. Such description makes reference to the annexed drawings wherein:
FIG. 1 depicts a block diagram illustrating the architecture of the adaptive image quality enhancement system for AR over a 5G network.
FIG. 2 presents a flowchart that explains the process flow for ML-driven dynamic image resolution enhancement.
FIG. 3 provides a schematic representation of the context-aware scaling adjustments based on environmental conditions.
FIG. 4 shows the network optimization layers that adjust augmentation quality according to current 5G network bandwidth and latency.
FIG. 5 illustrates an adaptive augmentation example where ML tailors image clarity and scaling based on real-time user interactions and environmental changes.
DETAILED DESCRIPTION OF THE INVENTION
[011] While the present invention is described herein by way of example using embodiments and illustrative drawings, those skilled in the art will recognize that the invention is not limited to the embodiments or drawings described, which are not intended to represent the scale of the various components. Further, some components that may form a part of the invention may not be illustrated in certain figures, for ease of illustration, and such omissions do not limit the embodiments outlined in any way. It should be understood that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the invention is to cover all modifications, equivalents, and alternatives falling within the scope of the present invention as defined by the appended claims. As used throughout this description, the word "may" is used in a permissive sense (i.e., having the potential to) rather than the mandatory sense (i.e., must). Further, the words "a" or "an" mean "at least one" and the word "plurality" means "one or more" unless otherwise mentioned. Furthermore, the terminology and phraseology used herein are solely for descriptive purposes and should not be construed as limiting in scope. Language such as "including," "comprising," "having," "containing," or "involving," and variations thereof, is intended to be broad and encompass the subject matter listed thereafter, equivalents, and additional subject matter not recited, and is not intended to exclude other additives, components, integers or steps. Likewise, the term "comprising" is considered synonymous with the terms "including" or "containing" for applicable legal purposes. Any discussion of documents, acts, materials, devices, articles and the like is included in the specification solely to provide a context for the present invention. It is not suggested or represented that any or all of these matters form part of the prior art base or are common general knowledge in the field relevant to the present invention.
[012] In this disclosure, whenever a composition or an element or a group of elements is preceded with the transitional phrase "comprising", it is understood that we also contemplate the same composition, element or group of elements with the transitional phrases "consisting of", "consisting", "selected from the group consisting of", "including", or "is" preceding the recitation of the composition, element or group of elements, and vice versa.
[013] The present invention is described hereinafter by various embodiments with reference to the accompanying drawings, wherein reference numerals used in the accompanying drawing correspond to the like elements throughout the description. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiment set forth herein. Rather, the embodiment is provided so that this disclosure will be thorough and complete and will fully convey the scope of the invention to those skilled in the art. In the following detailed description, numeric values and ranges are provided for various aspects of the implementations described. These values and ranges are to be treated as examples only and are not intended to limit the scope of the claims. In addition, a number of materials are identified as suitable for various facets of the implementations. These materials are to be treated as exemplary and are not intended to limit the scope of the invention.
System Overview
[014] The system's architecture includes an AR processing unit, an ML-based image enhancement module, an interaction and environment feedback module, and a 5G network optimization layer (FIG. 1). The AR processing unit is responsible for generating and managing augmented objects that are displayed to the user, handling initial image rendering based on environmental data. The ML-based image enhancement module is then applied, utilizing deep learning models that dynamically adjust image resolution by analyzing user interactions and environmental factors. These deep learning algorithms refine image edges, adjust color balances, and reduce noise to improve clarity and resolution based on real-time inputs.
[015] The interaction and environment feedback module gathers data through sensors, such as cameras and accelerometers, which monitor user actions and environmental conditions. This module enables the system to adjust AR elements' scale and positioning, aligning with the user's perspective and enhancing realism. For example, the system can adjust the scale of AR objects in open spaces to appear smaller or larger based on user proximity, creating a more seamless experience.
[016] The dynamic image resolution enhancement (FIG. 2) uses a multi-step process. Initially, the system captures the baseline image within the AR application. Then, an ML model trained on diverse datasets analyzes the image's baseline resolution and clarity. Based on environmental factors, such as lighting and user proximity, the ML model dynamically upscales or downscales the resolution to enhance visual fidelity. Real-time feedback integration ensures continuous optimization, so that close-up images receive high-resolution detail, while distant objects are downscaled to conserve processing resources.
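The multi-step flow above can be sketched in Python. The `predict_scale` stub stands in for the trained ML model, and all thresholds and names are hypothetical assumptions, not the claimed implementation.

```python
def predict_scale(distance_m: float, ambient_lux: float) -> float:
    """Stand-in for the trained ML model: maps environmental factors
    to a resolution scale factor (close, well-lit views get more detail)."""
    scale = 1.0
    if distance_m < 1.0:
        scale *= 2.0      # close-up: upscale for visual fidelity
    elif distance_m > 5.0:
        scale *= 0.5      # distant: downscale to conserve processing
    if ambient_lux < 50.0:
        scale *= 0.8      # low light masks fine detail
    return scale

def enhance_frame(base_resolution: tuple, distance_m: float,
                  ambient_lux: float) -> tuple:
    """Steps from FIG. 2: capture baseline, analyze context, rescale."""
    w, h = base_resolution                      # 1. baseline image
    s = predict_scale(distance_m, ambient_lux)  # 2-3. ML-driven scale choice
    return (int(w * s), int(h * s))             # 4. upscale / downscale

# Close-up in daylight is upscaled; a distant object is downscaled.
close = enhance_frame((1280, 720), distance_m=0.5, ambient_lux=300.0)
far = enhance_frame((1280, 720), distance_m=8.0, ambient_lux=300.0)
# close == (2560, 1440), far == (640, 360)
```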
[017] Context-aware scaling (FIG. 3) is achieved through real-time adjustments that consider the user's position, movement, and environmental setting. The system monitors spatial conditions to assess the best scaling approach for AR objects, making large objects appear smaller in open spaces, and adjusting perspective to maintain visual consistency. This context-aware scaling module ensures that AR experiences remain realistic and immersive by fine-tuning visual elements according to the user's surroundings and movement.
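The scaling behaviour can be illustrated with a simple pinhole-perspective model: apparent size falls off with distance and is clamped to a believable on-screen range. The clamp bounds and function names are illustrative assumptions, not the patented method.

```python
def apparent_scale(object_size_m: float, distance_m: float) -> float:
    """Pinhole-style perspective: apparent size falls off with distance."""
    return object_size_m / max(distance_m, 0.1)

def clamp_scale(scale: float, lo: float = 0.05, hi: float = 3.0) -> float:
    """Keep AR objects within a believable on-screen size range."""
    return max(lo, min(hi, scale))

def rescale_object(object_size_m: float, distance_m: float) -> float:
    """Context-aware scale for an AR object at the given viewing distance."""
    return clamp_scale(apparent_scale(object_size_m, distance_m))

# A 2 m virtual object viewed from 4 m away appears at half scale.
print(rescale_object(2.0, 4.0))   # 0.5
```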
[018] The adaptive augmentation module (FIG. 5) refines AR elements' relevance based on user interaction, simplifying the display and emphasizing important details. This module filters out redundant augmentations to create a clutter-free experience. By interpreting gaze tracking, gestures, and other interaction data, the ML algorithms tailor the AR content to reflect only the most contextually pertinent information, enhancing focus and usability.
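One way to picture the relevance filtering is as a gaze-weighted ranking over candidate overlays; the scoring weights, field names, and data shapes below are hypothetical illustrations.

```python
import math

def relevance(aug: dict, gaze_xy: tuple) -> float:
    """Score an augmentation by proximity to the user's gaze point,
    weighted by a content-priority value from the interaction module."""
    dx = aug["x"] - gaze_xy[0]
    dy = aug["y"] - gaze_xy[1]
    gaze_term = 1.0 / (1.0 + math.hypot(dx, dy))
    return aug["priority"] * gaze_term

def filter_augmentations(augs: list, gaze_xy: tuple, keep: int = 2) -> list:
    """Keep only the most contextually pertinent overlays, reducing clutter."""
    ranked = sorted(augs, key=lambda a: relevance(a, gaze_xy), reverse=True)
    return [a["label"] for a in ranked[:keep]]

augs = [
    {"label": "vitals", "x": 0.1, "y": 0.1, "priority": 1.0},
    {"label": "ads",    "x": 0.9, "y": 0.9, "priority": 0.2},
    {"label": "labels", "x": 0.2, "y": 0.1, "priority": 0.8},
]
# With gaze near the top-left corner, "ads" is filtered out.
print(filter_augmentations(augs, gaze_xy=(0.1, 0.1)))  # ['vitals', 'labels']
```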
[019] Finally, the 5G network optimization layer (FIG. 4) adapts image rendering quality based on available network bandwidth, ensuring a smooth user experience even under varying network conditions. The layer constantly monitors bandwidth and latency and adjusts rendering quality accordingly, allowing high-resolution and complex augmentations when the network is optimal, and simplifying augmentations under constrained network conditions.
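The bandwidth-adaptive behaviour can be sketched as tier selection over monitored link metrics; the thresholds and tier parameters are illustrative assumptions rather than values from the specification.

```python
from dataclasses import dataclass

@dataclass
class RenderTier:
    name: str
    resolution: tuple
    effects: bool  # whether complex augmentations are enabled

TIERS = [
    # (min Mbps, max ms) guarding each tier, richest first
    (100.0, 20.0, RenderTier("high", (1920, 1080), True)),
    (20.0, 50.0, RenderTier("medium", (1280, 720), True)),
    (0.0, float("inf"), RenderTier("low", (854, 480), False)),
]

def select_tier(bandwidth_mbps: float, latency_ms: float) -> RenderTier:
    """Pick the richest rendering tier the current 5G link can sustain."""
    for min_bw, max_lat, tier in TIERS:
        if bandwidth_mbps >= min_bw and latency_ms <= max_lat:
            return tier
    return TIERS[-1][2]

# Strong link: full quality; congested link: simplified augmentations.
print(select_tier(150.0, 10.0).name)  # high
print(select_tier(5.0, 80.0).name)    # low
```

Running this selection on every monitoring interval, rather than once at session start, is what lets the layer degrade and recover quality as conditions fluctuate.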
[020] It is to be understood that the above description is intended to be illustrative, and not restrictive. For example, the above-discussed embodiments may be used in combination with each other. Many other embodiments will be apparent to those of skill in the art upon reviewing the above description.
[021] The benefits and advantages which may be provided by the present invention have been described above with regard to specific embodiments. These benefits and advantages, and any elements or limitations that may cause them to occur or to become more pronounced are not to be construed as critical, required, or essential features of any or all of the embodiments.
[022] While the present invention has been described with reference to particular embodiments, it should be understood that the embodiments are illustrative and that the scope of the invention is not limited to these embodiments. Many variations, modifications, additions and improvements to the embodiments described above are possible. It is contemplated that these variations, modifications, additions and improvements fall within the scope of the invention.
Claims:
1. A machine learning-enabled system for dynamic image quality enhancement in augmented reality applications, comprising a machine learning-based image processing module, a context-aware scaling module, an interaction feedback module, and a network optimization layer configured to operate over a 5G network.
2. The system of claim 1, wherein the machine learning-based image processing module dynamically adjusts image resolution in real-time based on user interactions and environmental conditions.
3. The system of claim 1, wherein the context-aware scaling module adjusts the scale and perspective of augmented reality objects according to user positioning and environmental factors.
4. The system of claim 1, further comprising an adaptive augmentation module that adjusts the display of augmented reality elements based on contextual relevance determined by machine learning algorithms.
5. The system of claim 1, wherein the network optimization layer modifies image rendering quality based on current 5G network conditions to ensure a continuous augmented reality experience.
6. The system of claim 2, wherein the machine learning algorithms are trained to enhance resolution by applying edge refinement, color correction, and noise reduction.
7. The system of claim 4, wherein the adaptive augmentation module tailors augmentation details based on user interaction data, including gaze tracking and gesture recognition.
Documents
Name | Date |
---|---|
202441088305-COMPLETE SPECIFICATION [14-11-2024(online)].pdf | 14/11/2024 |
202441088305-DECLARATION OF INVENTORSHIP (FORM 5) [14-11-2024(online)].pdf | 14/11/2024 |
202441088305-DRAWINGS [14-11-2024(online)].pdf | 14/11/2024 |
202441088305-FORM 1 [14-11-2024(online)].pdf | 14/11/2024 |
202441088305-FORM-9 [14-11-2024(online)].pdf | 14/11/2024 |
202441088305-REQUEST FOR EARLY PUBLICATION(FORM-9) [14-11-2024(online)].pdf | 14/11/2024 |