ADAPTIVE LOAD BALANCING SYSTEM FOR CLOUD COMPUTING USING DEEP LEARNING AND EDGE ANALYTICS
Application Type: Ordinary Application. Status: Published. Filed on 14 November 2024.
Abstract
The present invention introduces an adaptive load balancing method for cloud computing, incorporating Deep Learning models and Edge AI for effective job distribution. The Edge AI module is implemented on edge devices, analyzing real-time data locally to facilitate prompt decision-making and enhance resource allocation according to prevailing traffic conditions. The Deep Learning module is concurrently implemented in a centralized cloud environment, utilizing previous data from the Google Cluster Traces to forecast future traffic patterns and resource requirements. This hybrid methodology enables Edge AI to deliver instantaneous insights that improve the predictions of the Deep Learning model, establishing a feedback loop that perpetually boosts system performance. The system is emulated with the CloudSim tool to guarantee optimal resource usage, diminished latency, and enhanced cloud performance under variable workloads, rendering it highly efficient in adjusting to changing needs while preserving excellent service quality.
Patent Information
Field | Value |
---|---|
Application ID | 202441087946 |
Invention Field | COMPUTER SCIENCE |
Date of Application | 14/11/2024 |
Publication Number | 47/2024 |
Inventors
Name | Address | Country | Nationality |
---|---|---|---|
P. Hari Shankar | Department of Computer Science & Engineering (DS), CVR COLLEGE OF ENGINEERING, Vastunagar, Mangalpalli (V), Ibrahimpatnam (M), Rangareddy (Dist), Telangana 501510, India. | India | India |
Applicants
Name | Address | Country | Nationality |
---|---|---|---|
CVR COLLEGE OF ENGINEERING | CVR COLLEGE OF ENGINEERING, Vastunagar, Mangalpalli (V), Ibrahimpatnam (M), Rangareddy (Dist), Telangana 501510, India. | India | India |
Specification
Description
FIELD OF THE INVENTION
[001] The invention relates to cloud computing, emphasizing the effective administration of virtual resources to enhance workload distribution in cloud systems.
[002] It incorporates edge computing technologies, employing Edge AI to process data locally and facilitate real-time decision-making, thus minimizing latency and enhancing responsiveness in resource allocation.
[003] The system utilizes machine learning methodologies, particularly Deep Learning models, to examine historical data for forecasting workload trends and resource requirements, hence improving the overall efficiency and flexibility of cloud resource management.
BACKGROUND OF THE INVENTION
[004] The fast expansion of cloud computing services has rendered smart resource management essential. Cloud environments must dynamically assign resources to manage variable workloads across diverse applications, including e-commerce, healthcare, and large-scale business systems.
[005] Efficient load balancing is a crucial concern in cloud computing. Inadequate distribution may result in certain servers or virtual machines (VMs) becoming overwhelmed, while others remain underutilized, causing performance bottlenecks, increased energy usage, and diminished service quality.
[006] The patent US20220232423A1 focuses on edge computing within disaggregated radio access networks (RAN), enabling dynamic edge data extraction at intermediate stages of RAN processing. By processing data closer to the source, it significantly reduces latency and delay without needing changes to existing network protocols. However, the patent is limited to RAN environments and lacks machine learning integration, which could enhance adaptability in dynamic conditions. Additionally, it does not leverage hybrid edge-cloud models, limiting its scalability for broader applications.
[007] Traditional load balancing methods, both static and dynamic, are commonly employed but frequently struggle to adapt effectively to real-time traffic fluctuations and erratic workloads. These solutions lack the flexibility required to enhance performance in highly dynamic cloud environments.
[008] Machine learning, especially deep learning, has demonstrated significant efficacy in forecasting intricate patterns. Utilizing AI-driven methodologies for load balancing facilitates more astute and anticipatory resource distribution, informed by historical data and real-time inputs.
[009] The increasing prevalence of edge computing facilitates data processing near its source, hence diminishing latency and accelerating decision-making. Incorporating Edge AI into load balancing systems guarantees the timely management of real-time traffic variations.
[010] There is a distinct necessity for hybrid systems that integrate the prompt reactivity of edge computing with the anticipatory capabilities of deep learning models. This integration can adapt to fluctuating situations, enhance resource efficiency, and uphold superior service quality in cloud computing.
[011] Simulation tools such as CloudSim are crucial for modeling and evaluating cloud environments, allowing developers to analyze the performance of adaptive load balancing systems across diverse situations prior to real-world implementation.
OBJECTIVES OF THE INVENTION
[012] The invention aims to create a cloud load balancing system that combines deep learning and Edge AI: deep learning models forecast workload patterns from historical data such as the Google Cluster Traces to improve task distribution across cloud resources, while Edge AI handles data locally in real time to speed decision-making. Together they optimize virtual machine utilization and overall cloud performance.
[013] The invention aims to process data closer to the source with Edge AI to reduce latency and energy consumption: the system analyzes data at the network edge, decreasing the volume of data transferred to cloud servers. Making decisions at the edge reduces network latency and speeds task execution, while localized processing saves energy by cutting data transit and processing time.
[014] The invention aims to develop a system that learns from data and adjusts to changing workloads to improve resource management: deep learning models update their forecasts and load balancing strategies using real-time workload data, so the system automatically adapts to traffic and resource demands and maintains performance and service quality in shifting cloud settings. This self-learning behaviour keeps resource allocation efficient.
SUMMARY OF THE INVENTION
[015] The invention introduces an adaptive load balancing system that dynamically allocates tasks across cloud resources by combining deep learning models for workload prediction and Edge AI for real-time decision-making. This ensures efficient distribution of tasks, preventing resource overloads or underutilization.
[016] The system employs deep learning algorithms trained on extensive datasets such as Google Cluster Traces to predict variations in traffic and workload. This forecast aids in anticipatory load management, enhancing resource distribution prior to the emergence of bottlenecks.
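As a concrete illustration of this predictive component, the following is a minimal sketch of a sequence model trained to forecast the next utilisation value from a sliding window of past samples. The window size, model architecture, and the synthetic `cpu_usage` series (standing in for a pre-processed Google Cluster Traces signal) are illustrative assumptions, not details taken from the patent.

```python
# Minimal sketch of a workload forecaster: predict the next utilisation
# sample from a sliding window of past samples. All names and parameters
# here are illustrative assumptions.
import numpy as np
import tensorflow as tf

WINDOW = 24  # look-back window of past utilisation samples (assumed)

def make_windows(series: np.ndarray, window: int = WINDOW):
    """Slice a 1-D utilisation series into (input window, next value) pairs."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    return np.array(X)[..., np.newaxis], np.array(y)

# `cpu_usage` stands in for a pre-processed utilisation trace
# (e.g. a series aggregated from the Google Cluster Traces).
cpu_usage = np.random.rand(1000).astype("float32")
X, y = make_windows(cpu_usage)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),   # predicted utilisation for the next interval
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

next_load = model.predict(cpu_usage[-WINDOW:].reshape(1, WINDOW, 1), verbose=0)
```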
[017] The Edge AI module locally processes data at the network's periphery, minimizing latency and facilitating expedited, real-time load balancing choices. This method reduces the necessity for continual connection with the central cloud, hence enhancing task execution speed.
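A minimal sketch of the kind of local decision the Edge AI module could make is shown below: the edge node routes an incoming task using only locally observed VM load, so the decision itself requires no round trip to the central cloud. The VM attributes and values are illustrative assumptions.

```python
# Minimal sketch of an edge-side routing decision made from locally
# observed VM state only (no call to the central cloud). The fields and
# values are illustrative assumptions, not taken from the patent.
from dataclasses import dataclass

@dataclass
class VMState:
    vm_id: str
    capacity: float      # normalised processing capacity
    current_load: float  # locally observed utilisation, 0..1

def route_task(vms: list[VMState]) -> str:
    """Send the next task to the VM with the most spare capacity."""
    target = max(vms, key=lambda vm: vm.capacity - vm.current_load)
    return target.vm_id

edge_view = [
    VMState("vm-1", 1.0, 0.65),
    VMState("vm-2", 1.0, 0.30),
    VMState("vm-3", 0.8, 0.20),
]
print(route_task(edge_view))  # -> "vm-2" (largest spare capacity)
```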
[018] By processing data at the edge, the system markedly decreases network latency and minimizes energy consumption. Localized processing mitigates superfluous data transmission to central cloud servers, enhancing overall energy efficiency.
[019] The system is evaluated with CloudSim, a prevalent cloud simulation tool, to ascertain its efficacy in job scheduling and resource allocation across diverse traffic scenarios. This guarantees the system operates efficiently in a simulated cloud environment prior to deployment.
[020] The system employs a hybrid cloud-edge strategy, integrating deep learning with Edge AI. Deep learning forecasts long-term traffic patterns, while Edge AI manages rapid, localized decisions, establishing a balanced load management system that improves both performance and scalability.
BRIEF DESCRIPTION OF THE DRAWING
[021] Figure 1 depicts the amalgamation of cloud computing resources (5) with the Edge AI module (10) for dynamic load balancing. This architecture facilitates the effective allocation of workloads across several contexts to enhance performance.
[022] The cloud-based deep learning module examines large datasets such as Google Cluster Traces. It forecasts forthcoming traffic and workload trends to guide load-balancing decisions.
[023] Predictions produced by the deep learning module (15) are transmitted to the Edge AI module, which functions in proximity to the user. This closeness facilitates immediate modifications in load distribution according to prevailing data conditions.
[024] The Edge AI module locally processes incoming data, guaranteeing swift responses to traffic variations. This feature improves the system's capacity to manage diverse workloads efficiently.
[025] The uninterrupted data exchange between the cloud and Edge AI module enables real-time and predictive workload adaptation (20). This integration guarantees optimal resource distribution throughout the system.
[026] Task allocation is performed across various virtual machines (VMs) (25) in both cloud and edge settings. This method ensures equitable workloads and enhanced resource use.
DETAILED DESCRIPTION OF THE INVENTION
[027] The innovation introduces an adaptive load balancing system that uses deep learning techniques to forecast traffic and workload variations in cloud computing settings. This predictive functionality allows the system to allocate resources dynamically, guaranteeing optimal performance and resource efficiency.
[028] The deep learning module is trained to identify patterns in workload demands by employing substantial historical datasets, including Google Cluster Traces. This training enables the model to anticipate fluctuations in resource utilization, thereby improving its decision-making capabilities.
[029] The system incorporates an Edge AI module that functions near end-users, enabling swift data processing. This localized strategy diminishes latency and enhances response times, facilitating rapid modifications in load distribution according to real-time traffic conditions.
[030] The interaction between the cloud-based deep learning module and the Edge AI module is continuous, facilitating real-time data transmission. This ensures that the Edge AI module possesses the most recent forecasts and can respond swiftly to manage incoming workloads efficiently.
[031] The dataset module is essential for the invention, supplying the requisite historical data for training the deep learning model. This module guarantees that the deep learning algorithm is guided by precise, pertinent data, enhancing the dependability of its predictions.
[032] The task allocation process is managed by a complex algorithm that considers existing workloads, anticipated demands, and the performance parameters of available virtual machines (VMs). This method ensures equitable distribution of workloads among all resources.
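One plausible reading of such an allocator is a weighted score per VM over the three factors this paragraph names: current load, anticipated demand, and VM performance. The sketch below is only that reading; the weights, the MIPS attribute, and the dictionary layout are assumptions made for illustration, not details from the patent.

```python
# Sketch of a scoring-based allocator combining current load, predicted
# demand, and VM performance. Weights and attributes are assumptions.
def allocation_score(vm, predicted_demand, w_load=0.5, w_pred=0.3, w_perf=0.2):
    """Lower score = better candidate for the next task."""
    load_term = vm["current_load"] / vm["capacity"]
    pred_term = predicted_demand.get(vm["id"], 0.0)
    perf_term = 1.0 / vm["mips"]          # faster VMs are penalised less
    return w_load * load_term + w_pred * pred_term + w_perf * perf_term

def allocate(task, vms, predicted_demand):
    target = min(vms, key=lambda vm: allocation_score(vm, predicted_demand))
    target["queue"].append(task)
    target["current_load"] += task["load"]
    return target["id"]

vms = [
    {"id": "vm-1", "capacity": 1.0, "mips": 1000, "current_load": 0.7, "queue": []},
    {"id": "vm-2", "capacity": 1.0, "mips": 2000, "current_load": 0.4, "queue": []},
]
print(allocate({"name": "job-42", "load": 0.1}, vms, {"vm-1": 0.2, "vm-2": 0.1}))
# -> "vm-2" (lower combined score)
```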
[033] The invention underscores the significance of scalability, allowing the system to adjust as the cloud environment expands. It can support a growing number of users and workloads without compromising performance or efficiency.
[034] The architecture integrates the CloudSim simulation tool, which is crucial for evaluating and validating the suggested load balancing solutions. This simulation facilitates the evaluation of multiple situations, guaranteeing the system's resilience prior to implementation in real-world settings.
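CloudSim itself is a Java toolkit, so the snippet below does not use its actual API; it is only a simplified Python stand-in illustrating the kind of experiment described here: replaying a synthetic task arrival trace against two allocation policies and comparing mean response time. All distributions and parameters are assumptions.

```python
# Simplified stand-in for a CloudSim-style experiment: replay synthetic
# task arrivals against an allocation policy and report mean response
# time. Not the CloudSim API; parameters are illustrative assumptions.
import random

def simulate(policy, num_tasks=200, num_vms=4, seed=0):
    random.seed(seed)
    vm_busy_until = [0.0] * num_vms            # next free time per VM
    clock, response_times = 0.0, []
    for _ in range(num_tasks):
        clock += random.expovariate(2.0)        # task inter-arrival time
        length = random.uniform(0.1, 1.0)       # task service time
        vm = policy(vm_busy_until, clock)
        start = max(clock, vm_busy_until[vm])
        vm_busy_until[vm] = start + length
        response_times.append(vm_busy_until[vm] - clock)
    return sum(response_times) / len(response_times)

least_loaded = lambda busy, now: min(range(len(busy)), key=lambda i: busy[i])
random_pick = lambda busy, now: random.randrange(len(busy))

print("least-loaded mean response:", simulate(least_loaded))
print("random mean response:      ", simulate(random_pick))
```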
[035] The system's hallmark is continuous learning, wherein the deep learning model is consistently updated with new data. This continuous training process enables the model to enhance its predictions, adjusting to changing usage patterns and increasing its accuracy over time.
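Building on the forecasting sketch after paragraph [016] (reusing its `model` and `make_windows` helper), the continuous-learning step could amount to periodic fine-tuning on newly collected utilisation samples, roughly as follows; the single-epoch incremental update and window size are illustrative assumptions.

```python
# Sketch of periodic fine-tuning for the forecaster (reuses `model` and
# `make_windows` from the earlier forecasting sketch). The one-epoch
# update and window size are illustrative assumptions.
import numpy as np

def update_forecaster(model, new_samples: np.ndarray, window: int = 24):
    """Fine-tune the existing model on the most recent observations."""
    if len(new_samples) <= window:
        return model  # not enough new data to form a training window yet
    X, y = make_windows(new_samples, window)
    model.fit(X, y, epochs=1, batch_size=32, verbose=0)
    return model

# Example: after each monitoring interval, feed newly observed samples back
# into the model so its predictions track the current workload.
# model = update_forecaster(model, latest_cpu_samples)
```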
[036] This innovation seeks to improve the quality of service (QoS) in cloud computing settings through optimized resource allocation and diminished latency. The system ensures enhanced performance and user satisfaction through the integration of powerful AI technologies and localized processing.
Claims:
We Claim:
Claim 1: A cloud computing adaptive load balancing system consisting of: a deep learning module designed to analyze historical workload data and forecast future traffic patterns utilizing a model trained on Google Cluster Traces; an Edge AI module that processes real-time data to enable prompt task allocation decisions; and a dataset module that perpetually updates the deep learning model with new data to improve prediction accuracy.
Claim 2: The system of claim 1, in which the deep learning module interacts with the Edge AI module via a continuous data exchange protocol, enabling the Edge AI module to obtain real-time predictions and execute informed load distribution modifications across virtual machines, thus minimizing latency and enhancing overall system performance.
Claim 3: A technique for adaptive load balancing in cloud computing environments, consisting of the following steps: training a deep learning model on historical workload data to forecast resource requirements; employing edge analytics to collect real-time data from user interactions; and dynamically assigning tasks to virtual machines based on predictions from the deep learning model and real-time insights from the Edge AI module to enhance resource utilization and uphold quality of service.
Documents
Name | Date |
---|---|
202441087946-COMPLETE SPECIFICATION [14-11-2024(online)].pdf | 14/11/2024 |
202441087946-DECLARATION OF INVENTORSHIP (FORM 5) [14-11-2024(online)].pdf | 14/11/2024 |
202441087946-DRAWINGS [14-11-2024(online)].pdf | 14/11/2024 |
202441087946-EDUCATIONAL INSTITUTION(S) [14-11-2024(online)].pdf | 14/11/2024 |
202441087946-EVIDENCE FOR REGISTRATION UNDER SSI [14-11-2024(online)].pdf | 14/11/2024 |
202441087946-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [14-11-2024(online)].pdf | 14/11/2024 |
202441087946-FIGURE OF ABSTRACT [14-11-2024(online)].pdf | 14/11/2024 |
202441087946-FORM 1 [14-11-2024(online)].pdf | 14/11/2024 |
202441087946-FORM FOR SMALL ENTITY(FORM-28) [14-11-2024(online)].pdf | 14/11/2024 |
202441087946-FORM-9 [14-11-2024(online)].pdf | 14/11/2024 |
202441087946-REQUEST FOR EARLY PUBLICATION(FORM-9) [14-11-2024(online)].pdf | 14/11/2024 |