METHOD AND SYSTEM FOR DYNAMIC LOAD BALANCING USING CLOUD INFRASTRUCTURE AS A SERVICE (IAAS) SERVERS
ORDINARY APPLICATION
Published
Filed on 9 November 2024
Abstract
The present invention discloses a novel method and system for dynamic load balancing in cloud Infrastructure as a Service (IaaS) environments. The system intelligently distributes incoming traffic to virtual server instances based on real-time performance metrics such as CPU utilization, memory usage, and network traffic. A central load balancer, with the aid of monitoring agents and a rules engine, dynamically adjusts traffic allocation to optimize resource utilization, enhance application performance, and ensure fault tolerance. The system supports automatic scaling by provisioning additional server instances during traffic spikes and de-provisioning them during low-demand periods. Additionally, it includes fault tolerance mechanisms that redistribute traffic away from failing or underperforming servers, ensuring high availability and seamless scalability.
Patent Information
Field | Value
---|---
Application ID | 202411086471
Invention Field | COMPUTER SCIENCE
Date of Application | 09/11/2024
Publication Number | 47/2024
Inventors
Name | Address | Country | Nationality |
---|---|---|---|
Prof (Dr.) Abhaya Nand | IIMT College of Management, Plot No. 20, Knowledge Park – III, Greater Noida, U.P., India | India | India |
Mr. Anshul Kumar | IIMT College of Management, Plot No. 20, Knowledge Park – III, Greater Noida, U.P., India | India | India |
Mr. Jitendra Kumar | IIMT College of Management, Plot No. 20, Knowledge Park – III, Greater Noida, U.P., India | India | India |
Mr. Sachin Kumar | IIMT College of Management, Plot No. 20, Knowledge Park – III, Greater Noida, U.P., India | India | India |
Applicants
Name | Address | Country | Nationality |
---|---|---|---|
IIMT College of Management | Plot No. 20, Knowledge Park – III, Greater Noida - 201310, U.P., India | India | India |
Specification
Description:
FIELD OF INVENTION
[001] The invention relates to dynamic load balancing in cloud Infrastructure as a Service (IaaS) environments, offering improved resource utilization, enhanced application performance, seamless scalability, and fault tolerance. The invention introduces a method and system for distributing incoming traffic across virtual server instances in real time based on current server load, resource availability, and predefined performance metrics.
BACKGROUND OF THE INVENTION
[002] Load balancing in cloud computing environments is crucial for optimizing resource utilization, improving performance, and ensuring high availability of applications. Traditional load balancing methods may not fully leverage the flexibility and scalability offered by Infrastructure as a Service (IaaS) models in the cloud. There is a need for a dynamic load balancing system that efficiently distributes incoming traffic across virtualized IaaS servers based on real-time demand and resource availability.
[003] In cloud computing environments, efficient load balancing is essential for optimizing resource utilization, improving the performance of applications, and ensuring high availability. Traditional load balancing methods, often designed for on-premise or static server environments, do not fully exploit the flexibility and scalability offered by cloud-based IaaS models. These models provide dynamic provisioning of resources, and thus require a load balancing mechanism that can adapt in real-time to fluctuating traffic loads and varying resource demands.
[004] Conventional load balancing techniques typically distribute traffic using static or round-robin methods, without considering real-time server performance metrics. As a result, they may lead to inefficient resource utilization, application bottlenecks, and increased downtime. To address these challenges, a dynamic load balancing solution is needed to intelligently and adaptively allocate incoming traffic across virtualized servers based on real-time performance data, ensuring that resources are fully optimized, applications run smoothly, and traffic can scale seamlessly in response to changing demands.
[005] Patent literature 1, CN110753072B, discloses a load balancing system comprising a gateway node, a first load balancing node cluster, and a first back-end server. The gateway node receives an access data message sent by a public network client whose destination address is an elastic public network address, determines from the elastic public network address that the message must be sent to the first load balancing node cluster, encapsulates the access data message into a first access message using the protocol of a first tunnel between the gateway node and the first load balancing node cluster, and sends the first access message through the first tunnel. The first load balancing node cluster receives the first access message through the first tunnel, extracts the access data message, generates a second access message from it, and sends the second access message to the first back-end server. The first back-end server receives the second access message and carries out service processing.
[006] This invention addresses these challenges by presenting a novel method and system for dynamic load balancing in IaaS environments. The system intelligently allocates traffic based on real-time server performance metrics, optimizes resource utilization, and ensures fault tolerance through automatic scaling and traffic redistribution mechanisms.
OBJECTS OF THE INVENTION
[007] Some of the objects of the present disclosure, which at least one embodiment herein satisfies, are as follows:
[008] It is an object of the present disclosure to ameliorate one or more problems of the prior art or at least to provide a useful alternative for cloud Infrastructure as a Service environments.
[009] An object of the present disclosure is to provide a dynamic load balancing system for cloud Infrastructure as a Service environments.
[010] Another object of the present disclosure is to provide optimized resource utilization: by dynamically allocating traffic based on real-time performance data, the system ensures efficient use of server resources, reducing the likelihood of overloading or under-utilizing any single server instance.
[011] Another object of the present disclosure is to provide improved application performance. The system's ability to monitor and respond to performance metrics in real-time helps improve the responsiveness and overall performance of applications running in the cloud environment.
[012] Another object of the present disclosure is to provide seamless scalability. Automatic scaling of server instances during peak traffic periods allows the system to handle variable loads without manual intervention, ensuring that applications remain performant and available.
[013] Yet another object of the present disclosure is to provide enhanced fault tolerance. The system's fault tolerance features, such as traffic redistribution and health monitoring, ensure that applications continue to function smoothly, even in the case of server failures.
[014] Other objects and advantages of the present disclosure will be more apparent from the following description and accompanying drawing which is not intended to limit the scope of the present disclosure.
SUMMARY OF THE INVENTION
[015] The following presents a simplified summary of the invention in order to provide a basic understanding of some aspects of the invention. This summary is not an extensive overview of the present invention. It is not intended to identify the key or critical elements of the invention or to delineate the scope of the invention. Its sole purpose is to present some concepts of the invention in a simplified form as a prelude to the more detailed description of the invention presented later.
[016] The present invention relates to a dynamic load balancing system designed to distribute incoming traffic across virtual servers in cloud-based IaaS environments. The system dynamically adjusts the allocation of requests based on real-time performance metrics, such as CPU utilization, memory usage, and network traffic. It optimizes resource utilization, enhances application performance, and ensures high availability and fault tolerance through dynamic scaling and health checks.
[017] The invention employs a central load balancer that communicates with monitoring agents installed on virtualized server instances. These agents collect and relay real-time performance data, which is processed by a rules engine to distribute traffic based on current demand and resource availability. Additionally, the system supports automatic scaling, adding or removing server instances based on traffic changes, ensuring efficient resource use and cost savings.
[018] Furthermore, the system includes fault-tolerant features that isolate failing or underperforming server instances, redirecting traffic to healthier instances, thus enhancing the overall reliability of cloud-based applications.
BRIEF DESCRIPTION OF DRAWINGS
[019] Figure 1 shows the system diagram of the dynamic load balancing system in a cloud Infrastructure as a Service (IaaS) environment.
DETAILED DESCRIPTION OF THE INVENTION
[020] The following description is of exemplary embodiments only and is not intended to limit the scope, applicability or configuration of the invention in any way. Rather, the following description provides a convenient illustration for implementing exemplary embodiments of the invention. Various changes to the described embodiments may be made in the function and arrangement of the elements described without departing from the scope of the invention.
1. System Architecture
The system comprises the following components:
Central Load Balancer: A load balancer positioned within the cloud management layer that receives incoming requests and distributes them among virtualized server instances based on real-time metrics.
Virtualized IaaS Servers: A set of virtualized server instances provisioned across multiple physical nodes within the cloud infrastructure.
Monitoring Agents: Agents installed on each virtual server instance that continuously collect real-time performance data, including CPU utilization, memory usage, and network traffic.
Rules Engine: A component responsible for evaluating the performance metrics from the monitoring agents and defining load balancing policies. The rules engine is configurable to incorporate a variety of parameters such as server load, resource availability, and performance thresholds.
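By way of illustration only, the following Python sketch models minimal stand-ins for the monitoring agent and rules engine described above. The class names, metric fields, scoring weights, and the 1 Gbit/s normalisation for network traffic are assumptions introduced for this sketch and do not form part of the disclosure.

```python
# Hypothetical sketch of the components named above; names and weights are illustrative.
from dataclasses import dataclass


@dataclass
class ServerMetrics:
    """Real-time metrics reported by a monitoring agent for one instance."""
    instance_id: str
    cpu_utilization: float   # fraction of CPU in use, 0.0-1.0
    memory_usage: float      # fraction of memory in use, 0.0-1.0
    network_traffic: float   # current throughput in Mbit/s
    healthy: bool = True


class MonitoringAgent:
    """Runs on each virtual server instance and reports metrics on request."""
    def __init__(self, instance_id: str):
        self.instance_id = instance_id

    def collect(self) -> ServerMetrics:
        # In a real agent these values would come from the OS or hypervisor.
        return ServerMetrics(self.instance_id, cpu_utilization=0.0,
                             memory_usage=0.0, network_traffic=0.0)


class RulesEngine:
    """Scores instances against configurable weights; lower score = more spare capacity."""
    def __init__(self, cpu_weight=0.5, mem_weight=0.3, net_weight=0.2):
        self.cpu_weight = cpu_weight
        self.mem_weight = mem_weight
        self.net_weight = net_weight

    def load_score(self, m: ServerMetrics) -> float:
        # Network traffic is normalised against an assumed 1 Gbit/s link.
        return (self.cpu_weight * m.cpu_utilization
                + self.mem_weight * m.memory_usage
                + self.net_weight * min(m.network_traffic / 1000.0, 1.0))
```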
2. Load Balancing Process
The dynamic load balancing process is as follows:
Incoming requests are routed to the central load balancer, which gathers real-time performance metrics from the monitoring agents on each virtual server instance.
The rules engine processes these metrics and determines the optimal server for each request based on predefined load balancing policies. These policies account for current server loads, available resources, and specific performance criteria.
The central load balancer dynamically allocates and distributes requests to virtual server instances according to the optimal allocation determined by the rules engine.
The system continuously monitors the performance of virtual server instances. If the load on any server instance exceeds a certain threshold, the system redistributes incoming traffic to balance the load more effectively.
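The selection and distribution steps can be pictured with the short sketch below, which reuses the illustrative RulesEngine defined in the previous section to route each request to the instance with the lowest load score. The select_instance and dispatch helpers, and the backends mapping of instance identifiers to server stubs, are hypothetical names introduced only for this example.

```python
# Illustrative only: one way the central load balancer might pick the
# least-loaded healthy instance using the RulesEngine sketch above.
def select_instance(agents, rules_engine):
    """Return the instance_id with the lowest load score, or None if none are healthy."""
    best_id, best_score = None, float("inf")
    for agent in agents:
        metrics = agent.collect()
        if not metrics.healthy:
            continue  # skip instances failing health checks
        score = rules_engine.load_score(metrics)
        if score < best_score:
            best_id, best_score = metrics.instance_id, score
    return best_id


def dispatch(request, agents, rules_engine, backends):
    """Route one request to the currently optimal instance."""
    target = select_instance(agents, rules_engine)
    if target is None:
        raise RuntimeError("no healthy instance available")
    backends[target].handle(request)  # 'backends' maps instance ids to server stubs
```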
3. Dynamic Scaling
The system is designed to handle dynamic scaling as follows:
Traffic Spikes: In response to sudden traffic spikes or increased resource demand, the system triggers the automatic provisioning of additional virtual server instances to manage the load. This ensures seamless scalability and prevents performance bottlenecks.
De-provisioning: When the traffic demand subsides, the system de-provisions excess server instances to minimize resource consumption and reduce operational costs.
The scaling process is automated and occurs without manual intervention, allowing the system to maintain optimal performance and resource efficiency.
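A minimal sketch of such an automated scaling decision is given below, under the assumption that the average load score across running instances is the trigger and that scale_out and scale_in stand in for the cloud provider's provisioning calls. The watermark values are illustrative, not part of the disclosure.

```python
# Illustrative scaling policy: provision or de-provision instances when the
# average load score crosses assumed thresholds.
def autoscale(agents, rules_engine, scale_out, scale_in,
              high_watermark=0.80, low_watermark=0.30, min_instances=2):
    scores = [rules_engine.load_score(a.collect()) for a in agents]
    if not scores:
        scale_out()          # nothing running: bring up a first instance
        return
    avg = sum(scores) / len(scores)
    if avg > high_watermark:
        scale_out()          # traffic spike: add capacity
    elif avg < low_watermark and len(scores) > min_instances:
        scale_in()           # sustained low demand: release an instance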
4. Fault Tolerance
Fault tolerance is an integral part of the system:
Health Checks: Monitoring agents regularly perform health checks on each virtual server instance. If a server instance fails these checks, it is automatically isolated from the load balancing pool to prevent traffic from being routed to the failing server.
Traffic Redistribution: The system redistributes traffic away from underperforming or failing instances to healthy ones, ensuring high availability and preventing service disruption.
This fault-tolerant mechanism helps maintain application uptime even in the event of server failures.
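As an illustrative sketch only, the health-check and isolation behaviour could be expressed as follows, assuming the MonitoringAgent stand-in defined earlier. The three-failure threshold and the run_health_checks name are assumptions made for the example; instances removed from the pool are simply no longer eligible for selection, so traffic is redistributed among the remaining healthy instances.

```python
# Illustrative fault-tolerance loop: instances that fail consecutive health
# checks are isolated from the load balancing pool.
def run_health_checks(pool, failure_counts, max_failures=3):
    """pool: list of MonitoringAgent; failure_counts: dict[str, int] persisted across calls."""
    for agent in list(pool):
        metrics = agent.collect()
        if metrics.healthy:
            failure_counts[agent.instance_id] = 0
            continue
        failure_counts[agent.instance_id] = failure_counts.get(agent.instance_id, 0) + 1
        if failure_counts[agent.instance_id] >= max_failures:
            pool.remove(agent)   # stop routing traffic to this instance
```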
Advantages of the Invention:
Optimized Resource Utilization: The system ensures efficient traffic distribution across virtualized server instances, preventing resource overloading or underutilization.
Improved Application Performance: By dynamically balancing traffic based on real-time performance metrics, the system enhances the performance and responsiveness of cloud-based applications.
Seamless Scalability: The automatic scaling capability ensures that the system can handle fluctuations in traffic demand by provisioning or de-provisioning server instances as required.
Enhanced Fault Tolerance: Regular health checks and traffic redistribution mechanisms provide robust fault tolerance, ensuring high availability even during server failures.
While considerable emphasis has been placed herein on the specific features of the preferred embodiment, it will be appreciated that many additional features can be added and that many changes can be made in the preferred embodiment without departing from the principles of the disclosure. These and other changes in the preferred embodiment of the disclosure will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the disclosure and not as a limitation.
Claims:
We Claim:
1. A method for dynamic load balancing in a cloud Infrastructure as a Service (IaaS) environment, comprising:
i. Receiving incoming requests at a central load balancer;
ii. Gathering real-time performance metrics from monitoring agents on virtual server instances;
iii. Determining the optimal allocation of requests based on predefined load balancing policies; and
iv. Distributing incoming requests to virtual server instances based on the determined allocation.
2. The method of claim 1, further comprising:
i. Triggering automatic scaling of virtual server instances in response to increased traffic or resource demands;
ii. Dynamically provisioning new server instances to handle increased loads.
3. A system for dynamic load balancing in a cloud Infrastructure as a Service (IaaS) environment, comprising:
i. A central load balancer component;
ii. Virtualized IaaS servers provisioned across multiple physical nodes;
iii. Monitoring agents installed on each virtual server instance for real-time performance data collection; and
iv. A rules engine for defining load balancing policies based on performance metrics.
4. The system of claim 3, wherein the central load balancer dynamically adjusts the distribution of incoming requests based on current server loads and available resources.
Documents
Name | Date |
---|---|
202411086471-COMPLETE SPECIFICATION [09-11-2024(online)].pdf | 09/11/2024 |
202411086471-DECLARATION OF INVENTORSHIP (FORM 5) [09-11-2024(online)].pdf | 09/11/2024 |
202411086471-DRAWINGS [09-11-2024(online)].pdf | 09/11/2024 |
202411086471-EDUCATIONAL INSTITUTION(S) [09-11-2024(online)].pdf | 09/11/2024 |
202411086471-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [09-11-2024(online)].pdf | 09/11/2024 |
202411086471-FORM 1 [09-11-2024(online)].pdf | 09/11/2024 |
202411086471-FORM FOR SMALL ENTITY(FORM-28) [09-11-2024(online)].pdf | 09/11/2024 |
202411086471-FORM-9 [09-11-2024(online)].pdf | 09/11/2024 |
202411086471-OTHERS [09-11-2024(online)].pdf | 09/11/2024 |
202411086471-POWER OF AUTHORITY [09-11-2024(online)].pdf | 09/11/2024 |
202411086471-REQUEST FOR EARLY PUBLICATION(FORM-9) [09-11-2024(online)].pdf | 09/11/2024 |