Performance Evaluation and Optimization of Federated Learning Algorithms in Edge Computing

ORDINARY APPLICATION

Published


Filed on 5 November 2024

Abstract

In the era of data-driven decision-making, federated learning (FL) emerges as a transformative approach that facilitates collaborative machine learning across decentralized edge devices while preserving data privacy. This paper presents a novel federated learning optimization system specifically designed for edge computing environments, addressing the inherent challenges of data heterogeneity, communication efficiency, and model performance. The system allows multiple edge devices to train a shared model locally, significantly reducing the need for data transfer to a central server. Key features include adaptive optimization techniques that dynamically adjust learning rates based on local data characteristics, real-time performance evaluation for continuous monitoring, and advanced privacy-preserving methods such as secure aggregation and differential privacy. By minimizing communication overhead and efficiently managing resources, the system enhances model accuracy and convergence speed, ensuring robustness against the diverse capabilities of edge devices. Its scalable architecture supports a growing number of devices and larger datasets, making it suitable for various applications, including healthcare, finance, and smart cities. The framework also promotes interoperability with existing infrastructures, facilitating seamless integration into current workflows. Overall, this federated learning optimization system significantly contributes to efficient, secure, and real-time collaborative machine learning in edge computing environments.
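
The abstract describes adaptive optimization that adjusts learning rates based on local data characteristics, but does not disclose a concrete rule. The following minimal Python sketch shows one plausible, purely hypothetical rule in which a client's learning rate is scaled by its local dataset size and damped by its recent training loss; the function name, scaling formula, and parameters are illustrative assumptions, not the patented method.

```python
# Hypothetical adaptive learning-rate rule for one edge device.
# Assumption: the server publishes a base rate; each client scales it using
# its local dataset size and damps it by its most recent training loss.
def adaptive_learning_rate(base_lr, n_local, n_max, local_loss, ref_loss=1.0):
    size_factor = (n_local / n_max) ** 0.5             # more local data -> larger steps
    loss_damping = ref_loss / (ref_loss + local_loss)  # higher loss -> smaller, safer steps
    return base_lr * size_factor * loss_damping

# Example: a device holding 2,000 of at most 10,000 samples, with current loss 0.8
print(adaptive_learning_rate(base_lr=0.1, n_local=2_000, n_max=10_000, local_loss=0.8))
```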

Patent Information

Application ID: 202441084494
Invention Field: COMPUTER SCIENCE
Date of Application: 05/11/2024
Publication Number: 46/2024

Inventors

Name | Address | Country | Nationality
P. Vinod Babu, Asst. Professor, Dept. of AI, SVECW, AP | SVECW (Autonomous), Vishnupur, Bhimavaram, West Godhavari (Dist.), Andhra Pradesh, India - 534202 | India | India
T. Madhavi, Asst. Professor, Dept. of AI, SVECW, AP | SVECW (Autonomous), Vishnupur, Bhimavaram, West Godhavari (Dist.), Andhra Pradesh, India | India | India
R. Sarada, Asst. Professor, Dept. of AI, SVECW, AP | SVECW (Autonomous), Vishnupur, Bhimavaram, West Godhavari (Dist.), Andhra Pradesh, India | India | India
Dr. S. Dileep Kumar Varma, Professor, Dept. of EEE, SVECW, AP | SVECW (Autonomous), Vishnupur, Bhimavaram, West Godhavari (Dist.), Andhra Pradesh, India | India | India
S. Ravi Chandra, Asst. Professor, Dept. of IT, SVECW, AP | SVECW (Autonomous), Vishnupur, Bhimavaram, West Godhavari (Dist.), Andhra Pradesh, India | India | India
K. Soni Sharmila, Asst. Professor, Dept. of CSE | SVECW (Autonomous), Vishnupur, Bhimavaram, West Godhavari (Dist.), Andhra Pradesh, India | India | India
K. P. Swaroop, Asst. Professor, Dept. of EEE, SVECW, AP | SVECW (Autonomous), Vishnupur, Bhimavaram, West Godhavari (Dist.), Andhra Pradesh, India | India | India

Applicants

Name | Address | Country | Nationality
Shri Vishnu Engineering College for Women (Autonomous) | Shri Vishnu Engineering College for Women (Autonomous), Vishnupur, Bhimavaram, West Godhavari (Dist.), Andhra Pradesh, India - 534202 | India | India

Specification

Description: The system is designed to facilitate the collaborative training of machine learning models across multiple edge devices using federated learning principles. It enables efficient data processing, enhances model performance, and ensures user privacy by keeping data localized on edge devices.
Components:
1. Edge Devices:
   - Function: Devices such as smartphones, IoT sensors, and edge servers that generate and hold local data.
   - Role: Each device independently trains a local model using its data and shares model updates (e.g., gradients) rather than raw data with a central server.
2. Central Server:
   - Function: Aggregates model updates from multiple edge devices.
   - Role: Combines the updates using optimized aggregation methods (e.g., FedAvg) to create a global model that is then sent back to the edge devices for further training.
3. Performance Evaluation Module:
   - Function: Monitors and assesses key performance metrics of the federated learning process.
   - Metrics: Includes model accuracy, computational efficiency, communication overhead, and robustness.
4. Optimization Module:
   - Function: Implements various optimization techniques to improve training efficiency and model performance.
   - Strategies: Includes adaptive learning rates, resource management, and improved aggregation methods tailored to handle device heterogeneity.
5. Data Privacy Layer:
   - Function: Ensures that sensitive data remains secure during the training process.
   - Methods: Employs secure aggregation techniques and differential privacy measures to protect user data (a minimal sketch follows this list).
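
As referenced in item 5, the sketch below illustrates the data privacy layer under an assumed differential-privacy style treatment: each local update is norm-clipped and perturbed with Gaussian noise before it leaves the device. The function name, clipping norm, and noise multiplier are illustrative assumptions rather than values disclosed in the specification.

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip a local model update and add Gaussian noise before transmission.

    Clipping bounds any single device's influence on the global model, and the
    added noise masks individual contributions (a differential-privacy sketch).
    """
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

# Example: privatize a flattened weight update from one edge device
local_update = np.random.default_rng(0).normal(size=128)
safe_update = privatize_update(local_update)
```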
Workflow:
1. Local Training: Each edge device trains its local model using its data.
2. Model Update Sharing: After training, devices send model updates to the central server.
3. Aggregation: The central server aggregates the updates to refine the global model.
4. Distribution: The updated global model is sent back to the edge devices.
5. Evaluation and Optimization: The performance evaluation and optimization modules continuously assess and enhance the federated learning process based on real-time metrics (one complete round of this workflow is sketched below).
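
To make the workflow concrete, here is a minimal simulation of the training loop, assuming a linear model trained with plain gradient descent on each device and FedAvg-style weighted averaging on the server. The model, synthetic data, and hyperparameters are illustrative only, not part of the disclosed system.

```python
import numpy as np

def local_train(weights, X, y, lr=0.01, epochs=5):
    """Step 1: one device's local training (gradient descent on a linear model)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # MSE gradient computed on local data only
        w -= lr * grad
    return w

def fed_avg(client_weights, client_sizes):
    """Step 3: server-side FedAvg, weighting each client by its dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three simulated edge devices with different amounts of local data.
rng = np.random.default_rng(42)
clients = [(rng.normal(size=(n, 4)), rng.normal(size=n)) for n in (50, 120, 80)]

global_w = np.zeros(4)
for _ in range(10):
    local_models = [local_train(global_w, X, y) for X, y in clients]   # steps 1-2
    global_w = fed_avg(local_models, [len(y) for _, y in clients])     # step 3
    # step 4: the refreshed global_w is redistributed to clients at the next iteration
```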
The operational principle of the system is based on the collaborative training of machine learning models across decentralized edge devices while maintaining data privacy and optimizing performance through several key steps:
1. Data Localization:
   - Each edge device retains its local data, which preserves privacy and removes the need to transfer raw data to a central server.
   - Devices can include smartphones, IoT sensors, and edge servers, each holding data unique to its environment.
2. Local Model Training:
   - Each edge device trains a local machine learning model using its own data.
   - Training uses algorithms suited for federated learning, so the model learns from the local dataset while no raw data leaves the device.
3. Model Update Generation:
   - After completing local training, each edge device generates model updates (such as gradients or weights).
   - These updates encapsulate what was learned from the local data without exposing any sensitive data.
4. Communication with Central Server:
   - The edge devices send their model updates to the central server.
   - This communication is designed to minimize bandwidth usage, using techniques such as compression or selective update transmission (see the sketch after these steps).
5. Model Aggregation:
   - The central server receives model updates from multiple edge devices.
   - It aggregates these updates using optimized aggregation methods (e.g., FedAvg), combining the contributions from the various devices into a single global model.
   - The aggregation process accounts for the varying sizes of local datasets and weights the updates accordingly.
6. Global Model Distribution:
   - Once the global model is updated, it is sent back to the edge devices.
   - Each device then updates its local model with the new global parameters, improving its performance based on the collective learning.
7. Performance Evaluation:
   - The system continuously monitors key performance metrics, including model accuracy, efficiency, and communication overhead.
   - A performance evaluation module assesses these metrics and identifies areas for improvement in the training process.
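
Step 4 mentions compression and selective update transmission without fixing a particular scheme. The sketch below (referenced from step 4) shows one common choice, top-k sparsification, purely as an assumed illustration of how upload bandwidth can be reduced: only the indices and values of the largest-magnitude entries are transmitted, and the server treats the remaining entries as zero. The function names and keep ratio are hypothetical.

```python
import numpy as np

def compress_update(update, keep_ratio=0.1):
    """Top-k sparsification: keep only the largest-magnitude entries of an update."""
    k = max(1, int(keep_ratio * update.size))
    idx = np.argsort(np.abs(update))[-k:]      # indices of the k largest components
    return idx, update[idx]                    # what actually goes over the network

def decompress_update(idx, values, size):
    """Server side: rebuild a dense update, assuming untransmitted entries are zero."""
    dense = np.zeros(size)
    dense[idx] = values
    return dense

# Example: a 10,000-parameter update is reduced to 10% of its entries before upload
update = np.random.default_rng(1).normal(size=10_000)
idx, vals = compress_update(update, keep_ratio=0.10)
restored = decompress_update(idx, vals, update.size)
```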
Claims:
1. We claim that this method is scalable and robust.
2. We claim that the invention helps reduce errors and enhances efficiency throughout the federated learning process.
3. We claim that the invention reduces the volume of data exchanged between edge devices and the central server.
4. We claim that this work potentially leads to higher throughput and better resource utilization at low cost.

Documents

Name | Date
202441084494-COMPLETE SPECIFICATION [05-11-2024(online)].pdf | 05/11/2024
202441084494-DECLARATION OF INVENTORSHIP (FORM 5) [05-11-2024(online)].pdf | 05/11/2024
202441084494-DRAWINGS [05-11-2024(online)].pdf | 05/11/2024
202441084494-FORM 1 [05-11-2024(online)].pdf | 05/11/2024
202441084494-FORM-9 [05-11-2024(online)]-1.pdf | 05/11/2024
202441084494-FORM-9 [05-11-2024(online)].pdf | 05/11/2024
202441084494-REQUEST FOR EARLY PUBLICATION(FORM-9) [05-11-2024(online)].pdf | 05/11/2024
