SYSTEM AND METHOD FOR MEDICAL IMAGE ANALYSIS USING FEDERATED EDGE LEARNING WITH GENERATIVE ADVERSARIAL NETWORKS (FEELGANs)

ORDINARY APPLICATION

Published

Filed on 30 October 2024

Abstract

The present disclosure relates to a system (100) for medical image analysis using federated edge learning with Generative Adversarial Networks (FEELGANs). Decentralized edge devices (102-1, 102-2…102-N) at medical institutions locally process medical images for disease detection. Each device includes a GAN module (104) with a data generator (104-1) to create synthetic images and a discriminator (104-2) to validate them. A deep learning classification module (106) categorizes images into categories such as COVID-19, Tuberculosis, and Optical Coherence Tomography (OCT) images. A federated learning module (108) trains local models and securely shares learned parameters with a central server (110) over a communication network (112). The central server (110) aggregates these parameters to update and refine a global GAN model. This system ensures continuous model improvement and enhanced diagnostic accuracy while preserving patient privacy, since only model parameters are shared across decentralized medical institutions.

Patent Information

Application ID: 202441083307
Invention Field: COMPUTER SCIENCE
Date of Application: 30/10/2024
Publication Number: 45/2024

Inventors

Name | Address | Country | Nationality
BAVIKATI VENNELA | SRM University-AP, Neerukonda, Mangalagiri Mandal, Guntur-522502, Andhra Pradesh, India | India | India
AREKANTI SIDDARTHA | SRM University-AP, Neerukonda, Mangalagiri Mandal, Guntur-522502, Andhra Pradesh, India | India | India
SHAIK AKHIB USMAN | SRM University-AP, Neerukonda, Mangalagiri Mandal, Guntur-522502, Andhra Pradesh, India | India | India
CHARISHMA POTHIREDDY | SRM University-AP, Neerukonda, Mangalagiri Mandal, Guntur-522502, Andhra Pradesh, India | India | India
MD MUZAKKIR HUSSAIN | SRM University-AP, Neerukonda, Mangalagiri Mandal, Guntur-522502, Andhra Pradesh, India | India | India
FIROJ GAZI | SRM University-AP, Neerukonda, Mangalagiri Mandal, Guntur-522502, Andhra Pradesh, India | India | India

Applicants

Name | Address | Country | Nationality
SRM UNIVERSITY | Amaravati, Mangalagiri, Andhra Pradesh-522502, India | India | India

Specification

Description:
FIELD
The present disclosure generally belongs to the field of generative adversarial networks.
DEFINITIONS
As used in the present disclosure, the following terms are generally intended to have the meaning as set forth below, except to the extent that the context in which they are used indicates otherwise.
The term "GAN" stands for generative adversarial network, a machine learning framework in which two neural networks are trained in opposition: a generator that produces synthetic samples and a discriminator that tries to tell the synthetic samples apart from real ones. Both networks are neural networks, models loosely designed to imitate the structure and function of a human brain, which is why they are sometimes referred to as artificial neural networks (ANNs). This technology is the basis of deep learning, a subcategory of machine learning (ML) capable of recognizing complex patterns in varying data types such as images, sounds, and text.
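For intuition, the adversarial interplay can be sketched as a deliberately tiny toy in Python; the one-parameter generator and threshold discriminator below are illustrative inventions, not the architecture used in this disclosure:

```python
import random

random.seed(0)

REAL_MODE = 1.0  # "real" samples cluster around this value

def discriminator(x):
    """Toy discriminator: accept a sample as real if it lies near the real mode."""
    return abs(x - REAL_MODE) < 0.5

mu = 0.0  # the generator's single parameter, starting far from the real data
for _ in range(300):
    fake = mu + random.gauss(0, 0.05)   # generator draws a sample
    if not discriminator(fake):         # discriminator rejects it as fake,
        mu += 0.02 * (REAL_MODE - mu)   # so the generator moves toward real data

# mu has drifted from its start at 0.0 toward the acceptance region near REAL_MODE
print(round(mu, 2))
```

In a real GAN both players are trained networks and the discriminator improves alongside the generator; here the fixed threshold stands in for it only to show the direction of the pressure the discriminator exerts.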
The term "VGG" refers to a family of deep convolutional neural networks (CNNs) developed by the Visual Geometry Group at the University of Oxford. VGG networks are known for their simple and uniform architecture, consisting of multiple convolutional layers followed by fully connected layers. The most famous model from this family is VGG16, which has 16 weight layers (13 convolutional and 3 fully connected). VGG networks became popular because they demonstrated that increasing the depth of neural networks can lead to better performance in image classification tasks. VGG achieved excellent results in image recognition challenges such as the ImageNet challenge.
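As an aside, the standard VGG16 layer configuration (a published fact about the architecture, written here in the list notation popularized by open-source implementations, with "M" marking max-pooling layers) can be tallied in a few lines:

```python
# VGG16's convolutional configuration: output channels per conv layer,
# with "M" marking the max-pooling layers between blocks.
VGG16_CFG = [64, 64, "M", 128, 128, "M", 256, 256, 256, "M",
             512, 512, 512, "M", 512, 512, 512, "M"]

conv_layers = sum(1 for v in VGG16_CFG if v != "M")
fc_layers = 3  # the three fully connected layers at the end
print(conv_layers, conv_layers + fc_layers)  # prints: 13 16
```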
The term "optical coherence tomography" (OCT) refers to a non-invasive imaging technique used in medical diagnostics, particularly in ophthalmology. OCT generates high-resolution cross-sectional images of biological tissues, such as the retina in the eye. By using light waves, OCT can capture detailed structures within tissues, which helps in the diagnosis of conditions like macular degeneration, glaucoma, and diabetic retinopathy. In recent years, OCT imaging has also been applied to other fields, including cardiology and dermatology.
The term "ResNet" refers to a deep neural network architecture that introduced the concept of residual learning to solve the problem of vanishing gradients in very deep networks. Residual learning allows the network to skip layers using "shortcut connections," which helps preserve the flow of information and gradients during training. This innovation enables the construction of extremely deep networks, such as ResNet-50 (with 50 layers) and ResNet-101 (with 101 layers), without suffering from degradation in performance. ResNet has become one of the most widely used architectures in tasks like image classification, object detection, and medical image analysis due to its high accuracy and efficient training of deep networks.
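The shortcut connection can be illustrated in miniature; the `transform` argument below is a hypothetical stand-in for a block's convolutional layers:

```python
def residual_block(x, transform):
    """Apply a transformation and add the input back (identity shortcut)."""
    return [t + xi for t, xi in zip(transform(x), x)]

# Even if the transformation collapses to zero (as poorly trained layers can),
# the shortcut preserves the input, keeping information and gradients flowing
# through very deep stacks of such blocks.
zero_transform = lambda v: [0.0] * len(v)
print(residual_block([1.0, 2.0, 3.0], zero_transform))  # prints: [1.0, 2.0, 3.0]
```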
The term "CNN" refers to a convolutional neural network, a class of deep neural networks commonly used for analyzing visual data, such as images and videos. CNNs are designed to automatically and adaptively learn spatial hierarchies of features from input data. They use convolutional layers to detect local patterns, followed by pooling layers for downsampling, and finally fully connected layers for classification. CNNs are widely used in tasks like image recognition, object detection, and medical image analysis. They have proven to be highly effective in processing visual data due to their ability to capture spatial and hierarchical information.
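The convolution-then-pooling pipeline can likewise be illustrated on a 1-D signal; the edge-detector kernel below is a toy chosen for clarity, not a layer from any model in this disclosure:

```python
def conv1d(signal, kernel):
    """Valid 1-D convolution: slide the kernel and take dot products."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def max_pool(signal, size=2):
    """Downsample by keeping the maximum of each non-overlapping window."""
    return [max(signal[i:i + size]) for i in range(0, len(signal) - size + 1, size)]

# A simple difference kernel responds at the edges of a step in the signal.
edges = conv1d([0, 0, 1, 1, 0, 0], [1, -1])
print(edges)            # prints: [0, -1, 0, 1, 0]
print(max_pool(edges))  # prints: [0, 1]
```

Stacking many such convolution and pooling stages, in 2-D and with learned kernels, is what lets a CNN build up the spatial feature hierarchies described above.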
BACKGROUND
The background information herein below relates to the present disclosure but is not necessarily prior art.
Medical image analysis plays a crucial role in diagnosing and treating various diseases. Advanced machine learning and deep learning techniques, such as Convolutional Neural Networks (CNNs) and Generative Adversarial Networks (GANs), have shown significant promise in automating and enhancing this process. However, a major challenge in developing accurate and robust medical image analysis models is the scarcity of high-quality labeled medical image data, particularly in decentralized medical institutions, which may not have sufficient datasets for training effective models. Additionally, patient privacy is a critical concern when sharing medical data across institutions for collaborative model training, especially in light of strict healthcare privacy regulations such as HIPAA and GDPR. Conventional centralized training methods often require transferring sensitive data to a central server, increasing the risk of data breaches and privacy violations. Therefore, there is a need to address these issues by providing a system and method that leverages Federated Edge Learning with Generative Adversarial Networks (FEELGANs) to augment local datasets, improve model training and disease detection, and preserve the privacy and security of patient medical data across decentralized institutions.
OBJECTS
Some of the objects of the present disclosure, which at least one embodiment herein satisfies, are as follows:
It is an object of the present disclosure to ameliorate one or more problems of the prior art or to at least provide a useful alternative.
The main object of the present disclosure is to provide a system and method for medical image analysis that enables decentralized processing of medical images across multiple edge computing devices located at different medical institutions, thereby improving disease detection while preserving data privacy.
An object of the present disclosure is to address the challenge of insufficient medical image datasets in decentralized medical institutions by utilizing a generative adversarial network (GAN) module. The GAN module generates synthetic medical images to augment local datasets, improving the accuracy and robustness of disease detection models.
Another object of the present disclosure is to integrate federated learning into the system such that local models can be trained on augmented datasets at each edge computing device. This system allows the sharing of only learned model parameters with a central server, avoiding the need to transfer sensitive patient medical data.
Yet another object of the present disclosure is to enable the central server to aggregate the learned model parameters from the edge computing devices and update a global generative adversarial network (GAN) model for disease detection and image classification. The global model is continuously refined based on shared parameters, ensuring improved performance without compromising patient privacy.
Still another object of the present disclosure is to apply deep learning classifiers such as convolutional neural networks (CNNs), VGG, and ResNet to classify medical images into disease categories, including but not limited to COVID-19, optical coherence tomography (OCT), and tuberculosis (TB), thereby providing an accurate and scalable solution for medical image analysis.
Yet another object of the present disclosure is to ensure patient medical data privacy and security throughout the process by only transmitting learned model parameters in the federated learning framework, ensuring compliance with data privacy regulations and minimizing the risk of data breaches.
Other objects and advantages of the present disclosure will be more apparent from the following description, which is not intended to limit the scope of the present disclosure.
SUMMARY
The present disclosure provides a system and method for medical image analysis using federated edge learning with generative adversarial networks (FEELGANs). The system comprises a plurality of edge computing devices located at decentralized medical institutions, each configured to locally process medical images for disease detection and create a local dataset. Each edge computing device includes a generative adversarial network (GAN) module configured to generate synthetic medical images based on the local dataset. The GAN module further includes a data generator configured to generate a synthetic medical images dataset, by means of a local GAN model, to augment a local dataset that has insufficient medical image data for disease diagnosis, and a discriminator configured to evaluate the authenticity of the generated synthetic medical images dataset.
Furthermore, each edge computing device is equipped with a deep learning classification module comprising one or more deep learning classifiers selected from the group consisting of Convolutional Neural Networks (CNN), VGG, and ResNet, configured to classify medical images of the local dataset, augmented with the synthetic medical images dataset, into disease categories, including COVID-19, Optical Coherence Tomography (OCT), and Tuberculosis (TB), and to generate classified data. Each edge computing device also contains a federated learning module configured to train a local deep learning model on the classified data, wherein the federated learning module shares only the learned model parameters, thereby preserving patient medical data privacy.
Furthermore, a central server is configured to receive the learned model parameters from each federated learning module of the plurality of edge computing devices over a communication network, to aggregate the learned model parameters received from the plurality of edge computing devices, and to update a global GAN model for disease detection and image classification, the global GAN model being refined based on the learned model parameters shared by each federated learning module while preserving the privacy and security of patient medical data.
In an embodiment, the system is configured to dynamically train the local GAN and classifier models at the decentralized edge computing devices and to update the global GAN model at the central server based on the model parameters learned from the local GAN and classifier models.
In an embodiment, each decentralized edge computing device includes at least one processor, a memory storing instructions for the GAN module, the deep learning classification module, and the federated learning module, and a network interface configured to communicate the learned model parameters to the central server over the communication network.
In an embodiment, the classification module is configured to perform intra-disease classification by categorizing medical images into subcategories within specific disease categories, such as classifying COVID-19 images into "COVID," "Normal," and "Pneumonia."
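The intra-disease routing described in this embodiment can be sketched as a two-stage lookup; the feature names and thresholds below are invented for illustration, since the disclosure does not specify how the subclassifiers are realized:

```python
# Hypothetical two-stage intra-disease classification: route to a disease
# family first, then refine into a subcategory within that family.
def chest_xray_subclass(features):
    if features["opacity"] > 0.7:
        return "COVID"
    if features["opacity"] > 0.4:
        return "Pneumonia"
    return "Normal"

SUBCLASSIFIERS = {"chest_xray": chest_xray_subclass}

def classify(features):
    family = features["modality"]  # coarse stage, stubbed here as a lookup
    return family, SUBCLASSIFIERS[family](features)

print(classify({"modality": "chest_xray", "opacity": 0.85}))
# prints: ('chest_xray', 'COVID')
```

In the disclosed system, both stages would of course be learned classifiers (CNN, VGG, or ResNet) rather than hand-set thresholds; the sketch only shows the coarse-to-fine control flow.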
In an embodiment, the decentralized edge computing device is configured to utilize hardware accelerators, including graphics processing units (GPUs) and tensor processing units (TPUs), to enhance the performance of the deep learning classification module and accelerate the GAN module training for generating the synthetic medical images.
In an embodiment, the federated learning module employs secure aggregation techniques to ensure that the learned model parameters shared with the central server are encrypted, thereby ensuring the privacy and security of patient medical data.
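One standard realization of secure aggregation is pairwise additive masking, sketched below as an assumption (the disclosure does not commit to a specific technique): each client pair shares a random mask that one adds and the other subtracts, so individual updates are obscured while their server-side sum is preserved:

```python
import random

def pairwise_mask(updates, seed=42):
    """Each client pair (i, j) shares a random mask that client i adds and
    client j subtracts, so all masks cancel in the server-side sum."""
    masked = [list(u) for u in updates]
    for i in range(len(updates)):
        for j in range(i + 1, len(updates)):
            rng = random.Random(f"{seed}-{i}-{j}")  # pair-shared randomness
            for k in range(len(updates[0])):
                m = rng.uniform(-10, 10)
                masked[i][k] += m
                masked[j][k] -= m
    return masked

true_updates = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]
masked = pairwise_mask(true_updates)
server_sum = [sum(col) for col in zip(*masked)]
print([round(v, 6) for v in server_sum])  # prints: [0.9, 1.2] (masks cancel)
```

Production protocols additionally handle client dropout and derive the pairwise randomness from key agreement rather than a shared seed; this sketch shows only the cancellation property.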
In an embodiment, the central server updates the global GAN model by performing a weighted aggregation of the learned model parameters received from each edge computing device, optimizing the model's performance across diverse datasets while preventing overfitting to any specific local dataset.
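The weighted aggregation can be sketched as a FedAvg-style average in which each device's parameters are weighted by its dataset size; the disclosure also mentions weighting by dataset diversity, which is omitted here for simplicity:

```python
def weighted_aggregate(client_params, client_sizes):
    """FedAvg-style update: average parameter vectors weighted by dataset size."""
    total = sum(client_sizes)
    dim = len(client_params[0])
    return [sum(p[i] * n for p, n in zip(client_params, client_sizes)) / total
            for i in range(dim)]

# Three hospitals with very different dataset sizes: the largest dataset
# pulls hardest, but no single site dictates the global model outright.
params = [[0.0, 0.0], [1.0, 1.0], [0.5, 0.5]]
sizes = [100, 300, 600]
print(weighted_aggregate(params, sizes))  # prints: [0.6, 0.6]
```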
In an embodiment, the system further comprises a dynamic training mechanism that alternates between training local GAN models at the edge computing devices based on local datasets and refining the global GAN model at the central server based on aggregated learned model parameters from multiple edge computing devices, wherein the training dynamically adapts to the availability and quality of local datasets.
In an embodiment, the edge computing devices are integrated with existing medical imaging hardware, including X-ray, CT, MRI, or optical coherence tomography (OCT) scanners, enabling seamless acquisition and processing of medical images for real-time disease analysis.
The present disclosure further envisages a method for medical image analysis using federated edge learning with generative adversarial networks (FEELGANs). The method includes the following steps:
• deploying a plurality of decentralized edge computing nodes at medical institutions, where each node is configured to locally collect and store medical images, such as X-rays, CT scans, and optical coherence tomography (OCT) images, corresponding to diseases like COVID-19, Tuberculosis, and other medical conditions;
• training local generative adversarial network (GAN) models at each edge computing node using the locally collected medical image datasets, wherein the local GAN models are used to generate synthetic medical images, augmenting the dataset while ensuring that actual patient data is not shared externally;
• transmitting the trained GAN model parameters from each decentralized edge node to a central server through a secure federated learning framework, wherein the medical image data remains locally stored and only model parameters are shared to maintain patient privacy;
• aggregating the received GAN model weights at the central server, and refining the global GAN model by applying weight averaging techniques, thereby improving the global model's accuracy and generalization capability until the global model achieves the desired accuracy;
• deploying a deep learning-based classification module at each decentralized edge node, comprising convolutional neural network (CNN), VGG, and ResNet models, to classify the locally stored medical images into specific disease categories, such as "COVID-19," "Normal," "Tuberculosis," or "Pneumonia," based on learned features;
• retraining the classification module at each edge node using the synthetic medical images generated by the local GAN models, wherein the synthetic images augment the real datasets and help to improve the classifier's performance, especially in situations where real datasets are limited; and
• refining the global GAN model and the local classifiers periodically, based on feedback and performance metrics from the decentralized edge nodes, wherein the system dynamically adjusts to variations in disease manifestations and improves diagnostic accuracy over time.
BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWINGS
A system and a method of the present disclosure will now be described with the help of the accompanying drawings, in which:
Figure 1 illustrates a block diagram of a system;
Figure 2 illustrates the architecture of the proposed FEELGAN framework;
Figure 3 illustrates a training accuracy plot for GAN-based COVID-19, Tuberculosis, and OCT image generation; and
Figures 4A-4D illustrate a method for medical image analysis using federated edge learning with generative adversarial networks (FEELGANs).
LIST OF REFERENCE NUMERALS
100 - System
102-1, 102-2, …, 102-N - Plurality of edge computing devices
104 - GAN module
106 - Deep learning classification module
108 - Federated learning module
110 - Central server
112 - Communication network
112-1 - Model evaluation module
114 - Processor
116 - Memory
118 - Network interface
400 - Method
DETAILED DESCRIPTION
Embodiments of the present disclosure will now be described with reference to the accompanying drawings.
Embodiments are provided so as to thoroughly and fully convey the scope of the present disclosure to the person skilled in the art. Numerous details are set forth, relating to specific components, and methods, to provide a complete understanding of embodiments of the present disclosure. It will be apparent to the person skilled in the art that the details provided in the embodiments should not be construed to limit the scope of the present disclosure. In some embodiments, well-known processes, well-known apparatus structures, and well-known techniques are not described in detail.
The terminology used, in the present disclosure, is only for the purpose of explaining a particular embodiment and such terminology shall not be considered to limit the scope of the present disclosure. As used in the present disclosure, the forms "a," "an," and "the" may be intended to include the plural forms as well, unless the context clearly suggests otherwise. The terms "including," and "having," are open-ended transitional phrases and therefore specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not forbid the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The particular order of steps disclosed in the method and process of the present disclosure is not to be construed as necessarily requiring their performance as described or illustrated. It is also to be understood that additional or alternative steps may be employed.
As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed elements.
Medical image analysis is essential for diagnosing and managing a wide range of diseases. Recent advancements in machine learning and deep learning, particularly using techniques like convolutional neural networks (CNNs) and generative adversarial networks (GANs), have demonstrated great potential to automate and improve the accuracy of this process. However, the development of robust and reliable medical image analysis models faces significant challenges due to the limited availability of high-quality, labeled medical image datasets, especially in decentralized medical institutions. These institutions often lack access to the large datasets required for effective model training. Furthermore, patient privacy is a major concern, as sharing medical data between institutions for collaborative model development can violate stringent privacy regulations, such as HIPAA and GDPR. Traditional centralized approaches, which rely on transferring sensitive medical data to central servers, increase the risk of data breaches and non-compliance with privacy laws.
Therefore, there is a need for a system and method that integrates federated edge learning with generative adversarial networks (FEELGANs) to augment local datasets, enhance model training and disease classification, and ensure data privacy and security across decentralized institutions. This approach addresses the limitations of centralized systems while improving the overall accuracy and privacy of medical image analysis.
The present disclosure relates to a system 100 and a method 400 for medical image analysis using federated edge learning with generative adversarial networks (FEELGANs). The system introduces a dynamic training mechanism that ensures continuous and synchronized improvement of models across both local edge devices 102 and a central server 110. Each edge device independently trains its local Generative Adversarial Network (GAN) models and classifiers using the medical image data collected at that specific institution. As new or synthetic data is generated, the local models are updated to reflect these enhanced datasets. These learned parameters are then securely transmitted to the central server 110, where they are aggregated to refine the global GAN model. This process ensures that local models remain relevant while the global model benefits from the collective knowledge of all participating institutions. As the system adapts to new data and trends, it continuously improves diagnostic accuracy across decentralized nodes.
Each edge device 102 is equipped with key hardware components to support this functionality. These include a processor 114 for executing tasks, memory 116 for storing instructions related to the GAN and deep learning modules, and a network interface 118 for secure communication with the central server 110. This setup allows the edge devices to process medical images, generate synthetic datasets, and contribute to the global learning framework without requiring centralized access to sensitive patient data.
The system also enhances medical image classification by introducing intra-disease classification capabilities within the deep learning classification module 106. This means that it not only distinguishes between broad disease categories but also provides more granular classifications within each category. For example, in the case of COVID-19, the system can classify images into subcategories like "COVID," "Normal," and "Pneumonia." This level of detail offers healthcare practitioners nuanced insights into disease progression, aiding in more informed treatment decisions.
To ensure high performance, the system leverages hardware accelerators such as GPUs and TPUs within the edge devices 102. These accelerators significantly enhance computational power, allowing the system to quickly process large volumes of medical images and generate synthetic images efficiently. This capability is particularly critical in clinical settings where timely diagnoses can directly impact patient outcomes, providing near real-time insights.

Privacy is a core concern in the system's design. The federated learning module 108 ensures robust privacy protections. When edge devices 102 share their learned model parameters with the central server 110, these parameters are encrypted, ensuring that sensitive patient information remains secure. Patient medical data stays on the local devices, while only encrypted model parameters are transmitted, ensuring compliance with stringent healthcare regulations such as HIPAA and GDPR.
At the central server 110, a model evaluation module 112-1 regularly assesses the performance of the global GAN model, incorporating feedback from the decentralized edge devices 102. This feedback allows continuous refinement of the global model, ensuring that it remains accurate and effective across the diverse datasets collected from various institutions.
To optimize the updating of the global GAN model, the system uses a weighted aggregation technique. When aggregating model parameters from different edge devices 102, it assigns different weights based on the size and diversity of the dataset from which the parameters were derived. This approach prevents overfitting to any particular dataset, ensuring that the global model generalizes well across all institutions involved.
The system operates dynamically, alternating between training local GAN models at the edge devices 102 and refining the global model at the central server 110. This allows it to adapt to varying dataset availability and quality across institutions, maintaining effectiveness even in environments with limited data. The system's flexibility ensures that it can quickly respond to new medical data, evolving as new diseases emerge or medical image datasets grow over time.
Finally, the system is designed to seamlessly integrate with existing medical imaging hardware, such as X-ray, CT, MRI, and Optical Coherence Tomography (OCT) scanners. Edge devices 102 can be incorporated into the workflows of these imaging systems, enabling real-time acquisition and processing of medical images. The captured images are immediately processed by the edge devices and classified into relevant disease categories using the deep learning classification module 106, greatly improving the speed and accuracy of disease detection in clinical environments. This integration allows the system to be easily deployed in hospitals without requiring major changes to existing infrastructure, providing real-time diagnostic support.
Figure 1 provides a high-level view of the system 100, which includes a network of decentralized edge computing devices 102 located at various medical institutions. These edge devices are responsible for locally processing medical images to detect diseases and generate a local dataset. Importantly, only the learned model parameters are shared with the central server 110 over the communication network 112, ensuring that patient data remains private.
The central server 110 aggregates the model parameters from all edge devices to update a global GAN model, which is refined continuously based on the aggregated data 110-1 from multiple institutions and a central repository 110-2. This global model improves over time and is deployed back to the edge devices for further disease detection and classification, all while maintaining the security of patient data.
Figure 2 zooms into the architecture of the individual edge computing devices 102. Each device is shown to include key components like a processor 114 and a memory 116, which stores the necessary instructions for the GAN module, deep learning classification module, and federated learning module. A network interface 118 enables secure communication between the edge device and the central server 110. Each edge device is equipped with several modules, such as the GAN module 104, which consists of the data generator 104-1 and the discriminator 104-2. The data generator creates synthetic medical images to augment datasets that may lack sufficient data, while the discriminator evaluates the authenticity of the generated synthetic images.
This figure also illustrates the deep learning classification module 106, which processes both real and synthetic datasets to classify images into disease categories, such as COVID-19, Tuberculosis, or Optical Coherence Tomography (OCT) images. The federated learning module 108 is depicted as the component that trains a local deep learning model on each edge device using the classified data.
This figure also highlights the integration of hardware accelerators, such as GPUs or TPUs, which significantly enhance the performance of both the GAN module 104 and the deep learning classification module 106. The hardware accelerators allow the system to handle large datasets more efficiently, speeding up the generation of synthetic images and the classification of medical images. This real-time processing capability is particularly crucial in clinical settings where prompt diagnosis can greatly affect patient outcomes.
Figure 3 illustrates the federated learning framework, focusing on how learned model parameters are securely aggregated and shared between the edge devices and the central server. This figure emphasizes the use of secure aggregation techniques to ensure that sensitive patient data is not transmitted during the model training process. Only the learned model parameters from each decentralized edge device are communicated to the central server over the network, with all medical image data remaining securely stored at the local nodes.
The secure transmission of model parameters is depicted as an essential component of the system's privacy-preserving architecture. The central server aggregates the model parameters received from multiple edge devices and updates the global GAN model accordingly, while ensuring that no individual patient data is compromised during this process.
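Tying Figures 1 through 3 together, one round of the framework can be sketched end to end; the one-step "local training" below is a hypothetical stand-in for real GAN and classifier training:

```python
def local_train(global_params, site_data_mean, lr=0.5):
    """Stand-in for local training: nudge parameters toward the site's data.
    Only the resulting parameters ever leave the device."""
    return [g + lr * (d - g) for g, d in zip(global_params, site_data_mean)]

def federated_round(global_params, sites):
    updates, sizes = [], []
    for site in sites:
        updates.append(local_train(global_params, site["data_mean"]))
        sizes.append(site["n_images"])  # raw images never leave the site
    total = sum(sizes)
    # Weighted aggregation at the central server refreshes the global model.
    return [sum(u[i] * n for u, n in zip(updates, sizes)) / total
            for i in range(len(global_params))]

sites = [{"data_mean": [1.0], "n_images": 200},
         {"data_mean": [3.0], "n_images": 200}]
model = [0.0]
for _ in range(10):  # repeated rounds progressively refine the global model
    model = federated_round(model, sites)
print(round(model[0], 3))  # converges toward 2.0, the cross-site mean
```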
Figures 4A-4D illustrate the method 400. The order in which the method 400 is described is not intended to be construed as a limitation, and any number of the described method steps may be combined in any order to implement the method 400 or an alternative method. Furthermore, the method 400 may be implemented by a processing resource or computing device(s) through any suitable hardware, non-transitory machine-readable medium/instructions, or a combination thereof. The method 400 comprises the following steps:
In method step 402, the method 400 includes deploying, a plurality of decentralized edge computing nodes at medical institutions, where each node is configured to locally collect and store medical images, such as X-rays, CT scans, and optical coherence tomography (OCT) images, corresponding to diseases like COVID-19, Tuberculosis, and other medical conditions.
In method step 404, the method 400 includes, training, local generative adversarial network (GAN) models at each edge computing node using the locally collected medical image datasets, wherein the local GAN models are used to generate synthetic medical images, augmenting the dataset while ensuring that actual patient data is not shared externally.
In method step 406, the method 400 includes, transmitting, the trained GAN model parameters from each decentralized edge node to a central server through a secure federated learning framework, wherein the medical image data remains locally stored, and only model parameters are shared to maintain patient privacy.
In method step 408, the method 400 includes, aggregating, the received GAN model weights at the central server, and refining the global GAN model by applying weight averaging techniques, thereby improving the global model's accuracy and generalization capability until the global model achieves the desired accuracy.
In method step 410, the method 400 includes, deploying, a deep learning-based classification module at each decentralized edge node, comprising convolutional neural network (CNN), VGG, and ResNet models, to classify the locally stored medical images into specific disease categories, such as "COVID-19," "Normal," "Tuberculosis," or "Pneumonia," based on learned features.
In method step 412, the method 400 includes, retraining, the classification module at each edge node using the synthetic medical images generated by the local GAN models, wherein the synthetic images augment the real datasets and help to improve the classifier's performance, especially in situations where real datasets are limited.
In method step 414, the method 400 includes, refining, the global GAN model and the local classifiers periodically, based on feedback and performance metrics from the decentralized edge nodes, wherein the system dynamically adjusts to variations in disease manifestations and improves diagnostic accuracy over time.
In method step 416, the method 400 includes locating a plurality of edge computing devices (102-1, 102-2,…, 102-N) at decentralized medical institutions, each edge computing device (102-1, 102-2,…, 102-N) configured to locally process medical images for disease detection and create a local dataset;
In method step 418, the method 400 includes generating synthetic medical images by a generative adversarial network (GAN) module (104) based on the local dataset, the generating including: generating, by a data generator (104-1), synthetic medical images based on the local dataset by employing a local generative adversarial network (GAN) model to augment the dataset when insufficient medical image data for disease diagnosis is available, and evaluating, by a discriminator (104-2), the authenticity of the generated synthetic medical images;
In method step 420, the method 400 includes classifying, by a deep learning classification module (106), medical images through one or more deep learning classifiers selected from the group consisting of Convolutional Neural Networks (CNN), VGG, and ResNet, the classifiers classifying the local dataset, augmented with the synthetic medical images, into disease categories, including COVID-19, Optical Coherence Tomography (OCT), and Tuberculosis (TB), and generating classified data;
In method step 422, the method 400 includes training, by a federated learning module (108), a local deep learning model based on the classified data, and sharing only the learned model parameters with a central server to preserve patient medical data privacy;
In method step 424, the method 400 includes receiving the learned model parameters at the central server (110) from the federated learning modules (108) of the plurality of edge computing devices (102-1, 102-2,…, 102-N) over a communication network;
In method step 426, the method 400 includes aggregating the learned model parameters received from the plurality of edge computing devices (102-1, 102-2,…, 102-N) at the central server (110); and
In method step 428, the method 400 includes updating a global GAN model for disease detection and image classification using the aggregated learned model parameters, wherein the global GAN model is continuously refined while preserving the privacy and security of patient medical data.
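The parameter-aggregation stage of the method above (method steps 406-408 and 422-428) can be sketched as a simple federated weight-averaging loop. The sketch below is illustrative only: the function name `aggregate_parameters`, the use of plain NumPy arrays to stand in for model parameters, and the sample-count weighting are assumptions for exposition, not part of the claimed system.

```python
# Illustrative sketch of federated weight averaging (FedAvg-style),
# assuming each edge node's model parameters are a list of NumPy arrays.
# All names here are hypothetical; the disclosure does not prescribe an API.
import numpy as np

def aggregate_parameters(client_params, client_sizes):
    """Weighted average of per-client parameter lists.

    client_params: list of parameter lists (one per edge device),
                   each a list of np.ndarray with matching shapes.
    client_sizes:  number of local training samples per edge device,
                   used as aggregation weights.
    """
    total = float(sum(client_sizes))
    weights = [n / total for n in client_sizes]
    n_layers = len(client_params[0])
    global_params = []
    for layer in range(n_layers):
        # Weighted sum of this layer's parameters across all clients.
        layer_avg = sum(w * p[layer] for w, p in zip(weights, client_params))
        global_params.append(layer_avg)
    return global_params

# Two hypothetical edge devices sharing a single-layer model.
p1 = [np.array([1.0, 3.0])]
p2 = [np.array([3.0, 5.0])]
global_p = aggregate_parameters([p1, p2], client_sizes=[100, 100])
# Equal sample counts reduce to a plain average: [2.0, 4.0]
```

Because only these parameter arrays leave each node, the raw medical images never traverse the network, which is the privacy property the method relies on.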
TABLES
a) Analytical results for classification on Tuberculosis (TB)
              precision  recall  f1-score  support
TUBERCULOSIS       0.97    0.97      0.97      597
NORMAL             0.99    0.99      0.99     2704
accuracy                             0.99     3301
macro avg          0.98    0.98      0.98     3301
weighted avg       0.99    0.99      0.99     3301
Table 1: Classification Report Analysis for a model to detect TB.
              TUBERCULOSIS  NORMAL
TUBERCULOSIS           581      16
NORMAL                  16    2688
Table 2: Confusion matrix for a model to detect TB.
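The per-class figures in Table 1 can be reproduced directly from the confusion matrix in Table 2. The short sketch below is illustrative only and assumes the conventional orientation (rows are actual classes, columns are predicted classes); it shows the standard precision/recall/F1 computation.

```python
# Recomputing Table 1's TUBERCULOSIS metrics from Table 2's confusion matrix.
# Assumed orientation: rows = actual class, columns = predicted class.
tb_tp, tb_fn = 581, 16   # actual TB predicted as TB / predicted as NORMAL
n_fp, n_tn = 16, 2688    # actual NORMAL predicted as TB / as NORMAL

precision_tb = tb_tp / (tb_tp + n_fp)   # 581 / 597
recall_tb = tb_tp / (tb_tp + tb_fn)     # 581 / 597
f1_tb = 2 * precision_tb * recall_tb / (precision_tb + recall_tb)

accuracy = (tb_tp + n_tn) / (tb_tp + tb_fn + n_fp + n_tn)
# precision_tb, recall_tb, and f1_tb all round to 0.97,
# and accuracy rounds to 0.99, matching Table 1.
```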
b) Analytical results for classification on Optical Coherence Tomography (OCT)
              precision  recall  f1-score  support
CNV                0.93    0.81      0.87     5538
DME                0.68    0.46      0.55     1703
NORMAL             0.68    0.89      0.77     3900
DRUSEN             0.60    0.31      0.31     1235
accuracy                             0.74    12376
macro avg          0.65    0.62      0.62    12376
weighted avg       0.75    0.74      0.74    12376
Table 3: Classification Report Analysis for a model to detect OCT classes.

         CNV   DME  DRUSEN  NORMAL
CNV     4483   210     541     304
DME      186   782     156     579
DRUSEN    64    40     388     743
NORMAL    90   122     200    3488
Table 4: Confusion matrix for a model to detect OCT.
c) Analytical results for classification on COVID-19
              precision  recall  f1-score  support
COVID-19           0.87    0.85      0.91      460
PNEUMONIA          0.84    0.89      0.87     2896
NORMAL             0.90    0.82      0.85     1266
accuracy                             0.89     4622
macro avg          0.86    0.89      0.87     4622
weighted avg       0.90    0.87      0.85     4622
Table 5: Classification Report Analysis for a model to detect COVID-19 classes.
           COVID-19  PNEUMONIA  NORMAL
COVID-19         98         19      21
PNEUMONIA        31        754      33
NORMAL           24         32     265
Table 6: Confusion matrix for a model to detect COVID-19.

TECHNICAL ADVANCEMENTS AND ECONOMIC SIGNIFICANCE
The present disclosure described herein above has several technical advantages including, but not limited to, the realization of a system and method for medical image analysis using federated edge learning with generative adversarial networks (FEELGANs) that:
• provides a decentralized architecture in which each edge device processes medical images locally, reducing the need for centralized data storage and transmission; this enhances the system's ability to function efficiently in distributed healthcare environments and improves scalability.
• provides scalability across multiple institutions, allowing more medical institutions to be added to the network without compromising performance or security; as new edge devices join the system, they contribute to the global model's learning process, improving the accuracy and relevance of disease detection models across different regions and patient demographics.
• provides data diversity and generalization: by aggregating model parameters from diverse datasets collected at different institutions, the global GAN model gains exposure to a wide variety of medical images, leading to better generalization across disease categories and enhancing the model's ability to detect rare or region-specific conditions that might not be well represented in centralized datasets.
• provides real-time processing and decision support: the integration of hardware accelerators and the decentralized nature of the edge devices enable near-real-time image analysis and disease classification, a rapid processing capability that is essential in clinical settings, where timely diagnosis can significantly impact patient outcomes.
• provides the ability to generate high-quality synthetic medical images using GANs, allowing the classification models to function accurately even with limited real datasets; this feature is crucial for improving diagnostic capabilities in remote or underserved regions.
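The dynamic alternation described above between local training at the edge devices and global refinement at the central server can be outlined as a simple orchestration loop. The sketch below is a deliberately minimal, library-free illustration: models are reduced to a single scalar "parameter" and local training is replaced by a stand-in update rule, so every name and value here is an assumption for exposition rather than the actual implementation.

```python
# Illustrative, library-free sketch of the alternating train/refine cycle:
# each edge device takes local update steps, then the server averages.
# A scalar parameter stands in for the full GAN/classifier weights.

def local_train(param, local_target, steps=5, lr=0.1):
    """Hypothetical local update: move the parameter toward the
    device's locally optimal value (stand-in for GAN/classifier training)."""
    for _ in range(steps):
        param += lr * (local_target - param)
    return param

def federated_round(global_param, local_targets):
    """One round: broadcast the global parameter, train locally, average."""
    updates = [local_train(global_param, t) for t in local_targets]
    return sum(updates) / len(updates)

global_param = 0.0
for _ in range(20):  # repeat rounds until the global model stabilizes
    global_param = federated_round(global_param, local_targets=[1.0, 3.0])
# global_param converges toward the mean of the local optima (2.0)
```

The same loop shape holds when the scalar is replaced by real model weights: each round the server broadcasts the global parameters, each institution refines them on its private data, and only the refined parameters return for aggregation.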
Details regarding the economic significance may be called for during examination. Only after filing this patent application can the applicant work publicly on the product/process/method of the present disclosure. The applicant will disclose all details related to the economic significance and contribution after protection of the invention.
The embodiments herein and the various features and advantageous details thereof are explained with reference to the non-limiting embodiments in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
The foregoing description of the specific embodiments so fully reveals the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the embodiments as described herein.
Throughout this specification the word "comprise", or variations such as "comprises" or "comprising", will be understood to imply the inclusion of a stated element, or group of elements, but not the exclusion of any other element, or group of elements.
While considerable emphasis has been placed herein on the components and component parts of the preferred embodiments, it will be appreciated that many embodiments can be made and that many changes can be made in the preferred embodiments without departing from the principles of the disclosure. These and other changes in the preferred embodiment as well as other embodiments of the disclosure will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the disclosure and not as a limitation.

Claims:

WE CLAIM:
1. A system (100) for medical image analysis using federated edge learning with generative adversarial networks (FEELGANs), comprising:
a plurality of edge computing devices (102-1, 102-2,…, 102-N) located at decentralized medical institutions, each edge device (102-1, 102-2,…, 102-N) configured to locally process medical images for disease detection to create a local dataset, each edge computing device (102-1, 102-2,…, 102-N) including:
- a generative adversarial network (GAN) module (104) configured to generate synthetic medical images based on the local dataset, wherein the GAN module (104) includes:
i. a data generator (104-1) configured to generate a synthetic medical image dataset by means of a local generative adversarial network (GAN) model based on the local dataset, to augment a local dataset that has insufficient medical image data for disease diagnosis, and
ii. a discriminator (104-2) configured to evaluate the authenticity of the generated synthetic medical images dataset;
- a deep learning classification module (106) comprising one or more deep learning classifiers selected from the group consisting of Convolutional Neural Networks (CNN), VGG, and ResNet, configured to classify medical images of the local dataset augmented with the synthetic medical images dataset, into disease categories, including COVID-19, Optical Coherence Tomography (OCT), and Tuberculosis (TB), and generate a classified data;
- a federated learning module (108) configured to train a local deep learning model by the classified data, and wherein the federated learning module (108) configured to only share learned model parameters, thereby preserving patient medical data privacy;
a central server (110) configured to receive the learned model parameters from each federated learning module (108) of the plurality of edge computing devices (102-1, 102-2,…, 102-N) over a communication network (112), further configured to aggregate the learned model parameters received from the plurality of edge computing devices (102-1, 102-2,…, 102-N), and further configured to update a global GAN model for disease detection and image classification, wherein the global GAN model being refined based on the shared learned model parameters by each federated learning module (108) while preserving the privacy and security of patient medical data.
2. The system (100) as claimed in claim 1, wherein the system (100) is configured to dynamically train the local GAN and classifier models at the decentralized edge computing devices (102-1, 102-2,…, 102-N) and to update the global GAN model at the central server (110) based on the model parameters learned from the local GAN and classifier models.
3. The system (100) as claimed in claim 1, wherein each decentralized edge computing device (102-1, 102-2,…, 102-N) includes:
• at least one processor (114);
• a memory (116) storing instructions for the GAN module (104), the deep learning classification module (106), and the federated learning module (108);
• a network interface (118) configured to communicate the learned model parameters to the central server (110) over the communication network.
4. The system (100) as claimed in claim 1, wherein the classification module (106) is configured to perform intra-disease classification by categorizing medical images into subcategories within specific disease categories, such as classifying COVID-19 images into "COVID," "Normal," and "Pneumonia."
5. The system (100) as claimed in claim 1, wherein the decentralized edge computing device (102-1, 102-2,…, 102-N) is configured to utilize hardware accelerators, including graphical processing units (GPUs) and tensor processing units (TPUs), to enhance the performance of the deep learning classification module (106) and accelerate the GAN module (104) training for generating the synthetic medical images.
6. The system (100) as claimed in claim 1, wherein the federated learning module (108) employs secure aggregation techniques to ensure that the learned model parameters shared with the central server (110) are encrypted, for ensuring the privacy and security of patient medical data.
7. The system (100) as claimed in claim 1, wherein the central server (110) further includes a model evaluation module (112-1) configured to assess the accuracy and performance of the global GAN model based on feedback from the edge computing devices (102-1, 102-2,…, 102-N) and update the global GAN model accordingly.
8. The system (100) as claimed in claim 1, wherein the central server (110) updates the global GAN model by performing a weighted aggregation of the learned model parameters received from each edge computing device (102-1, 102-2,…, 102-N), optimizing the model's performance across diverse datasets while preventing overfitting to any specific local dataset.
9. The system (100) as claimed in claim 1, further comprising a dynamic training mechanism that alternates between:
- training local GAN models at the edge computing devices (102-1, 102-2,…, 102-N) based on local dataset; and
- refining the global GAN model at the central server (110) based on aggregated learned model parameters from multiple edge computing devices (102-1, 102-2,…, 102-N), wherein the training dynamically adapts to the availability and quality of local datasets.
10. The system (100) as claimed in claim 1, wherein the edge computing devices (102-1, 102-2,…, 102-N) are integrated with existing medical imaging hardware, including X-ray, CT, MRI, or Optical Coherence Tomography (OCT) scanners, enabling seamless acquisition and processing of medical images for real-time disease analysis.
11. A method (400) for medical image analysis using federated edge learning and generative adversarial networks (FEELGANs), comprising the steps of:
deploying a plurality of decentralized edge computing nodes at medical institutions, where each node is configured to locally collect and store medical images, such as X-rays, CT scans, and optical coherence tomography (OCT) images, corresponding to diseases like COVID-19, Tuberculosis, and other medical conditions;
training local generative adversarial network (GAN) models at each edge computing node using the locally collected medical image datasets, wherein the local GAN models are used to generate synthetic medical images, augmenting the dataset while ensuring that actual patient data is not shared externally;
transmitting the trained GAN model parameters from each decentralized edge node to a central server through a secure federated learning framework, wherein the medical image data remains locally stored, and only model parameters are shared to maintain patient privacy;
aggregating the received GAN model weights at the central server, and refining the global GAN model by applying weight averaging techniques, thereby improving the global model's accuracy and generalization capability;
deploying a deep learning-based classification module at each decentralized edge node, comprising convolutional neural network (CNN), VGG, and residual network (ResNet) models, to classify the locally stored medical images into specific disease categories, such as "COVID-19," "Normal," "Tuberculosis," or "Pneumonia," based on learned features;
retraining the classification module at each edge node using the synthetic medical images generated by the local GAN models, wherein the synthetic images augment the real datasets and help to improve the classifier's performance, especially in situations where real datasets are limited;
refining the global GAN model and the local classifiers periodically, based on feedback and performance metrics from the decentralized edge nodes, wherein the system dynamically adjusts to variations in disease manifestations and improves diagnostic accuracy over time;
locating a plurality of edge computing devices (102-1, 102-2,…, 102-N) at decentralized medical institutions, each edge computing device (102-1, 102-2,…, 102-N) configured to locally process medical images for disease detection and create a local dataset;
generating synthetic medical images by a generative adversarial network (GAN) module (104) based on the local dataset, the generating synthetic including:
- generating, by a data generator (104-1), synthetic medical images based on the local dataset by employing a local generative adversarial network (GAN) model to augment the dataset when insufficient medical image data for disease diagnosis is available, and
- evaluating, by a discriminator (104-2), the authenticity of the generated synthetic medical images;
classifying, by a deep learning classification module (106), medical images through one or more deep learning classifiers selected from the group consisting of Convolutional Neural Networks (CNN), VGG, and ResNet, the classifiers classifying the local dataset, augmented with the synthetic medical images, into disease categories, including COVID-19, Optical Coherence Tomography (OCT), and Tuberculosis (TB), and generating classified data;
training, by a federated learning module (108), a local deep learning model based on the classified data, and sharing only the learned model parameters with a central server to preserve patient medical data privacy;
receiving the learned model parameters at the central server (110) from the federated learning modules (108) of the plurality of edge computing devices (102-1, 102-2,…, 102-N) over a communication network;
aggregating the learned model parameters received from the plurality of edge computing devices (102-1, 102-2,…, 102-N) at the central server (110); and
updating a global GAN model for disease detection and image classification using the aggregated learned model parameters, wherein the global GAN model is continuously refined while preserving the privacy and security of patient medical data.


Dated this 30th Day of October, 2024

_______________________________
MOHAN RAJKUMAR DEWAN, IN/PA - 25
OF R. K. DEWAN & CO.
AUTHORIZED AGENT OF APPLICANT

TO,
THE CONTROLLER OF PATENTS
THE PATENT OFFICE, AT CHENNAI

Documents

Name  Date
202441083307-FORM-26 [05-11-2024(online)].pdf  05/11/2024
202441083307-COMPLETE SPECIFICATION [30-10-2024(online)].pdf  30/10/2024
202441083307-DECLARATION OF INVENTORSHIP (FORM 5) [30-10-2024(online)].pdf  30/10/2024
202441083307-DRAWINGS [30-10-2024(online)].pdf  30/10/2024
202441083307-EDUCATIONAL INSTITUTION(S) [30-10-2024(online)].pdf  30/10/2024
202441083307-EVIDENCE FOR REGISTRATION UNDER SSI [30-10-2024(online)].pdf  30/10/2024
202441083307-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [30-10-2024(online)].pdf  30/10/2024
202441083307-FORM 1 [30-10-2024(online)].pdf  30/10/2024
202441083307-FORM 18 [30-10-2024(online)].pdf  30/10/2024
202441083307-FORM FOR SMALL ENTITY(FORM-28) [30-10-2024(online)].pdf  30/10/2024
202441083307-FORM-9 [30-10-2024(online)].pdf  30/10/2024
202441083307-PROOF OF RIGHT [30-10-2024(online)].pdf  30/10/2024
202441083307-REQUEST FOR EARLY PUBLICATION(FORM-9) [30-10-2024(online)].pdf  30/10/2024
202441083307-REQUEST FOR EXAMINATION (FORM-18) [30-10-2024(online)].pdf  30/10/2024
