GENERATIVE ADVERSARIAL NETWORK-BASED SYSTEM FOR ENHANCED ANALYSIS AND DETECTION OF RARE BRAIN TUMOR CASES

ORDINARY APPLICATION

Published


Filed on 14 November 2024

Abstract

The present invention provides a generative adversarial network-based system for enhanced analysis and detection of rare brain tumor cases. This comprehensive project combines advanced image processing and deep learning techniques for brain tumor detection and classification in MRI images. Beginning with robust preprocessing methods to extract relevant brain regions, the project employs a ResNet50-based Convolutional Neural Network (CNN) for initial classification. Furthermore, it introduces a Generative Adversarial Network (GAN) for effective data augmentation, creating synthetic images to enhance model performance. The approach extends to the identification of rare tumor cases, including irregular tumors, enhanced patterns, calcifications, cystic components, and those with hemorrhage or necrosis. The proposed CNN model achieves a test accuracy of approximately 90% after training with augmented data. (Figure 1)

Patent Information

Application ID: 202431088190
Invention Field: COMPUTER SCIENCE
Date of Application: 14/11/2024
Publication Number: 47/2024

Inventors

Name | Address | Country | Nationality
Mansheel Agarwal | School of Computer Engineering, Kalinga Institute of Industrial Technology (Deemed to be University), Patia Bhubaneswar Odisha India 751024 | India | India
Anouska Abhisikta | School of Computer Engineering, Kalinga Institute of Industrial Technology (Deemed to be University), Patia Bhubaneswar Odisha India 751024 | India | India
Pradeep Kumar Mallick | School of Computer Engineering, Kalinga Institute of Industrial Technology (Deemed to be University), Patia Bhubaneswar Odisha India 751024 | India | India
Rajani Kanta Sahu | Dept. of CSE, Gandhi Institute of Excellent Technocrats, Ghangapatna Bhubaneswar Odisha India 752054 | India | India
Achinta Kumar Palit | Dept. of CSE, Gandhi Institute of Excellent Technocrats, Ghangapatna Bhubaneswar Odisha India 752054 | India | India

Applicants

Name | Address | Country | Nationality
Kalinga Institute of Industrial Technology (Deemed to be University) | Patia Bhubaneswar Odisha India 751024 | India | India

Specification

Description:
TECHNICAL FIELD
[0001] The present invention relates to the field of medical science, and more particularly, to a generative adversarial network-based system for enhanced analysis and detection of rare brain tumor cases.
BACKGROUND ART
[0002] The following discussion of the background of the invention is intended to facilitate an understanding of the present invention. However, it should be appreciated that the discussion is not an acknowledgment or admission that any of the material referred to was published, known, or part of the common general knowledge in any jurisdiction as of the application's priority date. Any details provided in this background that belong to a publication are given only as a reference for describing the problems, general terminologies, or principles of science and technology in the associated prior art.
[0003] Medical imaging, particularly magnetic resonance imaging (MRI), has revolutionized the field of brain tumor detection, providing detailed insights for accurate diagnosis. Despite significant advancements, challenges persist in achieving precise classification and detection of rare cases, leading to a research gap in developing robust methodologies. This project aims to address these challenges through an innovative approach that integrates image preprocessing, convolutional neural networks (CNNs), and generative adversarial networks (GANs). The research community recognizes the need for enhanced techniques that not only preprocess images effectively but also leverage advanced deep learning models for improved brain tumor classification. By combining traditional image processing methods with state-of-the-art neural network architectures, we bridge the existing research gap and contribute to the evolving landscape of medical image analysis. Our comprehensive methodology seeks to overcome limitations in rare case detection, including irregular tumors, enhancement patterns, calcifications, cystic components, and hemorrhage or necrosis, ultimately enhancing the accuracy of brain tumor diagnosis. The proposed approach serves as a significant step towards improving medical image analysis and supporting clinicians in making informed decisions for patient care.
[0004] In light of the foregoing, there is a need for a generative adversarial network-based system for enhanced analysis and detection of rare brain tumor cases that overcomes the problems prevalent in the prior art associated with traditionally available methods and systems, and that can be used with the presently disclosed technique with or without modification.
[0005] All publications herein are incorporated by reference to the same extent as if each individual publication or patent application were specifically and individually indicated to be incorporated by reference. Where a definition or use of a term in an incorporated reference is inconsistent or contrary to the definition of that term provided herein, the definition of that term provided herein applies, and the definition of that term in the reference does not apply.
OBJECTS OF THE INVENTION
[0006] The principal object of the present invention is to overcome the disadvantages of the prior art by providing a generative adversarial network-based system for enhanced analysis and detection of rare brain tumor cases.
[0007] Another object of the present invention is to provide a generative adversarial network-based system for enhanced analysis and detection of rare brain tumor cases that integrates advanced image processing and deep learning for brain tumor detection in MRI images, achieving a notable 90% test accuracy with a ResNet50-based CNN.
[0008] Another object of the present invention is to provide a generative adversarial network-based system for enhanced analysis and detection of rare brain tumor cases wherein GAN-driven data augmentation contributes to model robustness.
[0009] Another object of the present invention is to provide a generative adversarial network-based system for enhanced analysis and detection of rare brain tumor cases that explores alternative neural network architectures and optimization of the GAN.
[0010] Another object of the present invention is to provide a generative adversarial network-based system for enhanced analysis and detection of rare brain tumor cases that lays a strong foundation for future research, promising improved brain tumor detection systems for clinical applications.
[0011] Another object of the present invention is to provide a generative adversarial network-based system for enhanced analysis and detection of rare brain tumor cases that holds the potential to positively impact medical imaging and patient care.
[0012] The foregoing and other objects of the present invention will become readily apparent upon further review of the following detailed description of the embodiments as illustrated in the accompanying drawings.
SUMMARY OF THE INVENTION
[0013] The present invention relates to a generative adversarial network-based system for enhanced analysis and detection of rare brain tumor cases. This comprehensive project combines advanced image processing and deep learning techniques for brain tumor detection and classification in MRI images. Beginning with robust preprocessing methods to extract relevant brain regions, the project employs a ResNet50-based Convolutional Neural Network (CNN) for initial classification. Furthermore, it introduces a Generative Adversarial Network (GAN) for effective data augmentation, creating synthetic images to enhance model performance. The approach extends to the identification of rare tumor cases, including irregular tumors, enhanced patterns, calcifications, cystic components, and those with hemorrhage or necrosis. The proposed CNN model achieves a test accuracy of approximately 90% after training with augmented data. The project provides a holistic framework for accurate brain tumor detection, leveraging a diverse set of image processing and machine learning methodologies.
[0014] Our research leverages a meticulously curated dataset amalgamated from three key sources: figshare, SARTAJ dataset, and Br35H. The dataset encompasses a diverse collection of 7023 human brain MRI images, thoughtfully categorized into four classes: glioma, meningioma, no tumor, and pituitary. Notably, the "no tumor" class exclusively draws from the Br35H dataset, providing a baseline for healthy brain representations. The dataset's architectural richness lies in its multifaceted composition, offering a comprehensive representation of pathological conditions crucial for training and validating robust machine learning models. The inclusion of glioma and meningioma classes addresses the spectrum of brain tumors, while the "no tumor" class ensures a balanced representation for accurate classification. This dataset's significance in our project lies in its ability to simulate real-world scenarios, providing a nuanced understanding of brain pathology through varied MRI images. The variability in image dimensions necessitates a preprocessing step for uniformity, a vital consideration for optimizing model accuracy. By utilizing this diverse dataset, our project aims to enhance the efficacy of deep learning models in the early detection and classification of brain tumors, contributing to advancements in medical imaging for improved patient outcomes.
[0015] In the preprocessing step, we first import the necessary libraries and set up the environment. This step employs pre-installed libraries such as NumPy, Pandas, and OpenCV. Additionally, it lists the available files in the input directory to show the directory structure.
[0016] The core of the preprocessing is the crop function, which efficiently extracts the brain region from MRI images. This function involves converting the image to grayscale, applying Gaussian blur, and utilizing thresholding, erosion, and dilation techniques to isolate the tumor. The identified contours help in determining the extreme points, ultimately resulting in a cropped image focused on the relevant brain region. The script iteratively processes images from the training and testing datasets, saving the cleaned images in a structured directory format for subsequent use in training and evaluation.
[0017] While the invention has been described and shown with reference to the preferred embodiment, it will be apparent that variations might be possible that would fall within the scope of the present invention.
BRIEF DESCRIPTION OF DRAWINGS
[0018] So that the manner in which the above-recited features of the present invention can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of this invention and are therefore not to be considered limiting of its scope, for the invention may admit to other equally effective embodiments.
[0019] These and other features, benefits, and advantages of the present invention will become apparent by reference to the following text figure, with like reference numbers referring to like structures across the views, wherein:
[0020] Figure 1 shows an Irregular Tumor;
[0021] Figure 2 shows an Enhanced Tumor;
[0022] Figure 3 shows a Calcified Tumor;
[0023] Figure 4 shows a Cystic Tumor;
[0024] Figure 5 shows a Hemorrhage/Necrosis Tumor; and
[0025] Figure 6 shows the Classification of Rare Cases of Tumor.
DETAILED DESCRIPTION OF THE INVENTION
[0026] While the present invention is described herein by way of example using embodiments and illustrative drawings, those skilled in the art will recognize that the invention is not limited to the embodiments or drawings described, and that the drawings are not intended to represent the scale of the various components. Further, some components that may form a part of the invention may not be illustrated in certain figures, for ease of illustration, and such omissions do not limit the embodiments outlined in any way. It should be understood that the drawings and the detailed description thereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the invention is to cover all modifications, equivalents, and alternatives falling within the scope of the present invention as defined by the appended claims.
[0027] As used throughout this description, the word "may" is used in a permissive sense (i.e. meaning having the potential to), rather than the mandatory sense, (i.e. meaning must). Further, the words "a" or "an" mean "at least one" and the word "plurality" means "one or more" unless otherwise mentioned. Furthermore, the terminology and phraseology used herein are solely used for descriptive purposes and should not be construed as limiting in scope. Language such as "including," "comprising," "having," "containing," or "involving," and variations thereof, is intended to be broad and encompass the subject matter listed thereafter, equivalents, and additional subject matter not recited, and is not intended to exclude other additives, components, integers, or steps. Likewise, the term "comprising" is considered synonymous with the terms "including" or "containing" for applicable legal purposes. Any discussion of documents, acts, materials, devices, articles, and the like are included in the specification solely for the purpose of providing a context for the present invention. It is not suggested or represented that any or all these matters form part of the prior art base or were common general knowledge in the field relevant to the present invention.
[0028] In this disclosure, whenever a composition or an element or a group of elements is preceded with the transitional phrase "comprising", it is understood that we also contemplate the same composition, element, or group of elements with the transitional phrases "consisting of", "consisting", "selected from the group consisting of", "including", or "is" preceding the recitation of the composition, element, or group of elements, and vice versa.
[0029] The present invention is described hereinafter by various embodiments with reference to the accompanying drawing, wherein reference numerals used in the accompanying drawing correspond to the like elements throughout the description. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiment set forth herein. Rather, the embodiment is provided so that this disclosure will be thorough and complete and will fully convey the scope of the invention to those skilled in the art. In the following detailed description, numeric values and ranges are provided for various aspects of the implementations described. These values and ranges are to be treated as examples only and are not intended to limit the scope of the claims. In addition, several materials are identified as suitable for various facets of the implementations. These materials are to be treated as exemplary and are not intended to limit the scope of the invention.
[0030] The present invention relates to a generative adversarial network-based system for enhanced analysis and detection of rare brain tumor cases.
[0031] Data Preprocessing: In the preprocessing step, we first import the necessary libraries and set up the environment. This step employs pre-installed libraries such as NumPy, Pandas, and OpenCV. Additionally, it lists the available files in the input directory to show the directory structure.
[0032] The core of the preprocessing is the crop function, which efficiently extracts the brain region from MRI images. This function involves converting the image to grayscale, applying Gaussian blur, and utilizing thresholding, erosion, and dilation techniques to isolate the tumor. The identified contours help in determining the extreme points, ultimately resulting in a cropped image focused on the relevant brain region. The script iteratively processes images from the training and testing datasets, saving the cleaned images in a structured directory format for subsequent use in training and evaluation.
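A minimal sketch of such a crop routine is shown below, using OpenCV and NumPy. The threshold value, blur kernel size, and fallback behaviour are illustrative assumptions rather than the exact parameters used in the project.

```python
import cv2
import numpy as np

def crop_brain_region(image, blur_ksize=(5, 5), thresh_val=45):
    """Crop an MRI slice to the brain region via threshold + contour extreme points."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, blur_ksize, 0)

    # Binarize, then erode/dilate to suppress small noise regions
    _, thresh = cv2.threshold(gray, thresh_val, 255, cv2.THRESH_BINARY)
    thresh = cv2.erode(thresh, None, iterations=2)
    thresh = cv2.dilate(thresh, None, iterations=2)

    # The largest external contour is taken as the brain outline
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return image  # fall back to the original image if nothing is found
    c = max(contours, key=cv2.contourArea)

    # Extreme points of the contour define the crop box
    left, right = tuple(c[c[:, :, 0].argmin()][0]), tuple(c[c[:, :, 0].argmax()][0])
    top, bottom = tuple(c[c[:, :, 1].argmin()][0]), tuple(c[c[:, :, 1].argmax()][0])
    return image[top[1]:bottom[1], left[0]:right[0]]
```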
[0033] Neural Network Architecture and Training: The neural network architecture and training for brain tumor classification utilize the ResNet50 model with its pre-trained weights on ImageNet, excluding the ImageNet classifier. The model is adapted to our task with additional layers, including Global Average Pooling, Dropout, and a Dense layer with softmax activation for multi-class classification. The model is compiled using the Adam optimizer with a learning rate of 0.0001 and categorical cross-entropy loss.
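The following sketch shows one way such a ResNet50-based classifier could be assembled in Keras, matching the layers and optimizer settings described above; the input shape and dropout rate are assumptions.

```python
import tensorflow as tf
from tensorflow.keras.applications import ResNet50
from tensorflow.keras import layers, models

def build_classifier(num_classes=4, input_shape=(224, 224, 3)):
    """ResNet50 backbone (ImageNet weights, no top) with GAP, Dropout and softmax head."""
    base = ResNet50(weights="imagenet", include_top=False, input_shape=input_shape)
    x = layers.GlobalAveragePooling2D()(base.output)
    x = layers.Dropout(0.5)(x)  # dropout rate is an assumption
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    model = models.Model(inputs=base.input, outputs=outputs)

    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```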
[0034] The training process incorporates data augmentation using an ImageDataGenerator, introducing random rotations, shifts, and horizontal flips to enhance model robustness. Training is monitored using various callbacks, including EarlyStopping, ReduceLROnPlateau, and ModelCheckpoint for optimized performance. The training history, including loss and accuracy curves, is visualized using TensorBoard. After training completion, the model is evaluated on the test set, and performance metrics, such as confusion matrix and classification report, are presented for comprehensive assessment.
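A hedged sketch of the augmentation and callback setup is given below. The augmentation ranges, directory names (cleaned/Training, cleaned/Testing), epoch count, and checkpoint filename are illustrative assumptions; build_classifier refers to the sketch above.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.callbacks import (EarlyStopping, ReduceLROnPlateau,
                                        ModelCheckpoint, TensorBoard)

# Augmented training generator with random rotations, shifts, and horizontal flips
train_gen = ImageDataGenerator(rescale=1.0 / 255,
                               rotation_range=15,
                               width_shift_range=0.1,
                               height_shift_range=0.1,
                               horizontal_flip=True).flow_from_directory(
    "cleaned/Training", target_size=(224, 224), class_mode="categorical", batch_size=32)

# Validation/test images are only rescaled, not augmented
val_gen = ImageDataGenerator(rescale=1.0 / 255).flow_from_directory(
    "cleaned/Testing", target_size=(224, 224), class_mode="categorical", batch_size=32)

callbacks = [
    EarlyStopping(monitor="val_loss", patience=5, restore_best_weights=True),
    ReduceLROnPlateau(monitor="val_loss", factor=0.3, patience=2),
    ModelCheckpoint("best_model.keras", save_best_only=True),
    TensorBoard(log_dir="logs"),  # loss/accuracy curves viewable in TensorBoard
]

model = build_classifier()  # from the sketch above
model.fit(train_gen, validation_data=val_gen, epochs=30, callbacks=callbacks)
```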
[0035] Preprocessing the No Tumor dataset: The preprocessing of the 'no tumor' dataset for GAN training involves several essential steps. The provided code utilizes the OpenCV library to read, filter, and apply color mapping to the grayscale images. The images are then resized to a specified dimension, in this case, 200x200 pixels. Additional preprocessing steps specific to GAN training, such as normalization, are mentioned in the comments, indicating that further adjustments may be necessary based on the GAN architecture.
[0036] The processed images are saved in a specified directory, with the resulting images serving as the input for GAN training. The 'no tumor' class images from the training dataset are processed to ensure compatibility with the GAN model's requirements. It's important to note that the preprocessing steps may vary depending on the specific characteristics and requirements of the GAN architecture being employed.
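The sketch below shows how such a preprocessing pass over the 'no tumor' images might look with OpenCV. The source and destination paths, the filter kernel, and the colormap choice are assumptions; only the 200x200 target size comes from the description above.

```python
import os
import cv2

SRC_DIR = "cleaned/Training/notumor"   # hypothetical paths
DST_DIR = "gan_input/notumor"
os.makedirs(DST_DIR, exist_ok=True)

for fname in os.listdir(SRC_DIR):
    img = cv2.imread(os.path.join(SRC_DIR, fname), cv2.IMREAD_GRAYSCALE)
    if img is None:
        continue
    img = cv2.medianBlur(img, 3)                          # light filtering (assumed kernel)
    colored = cv2.applyColorMap(img, cv2.COLORMAP_BONE)   # color mapping of the grayscale slice
    resized = cv2.resize(colored, (200, 200))             # size specified in the description
    # Further normalization (e.g. scaling to [-1, 1]) may be needed depending on the GAN
    cv2.imwrite(os.path.join(DST_DIR, fname), resized)
```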
[0037] Training GAN for Data Augmentation: This section focuses on leveraging a Generative Adversarial Network (GAN) to augment the dataset, specifically targeting the 'no tumor' class images. The GAN architecture comprises a generator, discriminator, and the GAN model itself. The generator is designed to transform random noise into synthetic images resembling the 'no tumor' class, employing dense layers for this purpose. On the other hand, the discriminator serves as a binary classifier to distinguish between real and generated images, using dense layers for binary classification.
[0038] In the GAN model, the discriminator's weights are frozen during training, enabling the generator to learn and produce realistic images that may deceive the discriminator. The entire GAN is compiled using the Adam optimizer and binary cross-entropy loss. The training process involves iteratively generating synthetic images using random noise and comparing them with real images from the 'no tumor' class. The discriminator is trained to differentiate between real and generated images, while the generator strives to create images closely resembling the 'no tumor' class.
[0039] The training loop executes over multiple epochs, and the progress is monitored by printing the discriminator and generator losses at intervals. Following training, the GAN model is saved for subsequent use. This approach significantly contributes to dataset augmentation, enhancing both the diversity and quantity of the dataset. The augmented dataset, enriched by synthetic images generated by the GAN, aims to improve the neural network model's robustness and generalization capabilities in subsequent stages of the project.
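Below is a minimal dense-layer GAN sketch in Keras following the description above (discriminator frozen inside the combined model, Adam optimizer, binary cross-entropy, periodic loss printing, model saving). The latent dimension, working resolution, layer widths, and epoch count are assumptions for illustration, not the project's exact configuration.

```python
import numpy as np
from tensorflow.keras import layers, models, optimizers

LATENT_DIM, IMG_DIM = 100, 64 * 64   # working resolution is an assumption for this sketch

def build_generator():
    # Dense layers transform random noise into a flattened synthetic image
    return models.Sequential([
        layers.Dense(256, activation="relu", input_dim=LATENT_DIM),
        layers.Dense(512, activation="relu"),
        layers.Dense(IMG_DIM, activation="tanh"),   # images assumed scaled to [-1, 1]
    ])

def build_discriminator():
    # Dense binary classifier: real vs. generated
    d = models.Sequential([
        layers.Dense(512, activation="relu", input_dim=IMG_DIM),
        layers.Dense(256, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])
    d.compile(optimizer=optimizers.Adam(1e-4), loss="binary_crossentropy")
    return d

generator, discriminator = build_generator(), build_discriminator()
discriminator.trainable = False                     # freeze D inside the combined model
gan = models.Sequential([generator, discriminator])
gan.compile(optimizer=optimizers.Adam(1e-4), loss="binary_crossentropy")

def train_gan(real_images, epochs=2000, batch=32):
    """real_images: float array of shape (N, IMG_DIM), scaled to [-1, 1]."""
    for epoch in range(epochs):
        idx = np.random.randint(0, real_images.shape[0], batch)
        noise = np.random.normal(0, 1, (batch, LATENT_DIM))
        fake = generator.predict(noise, verbose=0)

        d_loss_real = discriminator.train_on_batch(real_images[idx], np.ones((batch, 1)))
        d_loss_fake = discriminator.train_on_batch(fake, np.zeros((batch, 1)))
        g_loss = gan.train_on_batch(noise, np.ones((batch, 1)))  # G tries to fool D

        if epoch % 200 == 0:
            print(f"epoch {epoch}: d_real={d_loss_real:.3f} "
                  f"d_fake={d_loss_fake:.3f} g={g_loss:.3f}")
    gan.save("notumor_gan.keras")
```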
[0040] Comprehensive Tumor Characterization: The primary objective of this section is to identify and understand unique and rare cases within the dataset. This involves a multi-step process aimed at gaining insights into various aspects of tumor characteristics. We focus on tumors with an average color below a specified threshold. This allows us to pinpoint cases that deviate from the norm, aiding in the analysis of less common tumor types. The following features are checked for:
[0041] Irregular Tumor Detection: Here, we employ image processing techniques to accentuate the edges of tumors and analyze their shapes. The goal is to identify tumors with irregular shapes or those located in atypical positions, providing a deeper understanding of the structural diversity among tumors (Figure 1: Irregular Tumor).
[0042] Enhancement Pattern Recognition: This step involves identifying tumors with specific visual patterns. Employing image processing methods, we enhance these patterns to recognize variations in how tumors appear, contributing valuable insights into the nuanced characteristics of different tumor types (Figure 2: Enhanced Tumor).
[0043] Calcification Identification: This step focuses on determining which tumors exhibit calcifications, characterized by hard deposits. Techniques are applied to identify these hardened areas within the images, aiding in the classification of tumors based on their structural features (Figure 3: Calcified Tumor).
[0044] Cystic Component Detection: This step aims to identify tumors with cysts, i.e., fluid-filled sacs. Using image analysis techniques, cystic regions within tumors are highlighted, offering insights into the prevalence of cysts within the dataset (Figure 4: Cystic Tumor).
[0045] Hemorrhage or Necrosis Identification: This step targets tumors displaying signs of bleeding or cell death. Simple rules, such as examining brightness levels in specific areas of the images, are employed to identify tumors with these critical features (Figure 5: Hemorrhage/Necrosis Tumor).
[0046] Together, these checks enable a detailed exploration of various tumor characteristics, ranging from shapes and patterns to specific features like calcifications and cysts. Through these steps, a thorough understanding of the diverse characteristics present in tumor images is achieved; a minimal heuristic sketch of such checks is given below.
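The following toy sketch illustrates how such heuristic checks might be expressed, assuming a grayscale input image and illustrative thresholds; it is not the exact rule set used in the project.

```python
import cv2
import numpy as np

def rare_case_flags(gray, dark_thresh=60, bright_thresh=200):
    """Toy heuristics mirroring the checks above; all thresholds are assumptions."""
    flags = {}

    # Average-intensity screen for atypical (unusually dark) scans
    flags["low_average_intensity"] = gray.mean() < dark_thresh

    # Irregular shape: compare the largest contour's area with its convex hull area
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        c = max(contours, key=cv2.contourArea)
        hull_area = cv2.contourArea(cv2.convexHull(c))
        solidity = cv2.contourArea(c) / hull_area if hull_area > 0 else 1.0
        flags["irregular_shape"] = solidity < 0.8
    else:
        flags["irregular_shape"] = False

    # Calcification / hemorrhage proxy: fraction of very bright pixels
    flags["possible_calcification_or_hemorrhage"] = (gray > bright_thresh).mean() > 0.05

    # Cystic component proxy: a sizeable dark region within the slice
    dark_fraction = (gray < dark_thresh).mean()
    flags["possible_cystic_component"] = 0.05 < dark_fraction < 0.4

    return flags
```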
[0047] Creating the Final Labelled Dataset: The indices common to the above features, corresponding to images from our No Tumor dataset, were collected into a list, and a new labeled dataset was created by assigning the images at these indices to the "Rare Tumor" class and the remaining images to the "No Tumor" class.
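For illustration, the labelling step might look like the following sketch, where the index sets and the dataset size are hypothetical placeholders.

```python
# Index sets flagged by each heuristic (hypothetical placeholder values)
irregular_idx = {3, 17, 42}
enhanced_idx  = {3, 17, 88}
calcified_idx = {3, 17}

# "Common indices" across the checked features form the rare-tumor set
rare_indices = irregular_idx & enhanced_idx & calcified_idx   # -> {3, 17}

num_no_tumor_images = 100  # placeholder for the size of the preprocessed 'no tumor' set
labels = ["Rare Tumor" if i in rare_indices else "No Tumor"
          for i in range(num_no_tumor_images)]
```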
[0048] Training and Evaluating the Binary Classification Model: In this section, we build and train a Convolutional Neural Network (CNN) model for image classification. The labeled dataset is split into training and testing sets for model evaluation. Labels are determined based on the common indices indicating which images belong to rare cases. Images are read from the directory and resized to a consistent dimension (128x128) to ensure uniformity in the input data.
[0049] Classification of Rare Cases of Tumor (Figure 6): Data augmentation is employed using Keras' ImageDataGenerator, which applies various transformations like rotation, shifting, and flipping to artificially increase the size of the training dataset. This helps improve the model's generalization ability by exposing it to diverse variations of the training data.
[0050] The CNN model is defined using Keras' Sequential API, comprising convolutional layers, max-pooling layers, flattening layers, and dense layers. The model architecture is kept relatively simple for this binary classification task, with ReLU activation functions in intermediate layers and a sigmoid activation function in the output layer for binary classification.
[0051] After defining the model, it's compiled with appropriate loss and optimization functions. Binary crossentropy loss and the Adam optimizer are chosen, and accuracy is selected as the metric for evaluation. The model is then trained on the augmented training data for a specified number of epochs, with a batch size of 32.
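A sketch of such a binary CNN, assembled and trained as described above, is given below. The filter counts and dense-layer width are assumptions; X_train, y_train, X_test, and y_test stand for the 128x128 images and 0/1 labels prepared earlier.

```python
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Simple CNN for the rare-vs-no-tumor binary task (layer sizes are assumptions)
cnn = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(128, 128, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # sigmoid output for binary classification
])
cnn.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

aug = ImageDataGenerator(rotation_range=15, width_shift_range=0.1,
                         height_shift_range=0.1, horizontal_flip=True)

# X_train, y_train, X_test, y_test: 128x128 images and binary labels prepared as described
cnn.fit(aug.flow(X_train, y_train, batch_size=32), epochs=10)
test_loss, test_acc = cnn.evaluate(X_test, y_test)
print(f"Test accuracy: {test_acc:.2%}")
```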
[0052] Finally, the trained model is evaluated on the test set, and its performance metrics, particularly test accuracy, are printed to assess how well it generalizes to unseen data.
[0053] Result Analysis: We analyze the trained Convolutional Neural Network (CNN) model for binary image classification, which achieves a test accuracy of approximately 90%. The primary objective is to distinguish between rare and non-rare cases within our prepared dataset, with rare cases identified based on the specific indices mentioned above. The CNN architecture comprises convolutional layers, max-pooling layers, flattening layers, and dense layers. Rectified linear unit (ReLU) activation functions are applied in intermediate layers, and a sigmoid activation function is employed in the output layer for binary classification. Binary cross-entropy serves as the loss function, and the Adam optimizer is utilized, a common configuration for binary classification tasks.
[0054] The training process involves 10 epochs with a batch size of 32, during which the model learns meaningful patterns and features from the training data. Data augmentation, encompassing techniques such as rotation, shifting, and flipping, contributes to the model's ability to generalize effectively to new and unseen examples.
[0055] Following training, the model undergoes evaluation on a separate test set. The impressive test accuracy of 90% signifies the model's proficiency in distinguishing between rare and non-rare cases. However, it is essential to contextualize this accuracy within the specific requirements of the research or application. If the application demands high precision and recall for rare cases, further analysis and potential adjustments to the model or training process may be warranted.
[0056] Conclusion and Future work: This project integrates advanced image processing and deep learning for brain tumor detection in MRI images, achieving a notable 90% test accuracy with a ResNet50-based CNN. The GAN-driven data augmentation contributes to model robustness. Future work could explore alternative neural network architectures and optimization of the GAN. Further refinement in rare tumor case identification and incorporation of interpretability techniques can enhance model transparency. This project lays a strong foundation for future research, promising improved brain tumor detection systems for clinical applications. Ongoing exploration and refinement hold the potential to positively impact medical imaging and patient care.
[0057] Various modifications to these embodiments are apparent to those skilled in the art from the description and the accompanying drawings. The principles associated with the various embodiments described herein may be applied to other embodiments. Therefore, the description is not intended to be limited to the embodiments shown along with the accompanying drawings but is to be accorded the broadest scope consistent with the principles and the novel and inventive features disclosed or suggested herein. Accordingly, the invention is anticipated to hold on to all other such alternatives, modifications, and variations that fall within the scope of the present invention and appended claims.

CLAIMS
We Claim:
1) A system for enhanced analysis and detection of rare brain tumors in MRI images, the system comprising:
- a preprocessing module configured to:
- load MRI image data, apply grayscale conversion, Gaussian blur, and thresholding;
- identify tumor contours and crop images to the brain region for subsequent analysis;
- a neural network architecture employing a pre-trained convolutional neural network model, the model configured for multi-class classification with modified output layers and optimized using the Adam optimizer;
- a Generative Adversarial Network (GAN) module designed to augment the dataset by generating synthetic images of the "no tumor" class using a generator and discriminator network, wherein the discriminator distinguishes between real and generated images;
- an analysis module to characterize and classify rare brain tumors based on features including irregular shapes, enhancement patterns, calcifications, cystic components, and necrotic or hemorrhagic regions;
- wherein the neural network and GAN modules are trained using an augmented dataset to improve classification accuracy of rare tumor cases.
2) The system as claimed in claim 1, wherein the pre-processing module further comprises steps for:
- resizing images to a specified dimension,
- isolating tumors based on brightness levels, and
- saving preprocessed images in a structured directory format for training and evaluation.
3) The system as claimed in claim 1, wherein the neural network architecture is based on the ResNet50 model and includes additional layers for classification, wherein the model uses a categorical cross-entropy loss function.
4) The system as claimed in claim 1, wherein the GAN module generates synthetic images through:
- a generator network that converts random noise into synthetic "no tumor" images, and
- a discriminator network trained to differentiate between real and generated images.

Documents

Name | Date
202431088190-COMPLETE SPECIFICATION [14-11-2024(online)].pdf | 14/11/2024
202431088190-DECLARATION OF INVENTORSHIP (FORM 5) [14-11-2024(online)].pdf | 14/11/2024
202431088190-DRAWINGS [14-11-2024(online)].pdf | 14/11/2024
202431088190-EDUCATIONAL INSTITUTION(S) [14-11-2024(online)].pdf | 14/11/2024
202431088190-EVIDENCE FOR REGISTRATION UNDER SSI [14-11-2024(online)].pdf | 14/11/2024
202431088190-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [14-11-2024(online)].pdf | 14/11/2024
202431088190-FORM 1 [14-11-2024(online)].pdf | 14/11/2024
202431088190-FORM FOR SMALL ENTITY(FORM-28) [14-11-2024(online)].pdf | 14/11/2024
202431088190-FORM-9 [14-11-2024(online)].pdf | 14/11/2024
202431088190-POWER OF AUTHORITY [14-11-2024(online)].pdf | 14/11/2024
202431088190-REQUEST FOR EARLY PUBLICATION(FORM-9) [14-11-2024(online)].pdf | 14/11/2024
