AUTOMATED BRAIN TUMOUR DIAGNOSIS SYSTEM AND METHOD THEREOF

ORDINARY APPLICATION

Published

Filed on 20 November 2024

Abstract

Disclosed herein is an automated brain tumour diagnosis system and method thereof (100) that comprises a magnetic resonance imaging (MRI) input unit (102) and a preprocessing unit (104) for image normalization and augmentation. A hybrid deep learning architecture (106) captures structural patterns, enabling feature extraction and classification, while a zero-shot and few-shot learning module (108) supports detection of rare tumour types. Multi-modal data integration (110) combines various MRI modalities, with a real-time prediction and visualization module (112) providing tumour mapping. A secure cloud-based fire store network (114) handles data storage and encryption, while an adaptive learning unit (116) retrains the deep learning model periodically. Additionally, a predictive analytics module (118) forecasts tumour progression, and a user interface (120) facilitates interactions, delivering clear diagnostic insights for healthcare professionals.

Patent Information

Application ID: 202441089819
Invention Field: COMPUTER SCIENCE
Date of Application: 20/11/2024
Publication Number: 48/2024

Inventors

Name | Address | Country | Nationality
SARITHA SHETTY | DEPARTMENT OF MASTER OF COMPUTER APPLICATIONS, NMAM INSTITUTE OF TECHNOLOGY NITTE (DEEMED TO BE UNIVERSITY), NITTE - 574110, KARNATAKA, INDIA | India | India
JOYSTON MENEZES | UNIVERSITY OF SOUTHERN CALIFORNIA, LOS ANGELES, CALIFORNIA | India | India
MANJUNATH M | DEPARTMENT OF CIVIL ENGINEERING, NMAM INSTITUTE OF TECHNOLOGY, NITTE (DEEMED TO BE UNIVERSITY), NITTE - 574110, KARNATAKA, INDIA | India | India
UMA R | DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING, NITTE MEENAKSHI INSTITUTE OF TECHNOLOGY, BANGALORE | India | India

Applicants

Name | Address | Country | Nationality
NITTE (DEEMED TO BE UNIVERSITY) | 6TH FLOOR, UNIVERSITY ENCLAVE, MEDICAL SCIENCES COMPLEX, DERALAKATTE, MANGALURU, KARNATAKA 575018 | India | India

Specification

Description:

FIELD OF DISCLOSURE
[0001] The present disclosure generally relates to the field of medical image analysis and computer-aided diagnosis and, more specifically, to an automated brain tumour diagnosis system and method thereof.
BACKGROUND OF THE DISCLOSURE
[0002] Firstly, the system enhances diagnostic accuracy and reliability through advanced deep learning models, incorporating both Convolutional Neural Networks and Transformer layers. By utilizing a hybrid architecture that integrates self-attention mechanisms and zero-shot learning capabilities, the system processes and analyses complex patterns in Magnetic Resonance Imaging images to distinguish between various tumour types with high precision. This approach reduces the chances of misdiagnosis, supports early-stage detection, and allows medical professionals to make timely, informed treatment decisions.
[0003] Secondly, the system supports multi-modal data integration, allowing it to utilize information from multiple Magnetic Resonance Imaging modalities for a comprehensive analysis. Unlike traditional models that focus on a single imaging modality, this invention combines data from T1-weighted, T2-weighted, and other relevant Magnetic Resonance Imaging scans to construct a robust dataset. The multi-modal integration enables a more detailed and accurate diagnosis, enhancing the system's ability to detect complex tumour structures that may not appear clearly on a single imaging modality, thus improving the overall reliability of diagnostic results.
[0004] Finally, the invention offers a practical solution with real-time prediction and visualization modules. The system not only diagnoses but also provides detailed, interpretable visual mappings of detected tumour areas within Magnetic Resonance Imaging scans, making results easily understandable for non-specialists as well. The user-friendly interface further allows healthcare professionals, even with limited technical knowledge, to efficiently use the system. The added advantage of predictive analytics enables the system to assess potential tumour growth over time, which proves valuable for continuous monitoring and treatment planning. This predictive aspect allows healthcare providers to anticipate disease progression and adjust treatment strategies accordingly, improving patient care and outcome.
[0005] Firstly, conventional brain tumour diagnostic systems relying primarily on standard convolutional neural networks often lack the ability to handle complex tumour structures with high accuracy. These systems tend to depend on single-modal Magnetic Resonance Imaging data, missing out on the comprehensive insights available through multi-modal analysis. As a result, such systems may overlook intricate patterns or variations in tumour structure, leading to increased diagnostic uncertainty and higher chances of misclassification, particularly in cases involving atypical or rare tumour types.
[0006] Secondly, traditional diagnostic systems often require extensive labelled datasets for training, limiting their adaptability to new or less common brain tumour cases. Many existing models lack the zero-shot or few-shot learning capabilities necessary for generalizing to rare or novel tumour types without substantial labelled examples. This limitation reduces the system's flexibility and restricts its application in scenarios where data availability is limited, hindering effective diagnosis across diverse patient demographics and tumour types.
[0007] Lastly, existing systems often do not incorporate real-time prediction and visualization, making their diagnostic results less accessible and actionable for medical professionals. The absence of a user-friendly interface and real-time visual mapping tools results in prolonged timeframes for interpreting results, requiring additional specialist input to accurately assess and communicate findings. This lack of interpretability slows down clinical workflows and reduces the efficiency of patient diagnosis and treatment planning, impacting overall healthcare delivery quality.
[0008] Thus, in light of the above-stated discussion, there exists a need for an automated brain tumour diagnosis system and method thereof.
SUMMARY OF THE DISCLOSURE
[0009] The following is a summary description of illustrative embodiments of the invention. It is provided as a preface to assist those skilled in the art to more rapidly assimilate the detailed design discussion which ensues and is not intended in any way to limit the scope of the claims which are appended hereto in order to particularly point out the invention.
[0010] According to illustrative embodiments, the present disclosure focuses on an automated brain tumour diagnosis system and method thereof which overcomes the above-mentioned disadvantages, or at least provides users with a useful or commercial choice.
[0011] An objective of the present disclosure is to enable early and accurate detection of brain tumours by integrating advanced deep learning models, providing a reliable diagnosis to enhance patient outcomes.
[0012] Another objective of the present disclosure is to improve diagnostic accuracy by incorporating multi-modal Magnetic Resonance Imaging data, allowing for a more comprehensive and nuanced analysis of brain tumour structures.
[0013] Another objective of the present disclosure is to reduce diagnostic time by automating the process, helping medical professionals make timely decisions and optimize treatment planning.
[0014] Another objective of the present disclosure is to increase accessibility by offering a user-friendly interface that allows non-specialist users to interpret diagnostic results accurately and intuitively.
[0015] Another objective of the present disclosure is to enable real-time predictions and visualizations, allowing healthcare providers to assess results immediately and implement effective patient management strategies without delay.
[0016] Another objective of the present disclosure is to support zero-shot and few-shot learning capabilities, enhancing the model's adaptability and enabling it to generalize effectively to rare or newly discovered tumour types without requiring large labelled datasets.
[0017] Another objective of the present disclosure is to improve the interpretability of diagnostic results by including visualization and mapping tools, enabling healthcare professionals to understand tumour locations and patterns more effectively.
[0018] Another objective of the present disclosure is to secure patient data through cloud-based encryption and storage solutions, maintaining data privacy and compliance with medical data protection standards.
[0019] Yet another objective of the present disclosure is to enable continuous model improvement by incorporating adaptive learning, allowing the system to periodically retrain on new data to increase diagnostic accuracy and remain up-to-date with recent advances.
[0020] Yet another objective of the present disclosure is to assist radiologists and medical professionals in decision-making by providing predictive analytics for tumour growth estimation, supporting proactive treatment planning and enhancing patient care outcomes.
[0021] In light of the above, in one aspect of the present disclosure, an automated brain tumour diagnosis system is disclosed herein. The system comprises a magnetic resonance imaging (MRI) input unit configured to acquire comprehensive multi-modal magnetic resonance imaging (MRI) data of a patient's brain. The system includes a preprocessing unit, operatively connected to the magnetic resonance imaging (MRI) input unit, configured to perform image normalization, resizing, and augmentation on the acquired magnetic resonance imaging (MRI) data. The system also includes a hybrid deep learning architecture integrated within the preprocessing unit, configured for capturing complex spatial hierarchies and structural patterns within the magnetic resonance imaging (MRI) data and supporting detailed feature extraction and classification of brain tumour types based on intricate tissue characteristics. The system also includes a zero-shot and few-shot learning module integrated within the preprocessing unit, configured to detect and classify rare or newly observed tumour types by leveraging minimal labelled data. The system also includes a multi-modal data integration unit connected to the preprocessing unit, configured to synthesize data across multiple magnetic resonance imaging (MRI) modalities. The system also includes a real-time prediction and visualization module integrated within the multi-modal data integration unit, configured to provide immediate tumour mapping and interpretation visualizations, offering clear, interpretable diagnostic insights. The system also includes a secure cloud-based fire store network operatively connected to the preprocessing unit and the multi-modal data integration unit, configured to securely store, retrieve, and encrypt diagnostic data and patient magnetic resonance imaging (MRI) images. The system also includes an adaptive learning unit, operatively connected to both the secure cloud-based fire store network and the preprocessing unit, configured to periodically retrain the hybrid deep learning architecture based on newly available diagnostic data. The system also includes a predictive analytics module integrated within the adaptive learning unit, configured to apply historical patient data analysis to forecast tumour progression. The system also includes a user interface configured to facilitate user interactions with the real-time prediction and visualization module, the secure cloud-based fire store network, and the predictive analytics module, and to provide an accessible and user-friendly interface for healthcare professionals to monitor, interpret, and interact with diagnostic results.
[0022] In one embodiment, the magnetic resonance imaging (MRI) input unit is configured to acquire images from multiple magnetic resonance imaging (MRI) modalities, including T1-weighted, T2-weighted, and FLAIR sequences, to provide a comprehensive data set.
[0023] In one embodiment, the preprocessing unit further comprises an anomaly detection sub-module, configured to identify and exclude low-quality or corrupted images prior to processing.
[0024] In one embodiment, the hybrid deep learning architecture includes a transformer module in combination with convolutional neural networks (CNNs).
[0025] In one embodiment, the zero-shot and few-shot learning module is further configured to perform context-based classification by analysing minimal examples of rare tumour types.
[0026] In one embodiment, the multi-modal data integration unit further includes a data weighting mechanism, configured to assign importance to each magnetic resonance imaging (MRI) modality based on diagnostic relevance.
[0027] In one embodiment, the real-time prediction and visualization module is further configured to overlay detected tumour regions on the original magnetic resonance imaging (MRI) images.
[0028] In one embodiment, the secure cloud-based fire store network comprises role-based access control, allowing only authorized healthcare professionals to access or modify stored diagnostic data.
[0029] In one embodiment, the user interface further comprises multi-language support.
[0030] In light of the above, in one aspect of the present disclosure, a method for automated brain tumour diagnosis from magnetic resonance imaging (MRI) data is disclosed herein. The method comprises acquiring multi-modal magnetic resonance imaging (MRI) data of a patient's brain through a magnetic resonance imaging (MRI) input unit. The method includes preprocessing the acquired magnetic resonance imaging (MRI) data by utilizing a preprocessing unit. The method also includes applying the pre-processed magnetic resonance imaging (MRI) data to a hybrid deep learning architecture integrated within the preprocessing unit. The method also includes analysing the pre-processed magnetic resonance imaging (MRI) data with a zero-shot and few-shot learning module embedded within the preprocessing unit. The method also includes integrating the output of the hybrid deep learning architecture through a multi-modal data integration unit connected to the preprocessing unit. The method also includes providing immediate diagnostic results and visualization of tumour mapping through a real-time prediction and visualization module integrated within the multi-modal data integration unit. The method also includes storing and retrieving diagnostic data through a secure cloud-based fire store network operatively connected to the preprocessing unit and the multi-modal data integration unit. The method also includes retraining the hybrid deep learning architecture periodically by utilizing an adaptive learning unit connected to both the secure cloud-based fire store network and the preprocessing unit. The method also includes performing predictive analytics on historical patient data through a predictive analytics module integrated within the adaptive learning unit.
[0031] These and other advantages will be apparent from the present application of the embodiments described herein.
[0032] The preceding is a simplified summary to provide an understanding of some embodiments of the present invention. This summary is neither an extensive nor exhaustive overview of the present invention and its various embodiments. The summary presents selected concepts of the embodiments of the present invention in a simplified form as an introduction to the more detailed description presented below. As will be appreciated, other embodiments of the present invention are possible utilizing, alone or in combination, one or more of the features set forth above or described in detail below.
[0033] These elements, together with the other aspects of the present disclosure and various features are pointed out with particularity in the claims annexed hereto and form a part of the present disclosure. For a better understanding of the present disclosure, its operating advantages, and the specified object attained by its uses, reference should be made to the accompanying drawings and descriptive matter in which there are illustrated exemplary embodiments of the present disclosure.
BRIEF DESCRIPTION OF THE DRAWINGS
[0034] To describe the technical solutions in the embodiments of the present disclosure or in the prior art more clearly, the following briefly describes the accompanying drawings required for describing the embodiments or the prior art. Apparently, the accompanying drawings in the following description merely show some embodiments of the present disclosure, and a person of ordinary skill in the art can derive other implementations from these accompanying drawings without creative efforts. All of the embodiments or the implementations shall fall within the protection scope of the present disclosure.
[0035] The advantages and features of the present disclosure will become better understood with reference to the following detailed description taken in conjunction with the accompanying drawing, in which:
[0036] FIG. 1 illustrates a block diagram of an automated brain tumour diagnosis system and method thereof, in accordance with an exemplary embodiment of the present disclosure;
[0037] FIG. 2 illustrates a flowchart of an automated brain tumour diagnosis system, in accordance with an exemplary embodiment of the present disclosure;
[0038] FIG. 3 illustrates a flowchart of a method for automated brain tumour diagnosis from magnetic resonance imaging (MRI) data, in accordance with an exemplary embodiment of the present disclosure;
[0039] FIG. 4 illustrates a perspective view of the flowchart of the process, in accordance with an exemplary embodiment of the present disclosure;
[0040] FIG. 5 illustrates a perspective view of the input visuals, in accordance with an exemplary embodiment of the present disclosure;
[0041] FIG. 6 illustrates a perspective view of the accuracies of various models, in accordance with an exemplary embodiment of the present disclosure.
[0042] Like reference numerals refer to like parts throughout the description of the several views of the drawings.
[0043] The automated brain tumour diagnosis system and method thereof is illustrated in the accompanying drawings, in which like reference letters indicate corresponding parts in the various figures. It should be noted that the accompanying figures are intended to present illustrations of exemplary embodiments of the present disclosure. These figures are not intended to limit the scope of the present disclosure. It should also be noted that the accompanying figures are not necessarily drawn to scale.
DETAILED DESCRIPTION OF THE DISCLOSURE
[0044] The following is a detailed description of embodiments of the disclosure depicted in the accompanying drawings. The embodiments are described in sufficient detail to communicate the disclosure. However, the amount of detail offered is not intended to limit the anticipated variations of embodiments; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present disclosure.
[0045] In the following description, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the present disclosure. It may be apparent to one skilled in the art that embodiments of the present disclosure may be practiced without some of these specific details.
[0046] Various terms as used herein are shown below. To the extent a term is used, it should be given the broadest definition persons in the pertinent art have given that term as reflected in printed publications and issued patents at the time of filing.
[0047] The terms "a" and "an" herein do not denote a limitation of quantity, but rather denote the presence of at least one of the referenced items.
[0048] The terms "having", "comprising", "including", and variations thereof signify the presence of a component.
[0049] Reference is now made to FIG. 1 through FIG. 6 to describe various exemplary embodiments of the present disclosure. FIG. 1 illustrates a block diagram of an automated brain tumour diagnosis system and method thereof 100, in accordance with an exemplary embodiment of the present disclosure.
[0050] The system 100 may include a magnetic resonance imaging (MRI) input unit 102 configured to acquire comprehensive multi-modal magnetic resonance imaging (MRI) data of a patient's brain, a preprocessing unit 104, operatively connected to the magnetic resonance imaging (MRI) input unit 102, configured to perform image normalization, resizing, and augmentation on the acquired magnetic resonance imaging (MRI) data, a hybrid deep learning architecture 106 integrated within the preprocessing unit 104, configured for capturing complex spatial hierarchies and structural patterns within the magnetic resonance imaging (MRI) data and supporting detailed feature extraction and classification of brain tumour types based on intricate tissue characteristics, a zero-shot and few-shot learning module 108 integrated within the preprocessing unit 104, configured to detect and classify rare or newly observed tumour types by leveraging minimal labelled data, a multi-modal data integration unit 110 connected to the preprocessing unit 104, configured to synthesize data across multiple magnetic resonance imaging (MRI) modalities, a real-time prediction and visualization module 112 integrated within the multi-modal data integration unit 110, configured to provide immediate tumour mapping and interpretation visualizations, offering clear, interpretable diagnostic insights, a secure cloud-based fire store network 114 operatively connected to the preprocessing unit 104 and the multi-modal data integration unit 110, configured to securely store, retrieve, and encrypt diagnostic data and patient magnetic resonance imaging (MRI) images, an adaptive learning unit 116, operatively connected to both the secure cloud-based fire store network 114 and the preprocessing unit 104, configured to periodically retrain the hybrid deep learning architecture 106 based on newly available diagnostic data, a predictive analytics module 118 integrated within the adaptive learning unit 116, configured to apply historical patient data analysis to forecast tumour progression, and a user interface 120 connected to the multi-modal data integration unit 110, the secure cloud-based fire store network 114 and the adaptive learning unit 116 and configured to facilitate user interactions with the real-time prediction and visualization module 112, the secure cloud-based fire store network 114, and the predictive analytics module 118, and to provide an accessible and user-friendly interface for healthcare professionals to monitor, interpret, and interact with diagnostic results.
[0051] The magnetic resonance imaging (MRI) input unit 102 is configured to acquire images from multiple magnetic resonance imaging (MRI) modalities, including T1-weighted, T2-weighted, and FLAIR sequences, to provide a comprehensive data set.
[0052] The preprocessing unit 104, connected to the magnetic resonance imaging (MRI) input unit 102, further comprises an anomaly detection sub-module, configured to identify and exclude low-quality or corrupted images prior to processing.
[0053] The hybrid deep learning architecture 106 includes a transformer module in combination with convolutional neural networks (CNNs).
[0054] The zero-shot and few-shot learning module 108 is further configured to perform context-based classification by analysing minimal examples of rare tumour types.
[0055] The multi-modal data integration unit 110 further includes a data weighting mechanism, configured to assign importance to each magnetic resonance imaging (MRI) modality based on diagnostic relevance.
[0056] The real-time prediction and visualization module 112 is further configured to overlay detected tumour regions on the original magnetic resonance imaging (MRI) images.
[0057] The secure cloud-based fire store network 114 comprises role-based access control, allowing only authorized healthcare professionals to access or modify stored diagnostic data.
[0058] The user interface 120 further comprises multi-language support.
[0059] The method 100 may include acquiring multi-modal magnetic resonance imaging (MRI) data of a patient's brain through a magnetic resonance imaging (MRI) input unit 102, preprocessing the acquired magnetic resonance imaging (MRI) data by utilizing a preprocessing unit 104, applying the pre-processed magnetic resonance imaging (MRI) data to a hybrid deep learning architecture 106 integrated within the preprocessing unit 104, analysing the pre-processed magnetic resonance imaging (MRI) data with a zero-shot and few-shot learning module 108 embedded within the preprocessing unit 104, integrating the output of the hybrid deep learning architecture 106 through a multi-modal data integration unit 110 connected to the preprocessing unit 104, providing immediate diagnostic results and visualization of tumour mapping through a real-time prediction and visualization module 112 integrated within the multi-modal data integration unit 110, storing and retrieving diagnostic data through a secure cloud-based fire store network 114 operatively connected to the preprocessing unit 104 and the multi-modal data integration unit 110, retraining the hybrid deep learning architecture 106 periodically by utilizing an adaptive learning unit 116 connected to both the secure cloud-based fire store network 114 and the preprocessing unit 104, and performing predictive analytics on historical patient data through a predictive analytics module 118 integrated within the adaptive learning unit 116.
[0060] The magnetic resonance imaging (MRI) input unit 102 plays an essential role in the automated brain tumour diagnosis system 100, focusing on the acquisition of high-quality, comprehensive magnetic resonance imaging (MRI) data from multiple imaging modalities. By gathering a diverse set of magnetic resonance imaging (MRI) sequences, including T1-weighted, T2-weighted, and FLAIR sequences, the magnetic resonance imaging (MRI) input unit 102 captures detailed structural and textural information of a patient's brain. This multi-modal approach provides a broad, integrative view of potential brain anomalies, allowing the system 100 to analyse various tissue contrasts and anatomical details critical for detecting complex tumour structures.
[0061] The magnetic resonance imaging (MRI) input unit 102 interfaces directly with the preprocessing unit 104, ensuring smooth and uninterrupted data flow between components. By securing and transmitting multi-modal images with minimal latency, the magnetic resonance imaging (MRI) input unit 102 supports efficient preprocessing, where images undergo normalization, resizing, and data augmentation. The high-definition input from the magnetic resonance imaging (MRI) input unit 102 aids in the accurate analysis and processing within the hybrid deep learning architecture 106, enhancing diagnostic reliability.
[0062] Additionally, the magnetic resonance imaging (MRI) input unit 102 maintains adaptability for various imaging conditions and can capture data from multiple imaging environments, making the automated brain tumour diagnosis system 100 versatile across different clinical settings. By ensuring robust, multi-modal data acquisition, the magnetic resonance imaging (MRI) input unit 102 reinforces the system's ability to process intricate brain imaging details, facilitating precise and timely tumour detection.
[0063] Ultimately, the magnetic resonance imaging (MRI) input unit 102 stands as a foundational component within the system 100, establishing a high standard for data quality and consistency across all subsequent processes, from preprocessing and feature extraction to final diagnosis and visualization. This integrative function solidifies the magnetic resonance imaging (MRI) input unit 102's role as a cornerstone in the automation of accurate and efficient brain tumour diagnosis.
[0064] The preprocessing unit 104 in the automated brain tumour diagnosis system 100 is responsible for enhancing the magnetic resonance imaging (MRI) data quality, optimizing images for advanced analysis and diagnostic accuracy. Connected directly to the magnetic resonance imaging (MRI) input unit 102, the preprocessing unit 104 executes essential tasks, including image normalization, resizing, and augmentation, ensuring that each image achieves a consistent quality standard. This preprocessing phase supports the hybrid deep learning architecture 106 by establishing a uniform input that improves feature extraction and pattern recognition during subsequent stages of analysis.
[0065] Within the preprocessing unit 104, various transformations standardize the magnetic resonance imaging (MRI) data, allowing the automated brain tumour diagnosis system 100 to process images across different dimensions, contrasts, and brightness levels. By resizing the magnetic resonance imaging (MRI) data, the preprocessing unit 104 ensures that each image aligns with the required input dimensions for the hybrid deep learning architecture 106, maintaining diagnostic consistency across different patients and imaging scenarios. Additionally, data augmentation performed within the preprocessing unit 104 expands the dataset by generating modified copies of images, which assists in training the deep learning model to recognize tumours across a range of variations, thereby improving the model's accuracy and robustness.
[0066] Overall, the preprocessing unit 104 establishes an essential foundation for high-quality diagnostic analysis by preparing reliable, standardized, and enhanced magnetic resonance imaging (MRI) data for the subsequent hybrid deep learning architecture 106 and associated components within the automated brain tumour diagnosis system 100.
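By way of non-limiting illustration, the normalization, resizing, and augmentation operations attributed to the preprocessing unit 104 may be realized as in the following Python sketch. The function names, target size, and augmentation choices are assumptions for illustration and are not prescribed by the disclosure.

import numpy as np

def preprocess_slice(slice_2d: np.ndarray, target_size: int = 224) -> np.ndarray:
    """Normalize intensities to [0, 1] and resize by nearest-neighbour sampling."""
    lo, hi = slice_2d.min(), slice_2d.max()
    normalized = (slice_2d - lo) / (hi - lo + 1e-8)  # intensity normalization
    rows = np.linspace(0, normalized.shape[0] - 1, target_size).astype(int)
    cols = np.linspace(0, normalized.shape[1] - 1, target_size).astype(int)
    return normalized[np.ix_(rows, cols)]            # uniform resize to target_size

def augment(slice_2d: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Expand the dataset with random flips and 90-degree rotations."""
    out = np.fliplr(slice_2d) if rng.random() < 0.5 else slice_2d
    return np.rot90(out, k=int(rng.integers(0, 4)))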
[0067] The hybrid deep learning architecture 106 in the automated brain tumour diagnosis system 100 is configured to capture intricate spatial hierarchies and complex patterns within magnetic resonance imaging (MRI) data, facilitating precise feature extraction and classification of brain tumour types. This architecture integrates convolutional neural networks (CNNs) with transformer modules to optimize the analytical power and adaptability of the system, harnessing both spatial and contextual data for enhanced tumour detection and classification. By combining CNNs' localized feature detection abilities with the transformers' capacity for global contextual awareness, the hybrid deep learning architecture 106 improves detection sensitivity and enables comprehensive analysis of each magnetic resonance imaging (MRI) image.
[0068] Operating within the preprocessing unit 104, the hybrid deep learning architecture 106 transforms standardized magnetic resonance imaging (MRI) data into high-dimensional feature maps that convey specific information about tissue structures, density variations, and other critical tumour indicators. Convolutional neural networks (CNNs) within the hybrid deep learning architecture 106 extract features from different regions of the magnetic resonance imaging (MRI) data, identifying localized patterns such as edges, textures, and intensity gradients. This detailed examination captures structural nuances in tumour morphology, helping to identify different types of tumours based on their unique attributes and spatial distributions within the patient's brain.
[0069] Simultaneously, the transformer modules within the hybrid deep learning architecture 106 apply self-attention mechanisms, allowing each section of magnetic resonance imaging (MRI) data to be contextualized relative to other regions within the same scan. By incorporating global dependencies and relational insights across different image regions, the transformer modules identify patterns that convolutional neural networks (CNNs) alone cannot capture, such as diffuse or irregular tumour growth. This dual capability enhances the overall diagnostic accuracy of the automated brain tumour diagnosis system 100 by combining fine-grained spatial analysis with a broader interpretive capacity, supporting accurate and robust tumour classification.
[0070] Furthermore, the hybrid deep learning architecture 106 incorporates several advanced processing layers that sequentially refine the extracted features, gradually progressing from simpler to more complex representations. This layered structure not only aids in distinguishing tumour-related features from healthy tissue but also supports the classification of rare or atypical tumour types, especially when integrated with the zero-shot and few-shot learning module 108. This approach enables the hybrid deep learning architecture 106 to adapt to variations in tumour characteristics, even when limited examples of specific tumour types exist within the training data.
[0071] Integrated tightly within the preprocessing unit 104, the hybrid deep learning architecture 106 relies on its multi-layered framework to perform in-depth learning on the provided magnetic resonance imaging (MRI) data. Each layer contributes to refining diagnostic accuracy, with initial layers capturing basic spatial and structural information while deeper layers focus on more abstract, disease-specific features. The adaptive nature of the hybrid deep learning architecture 106 enables dynamic learning adjustments based on new data acquired by the adaptive learning unit 116, supporting continual improvement in tumour recognition capabilities.
[0072] In essence, the hybrid deep learning architecture 106 serves as the core analytical engine of the automated brain tumour diagnosis system 100, bridging conventional convolutional neural network capabilities with transformer-based contextual understanding. This integration facilitates comprehensive, multi-dimensional analysis of MRI images, capturing both localized and distributed patterns critical for effective brain tumour diagnosis. By leveraging convolutional neural networks (CNNs) for localized feature extraction and transformers for contextual understanding, the hybrid deep learning architecture 106 ensures high accuracy in identifying and classifying brain tumours, offering valuable diagnostic support to medical professionals.
[0073] This advanced hybrid framework positions the automated brain tumour diagnosis system 100 as a powerful tool in medical imaging, demonstrating a robust approach to brain tumour analysis that effectively combines the strengths of convolutional neural networks (CNNs) and transformers. Through continuous data-driven learning facilitated by its adaptive layers, the hybrid deep learning architecture 106 stands as a resilient and adaptive diagnostic model, ultimately contributing to more precise and reliable medical diagnoses for brain tumour patients.
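For concreteness, one possible arrangement of the CNN stem and transformer encoder described above is sketched below in PyTorch. The layer sizes, depth, and four-class output head are illustrative assumptions; the disclosure does not fix a particular topology.

import torch
import torch.nn as nn

class HybridCnnTransformer(nn.Module):
    """CNN stem for localized features; transformer encoder for global context."""

    def __init__(self, num_classes: int = 4, embed_dim: int = 128):
        super().__init__()
        self.stem = nn.Sequential(                        # localized feature extraction
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, embed_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)  # self-attention
        self.head = nn.Linear(embed_dim, num_classes)     # tumour-type classifier

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.stem(x)                              # (B, C, H', W') feature maps
        tokens = feats.flatten(2).transpose(1, 2)         # one token per image region
        ctx = self.encoder(tokens)                        # global contextualization
        return self.head(ctx.mean(dim=1))                 # pooled classification logits

logits = HybridCnnTransformer()(torch.randn(2, 1, 224, 224))  # two grayscale slices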
[0074] The zero-shot and few-shot learning module 108 in the automated brain tumour diagnosis system 100 operates as an essential component designed to handle the detection and classification of rare or newly observed brain tumour types within the magnetic resonance imaging (MRI) data. By leveraging minimal labelled data, the zero-shot and few-shot learning module 108 demonstrates the ability to classify tumours that lack substantial training examples. This module serves as a flexible solution that enhances the automated brain tumour diagnosis system 100's adaptability, addressing the challenge of limited datasets, especially for uncommon tumour types, and achieving accurate classification despite scarce examples.
[0075] The zero-shot and few-shot learning module 108 applies a sophisticated approach that draws on pre-trained models and general knowledge embedded in the hybrid deep learning architecture 106. Through feature transfer learning, the zero-shot and few-shot learning module 108 utilizes learned representations from previously observed tumour types, applying them to unfamiliar or rare tumour types by identifying structural or morphological similarities within the magnetic resonance imaging (MRI) data. This process facilitates an in-depth analysis of atypical cases, allowing the system to generalize from previously seen examples and classify unique tumour types accurately without extensive re-training.
[0076] In its approach, the zero-shot and few-shot learning module 108 incorporates a context-based classification method that enables the automated brain tumour diagnosis system 100 to consider even subtle variations in tumour presentation. This context-based method aids in the differentiation of tumour subtypes, often presenting with overlapping or ambiguous features, by applying a set of refined diagnostic criteria established through the module's generalization capabilities. By leveraging these contextual insights, the zero-shot and few-shot learning module 108 enhances the sensitivity and specificity of tumour classification, promoting reliable identification of rare tumours that may not have been encountered in the initial training data.
[0077] Additionally, the zero-shot and few-shot learning module 108 is embedded within the preprocessing unit 104, allowing it to seamlessly integrate with the hybrid deep learning architecture 106 to classify magnetic resonance imaging (MRI) data. This integration ensures that the system applies consistent standards across all imaging modalities, including T1-weighted, T2-weighted, and FLAIR sequences, enhancing the module's diagnostic reliability and ensuring that the multi-modal data from the magnetic resonance imaging (MRI) input unit 102 receives thorough analysis. Consequently, the zero-shot and few-shot learning module 108 extends the system's ability to process a wide array of data types while retaining high levels of accuracy and adaptability.
[0078] The zero-shot and few-shot learning module 108 also enhances its diagnostic potential through dynamic learning processes, which allow continuous improvements based on newly observed tumour instances. As the automated brain tumour diagnosis system 100 encounters new examples of tumour types in clinical use, the zero-shot and few-shot learning module 108 adapts its classification model to reflect these updated patterns, retaining its effectiveness even as tumour characteristics evolve over time. By performing incremental updates to its diagnostic framework, this module optimizes its classification strategies to support long-term diagnostic relevance and reliability.
[0079] Through its unique capabilities, the zero-shot and few-shot learning module 108 provides a comprehensive solution for real-time identification and classification of rare brain tumour types. By integrating advanced learning mechanisms and context-based analysis, the zero-shot and few-shot learning module 108 significantly elevates the performance of the automated brain tumour diagnosis system 100, offering precise diagnostic capabilities where traditional methods may lack efficacy. This design aligns with the system's broader goal of providing accessible, robust, and precise medical imaging support, ultimately contributing to enhanced patient outcomes by enabling earlier, more accurate diagnoses of complex or rare brain tumours.
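One common way to realize the few-shot behaviour described above is nearest-prototype matching over learned embeddings, as in the hedged sketch below. It assumes an upstream encoder (for example, the hybrid architecture 106) has already produced feature vectors; the prototype approach is an illustrative choice, not a requirement of the disclosure.

import torch
import torch.nn.functional as F

def prototype_classify(support: torch.Tensor, support_labels: torch.Tensor,
                       queries: torch.Tensor) -> torch.Tensor:
    """Classify query embeddings by cosine similarity to per-class prototypes.

    support: (N, D) embeddings of a handful of labelled rare-tumour scans.
    queries: (Q, D) embeddings of unlabelled scans to classify.
    """
    classes = support_labels.unique()
    protos = torch.stack([support[support_labels == c].mean(dim=0)
                          for c in classes])              # one prototype per class
    sims = F.cosine_similarity(queries.unsqueeze(1),      # (Q, C) similarity matrix
                               protos.unsqueeze(0), dim=-1)
    return classes[sims.argmax(dim=1)]                    # nearest-prototype labels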
[0080] The multi-modal data integration unit 110 in the automated brain tumour diagnosis system 100 functions as a central hub for processing diverse data sources, enabling a comprehensive analysis of magnetic resonance imaging (MRI) and other relevant patient data. This integration component gathers and synchronizes information from various imaging sequences, including T1-weighted, T2-weighted, and FLAIR sequences, along with additional medical data inputs that may include clinical history, genetic markers, or biopsy results. By combining these distinct data streams, the multi-modal data integration unit 110 provides a holistic view of each case, significantly enhancing the automated brain tumour diagnosis system 100's diagnostic accuracy and robustness.
[0081] The multi-modal data integration unit 110 aligns each data type according to spatial and temporal characteristics, creating a unified representation of the brain tumour within the patient's anatomy. This alignment ensures that relevant features from different imaging sequences remain consistent, enabling the automated brain tumour diagnosis system 100 to perform precise localization and characterization of tumour regions. Through this alignment, the multi-modal data integration unit 110 enhances diagnostic reliability by reducing errors arising from image distortions or misalignments, thereby improving the overall quality of data fed into the hybrid deep learning architecture 106.
[0082] The multi-modal data integration unit 110 utilizes feature extraction algorithms designed to recognize tumour-specific characteristics within each imaging modality, which strengthens the diagnostic insights provided by the automated brain tumour diagnosis system 100. The unit extracts texture, shape, and intensity patterns across imaging modalities, capturing complex, multi-dimensional representations of tumour attributes. This approach enables the system to consider diverse tumour attributes within a single analysis framework, which is essential for distinguishing between similar or overlapping tumour types and improving classification specificity. The multi-modal data integration unit 110, therefore, plays a pivotal role in the early and accurate identification of various brain tumour subtypes.
[0083] Moreover, the multi-modal data integration unit 110 interacts seamlessly with the zero-shot and few-shot learning module 108, supplying it with relevant contextual data from the multiple imaging and medical data sources. By enriching the classification process, the multi-modal data integration unit 110 enables the automated brain tumour diagnosis system 100 to utilize broader contextual information, which aids in classifying rare or atypical tumours. This integration of information across modalities reduces dependency on large datasets for training and enables robust performance in real-world applications, even with limited instances of rare tumour types.
[0084] In addition to its role in imaging analysis, the multi-modal data integration unit 110 incorporates non-imaging data, such as patient demographics and clinical notes, providing the automated brain tumour diagnosis system 100 with a broader clinical perspective. This non-imaging data enhances the accuracy of predictions by contextualizing imaging results within relevant patient-specific information. By leveraging this diverse range of data, the multi-modal data integration unit 110 improves the system's ability to make informed diagnostic assessments, contributing to more tailored and effective patient care strategies.
[0085] The multi-modal data integration unit 110 further ensures that the automated brain tumour diagnosis system 100 remains adaptable and scalable by supporting continuous updates in data types and sources. This adaptability enables the unit to integrate newer imaging techniques and diagnostic data as they become available, keeping the system aligned with advances in medical imaging and diagnostics. The multi-modal data integration unit 110 thus serves as a future-proof component within the system, enhancing its diagnostic capabilities and ensuring the automated brain tumour diagnosis system 100 remains relevant and efficient over time.
[0086] Overall, the multi-modal data integration unit 110 establishes a robust foundation for comprehensive analysis within the automated brain tumour diagnosis system 100. By integrating various data sources and aligning imaging features with patient-specific information, this unit supports accurate, multi-faceted diagnostic outcomes, addressing the challenges of tumour heterogeneity and supporting clinical decision-making with precision and depth.
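The data weighting mechanism recited for the multi-modal data integration unit 110 may, for example, take the form of a learned softmax weighting over per-modality feature vectors, as sketched below in PyTorch; the modality keys and fixed feature dimension are assumptions for illustration.

import torch
import torch.nn as nn

class ModalityWeightedFusion(nn.Module):
    """Fuse per-modality features with learned importance weights."""

    def __init__(self, modalities=("t1", "t2", "flair")):
        super().__init__()
        self.modalities = modalities
        self.logits = nn.Parameter(torch.zeros(len(modalities)))  # learnable weights

    def forward(self, features: dict) -> torch.Tensor:
        weights = torch.softmax(self.logits, dim=0)       # importance per modality
        stacked = torch.stack([features[m] for m in self.modalities])  # (M, B, D)
        return (weights.view(-1, 1, 1) * stacked).sum(dim=0)  # weighted fusion (B, D)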
[0087] The real-time prediction and visualization module 112 in the automated brain tumour diagnosis system 100 provides immediate, interactive insights into the diagnostic process, presenting results in a visually accessible format for healthcare providers. This module facilitates real-time predictions on brain tumour detection and classification by processing imaging data through the hybrid deep learning architecture 106 and displaying outputs in a clear and interpretable manner. The real-time prediction and visualization module 112 supports clinical decision-making by offering instant feedback, minimizing diagnostic delays, and improving patient care efficiency.
[0088] The real-time prediction and visualization module 112 operates through advanced visualization algorithms designed to map identified tumour regions directly onto magnetic resonance imaging scans. By overlaying these predictions onto the anatomical structure of the brain, the real-time prediction and visualization module 112 enhances the interpretability of diagnostic results, allowing medical professionals to assess the tumour's location, shape, and size with precision. The module's visualization capabilities include customizable color-coding schemes and annotation tools that highlight critical regions, supporting in-depth analysis and facilitating clearer communication of diagnostic findings between healthcare teams.
[0089] Further, the real-time prediction and visualization module 112 incorporates an interactive interface, allowing users to navigate through different imaging layers and perspectives, such as axial, coronal, and sagittal views. This functionality enables clinicians to examine the tumour from multiple angles, ensuring a comprehensive understanding of its spatial characteristics. The interactive interface of the real-time prediction and visualization module 112 also enables zooming, panning, and rotation, allowing users to scrutinize specific areas in detail. This level of customization enables medical practitioners to tailor the analysis to individual cases, enhancing the diagnostic accuracy and effectiveness of the automated brain tumour diagnosis system 100.
[0090] Additionally, the real-time prediction and visualization module 112 supports temporal tracking of tumour progression by comparing current imaging data with historical scans stored within the system's secure database. This tracking functionality enables the module to generate predictive insights about tumour growth trends and potential treatment impacts, contributing to long-term patient monitoring and management. The module accomplishes this by utilizing predictive analytics to estimate future tumour behaviour, offering essential information for planning treatment strategies and evaluating response to therapy.
[0091] The real-time prediction and visualization module 112 integrates with the multi-modal data integration unit 110 to incorporate multiple data sources, including genetic markers and patient demographics, into the visualization output. By combining these data sources, the real-time prediction and visualization module 112 creates an enriched diagnostic environment that provides a full clinical profile, allowing healthcare providers to contextualize tumour characteristics alongside relevant patient information. This integration enhances diagnostic depth and supports the development of personalized treatment plans by identifying individual risk factors and predicting potential outcomes.
[0092] Moreover, the real-time prediction and visualization module 112 includes a reporting function, enabling the automated generation of comprehensive diagnostic reports. These reports summarize key findings, including tumour location, classification, and predicted growth rates, providing medical professionals with a concise reference for clinical documentation. The reporting function of the real-time prediction and visualization module 112 also facilitates communication with patients by translating complex diagnostic information into simplified visual summaries, fostering understanding and engagement in the treatment process.
[0093] Finally, the real-time prediction and visualization module 112 ensures a high standard of data security, protecting patient information throughout the prediction and visualization processes. The module employs encryption and access control measures to safeguard sensitive data, ensuring compliance with regulatory requirements and maintaining patient confidentiality. This security infrastructure ensures that the real-time prediction and visualization module 112 functions as a reliable and secure component within the automated brain tumour diagnosis system 100.
[0094] Overall, the real-time prediction and visualization module 112 plays an essential role in transforming diagnostic insights into actionable, real-time visual representations. By integrating prediction capabilities with advanced visualization and reporting tools, the real-time prediction and visualization module 112 supports accurate, informed decision-making, enhancing the efficiency, reliability, and security of the automated brain tumour diagnosis system 100.
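As one concrete possibility, the tumour-region overlay described for the real-time prediction and visualization module 112 may be produced by alpha-blending a predicted binary mask over the grayscale scan, as in the sketch below, which assumes inputs already normalized to [0, 1].

import numpy as np

def overlay_tumour(mri_slice: np.ndarray, mask: np.ndarray,
                   alpha: float = 0.4) -> np.ndarray:
    """Blend a binary tumour mask (shown in red) onto a grayscale MRI slice.

    mri_slice: 2-D array in [0, 1]; mask: same shape, values in {0, 1}.
    Returns an RGB image suitable for on-screen display or report generation.
    """
    rgb = np.stack([mri_slice] * 3, axis=-1)              # grayscale -> RGB
    red = np.zeros_like(rgb)
    red[..., 0] = 1.0                                     # pure-red overlay layer
    return np.where(mask[..., None].astype(bool),
                    (1 - alpha) * rgb + alpha * red, rgb)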
[0095] The secure cloud-based fire store network 114 in the automated brain tumour diagnosis system 100 provides a robust and encrypted platform for storing, managing, and retrieving extensive diagnostic data. This secure cloud-based fire store network 114 enables seamless access to imaging data, diagnostic reports, and patient histories, facilitating efficient data flow across healthcare teams. By offering a centralized and secure storage solution, the secure cloud-based fire store network 114 ensures that healthcare professionals access accurate, up-to-date information, supporting informed decision-making and enhancing overall patient care.
[0096] The secure cloud-based fire store network 114 integrates with various components within the automated brain tumour diagnosis system 100, such as the real-time prediction and visualization module 112, to securely archive real-time diagnostic results and imaging data. This integration allows the secure cloud-based fire store network 114 to maintain a historical record of each patient's diagnostic journey, enabling clinicians to review past results, monitor tumour progression, and adjust treatment strategies as necessary. The secure cloud-based fire store network 114's ability to store data continuously supports long-term patient monitoring and facilitates comprehensive analysis over time.
[0097] Additionally, the secure cloud-based fire store network 114 employs advanced encryption techniques and access control protocols to ensure that sensitive patient data remains confidential and protected from unauthorized access. These security measures ensure compliance with healthcare regulations, safeguarding patient privacy and reinforcing trust in the automated brain tumour diagnosis system 100.
[0098] Furthermore, the secure cloud-based fire store network 114 supports data-sharing capabilities, allowing authorized users to access diagnostic data remotely. This feature promotes collaboration among healthcare teams, enabling specialists to review and contribute to diagnostic processes regardless of location. By facilitating secure and efficient data management, the secure cloud-based fire store network 114 enhances the operational reliability and scalability of the automated brain tumour diagnosis system 100.
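A minimal sketch of client-side encryption before persistence to a Firestore-style document store follows; it assumes the google-cloud-firestore and cryptography packages and ambient GCP credentials, and the collection name and document schema are hypothetical.

from cryptography.fernet import Fernet
from google.cloud import firestore  # assumes the Firestore client library is installed

def store_encrypted_report(doc_id: str, report_bytes: bytes, key: bytes) -> None:
    """Encrypt a diagnostic report client-side, then persist it to Firestore."""
    token = Fernet(key).encrypt(report_bytes)             # symmetric encryption at rest
    client = firestore.Client()                           # uses ambient GCP credentials
    client.collection("diagnostic_reports").document(doc_id).set({
        "ciphertext": token.decode("ascii"),              # Fernet tokens are base64 text
        "schema_version": 1,                              # hypothetical schema field
    })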
[0099] The adaptive learning unit 116 in the automated brain tumour diagnosis system 100 continuously enhances the system's diagnostic accuracy and adaptability by periodically retraining the underlying deep learning models on newly acquired data. This retraining process allows the adaptive learning unit 116 to respond to emerging patterns and variations in brain tumour characteristics, refining the models' predictive capabilities over time. By integrating new data, the adaptive learning unit 116 ensures that the automated brain tumour diagnosis system 100 evolves in its ability to detect and classify tumours accurately, even as diagnostic requirements change.
[0100] The adaptive learning unit 116 also monitors performance metrics to identify areas where the deep learning models benefit from additional training. By focusing on these specific areas, the adaptive learning unit 116 enhances the models' sensitivity to subtle diagnostic markers that may otherwise be missed. This targeted improvement process ensures that the adaptive learning unit 116 contributes to the system's high level of diagnostic precision and responsiveness, supporting clinicians in providing accurate diagnoses for diverse tumour types and variations.
[0101] Furthermore, the adaptive learning unit 116 enables the automated brain tumour diagnosis system 100 to handle cases involving rare tumour types, ensuring that diagnostic performance remains robust and reliable across a wide range of patient cases. By automatically incorporating feedback and diagnostic results into future training, the adaptive learning unit 116 fosters a self-improving system, allowing the automated brain tumour diagnosis system 100 to retain relevance and accuracy in dynamic clinical environments. Through its ongoing adaptive processes, the adaptive learning unit 116 enhances the system's overall diagnostic efficacy, contributing to the reliability and longevity of the automated brain tumour diagnosis system 100.
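The periodic retraining performed by the adaptive learning unit 116 may, for instance, amount to a short fine-tuning pass over newly verified cases, as in the sketch below; the optimizer, learning rate, and epoch count are illustrative assumptions.

import torch
from torch.utils.data import DataLoader, Dataset

def retrain(model: torch.nn.Module, new_cases: Dataset,
            epochs: int = 3, lr: float = 1e-4) -> None:
    """Fine-tune a deployed model on newly accumulated, verified diagnostic data."""
    loader = DataLoader(new_cases, batch_size=16, shuffle=True)
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)  # small LR limits forgetting
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)         # supervised retraining signal
            loss.backward()
            optimizer.step()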
[0102] The predictive analytics module 118 in the automated brain tumour diagnosis system 100 continuously provides valuable insights into potential tumour growth patterns, using historical and real-time diagnostic data to project possible future changes in tumour characteristics. By analysing trends in tumour size, shape, and spread, the predictive analytics module 118 assists clinicians in planning treatment approaches that align with anticipated disease progression. Through detailed data-driven projections, the predictive analytics module 118 serves as a critical component in facilitating proactive medical decisions and effective patient management.
[0103] The predictive analytics module 118 in the automated brain tumour diagnosis system 100 employs advanced machine learning algorithms that assess multiple factors influencing tumour development, such as patient demographics, genetic profiles, and previous treatment responses. This analysis enables the predictive analytics module 118 to make highly personalized predictions, thereby tailoring its projections to individual cases rather than relying on generalized data patterns. By refining its output with case-specific data, the predictive analytics module 118 ensures that the automated brain tumour diagnosis system 100 offers a high degree of accuracy in its forecasted outcomes, enhancing the reliability of its recommendations.
[0104] Additionally, the predictive analytics module 118 incorporates data from multi-modal sources, such as magnetic resonance imaging scans, pathology reports, and historical treatment records, allowing it to create comprehensive and nuanced projections of tumour behaviour. The module's integration of diverse data types gives clinicians a multi-dimensional view of the tumour's likely progression and equips them with valuable context for formulating responsive treatment strategies. By continuously synthesizing diverse diagnostic information, the predictive analytics module 118 contributes to a holistic and actionable understanding of each patient's condition.
[0105] The predictive analytics module 118 in the automated brain tumour diagnosis system 100 also supports real-time updates to its projections, utilizing new information as it becomes available to refine its future growth estimates. This dynamic adaptation process ensures that predictions remain relevant and accurate, reflecting the latest developments in a patient's condition and treatment effects. Through its real-time responsiveness, the predictive analytics module 118 helps clinicians adjust treatment plans quickly, enabling timely interventions that may impact patient outcomes positively.
[0106] Moreover, the predictive analytics module 118 in the automated brain tumour diagnosis system 100 enhances collaboration between clinical teams by presenting its insights in a clear and accessible format. Visualizations and summaries generated by the predictive analytics module 118 allow clinicians to interpret data projections easily, fostering effective communication and informed decision-making within the medical team. The module's visual outputs help present complex information in an interpretable way, making it easier for clinicians and specialists to discuss prognosis and treatment options with patients and their families.
[0107] Finally, the predictive analytics module 118 ensures that the automated brain tumour diagnosis system 100 aligns with the latest advancements in medical research and evolving treatment methodologies. By learning from extensive clinical data and integrating scientific discoveries, the predictive analytics module 118 in the automated brain tumour diagnosis system 100 maintains its relevance and efficacy. Through its continuous adaptability and integration with the broader diagnostic framework, the predictive analytics module 118 advances the goals of personalized medicine and long-term patient care, strengthening the automated brain tumour diagnosis system 100's role in delivering accurate and actionable diagnostic insights.
[0108] The user interface 120 in the automated brain tumour diagnosis system 100 provides an accessible, intuitive platform for medical professionals, allowing seamless interaction with the system's diagnostic insights and predictive tools. Through a visually appealing and user-friendly layout, the user interface 120 supports healthcare providers in navigating through detailed information, making it easier to understand and analyse complex diagnostic data. The user interface 120 in the automated brain tumour diagnosis system 100 ensures that users interact directly with essential modules, including the predictive analytics module 118, in an efficient and cohesive manner.
[0109] The user interface 120 in the automated brain tumour diagnosis system 100 also prioritizes accessibility and ease of use, integrating with mobile and desktop devices to accommodate the varied needs of medical teams. With its cross-platform adaptability, the user interface 120 allows clinicians to access patient data and diagnostic insights anytime, promoting continuous monitoring and timely decision-making. In addition to cross-platform compatibility, the user interface 120 supports multi-language accessibility, enhancing its usability across different regions and ensuring accessibility for a diverse set of healthcare providers.
[0110] Further, the user interface 120 in the automated brain tumour diagnosis system 100 includes dynamic visualization capabilities, offering clear, high-quality representations of diagnostic images, predictive insights, and progression charts. This visual presentation assists clinicians in interpreting patient-specific data more effectively and aids in communicating complex diagnostic findings. The user interface 120, by displaying images, charts, and analytics in an organized format, promotes better understanding and interaction with the system's predictive models.
[0111] The user interface 120 in the automated brain tumour diagnosis system 100 ultimately strengthens the system's clinical utility by integrating real-time functionality, cross-platform adaptability, and clear data visualizations. Through its emphasis on clarity and accessibility, the user interface 120 contributes significantly to enhancing diagnostic efficiency and informed decision-making within the healthcare environment.
[0112] FIG. 2 illustrates a flowchart of an automated brain tumour diagnosis system, in accordance with an exemplary embodiment of the present disclosure.
[0113] At 202, the system acquires multi-modal magnetic resonance imaging (MRI) images of the patient's brain using a magnetic resonance imaging (MRI) scanner.
[0114] At 204, the acquired magnetic resonance imaging (MRI) images undergo preprocessing, which involves noise reduction, normalization, and data augmentation to enhance image quality.
[0115] At 206, the hybrid deep learning architecture extracts relevant features from the pre-processed images, capturing complex spatial hierarchies and structural patterns.
[0116] At 208, multi-modal data from different magnetic resonance imaging (MRI) modalities is integrated to further enhance the accuracy of the diagnosis.
[0117] At 210, the extracted features are then fed into a classifier that categorizes the tumour into different types and grades.
[0118] At 212, the system also incorporates zero-shot and few-shot learning capabilities to enable the detection and classification of rare or novel tumour types.
[0119] At 214, the system provides real-time visualization of tumour mapping and interpretation, aiding in quick and accurate diagnosis.
[0120] At 216, diagnostic data and magnetic resonance imaging (MRI) images are securely stored in the cloud-based fire store network for future reference and analysis.
[0121] At 218, the adaptive learning unit periodically retrains the model using new data to improve its performance over time.
[0122] At 220, the predictive analytics module analyses historical data to forecast the potential progression of the tumour.
[0123] At 222, the user interface allows healthcare professionals to interact with the system, view diagnostic results, and access historical data.
[0124] FIG. 3 illustrates a flowchart of a method for automated brain tumour diagnosis from magnetic resonance imaging (MRI) data, in accordance with an exemplary embodiment of the present disclosure.
[0125] At 302, acquire multi-modal magnetic resonance imaging (MRI) data of a patient's brain through a magnetic resonance imaging (MRI) input unit.
[0126] At 304, preprocess the acquired magnetic resonance imaging (MRI) data by utilizing a preprocessing unit.
[0127] At 306, apply the pre-processed magnetic resonance imaging (MRI) data to a hybrid deep learning architecture integrated within the preprocessing unit.
[0128] At 308, analyse the pre-processed magnetic resonance imaging (MRI) data with a zero-shot and few-shot learning module embedded within the preprocessing unit.
[0129] At 310, integrate the output of the hybrid deep learning architecture through a multi-modal data integration unit connected to the preprocessing unit.
[0130] At 312, provide immediate diagnostic results and visualization of tumour mapping through a real-time prediction and visualization module integrated within the multi-modal data integration unit.
[0131] At 314, store and retrieve diagnostic data through a secure cloud-based fire store network operatively connected to the preprocessing unit and the multi-modal data integration unit.
[0132] At 316, retrain the hybrid deep learning architecture periodically by utilizing an adaptive learning unit connected to both the secure cloud-based fire store network and the preprocessing unit.
[0133] At 318, perform predictive analytics on historical patient data through a predictive analytics module integrated within the adaptive learning unit.
[0134] FIG. 4 illustrates a perspective view of the flowchart of the process, in accordance with an exemplary embodiment of the present disclosure.
[0135] The step collect dataset 402 involves gathering a comprehensive and diverse set of magnetic resonance imaging (MRI) data, ensuring that the dataset represents various brain tumour types and healthy samples. The collected dataset covers multiple magnetic resonance imaging (MRI) modalities, such as T1-weighted, T2-weighted, and FLAIR sequences, to provide a robust and representative input for model training.
[0136] During collect dataset 402, the data includes a range of patient demographics and medical conditions, facilitating the model's ability to generalize across different cases. Additionally, the dataset is curated to maintain high-quality imaging standards, enabling accurate diagnostic insights for the system.
[0137] The step data augmentation 404 involves expanding the collected dataset by creating modified versions of the existing magnetic resonance imaging (MRI) images. Data augmentation 404 includes applying transformations such as rotations, flips, shifts, and brightness adjustments, which enhance the model's robustness to variations in imaging conditions.
[0138] Through data augmentation 404, diverse representations of magnetic resonance imaging (MRI) images are generated, improving the model's ability to generalize across different real-world cases. This step ensures that minor variations in the magnetic resonance imaging (MRI) data do not negatively impact the classification accuracy, ultimately supporting the system's overall diagnostic performance and reliability.
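As a non-limiting sketch, the transformations named above (rotations, flips, shifts, and brightness adjustments) could be applied to a two-dimensional MRI slice as follows; the probabilities and parameter ranges are illustrative assumptions, not values taken from the specification.

```python
# Minimal sketch of data augmentation 404 on a 2-D MRI slice held as a
# NumPy array; all parameter ranges below are illustrative assumptions.
import numpy as np

def augment(slice_2d: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    out = slice_2d
    if rng.random() < 0.5:                      # horizontal flip
        out = np.fliplr(out)
    out = np.rot90(out, k=int(rng.integers(0, 4)))   # 0/90/180/270 degree rotation
    shift = rng.integers(-5, 6, size=2)         # small spatial shift
    out = np.roll(out, shift=tuple(shift), axis=(0, 1))
    return out * rng.uniform(0.9, 1.1)          # mild brightness scaling

rng = np.random.default_rng(42)
slice_2d = rng.random((128, 128))               # stand-in for a preprocessed slice
print(augment(slice_2d, rng).shape)             # (128, 128)
```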
[0139] The step data pre-processing 406 focuses on preparing the magnetic resonance imaging (MRI) images by enhancing their quality and ensuring consistency across the dataset. Data pre-processing 406 includes normalizing image intensities to a standard range, which minimizes variability between images and improves compatibility for subsequent steps.
[0140] Additionally, data pre-processing 406 involves resizing the images to a fixed resolution, allowing for uniform input dimensions that facilitate smoother processing by the hybrid deep learning architecture 106. Noise reduction techniques are also applied during data pre-processing 406 to eliminate irrelevant information, ensuring that only essential image features contribute to the system's diagnostic accuracy and reliability.
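A minimal sketch of these three operations appears below, assuming a 224 x 224 target resolution and light Gaussian smoothing as the noise-reduction step; the specific parameters are illustrative choices, not figures disclosed in the specification.

```python
# Sketch of data pre-processing 406: intensity normalization to [0, 1],
# resizing to a fixed resolution, and simple Gaussian denoising.
import numpy as np
from scipy.ndimage import zoom, gaussian_filter

def preprocess(slice_2d: np.ndarray, target=(224, 224)) -> np.ndarray:
    # Normalize intensities to a standard [0, 1] range
    lo, hi = slice_2d.min(), slice_2d.max()
    norm = (slice_2d - lo) / (hi - lo + 1e-8)
    # Resize to uniform input dimensions
    factors = (target[0] / norm.shape[0], target[1] / norm.shape[1])
    resized = zoom(norm, factors, order=1)
    # Light Gaussian smoothing as a simple noise-reduction step
    return gaussian_filter(resized, sigma=0.5)

raw = np.random.default_rng(0).random((180, 200)) * 4095   # stand-in raw slice
print(preprocess(raw).shape)                               # (224, 224)
```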
[0141] The data split 408 step involves dividing the dataset into distinct subsets to ensure balanced training, validation, and testing phases. In data split 408, the collected dataset 402 is carefully partitioned, typically into a training set for model learning, a validation set for fine-tuning parameters, and a testing set for evaluating final performance.
[0142] Data split 408 ensures that the hybrid deep learning architecture 106 generalizes well, preventing overfitting to specific data patterns. The controlled separation of images across these subsets in data split 408 helps the system 100 maintain high accuracy 416, specificity 418, and sensitivity 420, strengthening the diagnostic reliability and robustness of the automated brain tumour diagnosis system 100.
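By way of illustration, a conventional stratified 70/15/15 partition could be produced as sketched below; the ratios are an assumption, as the specification does not fix them.

```python
# Sketch of data split 408: train / validation / test partition with
# class-balanced (stratified) sampling. Data and labels are stand-ins.
import numpy as np
from sklearn.model_selection import train_test_split

X = np.random.default_rng(0).random((200, 64, 64))   # stand-in image stack
y = np.random.default_rng(1).integers(0, 4, size=200)  # stand-in tumour labels

# Hold out 30%, then split that half-and-half into validation and test sets,
# stratifying so each subset preserves the class balance.
X_train, X_tmp, y_train, y_tmp = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(
    X_tmp, y_tmp, test_size=0.50, stratify=y_tmp, random_state=0)

print(len(X_train), len(X_val), len(X_test))   # 140 30 30
```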
[0143] The data preparation 410 step finalizes the processed data before feeding it into the subsequent stages of the automated brain tumour diagnosis system 100. In data preparation 410, the data undergoes alignment to ensure uniformity across various magnetic resonance imaging (MRI) modalities obtained through magnetic resonance imaging (MRI) input unit 102. This step organizes and structures the data into a compatible format, optimizing the quality and consistency of input for the hybrid deep learning architecture 106.
[0144] By implementing data preparation 410 effectively, the system 100 enhances its ability to accurately interpret magnetic resonance imaging (MRI) data, allowing convolutional neural network (CNN) classification 412 and other processing units to perform with greater precision.
[0145] The convolutional neural network (CNN) classification 412 step involves applying a specialized deep learning model to the pre-processed magnetic resonance imaging (MRI) data, focusing on accurately detecting and classifying brain tumour types. In convolutional neural network (CNN) classification 412, the system 100 utilizes convolutional layers that extract and interpret intricate features within the MRI images, such as tumour shapes, textures, and contrasts.
[0146] The convolutional neural network (CNN) classification 412 step relies on multiple layers, including pooling and fully connected layers, within the hybrid deep learning architecture 106, enhancing its capability to recognize complex spatial hierarchies. This step is crucial for achieving high diagnostic precision, as it translates the visual data into meaningful tumour classification outputs based on the subtle variations identified within the brain structures. Through convolutional neural network (CNN) classification 412, the system 100 processes these extracted features to deliver insights on tumour characteristics, providing foundational data for further analyses in estimation 414 and subsequent evaluation metrics.
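A non-limiting sketch of a classifier of this kind, with convolutional, pooling, and fully connected layers, appears below. The layer sizes, the single-channel input, and the four-class output are illustrative assumptions rather than the disclosed architecture, which also incorporates transformer layers within the hybrid deep learning architecture 106.

```python
# Illustrative CNN for convolutional neural network (CNN) classification 412.
import torch
import torch.nn as nn

class TumourCNN(nn.Module):
    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),   # local texture features
            nn.ReLU(),
            nn.MaxPool2d(2),                              # downsample: 224 -> 112
            nn.Conv2d(32, 64, kernel_size=3, padding=1),  # deeper shape/contrast features
            nn.ReLU(),
            nn.MaxPool2d(2),                              # downsample: 112 -> 56
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 56 * 56, 128),                 # fully connected stage
            nn.ReLU(),
            nn.Linear(128, n_classes),                    # per-class logits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = TumourCNN()
logits = model(torch.randn(2, 1, 224, 224))   # batch of 2 single-channel slices
probs = torch.softmax(logits, dim=1)          # per-class probability scores
print(probs.shape)                            # torch.Size([2, 4])
```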
[0147] The estimation 414 step involves analysing the classification outputs from convolutional neural network (CNN) classification 412 to provide quantitative insights into tumour detection outcomes. In this step, the system 100 evaluates the probability scores generated by the hybrid deep learning architecture 106, translating these scores into estimations that indicate the likelihood of tumour presence and type.
[0148] Through estimation 414, the system 100 interprets classification results, mapping them into metrics that assess detection reliability. This involves calculating the degree of certainty in identifying specific tumour characteristics, facilitating a deeper understanding of potential abnormalities. By analysing these estimations, estimation 414 offers foundational data for further refinement in subsequent stages, such as accuracy 416, specificity 418, and sensitivity 420. Through this structured estimation process, estimation 414 directly enhances the diagnostic capability, ensuring reliable and accurate tumour assessment.
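For illustration, estimation 414 might reduce per-class probability scores to a predicted label and a certainty value as sketched below; the class names are placeholders.

```python
# Sketch of estimation 414: turning classifier probability scores into a
# predicted tumour type and a certainty estimate. Class names are placeholders.
import numpy as np

classes = ["no tumour", "glioma", "meningioma", "pituitary"]
scores = np.array([0.05, 0.72, 0.18, 0.05])   # stand-in softmax output

best = int(np.argmax(scores))
print(f"Estimated type: {classes[best]} (certainty {scores[best]:.0%})")
```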
[0149] The accuracy 416 step evaluates the performance of the convolutional neural network (CNN) classification 412 by calculating the proportion of correctly identified tumour cases relative to all cases examined. This step involves systematically comparing the predicted classifications from estimation 414 with actual, known tumour conditions within the dataset, measuring the accuracy rate.
[0150] Within accuracy 416, the system 100 reviews instances where the hybrid deep learning architecture 106 has correctly identified both the presence and absence of tumours, establishing the reliability of the detection approach. High accuracy values signify effective training and robust model performance, while lower values highlight areas needing further refinement. Through accuracy 416, the system 100 ensures that diagnostic results maintain a high level of precision, supporting dependable outcomes in clinical or research applications.
[0151] The specificity 418 step evaluates the ability of the convolutional neural network (CNN) classification 412 to correctly identify cases without tumours, confirming that the model avoids false positives. During specificity 418, the system 100 assesses how well the classification model differentiates true negatives from other results, ensuring it only identifies actual tumour cases without mislabelling healthy conditions.
[0152] By analysing predictions from estimation 414 against known non-tumour cases, specificity 418 validates the model's selectivity and precision. Achieving high specificity ensures the model performs accurately in clinical environments by minimizing incorrect diagnoses and reducing unnecessary follow-up tests. Specificity 418 thus contributes to the reliability and efficiency of the diagnostic system 100, optimizing its practical application and clinical trustworthiness.
[0153] The sensitivity 420 step measures the ability of the convolutional neural network (CNN) classification 412 to accurately detect positive cases of brain tumours. During sensitivity 420, the system 100 evaluates how effectively the model identifies true positive instances, focusing on correctly detecting actual tumour cases without missing them.
[0154] By comparing the results from estimation 414 against known tumour cases, sensitivity 420 helps ensure the model reliably captures tumour presence, even in cases where the indicators may be subtle. This step strengthens the diagnostic model's application in clinical environments by minimizing the risk of missed diagnoses, which is crucial for effective patient treatment. Sensitivity 420 therefore plays a significant role in establishing the diagnostic system 100's reliability and maximizing patient outcomes.
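The three metrics above can be computed directly from a binary tumour/no-tumour confusion matrix, as in the following sketch; the example counts are invented for demonstration.

```python
# Sketch of accuracy 416, specificity 418, and sensitivity 420 computed
# from a binary confusion matrix; labels below are invented examples.
from sklearn.metrics import confusion_matrix

y_true = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]   # 1 = tumour present (ground truth)
y_pred = [1, 1, 0, 0, 0, 1, 1, 0, 1, 0]   # model predictions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy    = (tp + tn) / (tp + tn + fp + fn)   # all correct / all cases
specificity = tn / (tn + fp)                    # true negatives among non-tumour cases
sensitivity = tp / (tp + fn)                    # true positives among tumour cases

print(f"accuracy={accuracy:.2f} specificity={specificity:.2f} sensitivity={sensitivity:.2f}")
```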
[0155] FIG. 5 illustrates a perspective view of the input visuals utilized within the automated brain tumour diagnosis system 100. The magnetic resonance imaging (MRI) input unit 102 acquires comprehensive multi-modal magnetic resonance imaging (MRI) data of a patient's brain. This data serves as the initial input, forming the basis for further analysis. The preprocessing unit 104, operatively connected to the magnetic resonance imaging (MRI) input unit 102, performs image normalization, resizing, and augmentation on the acquired magnetic resonance imaging (MRI) data, enhancing the quality and readiness for advanced processing.
[0156] Once the data is pre-processed, the hybrid deep learning architecture 106 integrates complex algorithms to capture intricate patterns and hierarchies within the data. This architecture supports detailed feature extraction, ensuring accurate classification and tumour identification. The zero-shot and few-shot learning module 108 embedded within the preprocessing unit 104 aids in identifying rare tumour types, even with minimal labelled data.
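The specification does not disclose the internal mechanism of the zero-shot and few-shot learning module 108; purely as an illustrative stand-in, the sketch below shows one common few-shot approach, nearest-prototype matching in an embedding space, with a placeholder embedding function in place of a trained encoder.

```python
# Illustrative few-shot classification by nearest prototype; the embedding
# function here is a placeholder, not the module's actual mechanism.
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    # Placeholder embedding: flatten and L2-normalize. A real system would
    # use a trained encoder network here.
    v = image.ravel().astype(float)
    return v / (np.linalg.norm(v) + 1e-8)

rng = np.random.default_rng(0)
# A few labelled examples per rare tumour type (synthetic stand-ins)
support = {
    "rare_type_A": [rng.random((64, 64)) for _ in range(3)],
    "rare_type_B": [rng.random((64, 64)) for _ in range(3)],
}
# One prototype (mean embedding) per rare class
prototypes = {label: np.mean([embed(x) for x in images], axis=0)
              for label, images in support.items()}

query = rng.random((64, 64))                 # unseen scan
q = embed(query)
pred = max(prototypes, key=lambda lbl: float(q @ prototypes[lbl]))
print("Few-shot prediction:", pred)
```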
[0157] The multi-modal data integration unit 110 then synthesizes the data across multiple magnetic resonance imaging (MRI) modalities, combining various imaging sequences for a holistic view. Finally, the real-time prediction and visualization module 112 offers immediate, interpretable tumour mapping and diagnostic insights, allowing healthcare professionals to visualize the results clearly. This modular setup ensures a streamlined, efficient process for automated brain tumour diagnosis using magnetic resonance imaging (MRI) data.
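As a non-limiting example of modality fusion, co-registered T1, T2, and FLAIR slices could be stacked as input channels and weighted per modality, as sketched below; the fusion strategy and the weights are illustrative assumptions.

```python
# Sketch of simple multi-modal fusion: stacking co-registered MRI modalities
# as channels, with illustrative per-modality importance weights.
import numpy as np

rng = np.random.default_rng(0)
t1, t2, flair = (rng.random((224, 224)) for _ in range(3))   # stand-in slices

fused = np.stack([t1, t2, flair], axis=-1)   # shape (224, 224, 3)
weights = np.array([0.5, 0.3, 0.2])          # assumed diagnostic-relevance weights
weighted = fused * weights                   # scale each modality channel
print(fused.shape, weighted.shape)
```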
[0158] FIG. 6 illustrates a perspective view of the accuracies 416 of various models, showcasing their performance in detecting brain tumours from the data acquired by the magnetic resonance imaging (MRI) input unit 102. The figure compares multiple deep learning models based on their ability to accurately classify the presence of tumours, indicating their effectiveness in a clinical setting. The accuracies 416 of each model reflect their ability to process and interpret the magnetic resonance imaging (MRI) data after it undergoes preprocessing by the preprocessing unit 104, which handles tasks such as normalization, resizing, and augmentation.
[0159] The models analysed include various architectures like convolutional neural networks (CNNs) and transformer-based models, which are part of the hybrid deep learning architecture 106. The results allow for a detailed comparison of the models' classification capabilities after integration by the multi-modal data integration unit 110, which synthesizes different magnetic resonance imaging (MRI) modalities to create a comprehensive diagnostic view. The accuracies 416 also provide insights into the performance of the zero-shot and few-shot learning module 108, which classifies rare tumour types based on minimal labelled data.
[0160] The accuracies 416 help to identify the best-performing model, guiding further improvements and the selection of optimal models for real-time tumour detection and prediction in medical applications. This comparison is critical for determining which model yields the most reliable diagnostic results for healthcare professionals using the system 100.
[0161] While the invention has been described in connection with what is presently considered to be the most practical and various embodiments, it will be understood that the invention is not to be limited to the disclosed embodiments, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims.
[0162] A person of ordinary skill in the art may be aware that, in combination with the examples described in the embodiments disclosed in this specification, units and algorithm steps may be implemented by electronic hardware, computer software, or a combination thereof.
[0163] The foregoing descriptions of specific embodiments of the present disclosure have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the present disclosure to the precise forms disclosed, and many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described to best explain the principles of the present disclosure and its practical application, and to thereby enable others skilled in the art to best utilize the present disclosure and various embodiments with various modifications as are suited to the particular use contemplated. It is understood that various omissions and substitutions of equivalents are contemplated as circumstances may suggest or render expedient, but such omissions and substitutions are intended to cover the application or implementation without departing from the scope of the present disclosure.
[0164] Disjunctive language such as the phrase "at least one of X, Y, Z," unless specifically stated otherwise, is otherwise understood with the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present.
[0165] In a case that no conflict occurs, the embodiments in the present disclosure and the features in the embodiments may be mutually combined. The foregoing descriptions are merely specific implementations of the present disclosure, but are not intended to limit the protection scope of the present disclosure. Any variation or replacement readily figured out by a person skilled in the art within the technical scope disclosed in the present disclosure shall fall within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.
Claims:
I/We Claim:
1. An automated brain tumour diagnosis system (100), the system (100) comprises:
a magnetic resonance imaging (MRI) input unit (102) configured to acquire comprehensive multi-modal magnetic resonance imaging (MRI) data of a patient's brain;
a preprocessing unit (104), operatively connected to the magnetic resonance imaging (MRI) input unit (102), configured to perform image normalization, resizing, and augmentation on the acquired magnetic resonance imaging (MRI) data;
a hybrid deep learning architecture (106) integrated within the preprocessing unit (104), configured for capturing complex spatial hierarchies and structural patterns within the magnetic resonance imaging (MRI) data and supporting detailed feature extraction and classification of brain tumour types based on intricate tissue characteristics;
a zero-shot and few-shot learning module (108) integrated within the preprocessing unit (104), configured to detect and classify rare or newly observed tumour types by leveraging minimal labelled data;
a multi-modal data integration unit (110) connected to the preprocessing unit (104), configured to synthesize data across multiple magnetic resonance imaging (MRI) modalities;
a real-time prediction and visualization module (112) integrated within the multi-modal data integration unit (110), configured to provide immediate tumour mapping and interpretation visualizations, offering clear, interpretable diagnostic insights;
a secure cloud-based fire store network (114) operatively connected to the preprocessing unit (104) and the multi-modal data integration unit (110), configured to securely store, retrieve, and encrypt diagnostic data and patient magnetic resonance imaging (MRI) images;
an adaptive learning unit (116), operatively connected to both the secure cloud-based fire store network (114) and the preprocessing unit (104), configured to periodically retrain the hybrid deep learning architecture (106) based on newly available diagnostic data;
a predictive analytics module (118) integrated within the adaptive learning unit (116), configured to apply historical patient data analysis to forecast tumour progression;
a user interface (120) connected to the multi-modal data integration unit (110), the secure cloud-based fire store network (114) and the adaptive learning unit (116) and configured to facilitate user interactions with the real-time prediction and visualization module (112), the secure cloud-based fire store network (114), and the predictive analytics module (118), and provide an accessible and user-friendly interface for healthcare professionals to monitor, interpret, and interact with diagnostic results.
2. The system (100) as claimed in claim 1, wherein the magnetic resonance imaging (MRI) input unit (102) is configured to acquire images from multiple magnetic resonance imaging (MRI) modalities, including T1-weighted, T2-weighted, and FLAIR sequences, to provide a comprehensive data set.
3. The system (100) as claimed in claim 1, wherein the preprocessing unit (104), connected to the magnetic resonance imaging (MRI) input unit (102), further comprises an anomaly detection sub-module, configured to identify and exclude low-quality or corrupted images prior to processing.
4. The system (100) as claimed in claim 1, wherein the hybrid deep learning architecture (106) integrated within the preprocessing unit (104), includes a transformer module in combination with convolutional neural networks (CNNs).
5. The system (100) as claimed in claim 1, wherein the zero-shot and few-shot learning module (108), integrated within the preprocessing unit (104), is further configured to perform context-based classification by analysing minimal examples of rare tumour types.
6. The system (100) as claimed in claim 1, wherein the multi-modal data integration unit (110) connected to the preprocessing unit (104), further includes a data weighting mechanism, configured to assign importance to each magnetic resonance imaging (MRI) modality based on diagnostic relevance.
7. The system (100) as claimed in claim 1, wherein the real-time prediction and visualization module (112) integrated within the multi-modal data integration unit (110), is further configured to overlay detected tumour regions on the original magnetic resonance imaging (MRI) images.
8. The system (100) as claimed in claim 1, wherein the secure cloud-based fire store network (114), operatively connected to the preprocessing unit (104) and the multi-modal data integration unit (110), further comprises role-based access control, allowing only authorized healthcare professionals to access or modify stored diagnostic data.
9. The system (100) as claimed in claim 1, wherein the user interface (120) connected to the multi-modal data integration unit (110), the secure cloud-based fire store network (114) and the adaptive learning unit (116), further comprises multi-language support.
10. A method for automated brain tumour diagnosis from magnetic resonance imaging (MRI) data (100), the method (100) comprising:
acquiring multi-modal magnetic resonance imaging (MRI) data of a patient's brain through a magnetic resonance imaging (MRI) input unit (102);
preprocessing the acquired magnetic resonance imaging (MRI) data by utilizing a preprocessing unit (104);
applying the pre-processed magnetic resonance imaging (MRI) data to a hybrid deep learning architecture (106) integrated within the preprocessing unit (104);
analysing the pre-processed magnetic resonance imaging (MRI) data with a zero-shot and few-shot learning module (108) embedded within the preprocessing unit (104);
integrating the output of the hybrid deep learning architecture (106) through a multi-modal data integration unit (110) connected to the preprocessing unit (104);
providing immediate diagnostic results and visualization of tumour mapping through a real-time prediction and visualization module (112) integrated within the multi-modal data integration unit (110);
storing and retrieving diagnostic data through a secure cloud-based fire store network (114) operatively connected to the preprocessing unit (104) and the multi-modal data integration unit (110);
retraining the hybrid deep learning architecture (106) periodically by utilizing an adaptive learning unit (116) connected to both the secure cloud-based fire store network (114) and the preprocessing unit (104);
performing predictive analytics on historical patient data through a predictive analytics module (118) integrated within the adaptive learning unit (116).

Documents

Name | Date
202441089819-COMPLETE SPECIFICATION [20-11-2024(online)].pdf | 20/11/2024
202441089819-DECLARATION OF INVENTORSHIP (FORM 5) [20-11-2024(online)].pdf | 20/11/2024
202441089819-DRAWINGS [20-11-2024(online)].pdf | 20/11/2024
202441089819-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [20-11-2024(online)].pdf | 20/11/2024
202441089819-FORM 1 [20-11-2024(online)].pdf | 20/11/2024
202441089819-FORM FOR SMALL ENTITY(FORM-28) [20-11-2024(online)].pdf | 20/11/2024
202441089819-REQUEST FOR EARLY PUBLICATION(FORM-9) [20-11-2024(online)].pdf | 20/11/2024
