A System And Method For Brain Tumor Detection Using MRI Image Segmentation.
ORDINARY APPLICATION
Published
Filed on 16 November 2024
Abstract
The present invention relates to a system and method for brain tumor detection using MRI image segmentation. The method begins with MRI image acquisition and preprocessing, where noise and artifacts are removed to enhance clarity. A Graph Cut segmentation algorithm is then employed to accurately identify tumor regions. Feature extraction techniques, such as Local Binary Patterns (LBP) and Scale-Invariant Feature Transform (SIFT), capture key tumor characteristics. These features are then classified as benign or malignant using a deep convolutional neural network (CNN). This system offers enhanced sensitivity and specificity in identifying brain tumors, supporting early diagnosis and precise localization, ultimately improving patient outcomes. The system is designed to integrate seamlessly into clinical workflows, providing neurosurgeons and specialists with a user-friendly interface for efficient analysis and decision-making.
Patent Information
Field | Value |
---|---|
Application ID | 202441088675 |
Invention Field | COMPUTER SCIENCE |
Date of Application | 16/11/2024 |
Publication Number | 47/2024 |
Inventors
Name | Address | Country | Nationality |
---|---|---|---|
Dr. M. DHARANI | Associate Professor, Department of ECE School of Engineering, Mohan Babu University (Erstwhile Sree Vidyanikethan Engineering College), A. Rangampet, Tirupati-517102, INDIA | India | India |
Ms. R. CHANDANA | UG Scholar, Department of ECE School of Engineering, Mohan Babu University (Erstwhile Sree Vidyanikethan Engineering College), A. Rangampet, Tirupati-517102, INDIA | India | India |
Applicants
Name | Address | Country | Nationality |
---|---|---|---|
MOHAN BABU UNIVERSITY | IPR Cell, Mohan Babu University (Erstwhile Sree Vidyanikethan Engineering College), Tirupati, Andhra Pradesh, India - 517102 | India | India |
Specification
Description: Fig: 1 illustrates a system for brain tumor detection in MRI images. The system is composed of several interconnected modules that work together to process the images and classify the presence and type of brain tumors.
Image Acquisition Module: This module is responsible for obtaining the MRI scans of the brain. It standardizes the images so that they are consistent in size, orientation, and resolution. The scans are converted to grayscale, which simplifies the analysis by reducing the complexity associated with color information. The images are then stored with dimensions suitable for subsequent analysis, ensuring they meet the system's requirements for processing and feature extraction.
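As an illustration of this acquisition step, the snippet below loads a scan, converts it to grayscale, and resizes it to the 256x256 dimensions mentioned later in the specification. It is a minimal Python/OpenCV sketch; the file path and library choice are assumptions, not part of the filed application.

```python
import cv2

def load_and_standardize(path, size=(256, 256)):
    """Load an MRI slice, convert it to grayscale, and resize it to a fixed shape."""
    image = cv2.imread(path, cv2.IMREAD_GRAYSCALE)  # drops colour information
    if image is None:
        raise FileNotFoundError(f"Could not read image: {path}")
    # INTER_AREA is a reasonable choice when shrinking medical images
    return cv2.resize(image, size, interpolation=cv2.INTER_AREA)

# Hypothetical usage:
# mri_slice = load_and_standardize("patient_001_slice_12.png")
```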
Pre-processing Module: Before the image can be analyzed in more detail, the pre-processing module cleans the image by removing any noise that could interfere with the accuracy of the analysis. The module employs a weighted median filter, which is effective in retaining the key features of the MRI image while removing irrelevant or distracting noise. This step enhances the clarity of the image, allowing the following stages to operate on higher-quality data, which is essential for detecting subtle features of the tumor and improving the accuracy of classification.
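The weighted median filter can be sketched as follows. The application does not disclose its weight mask, so a centre-weighted 3x3 kernel is assumed here purely for illustration.

```python
import numpy as np

def weighted_median_filter(image, weights=None):
    """Replace each pixel with the weighted median of its 3x3 neighbourhood."""
    if weights is None:
        # Assumed centre-weighted mask; the specification does not give the weights.
        weights = np.array([[1, 1, 1],
                            [1, 3, 1],
                            [1, 1, 1]])
    padded = np.pad(image, 1, mode="edge")
    output = np.empty_like(image)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            window = padded[i:i + 3, j:j + 3]
            # Repeat each neighbour according to its weight, then take the median.
            samples = np.repeat(window.ravel(), weights.ravel())
            output[i, j] = np.median(samples)
    return output
```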
Segmentation Module: The segmentation module is designed to isolate the region of interest in the image, particularly the brain tumor. This is achieved through a Graph Cut algorithm, a sophisticated technique for image segmentation that involves constructing a graphical model. The image is represented as a graph, where each pixel is a node, and the edges represent the relationships between adjacent pixels. The algorithm uses an s-t (source-to-sink) graph-based segmentation approach, where the algorithm minimizes an energy function to separate the tumor regions from the surrounding healthy tissues. This segmentation process is crucial for focusing subsequent analysis on the relevant tumor regions.
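The s-t formulation can be illustrated on a toy image. The sketch below builds the pixel graph with networkx and solves the max-flow/min-cut problem; the data and smoothness terms are simple placeholders for illustration, not the energy function used in the invention.

```python
import networkx as nx
import numpy as np

def graph_cut_segment(image, fg_seed=200.0, bg_seed=50.0, smoothness=10.0):
    """Toy s-t graph cut: bright pixels (near fg_seed) end up on the source side."""
    h, w = image.shape
    G = nx.DiGraph()
    src, sink = "s", "t"
    for i in range(h):
        for j in range(w):
            p = (i, j)
            # Data terms: cost of labelling the pixel background vs. tumour.
            G.add_edge(src, p, capacity=abs(float(image[i, j]) - bg_seed))
            G.add_edge(p, sink, capacity=abs(float(image[i, j]) - fg_seed))
            # Smoothness terms: penalise cutting between neighbouring pixels.
            for q in ((i + 1, j), (i, j + 1)):
                if q[0] < h and q[1] < w:
                    G.add_edge(p, q, capacity=smoothness)
                    G.add_edge(q, p, capacity=smoothness)
    _, (reachable, _) = nx.minimum_cut(G, src, sink)
    mask = np.zeros((h, w), dtype=bool)
    for node in reachable:
        if node != src:
            mask[node] = True  # pixels on the source side form the tumour segment
    return mask

# Example: a 5x5 image with a bright central blob.
toy = np.full((5, 5), 40.0)
toy[1:4, 1:4] = 210.0
print(graph_cut_segment(toy).astype(int))  # marks the central 3x3 block
```

Running the example marks the bright central block as the source-side (tumour-like) segment, mirroring how the minimum cut separates tumor pixels from the surrounding tissue.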
Feature Extraction Module: After segmentation, the feature extraction module identifies key characteristics of the tumor and surrounding tissues. These features include shape, texture, and intensity metrics, which are essential for distinguishing different types of tumors and understanding their behavior. Shape features might describe the tumor's size, boundaries, and geometry, while texture features capture patterns in the intensity of pixel values, which can be indicative of the tumor's nature. Intensity metrics provide information about the brightness or contrast of the tumor relative to the surrounding tissues, which is useful in differentiating between benign and malignant growths.
CNN Classifier: The final module is the CNN classifier, which uses the extracted features to classify the brain tumor as benign or malignant. The classifier is based on a convolutional neural network (CNN), a deep learning model particularly well-suited for image analysis tasks. The CNN consists of multiple layers: convolution layers that learn to detect various features in the image, pooling layers that reduce the spatial dimensions of the image while preserving important information, and dense layers that process the extracted features to make a final classification decision. By training the CNN on labeled MRI scans, the model learns to recognize patterns that differentiate between benign and malignant tumors, helping doctors make more accurate diagnoses and treatment decisions.
Together, these modules form a comprehensive system for brain tumor detection, with each step playing a vital role in processing the MRI images and classifying the tumor type. This system aids medical professionals in diagnosing brain tumors more accurately and efficiently, ultimately supporting better treatment planning and patient care.
The present invention describes a comprehensive method for detecting, segmenting, and classifying brain tumors from MRI scan images using advanced image processing, machine learning, and segmentation techniques. The process begins with the acquisition of MRI images, which are stored as grayscale or intensity images with a resolution of 256 x 256 pixels. These images are retrieved from a patient database, typically stored in a network, and are prepared for further processing by removing noise and artifacts using preprocessing techniques. The preprocessing stage uses a weighted median filter, which is capable of preserving edges while reducing noise, thereby ensuring that the segmentation process is not adversely affected by any image imperfections.
Once the image is preprocessed, the system applies the Graph Cut algorithm for segmentation. Graph Cut is a method that utilizes a graph-based approach to partition an image into distinct regions. The image is treated as a graph where pixels are represented as nodes, and the edges represent the relationships between adjacent pixels. The algorithm then seeks to find an optimal cut between regions representing healthy brain tissue and tumor regions based on energy minimization, ensuring accurate delineation of the tumor boundaries. This segmentation method is particularly effective for isolating the tumor from the surrounding tissue. After segmentation, the method involves extracting key features from the tumor region. Feature extraction techniques such as Local Binary Patterns (LBP) and Scale-Invariant Feature Transform (SIFT) are used to capture crucial information about the tumor's texture, shape, and intensity. These features serve as the input for a deep convolutional neural network (CNN), which is trained to classify the tumor as either benign or malignant. The CNN uses a series of convolutional and pooling layers to learn complex patterns and spatial hierarchies in the image, followed by a fully connected layer to classify the tumor based on the extracted features. The output layer employs a sigmoid activation function to generate binary predictions.
This automated method enhances the sensitivity and specificity of brain tumor detection by combining advanced segmentation and classification techniques. The deep learning-based classification system enables the identification of subtle patterns that may be difficult for traditional methods to detect, allowing for more accurate diagnosis. The final step in the process involves the integration of these techniques into a user-friendly interface, making it easier for healthcare professionals to analyze the results and make informed decisions. By automating key aspects of the diagnostic process, the system aims to improve patient outcomes through earlier and more accurate detection, precise tumor localization, and personalized treatment planning.
The methodology follows a two-phase process. In the first phase, MRI brain images are sourced from a patient database. These images are preprocessed to eliminate noise and artifacts that could distort tumor detection. Techniques such as weighted median filters are used in this phase to reduce the noise in the images without compromising the sharpness of edges, which is essential for accurate segmentation. In the second phase, the preprocessed
MRI images are subjected to segmentation, with the goal of accurately identifying key tissue structures within the brain. This segmentation step is crucial for distinguishing between the tumor and the surrounding healthy tissue, which is vital for diagnosing the tumor's size, location, and potential malignancy. By applying Graph Cut-based segmentation, the system enables precise localization of the tumor regions, which is necessary for subsequent analysis and diagnosis.
The ultimate objective of this invention is to provide an automated system that enhances the processes of image enhancement, segmentation, and tumor classification, ultimately facilitating better clinical decision-making. This system is designed to be used by neurosurgeons and healthcare professionals, assisting in the early diagnosis, treatment planning, and monitoring of brain tumors. By integrating various techniques such as image processing, pattern analysis, and computer vision, the system aims to improve the efficiency, sensitivity, and specificity of brain tumor detection. In medical imaging, it is critical to extract meaningful and accurate information from MRI scans with minimal error, especially when it comes to detecting brain tumors at an early stage. The successful identification of tumor regions from the MRI images enables clinicians to track tumor growth or response to treatment, improving patient outcomes.
To achieve accurate tumor classification, the system incorporates advanced feature extraction techniques. These methods analyze various characteristics of the tumor, such as texture, shape, and intensity. Feature descriptors like Local Binary Patterns (LBP) and Scale-Invariant Feature Transform (SIFT) are used to extract these important attributes. Once the tumor region is accurately segmented and relevant features are extracted, the system employs deep learning algorithms, specifically Convolutional Neural Networks (CNNs), for classification. CNNs are particularly effective in medical imaging because they can learn intricate patterns from raw pixel data, making them suitable for classifying tumors as benign or malignant based on the extracted features. The CNN model, with multiple layers and advanced techniques like dropout and batch normalization, enhances classification accuracy and robustness.
Furthermore, the system integrates multiple techniques to provide a comprehensive solution for brain tumor detection. After feature extraction, the CNN classifier processes the data, distinguishing between different tumor types, such as benign and malignant, based on their characteristics. This binary classification output helps healthcare professionals determine the severity of the tumor and decide on an appropriate course of treatment. Validation and testing of the system using diverse MRI datasets ensure its accuracy and reliability for real-world clinical applications. The system's user-friendly interface is designed to integrate smoothly into existing clinical workflows, providing healthcare providers with a powerful tool for tumor detection, classification, and monitoring. As a result, this automated system holds the potential to transform how brain tumors are diagnosed and treated, offering significant improvements in both detection accuracy and overall patient care.
The first step in constructing the CNN is to initialize the model using the Sequential class, which allows the layers to be stacked in a linear fashion. This is done by creating an object of the class, which serves as the base for adding subsequent layers to the model. The next step involves adding the convolutional layers. The Convolution2D function is used to apply feature detectors (filters) to the input image. Each filter is designed to detect specific patterns or features, and the number of filters can be adjusted based on the complexity of the task; in this case, 256 feature detectors are used. The input shape is specified to match the dimensions of the input image: a color image is defined with three channels (RGB), while a grayscale image is treated as a single-channel 2D array. The activation function for the convolutional layers is typically ReLU (Rectified Linear Unit), which introduces non-linearity into the model by setting negative activation values to zero.
Once the convolutional layers are added, the next step is to reduce the spatial dimensions of the feature maps using pooling. Pooling layers are used to decrease the computational load and retain only the most important features. The most common pooling technique is Max Pooling, which selects the maximum value from a portion of the image covered by the filter. This operation helps in focusing on the most prominent features while reducing the spatial size of the data. In our case, we use a pool size of 2x2 to achieve dimensionality reduction while retaining critical information.
After pooling, the feature maps are flattened into a single vector using the Flatten function. This process reshapes the pooled data so that it can be passed into the fully connected layers, which are the final layers of the CNN. The Dense function is then used to add fully connected layers. These layers are responsible for making the final predictions. The number of nodes in these layers can vary and is typically determined through experimentation. In the case of binary classification (tumor or no tumor), the final output layer consists of a single node with a sigmoid activation function, which outputs a probability between 0 and 1 indicating the likelihood that a tumor is present.
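A minimal Keras sketch of the layer stack just described (256 filters, ReLU, 2x2 max pooling, flatten, dense layers, and a single sigmoid output) is shown below. The kernel size, hidden-layer width, and 256x256 single-channel input shape are assumptions for illustration; the filed specification does not fix them.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense

model = Sequential([
    Input(shape=(256, 256, 1)),               # assumed single-channel 256x256 input
    Conv2D(256, (3, 3), activation="relu"),   # 256 feature detectors, ReLU activation
    MaxPooling2D(pool_size=(2, 2)),           # 2x2 max pooling
    Flatten(),                                # reshape the pooled feature maps into a vector
    Dense(128, activation="relu"),            # assumed hidden width
    Dense(1, activation="sigmoid"),           # single-node output for binary prediction
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
# A practical model would stack several convolution/pooling blocks before
# flattening to keep the dense layers small; a single block is shown for clarity.
```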
Through these steps, the CNN model is designed to process brain MRI images and predict whether a tumor is present. The model's ability to learn from the raw image data and automatically extract relevant features makes it an effective tool for automated medical image classification. By training the network on large datasets of labeled MRI images, the CNN can accurately detect tumors, offering significant potential for improving diagnostic accuracy in clinical settings.
Fig: 2 illustrates a method for detecting brain tumor regions in MRI images using segmentation techniques.
Image Acquisition: The MRI images used in this system are obtained from patients and stored in a format compatible with MATLAB 7.0, where they are represented as grayscale or intensity images. The images are typically 256x256 pixels in size, providing a high-resolution view of the brain's structure. MRI scans are conducted using a 0.5T open interventional MRI system (Signap), with each slice having a thickness of 1mm and an in-plane resolution of 1x1mm. The acquired images are transferred through a LAN to a Linux network for further processing and analysis. These images are crucial for identifying the tumor and other brain structures, and their accurate capture is vital for the success of the entire segmentation and classification process.
Pre-processing: Preprocessing is an essential step in preparing MRI images for segmentation. During this phase, noise and artifacts that may interfere with tumor detection are removed. Noise can cause inaccuracies, making it harder for the segmentation algorithm to differentiate between healthy tissue and tumor regions. To address this, a weighted median filter is applied. This filter is particularly effective at preserving edges while reducing noise. Unlike standard filters, the weighted median filter is nonlinear and adaptive, offering better noise attenuation and preserving important image features, which is crucial for accurate tumor detection. The pre-processing phase ensures that the image is in the best possible form for segmentation.
Segmentation Using Graph Cut Algorithm: The core segmentation technique used in this invention is based on the Graph Cut algorithm. In this method, the MRI image is treated as a graph, where each pixel is represented as a node. These nodes are connected to one another through edges, which represent pixel relationships. The segmentation process aims to separate the tumor region from the healthy brain tissue by solving a max-flow/min-cut optimization problem. This process involves assigning a "source" node to the tumor region and a "sink" node to the healthy tissue. The Graph Cut algorithm then minimizes the energy function to find the optimal cut, effectively partitioning the image into meaningful segments. This method ensures that the tumor region is accurately isolated, enabling precise analysis and classification.
Feature Extraction: Once the tumor region is segmented, feature extraction techniques are applied to analyze the characteristics of the identified tumor. These features may include texture, shape, and intensity, all of which are important for distinguishing between different types of tumors. Local Binary Patterns (LBP) and Scale-Invariant Feature Transform (SIFT) are commonly used techniques for extracting these features. These methods capture essential patterns in the image that can then be used for classification. By quantifying these features, the system can provide a more accurate representation of the tumor, aiding in its classification and diagnosis.
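For concreteness, the snippet below computes the two descriptors named above with scikit-image and OpenCV. The LBP parameters (8 neighbours, radius 1, uniform patterns) and the use of a histogram are illustrative choices, not values taken from the application.

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_region, points=8, radius=1):
    """Texture descriptor: histogram of uniform LBP codes over the region."""
    lbp = local_binary_pattern(gray_region, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist

def sift_descriptors(gray_region):
    """Keypoint descriptors capturing local shape and intensity structure."""
    sift = cv2.SIFT_create()
    _, descriptors = sift.detectAndCompute(gray_region, None)
    return descriptors  # None if no keypoints were detected

# Hypothetical usage on an 8-bit grayscale crop of the segmented tumour region:
# texture_vec = lbp_histogram(tumour_roi)
# sift_feats = sift_descriptors(tumour_roi)
```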
Deep CNN Classifier: The final step in the process is tumor classification using a Convolutional Neural Network (CNN). The extracted features are fed into the CNN, which is a type of deep learning algorithm well-suited for image classification tasks. The CNN learns to identify patterns in the features that distinguish between benign and malignant tumors. The architecture of the CNN includes multiple convolutional layers that process the image data and pooling layers that reduce the image's spatial dimensions, retaining only the most important features. Fully connected layers then combine these features to produce the final output. The output layer uses a sigmoid activation function to classify the tumor as benign or malignant. This automated classification significantly reduces the time and effort required for diagnosis while improving accuracy.
Output and Analysis: After the classification step, the output is analyzed to determine the nature of the tumor. This analysis provides valuable information to clinicians, helping them make informed decisions regarding the appropriate treatment plan. The combination of advanced segmentation and classification techniques results in an automated system capable of reliably detecting and diagnosing brain tumors. By integrating these tools, healthcare providers can enhance the accuracy of their diagnoses, leading to more effective treatments and better patient outcomes. The system's ability to process large datasets efficiently ensures that it can be used in real-world clinical settings, where timely and accurate brain tumor diagnosis is critical.
Brain Tumor Image Classification Using Convolutional Neural Networks: CNNs are a powerful tool for image classification, especially in medical imaging, where accuracy is paramount. The CNN used in this invention is capable of identifying intricate details within the MRI images, such as the texture and shape of the tumor, by learning filters during training. These filters allow the network to focus on the most relevant features in the image while minimizing computational resources. The use of pooling layers ensures that the network captures rotational and positional invariances in the image, which is essential for
accurate classification. The fully connected layers at the end of the network consolidate the learned features and produce a final classification output. This process enhances the overall accuracy of the system, enabling it to classify tumors as benign or malignant with high confidence. The automated nature of the classification reduces the reliance on manual interpretation, which can be time-consuming and prone to errors, making it an invaluable tool for healthcare professionals.
The process of data preprocessing and feature extraction is crucial for improving the performance of convolutional neural networks (CNNs) in detecting brain tumors from MRI images. One effective technique used for extracting texture features from images is the Gray-Level Co-occurrence Matrix (GLCM). GLCM is particularly useful for analyzing the spatial relationships between pixel intensities in MRI images, helping to identify patterns that are critical for distinguishing tumor regions from surrounding tissue. Tumors often exhibit unique textural characteristics such as irregularities, heterogeneity, or variations in contrast that can be detected using this method.
To compute the GLCM, the image is analyzed by establishing spatial relationships between pairs of pixels. This involves defining parameters such as distance and angle to capture how often different pixel intensity combinations occur in proximity. The result is a matrix that quantifies the frequency of these intensity combinations. From this matrix, several texture descriptors are derived, such as contrast, energy, entropy, and homogeneity. Each descriptor provides specific information about the texture of the image. For example, contrast measures the intensity variations between neighboring pixels, energy reflects the uniformity of pixel pairs, entropy indicates the level of randomness or complexity in the image, and homogeneity quantifies the similarity between pixel pairs, particularly those near the diagonal of the GLCM.
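A possible implementation of these descriptors with scikit-image is sketched below. graycoprops does not expose entropy directly, so it is computed from the normalised matrix; the distance and angle settings are assumptions.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_patch, distances=(1,), angles=(0.0,)):
    """Contrast, energy, homogeneity, and entropy from a single GLCM (uint8 input)."""
    glcm = graycomatrix(gray_patch, distances=list(distances), angles=list(angles),
                        levels=256, symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]                       # normalised co-occurrence probabilities
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return {
        "contrast":    float(graycoprops(glcm, "contrast")[0, 0]),
        "energy":      float(graycoprops(glcm, "energy")[0, 0]),
        "homogeneity": float(graycoprops(glcm, "homogeneity")[0, 0]),
        "entropy":     float(entropy),
    }
```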
The computational process involves sliding a window over the preprocessed MRI image and calculating the GLCM for each window position. As the window moves across the image, the texture descriptors (contrast, energy, entropy, and homogeneity) are calculated for every region. This process allows for the extraction of texture information from the image, which is valuable for identifying different tissue types and pathological conditions, such as tumors. The resulting features provide insights into the spatial texture patterns present in the MRI image, enabling the detection of tumor areas based on these distinctive characteristics.
In brain tumor detection, the significance of GLCM features lies in their ability to capture subtle differences in texture that might not be visible in the raw pixel intensities. Tumors tend to exhibit distinct textural patterns, such as areas with uneven contrast or heterogeneous regions, which GLCM can effectively capture. By incorporating these features into CNN models, the system can leverage texture information to improve the accuracy of distinguishing tumor and non-tumor regions. This enhances the model's ability to detect and characterize brain tumors, contributing to more accurate diagnoses and better patient outcomes.
MATLAB is an advanced computational tool widely used in technical computing, and it plays a vital role in the implementation of deep learning techniques like Convolutional Neural Networks (CNNs), especially in medical image analysis tasks such as brain tumor detection from MRI images. The ability of MATLAB to handle complex matrix computations, visualize data, and provide an interactive programming environment makes it an ideal choice for developing a brain tumor classification system. The integration of MATLAB with deep learning algorithms such as CNNs streamlines the entire process from image preprocessing to model training, which is crucial in efficiently analyzing and classifying MRI images.
One of the key advantages of MATLAB is its efficient handling of matrix and tensor operations. Since CNNs are heavily reliant on matrix operations for tasks like convolution, pooling, and fully connected layers, MATLAB's native support for these types of computations accelerates the development and implementation of the CNN. MATLAB's Deep Learning Toolbox offers a rich set of functions for designing, training, and validating neural networks. These functions abstract away much of the complexity, allowing developers to focus on the architecture and model performance rather than low-level implementation details. Moreover, MATLAB's ability to integrate with other toolboxes, like the Image Processing Toolbox, enhances the preprocessing and feature extraction steps in the tumor detection pipeline.
In terms of image preprocessing, MATLAB simplifies the process of loading, resizing, and normalizing MRI images, which are essential steps before feeding data into the CNN. Functions like imread() allow easy image loading, while imresize() and rgb2gray() help resize and convert images to grayscale, respectively. These preprocessing tasks are vital for standardizing the input data, ensuring that the CNN receives consistent and high-quality images. MATLAB also offers robust filtering capabilities, such as imfilter() and medfilt2(), which help remove noise from the MRI scans, improving the quality of the input images and, subsequently, the performance of the CNN model.
When it comes to CNN construction, MATLAB's Deep Learning Toolbox provides a straightforward approach for defining the CNN architecture. Using built-in functions like convolution2dLayer, maxPooling2dLayer, and fullyConnectedLayer, users can design custom networks that are well-suited for image classification tasks. For instance, the convolution layers in the network apply various filters to the image, while pooling layers reduce the image size, making the network more computationally efficient without losing critical features. The final layers, such as the fully connected layers and the output layer with a sigmoid activation function, ensure the network outputs a binary classification (tumor or non-tumor). MATLAB's ability to define complex neural network layers with minimal code simplifies the process of constructing a tailored CNN architecture.
Once the CNN architecture is defined, training the model in MATLAB becomes a seamless process. With the trainNetwork() function, users can train the network on labeled MRI data, optimizing the model's weights and biases to minimize classification errors. MATLAB allows users to specify various training parameters, such as the learning rate, batch size, and the number of epochs, through the trainingOptions() function. This flexibility enables fine-tuning of the model for optimal performance. Moreover, MATLAB supports GPU acceleration, which can significantly speed up the training process, making it particularly useful for large datasets and complex models like CNNs. The easy-to-use training framework allows developers to experiment with different network configurations, improving the overall model accuracy.
MATLAB provides a comprehensive and user-friendly platform for building and deploying deep learning models like CNNs for brain tumor detection. Its high-level functions, combined with robust computational power and visualization capabilities, make it an invaluable tool for researchers and practitioners in the field of medical image analysis. Through its intuitive environment and powerful toolboxes, MATLAB streamlines the workflow of developing, training, and evaluating deep learning models, ensuring the effective detection of tumors in brain MRI scans.
Fig: 3 illustrates the segmentation of the images.
Fig 3(a): Input Image: The first image in the sequence, labeled "Input Image," represents the original MRI scan of the brain, which serves as the starting point for the analysis process. This image may be captured using various imaging techniques, and it could either be in grayscale or color, depending on the specific approach used for scanning. The MRI scan contains raw data that will be processed in subsequent stages. At this point, the image may include noise or irrelevant details that are not essential for tumor detection, which is why further processing and analysis are necessary to extract useful information.
Fig 3(b): Image Reconstruction: Following the input image, "Image Reconstruction" refers to the process where the raw MRI scan undergoes enhancement and refinement. This step is crucial for improving the quality of the image, specifically by reducing noise, enhancing key features, and making the image clearer for analysis. Reconstruction techniques, such as filtering or contrast enhancement, help highlight the areas of interest, such as the brain tumor, by improving the visibility of these features. The goal is to ensure that important details, which may be obscured by noise or blurriness in the original scan, are more clearly defined, thereby aiding the accuracy of subsequent tumor detection.
Fig 3 (c): MRI Image: The "MRI Image" in this figure likely represents a different slice or view of the brain, taken from a different angle or depth compared to the input image. While this image may visually resemble the original MRI scan shown in Fig 3(a), it provides additional perspectives or slices of the brain that are crucial for comprehensive analysis. Brain tumors can appear differently depending on their location and the orientation of the MRI scan, so having multiple views ensures a more complete assessment and enables better detection of abnormalities in different brain regions.
Fig 3 (d): Divided into 2 Segments: In this image, the MRI scan has been segmented into two distinct regions, with one possibly representing the tumor and the other indicating healthy tissue. Segmentation is a critical step in medical image analysis, as it isolates the regions of interest, such as the tumor, while excluding irrelevant areas. By clearly differentiating between the tumor and surrounding tissues, segmentation enables more accurate feature extraction and classification in the subsequent steps of the detection process. This process is essential for distinguishing between healthy brain matter and potentially pathological structures, facilitating a more precise diagnosis.
Claims:
We claim:
1. A system (100) for brain tumor detection in MRI images, comprising:
a) an image acquisition module (110) for obtaining MRI scans of the brain, configured to standardize and store images in grayscale with dimensions suitable for analysis;
b) a pre-processing module (120) configured to remove noise from the MRI image using a weighted median filter, thereby retaining essential features while enhancing image clarity;
c) a segmentation module (130) employing a Graph Cut algorithm for separating tumor regions from surrounding tissues by constructing a graphical model with pixel nodes and using s-t graph-based segmentation;
d) a feature extraction module (140) that identifies significant features within segmented regions, including shape, texture, and intensity metrics for effective classification; and
e) a CNN classifier (150) configured to analyze extracted features and classify tumor types, such as benign or malignant, using a layered neural network model with convolution, pooling, and dense layers to facilitate tumor diagnosis and treatment planning.
2. The system as claimed in claim 1, wherein the segmentation module further comprises a graphical interface for setting parameters of the Graph Cut algorithm, such as number of nodes and edges, to allow customization based on the specific MRI image properties.
3. The system as claimed in claim 1, wherein the feature extraction module incorporates multi-scale texture analysis to enhance tumor feature extraction by examining pixel intensity variations at different resolutions.
4. A method (200) of pre-processing and segmenting brain MRI images to detect tumors, the method comprising:
a) acquiring MRI brain images in grayscale format and standard resolution;
b) pre-processing the images by applying a weighted median filter to minimize noise and preserve edge information;
c) segmenting tumor regions from the pre-processed MRI images using a Graph Cut algorithm that models each pixel as a node and identifies tumor boundaries through max-flow/min-cut calculations;
d) extracting feature descriptors such as LBP or SIFT from segmented regions to capture texture, shape, and intensity characteristics of tumor structures; and
e) processing the extracted features through a convolutional neural network to classify tumor regions as benign or malignant, with classification results displayed for clinical interpretation.
5. A method for detecting brain tumor regions in MRI images using segmentation techniques, comprising:
a) acquiring a grayscale MRI brain image from a patient database, with noise and artifacts removed through a weighted median filter, thereby enhancing the image for analysis;
b) segmenting the image to detect tumor regions using an unsupervised Graph Cut algorithm, wherein a graphical model represents each pixel as a node and uses a max-flow/min-cut approach to differentiate tumor regions from non-tumor areas;
c) extracting key features from segmented regions, including texture, shape, and intensity, by applying feature descriptors such as Local Binary Patterns (LBP) or Scale-Invariant Feature Transform (SIFT); and
d) classifying the segmented image regions into benign or malignant categories through a deep convolutional neural network (CNN), which processes extracted features to provide classification outputs that aid in clinical decision-making.
e) displaying the classification results via an output display unit as a visual overlay on the MRI scan, allowing healthcare professionals to view tumor localization and classification in real time.
6. A computer-implemented system for identifying and classifying brain tumors in MRI images, the system comprising:
a) a preprocessing unit to remove noise and artifacts from MRI images by applying a weighted median filter that preserves crucial edge information;
b) a segmentation unit employing an unsupervised Graph Cut algorithm, wherein each MRI image pixel is represented as a node in a graph, and max-flow/min-cut techniques are used to delineate tumor boundaries accurately;
c) a feature extraction unit configured to generate descriptors of segmented regions, including texture and shape characteristics, using methods such as Local Binary Patterns or Scale-Invariant Feature Transform;
d) a deep learning classifier employing a convolutional neural network (CNN) with layers for convolution, pooling, flattening, and dense connections to classify tumor types, using the extracted features as input for precise tumor categorization; and
e) a user interface to display the classification results and provide actionable insights to healthcare professionals, facilitating integration into clinical workflows for brain tumor diagnosis and treatment.
7. The system as claimed in claim 6, wherein the preprocessing unit is further configured to normalize image intensities across the MRI dataset, ensuring consistent input for the segmentation and classification steps.
8. The system as claimed in claim 6, wherein the user interface includes annotation tools to allow radiologists to manually refine segmentation outputs based on visual inspection.
Documents
Name | Date |
---|---|
202441088675-COMPLETE SPECIFICATION [16-11-2024(online)].pdf | 16/11/2024 |
202441088675-DECLARATION OF INVENTORSHIP (FORM 5) [16-11-2024(online)].pdf | 16/11/2024 |
202441088675-DRAWINGS [16-11-2024(online)].pdf | 16/11/2024 |
202441088675-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [16-11-2024(online)].pdf | 16/11/2024 |
202441088675-FORM 1 [16-11-2024(online)].pdf | 16/11/2024 |
202441088675-FORM FOR SMALL ENTITY [16-11-2024(online)].pdf | 16/11/2024 |
202441088675-FORM FOR SMALL ENTITY(FORM-28) [16-11-2024(online)].pdf | 16/11/2024 |
202441088675-FORM-9 [16-11-2024(online)].pdf | 16/11/2024 |
202441088675-REQUEST FOR EARLY PUBLICATION(FORM-9) [16-11-2024(online)].pdf | 16/11/2024 |