image
image
user-login
Patent search/

PROCESS FOR EARLY DETECTION OF TUMOR

search

Patent Search in India

  • tick

    Extensive patent search conducted by a registered patent agent

  • tick

    Patent search done by experts in under 48hrs

₹999

₹399

Talk to expert

PROCESS FOR EARLY DETECTION OF TUMOR

ORDINARY APPLICATION

Published

date

Filed on 9 November 2024

Abstract

Abstract The present invention discloses a process for early detection of tumor using image processing methods integrated with deep learning algorithms. The invention employs a two-stage approach where medical images, such as MRI or CT scans, are first processed using a U-Net segmentation model to accurately isolate tumor regions. Subsequently, a Convolutional Neural Network (CNN) is utilized for feature extraction, analyzing the tumor's shape, texture, and intensity. These features are fed into a classification system to distinguish between benign and malignant tumors with high precision. The method is scalable, making it suitable for use in telemedicine, hospitals, and diagnostic centers.

Patent Information

Application ID202411086537
Invention FieldCOMPUTER SCIENCE
Date of Application09/11/2024
Publication Number47/2024

Inventors

NameAddressCountryNationality
Saloni BansalDepartment of Computer Science and Engineering, GLA University, 17km Stone, NH-2, Mathura-Delhi Road P.O. Chaumuhan, Mathura, Uttar Pradesh 281406.IndiaIndia

Applicants

NameAddressCountryNationality
GLA University, Mathura17km Stone, NH-2, Mathura-Delhi Road P.O. Chaumuhan, Mathura, Uttar Pradesh 281406IndiaIndia

Specification

Description:PROCESS FOR EARLY DETECTION OF TUMOR

Field of Invention
The present invention relates to the early detection of tumor. More particularly, a process for early detection of tumor using image processing method.

Background of the Invention
For cancer patients to receive better treatment outcomes and for longer survival, early tumor diagnosis is important. Medical imaging techniques such as edge detection and segmentation have long been utilized to aid with tumor diagnosis. They aren't necessarily the most precise or efficient, though.
Mehdy, M. M., P. Y. Ng, E. F. Shair, NI Md Saleh, and Chandima Gomes. "Artificial neural networks in image processing for early detection of breast cancer." Computational and mathematical methods in medicine 2017, no. 1 (2017): 2610628. (For this paper, medical imaging techniques have widely been in use in the diagnosis and detection of breast cancer. The drawback of applying these techniques is the large time consumption in the manual diagnosis of each image pattern by a professional radiologist. In this paper, Mammography is one of the most effective methods used in hospitals and clinics for early detection of breast cancer. It has been proven effective to reduce mortality as much as by 30%.
Svetlana V. Panasyuk, "Medical hyperspectral imaging for evaluation of tissue and tumor" (2017): PCT/9795303B2.In this work, the invention is directed to a hyperspectral imaging analysis that assists in real and near-real time assessment of biological tissue condition, viability, and type, and monitoring the above over time. Embodiments of the invention are particularly useful in surgery, clinical procedures, tissue assessment, diagnostic procedures, health monitoring, and medical evaluations.
Govinda, E. and Dutt, V.B.S.I., 2020. Artificial neural networks in UWB image processing for early detection of breast cancer. International Journal of Advanced Sciences Technology, 29(5), pp.2717-2730. The medical imaging methods have been commonly used for breast cancer diagnosis and detection. The limitation of using these techniques is the vast amount of time a qualified radiologist takes in the manual analysis of image pattern. Automated classifiers may dramatically upgrade the diagnostic process by automatically separating benign and malignant patterns in terms of both accuracy and time requirements. Artificial Neural network (ANN) plays an important role in this respect, especially in the application of UWB image processing for early breast cancer detection. GLCM technique is used and optimized using ANN. By using GLCM, 12 feature derivatives were extracted.
The problem in the prior art are intended to be solved by the present invention are discussed below: gives sophisticated image processing methods for early tumor detection in medical imaging utilizing U-Net segmentation and CNN classification more accurate, sensitive, and specific than older methods, making them ideal for early tumor detection. The capacity to precisely localize and classify tumor locations offers a possible answer to conventional approaches' problems, especially in lowering false positives and negatives. These methods may learn complicated patterns from medical imaging data and adapt to different imaging modalities.

Objectives of the Invention
The prime objective of the present invention is to provide a process for early detection of tumor.

Another object of this invention is to provide the process for early detection of tumor using image processing methods.

Another object of this invention is to provide the process for early detection of tumor where the image processing method identify tumors in their early stages when they are more treatable, helping in detecting minute changes in tissues, often before symptoms develop, leading to improved survival rates.

Another object of this invention is to provide the process for early detection of tumor where the image processing allows for the measurement of tumor size, shape, and growth over time, offers quantitative data that is valuable for treatment planning and monitoring the effectiveness of therapy.

Yet another object of this invention is to provide the process for early detection of tumor where the automated and precise tumor detection reduces the time required for diagnosis and minimizes the need for repeated imaging, lowering healthcare costs and speeding up the treatment process.
These and other objects of the present invention will be apparent from the drawings and descriptions herein. Every object of the invention is attained by at least one embodiment of the present invention.

Summary of the Invention
In one aspect of the present invention provides the process for early detection of tumor using image processing method where the advanced image processing enhances the quality and resolution of medical images (MRI, CT scans pictures), allowing for clearer visualization of potential tumor regions.

In one of the aspects, the present invention works in two modules, namely, U-Net segmentation model used for Tumor localization and CNN-based classification model distinguishing between benign and malignant Tumor regions.

In another aspect, the present invention predicts whether the tumor is benign or malignant, assisting in early and accurate diagnosis of tumor.
Brief Description of Drawings
The drawings described herein are for illustrative purposes only of selected embodiments and not all possible implementations, and are not intended to limit the scope of the present disclosure. Further objectives and advantages of this invention will be more apparent from the ensuing description when read in conjunction with the accompanying drawing and wherein:

Figure 1 illustrates the U-Net segmentation model used for Tumor localization according to an embodiment of the present invention.
Figure 2 illustrates the CNN-based classification model distinguishing between benign and malignant Tumor regions according to an embodiment of the present invention.

DETAIL DESCRIPTION OF INVENTION
Unless the context requires otherwise, throughout the specification which follow, the word "comprise" and variations thereof, such as, "comprises" and "comprising" are to be construed in an open, inclusive sense that is as "including, but not limited to".
In the following description, numerous specific details are set forth in order to provide a thorough understanding of embodiments of the present invention. It will be apparent to one skilled in the art that embodiments of the present invention may be practiced without some of these specific details.

Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
As used in this specification and the appended claims, the singular forms "a," "an," and "the" include plural referents unless the content clearly dictates otherwise. It should also be noted that the term "or" is generally employed in its sense including "and/or" unless the content clearly dictates otherwise.

The embodiments are in such detail as to clearly communicate the disclosure. However, the amount of detail offered is not intended to limit the anticipated variations of embodiments; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present disclosure as defined by the appended claims.

The headings and abstract of the invention provided herein are for convenience only and do not interpret the scope or meaning of the embodiments. Reference will now be made in detail to the exemplary embodiments of the present invention.
The present invention discloses the process for early detection of tumor using image processing with deep learning algorithms employing a two-stage approach for accurately isolate tumor regions and analyzing the tumor's shape, texture, and intensity.

In describing the preferred embodiment of the present invention, reference will be made herein to like numerals refer to like features of the invention.

According to preferred embodiment of the invention, the process for early detection of tumor using image processing comprises of two modules namely, U-Net segmentation model used and CNN-based classification model.
The present invention works on this two-stage approach where medical images, such as MRI or CT scans, are first processed using a U-Net segmentation model to accurately isolate tumor regions. Subsequently, a Convolutional Neural Network (CNN) is utilized for feature extraction, analyzing the tumor's shape, texture, and intensity. These features are fed into a classification system to distinguish between benign and malignant tumors with high precision.

According to another embodiment of the invention, referring to Figure 1, the moudle1 is the U-Net segmentation that localizes the Tumor in the following steps:
Step 1. Input MRI/CT scan: Taking input images from CT (Computerized Tomography) or MRI (Magnetic Resonance Imaging) scans is the initial stage. These medical photos, which display the brain or other body components, including any possible cancers, are the unprocessed data.
Step 2. Pre-processing Noise Reduction and Normalization. The images undergo pre-processing before analysis, which includes normalization (to equalize image intensity levels) and noise reduction (to eliminate unnecessary data or distortions). These procedures make sure the images are more consistent and clearer, which is necessary for the model to function well.
Step 3. U-net Segmentation. One kind of Convolutional neural network (CNN) intended for medical picture segmentation is called a U-Net. Here, particular regions, like a tumor, are identified and segmented out of the pre-processed pictures using the U-Net model. By differentiating potential tumor sites from normal tissues, the model learns to emphasize those locations.
Step 4. Segmented Tumor Region. A segmented tumor region is produced by the U-Net model after it has processed the image. In this stage, the likely tumor-containing region is isolated.
Step 5. Segmented Images with Tumor Region Highlighted: Finally, the processed image is presented, where the tumor region is highlighted or labeled in the image for further analysis or diagnostic purposes. The output helps clinicians identify where the tumor is located.
Step 6. Output: Segmented Images with Tumor Region Highlighted: The processed image is shown, where the tumor site is highlighted or labeled in the image for additional analysis or diagnostic purposes. Clinicians can detect the tumor with the aid of the output.

According to another embodiment of the invention, referring to Figure 2, the module 2 is the CNN-based classification model for distinguishing between benign and malignant Tumor regions, works in the following steps:
Step 1. Input Segmented Tumor: The process starts with a segmented tumor image. This image is typically generated by a segmentation model such as U-Net, which isolates the tumor region from the rest of the medical image (For MRI or CT scan).
Step 2. Feature Extraction with CNN: Once the tumor is segmented, a Convolutional Neural Network (CNN) is applied to extract relevant features from the image. CNNs are designed to automatically capture important characteristics of the image, such as texture, shape, and intensity patterns that are indicative of tumor properties.
Step 3. Feature Shape, Texture, Intensity: The CNN extracts key features from the tumor, such as its shape, texture, and intensity. These features are crucial for distinguishing between different types of tumors, as malignant and benign tumors often differ in these characteristics.
Step 4. Classification Layer: The extracted features are passed into a classification layer, which typically consists of fully connected layers that process the features to make a prediction. This layer assigns probabilities to the tumor belonging to specific categories (benign or malignant).
Step 5. Output: Classified Tumor Region: The output of the classification layer is a classified tumor region, where the system has analyzed the segmented tumor and made a prediction about its nature.
Step 6. Benign/Malignant Tumor Classification: Finally, based on the features extracted and processed, the system outputs whether the tumor is benign (non-cancerous) or malignant (cancerous). This classification helps in determining the next steps in the patient's diagnosis and treatment plan.

According to another embodiment of the invention, the process significantly enhances diagnostic accuracy, achieving 92% accuracy, 89% sensitivity, and 93% specificity, thereby reducing false positives and false negatives. The process's automated nature improves efficiency, enabling faster diagnosis and potentially early-stage tumor detection in clinical settings.

According to another embodiment of the invention, the process for early detection of tumor using image processing has one industrial use is in the development of AI-powered diagnostic tools for healthcare facilities such as hospitals, diagnostic centers, and telemedicine platforms. For Example: A company developing AI solutions for healthcare could deploy these early detection algorithms to assist radiologists in breast cancer screening, allowing earlier detection and treatment, which can lead to higher survival rates. Such a system could be marketed to hospitals and clinics to improve their diagnostic services.

According to another embodiment of the invention, the process for early detection of tumor using image processing has following key benefits:
1. Automated Diagnostics: AI-based imaging systems can be integrated into radiology departments to assist radiologists in detecting tumors earlier and more accurately. This system can automatically analyze medical images, identifying suspicious areas for further investigation.
2. Enhanced Accuracy and Speed: These systems significantly reduce the time it takes to diagnose a tumor compared to manual review by radiologists. The combined use of deep learning algorithms such as U-Net (for segmentation) and CNN (for classification) improves diagnostic accuracy and reduces the chances of human error.
3. Scalability in Telemedicine: with the growth of telemedicine, AI-powered imaging tools can be deployed in remote areas where access to highly specialized doctors may be limited. These systems can analyze images remotely and send accurate reports to medical professionals for immediate consultation.
4. Cost Efficiency: Automated tumor detection tools reduce the need for repeated diagnostic tests, lower labor costs, and increase efficiency by speeding up the diagnosis process. This translates into cost savings for healthcare providers and patients.
5. Integration into Medical Devices: Companies manufacturing MRI, CT, or ultrasound machines can incorporate these tumor detection algorithms directly into their devices, providing real-time diagnostic assistance at the point of imaging. This can lead to the creation of a new generation of "smart" medical imaging devices with built-in AI capabilities.

Although a preferred embodiment of the invention has been illustrated and described, it will at once be apparent to those skilled in the art that the invention includes advantages and features over and beyond the specific illustrated construction. Accordingly it is intended that the scope of the invention be limited solely by the scope of the hereinafter appended claims, and not by the foregoing specification, when interpreted in light of the relevant prior art.

, Claims:We Claim;
1. A process for early detection of tumor using image processing comprises of two modules: firstly an U-Net segmentation model used and secondly a CNN-based classification model, wherein the process works on the two-stage approach where medical images are first processed using the U-Net segmentation model to accurately isolate tumor regions; the Convolutional Neural Network (CNN) utilized for feature extraction, analyzing the tumor's shape, texture, and intensity, thereafter these features are fed into a classification system to distinguish between benign and malignant tumors with high precision.

2. The process for early detection of tumor using image processing as claimed in claim 1, wherein U-Net segmentation localizes the Tumor in the following steps:
Step 1. Input MRI/CT scan: initially taking input images from CT (Computerized Tomography) or MRI (Magnetic Resonance Imaging) scans, these medical photos are the unprocessed data;
Step 2. Pre-processing Noise Reduction and Normalization: The images undergo pre-processing before analysis, which includes normalization (to equalize image intensity levels) and noise reduction (to eliminate unnecessary data or distortions), ensuring that the images are more consistent and clearer, which is necessary for the model to function well;
Step 3. U-net Segmentation: the particular regions, like a tumor, are identified and segmented out of the pre-processed pictures using the U-Net model, by differentiating potential tumor sites from normal tissues, the model learns to emphasize those locations;
Step 4. Segmented Tumor Region: A segmented tumor region is produced by the U-Net model after it has processed the image, in this stage, the likely tumor-containing region is isolated;
Step 5. Segmented Images with Tumor Region Highlighted: Finally, the processed image is presented, where the tumor region is highlighted or labeled in the image for further analysis or diagnostic purposes, the output helps clinicians identify where the tumor is located;
Step 6. Output: Segmented Images with Tumor Region Highlighted: The processed image is shown, where the tumor site is highlighted or labeled in the image for additional analysis or diagnostic purposes, clinicians can detect the tumor with the aid of the output.

3. The process for early detection of tumor using image processing as claimed in claim 1, wherein the CNN-based classification model for distinguishing between benign and malignant Tumor regions, works in the following steps:
Step 1. Input Segmented Tumor: The process starts with a segmented tumor image, this image is typically generated by a segmentation model such as U-Net, which isolates the tumor region from the rest of the medical image;
Step 2. Feature Extraction with CNN: Once the tumor is segmented, a Convolutional Neural Network (CNN) is applied to extract relevant features from the image;
Step 3. Feature Shape, Texture, Intensity: The CNN extracts key features from the tumor, such as its shape, texture, and intensity, these features are crucial for distinguishing between different types of tumors, as malignant and benign tumors often differ in these characteristics;
Step 4. Classification Layer: The extracted features are passed into a classification layer, which typically consists of fully connected layers that process the features to make a prediction, this layer assigns probabilities to the tumor belonging to specific categories (benign or malignant);
Step 5. Output: Classified Tumor Region: The output of the classification layer is a classified tumor region, where the system has analysed the segmented tumor and made a prediction about its nature;
Step 6. Benign/Malignant Tumor Classification: Finally, based on the features extracted and processed, the system outputs whether the tumor is benign (non-cancerous) or malignant (cancerous), this classification helps in determining the next steps in the patient's diagnosis and treatment plan.

Documents

NameDate
202411086537-FORM 18 [02-12-2024(online)].pdf02/12/2024
202411086537-FORM-8 [14-11-2024(online)].pdf14/11/2024
202411086537-FORM-9 [11-11-2024(online)].pdf11/11/2024
202411086537-COMPLETE SPECIFICATION [09-11-2024(online)].pdf09/11/2024
202411086537-DECLARATION OF INVENTORSHIP (FORM 5) [09-11-2024(online)].pdf09/11/2024
202411086537-DRAWINGS [09-11-2024(online)].pdf09/11/2024
202411086537-EDUCATIONAL INSTITUTION(S) [09-11-2024(online)].pdf09/11/2024
202411086537-EVIDENCE FOR REGISTRATION UNDER SSI [09-11-2024(online)].pdf09/11/2024
202411086537-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [09-11-2024(online)].pdf09/11/2024
202411086537-FORM 1 [09-11-2024(online)].pdf09/11/2024
202411086537-FORM FOR SMALL ENTITY(FORM-28) [09-11-2024(online)].pdf09/11/2024
202411086537-POWER OF AUTHORITY [09-11-2024(online)].pdf09/11/2024

footer-service

By continuing past this page, you agree to our Terms of Service,Cookie PolicyPrivacy Policy  and  Refund Policy  © - Uber9 Business Process Services Private Limited. All rights reserved.

Uber9 Business Process Services Private Limited, CIN - U74900TN2014PTC098414, GSTIN - 33AABCU7650C1ZM, Registered Office Address - F-97, Newry Shreya Apartments Anna Nagar East, Chennai, Tamil Nadu 600102, India.

Please note that we are a facilitating platform enabling access to reliable professionals. We are not a law firm and do not provide legal services ourselves. The information on this website is for the purpose of knowledge only and should not be relied upon as legal advice or opinion.