Machine Learning-Based System for Automated Diagnosis of Skin Cancer Using Dermoscopic Images

ORDINARY APPLICATION

Published

Filed on 4 November 2024

Abstract

This invention presents a machine learning-based system for automated skin cancer diagnosis using dermoscopic images, designed to support early and accurate detection of malignant lesions such as melanoma, basal cell carcinoma, and squamous cell carcinoma. The system includes an image acquisition module for capturing high-resolution images, a pre-processing module for enhancing image quality, and a lesion segmentation module utilizing convolutional neural networks (CNNs) to isolate lesions from surrounding skin. The segmented image undergoes feature extraction, analyzing color distribution, texture, shape, and vascular patterns associated with malignancy. A classification module then categorizes the lesion as benign or malignant, with a probability score for specific cancer types. A feedback mechanism enables continuous learning, improving diagnostic precision with new clinical data. By automating skin cancer detection, this system facilitates faster, more accessible diagnostics, supporting clinicians in both primary care and remote settings for efficient patient management and early intervention.

Patent Information

Application ID: 202441084169
Invention Field: COMPUTER SCIENCE
Date of Application: 04/11/2024
Publication Number: 46/2024

Inventors

Name | Address | Country | Nationality
Dr. RVS Praveen | Director Product Engineering, Digital Engineering and Assurance, LTIMindtree Limited, Hyderabad, Telangana, India. | India | India
Anoop V | Professor, Artificial Intelligence and Data Science, Jyothi Engineering College, Thrissur 679531, Kerala | India | India
B Anusha Rani | Assistant Professor, Computer Science and Engineering, Vardhaman College of Engineering, Narkhuda, Nagarguda - Shamshabad Rd, Kacharam, Hyderabad, Telangana, India. | India | India
S V Juno Bella Gracia | Assistant Professor, Information Technology, Sri Sairam Engineering College, Chennai, Tamil Nadu | India | India
Sherin K | Assistant Professor, Computer Science and Engineering, St. Joseph's Institute of Technology, Chennai, Tamil Nadu | India | India
M. Ruba | Assistant Professor, Computer Science and Engineering, K. Ramakrishnan College of Engineering, Trichy, Tamil Nadu | India | India
N Deshai | Assistant Professor, Dept of IT, SRKR Engineering College, Chinna Amiram, Bhimavaram 534204, W.G. Dt, A.P., India | India | India
S. Varadharajan | Assistant Professor, Department of Computer Science and Engineering, VELS Institute of Science, Technology and Advanced Studies, Krishnapuram, Pallavaram, Chennai, Tamil Nadu 600117 | India | India
Hoyala Jataboina | Assistant Professor, CSE-AIML, LORDS Institute of Engineering and Technology, Medchal Malkajgiri, Telangana, 500040 | India | India
Dr Shivi Sharma | Assistant Professor, Computer Science Engineering, Jain University, Banglore, Karnataka. | India | India
Dr D D Sharma | Professor, Department of Agricultural Extension & Communication, MS Swaminathan School of Agriculture, Shoolini University, Solan-Oachghat-Kumarhatti Highway, Bajhol, Himachal Pradesh | India | India

Applicants

Name | Address | Country | Nationality
Dr. RVS Praveen | Director Product Engineering, Digital Engineering and Assurance, LTIMindtree Limited, Hyderabad, Telangana, India. | India | India
Anoop V | Professor, Artificial Intelligence and Data Science, Jyothi Engineering College, Thrissur 679531, Kerala | India | India
B Anusha Rani | Assistant Professor, Computer Science and Engineering, Vardhaman College of Engineering, Narkhuda, Nagarguda - Shamshabad Rd, Kacharam, Hyderabad, Telangana, India. | India | India
S V Juno Bella Gracia | Assistant Professor, Information Technology, Sri Sairam Engineering College, Chennai, Tamil Nadu | India | India
Sherin K | Assistant Professor, Computer Science and Engineering, St. Joseph's Institute of Technology, Chennai, Tamil Nadu | India | India
M. Ruba | Assistant Professor, Computer Science and Engineering, K. Ramakrishnan College of Engineering, Trichy, Tamil Nadu | India | India
N Deshai | Assistant Professor, Dept of IT, SRKR Engineering College, Chinna Amiram, Bhimavaram 534204, W.G. Dt, A.P., India | India | India
S. Varadharajan | Assistant Professor, Department of Computer Science and Engineering, VELS Institute of Science, Technology and Advanced Studies, Krishnapuram, Pallavaram, Chennai, Tamil Nadu 600117 | India | India
Hoyala Jataboina | Assistant Professor, CSE-AIML, LORDS Institute of Engineering and Technology, Medchal Malkajgiri, Telangana, 500040 | India | India
Dr Shivi Sharma | Assistant Professor, Computer Science Engineering, Jain University, Banglore, Karnataka. | India | India
Dr D D Sharma | Professor, Department of Agricultural Extension & Communication, MS Swaminathan School of Agriculture, Shoolini University, Solan-Oachghat-Kumarhatti Highway, Bajhol, Himachal Pradesh | India | India

Specification

Description: Machine Learning-Based System for Automated Diagnosis of Skin Cancer Using Dermoscopic Images

Field of the Invention
The present invention relates to medical diagnostics, specifically to a machine learning-based system for the automated detection and classification of skin cancer using dermoscopic images. The invention leverages advanced machine learning algorithms and image processing techniques to enhance diagnostic accuracy, providing early detection that can significantly improve patient outcomes.
Background of the Invention
Skin cancer remains one of the most prevalent forms of cancer globally, with millions of new cases diagnosed annually. The three primary types of skin cancer, namely melanoma, basal cell carcinoma (BCC), and squamous cell carcinoma (SCC), vary significantly in severity, prevalence, and prognosis. Melanoma, in particular, is known for its aggressive progression and high mortality rate if not detected early. Early detection is crucial across all types but is especially vital for melanoma, as timely intervention can significantly improve patient survival rates. However, traditional diagnostic practices rely heavily on manual examination methods that are often subjective, time-intensive, and dependent on the experience of the diagnosing dermatologist.
Challenges in Traditional Skin Cancer Diagnosis
Dermoscopy, a non-invasive imaging technique, has revolutionized skin cancer diagnosis by enabling clinicians to observe subsurface skin structures and patterns that are invisible to the naked eye. Despite its benefits, dermoscopy requires extensive training and expertise for accurate interpretation, creating a barrier for effective, large-scale screening and early diagnosis. Studies reveal significant variability in diagnostic accuracy among dermatologists, particularly when dealing with ambiguous or borderline cases. This variability, coupled with the ever-growing demand for dermatological services, underscores the need for automated, reliable, and scalable diagnostic solutions.
Advancements in Machine Learning for Medical Imaging
Recent advancements in artificial intelligence (AI) and, more specifically, machine learning (ML), have demonstrated transformative potential in medical imaging applications. Machine learning algorithms, particularly deep learning, are well-suited for image analysis tasks due to their ability to learn complex patterns from large datasets. Convolutional neural networks (CNNs), a subset of deep learning, have been widely adopted for image classification and segmentation in various domains, including radiology, ophthalmology, and dermatology. These networks excel in recognizing and classifying visual patterns, making them ideal for analyzing dermoscopic images of skin lesions.
In recent years, numerous studies have attempted to apply machine learning to dermatology, with several research models achieving diagnostic accuracy comparable to or even exceeding that of experienced dermatologists. Despite promising results, these studies face significant challenges in deployment. Issues such as image standardization, feature extraction for different skin types and colors, and interpretability of AI-driven results pose considerable obstacles for clinical integration. Additionally, the ability to generalize diagnostic models across different populations and geographic regions remains a challenge, as skin characteristics can vary significantly by ethnicity and environment.
Limitations of Current Automated Diagnostic Systems
Existing automated diagnostic systems, while effective in controlled research environments, lack the robustness needed for widespread clinical deployment. Most models rely on small, homogeneous datasets, which can result in biases when applied to diverse real-world populations. These systems often struggle with segmentation, as skin lesions exhibit considerable variability in size, shape, color, and texture. Many lesions also share overlapping features, making it challenging to distinguish between benign and malignant lesions accurately. Without precise lesion segmentation and feature extraction, these models are prone to errors, particularly in cases of atypical presentations.
Moreover, the diagnostic process in dermatology is nuanced, often requiring both pattern recognition and contextual analysis. For instance, melanoma may present with irregular borders and color variations, but benign lesions can exhibit similar features. A successful automated system must, therefore, be capable of analyzing and interpreting these subtle, intricate features accurately.
Need for a Comprehensive ML-Based Diagnostic Solution
To address these challenges, there is a need for an advanced, machine learning-based diagnostic system that can operate with high accuracy across diverse skin types and lesion complexities. Such a system should incorporate robust pre-processing techniques to standardize input images, minimizing the influence of environmental factors like lighting and skin pigmentation. Moreover, it should integrate advanced segmentation algorithms capable of isolating lesions with precision, thereby facilitating the extraction of relevant features that are key to accurate diagnosis.
The proposed invention fills this gap by introducing a comprehensive ML-based system that utilizes a multi-step process to diagnose skin cancer through dermoscopic images. The system's modular design includes image pre-processing, lesion segmentation, feature extraction, and classification components, each tailored to address the unique challenges of skin cancer diagnosis. Leveraging convolutional neural networks, ensemble learning, and continuous model refinement, the system offers a scalable and adaptable solution for clinical use, enhancing diagnostic reliability and providing clinicians with a valuable tool for early skin cancer detection.
Impact of Early and Accurate Skin Cancer Detection
Early detection of skin cancer is directly linked to improved survival rates and reduced treatment costs. For instance, melanoma patients diagnosed at an early stage have a five-year survival rate of over 95%, while this rate significantly drops in later stages. Automated diagnostic systems can facilitate early detection by acting as a triage tool in clinics, allowing dermatologists to prioritize high-risk cases for further examination. This technology can also empower general practitioners and healthcare providers in underserved areas to offer preliminary diagnoses, increasing access to early skin cancer screening.
The proposed invention aims to redefine the role of AI in dermatology by providing a reliable, accessible, and clinically relevant solution for skin cancer diagnosis. Through continuous learning mechanisms, the system adapts to evolving data patterns, improving its diagnostic capabilities over time and making it an invaluable asset for modern dermatology practices.
Summary of the Invention
This invention presents a Machine Learning-Based System for Automated Diagnosis of Skin Cancer Using Dermoscopic Images. Designed to support clinicians in identifying and classifying skin cancer types such as melanoma, basal cell carcinoma, and squamous cell carcinoma, the system uses advanced machine learning techniques to analyze dermoscopic images for early, accurate diagnosis.
The system consists of several key modules: (1) Image Acquisition and Pre-processing, which standardizes and enhances dermoscopic images for consistency across varying skin tones and lighting conditions; (2) Lesion Segmentation, which employs convolutional neural networks (CNNs) to precisely isolate skin lesions from surrounding tissue; (3) Feature Extraction, which identifies key lesion characteristics like color asymmetry, texture, shape, and vascular structures to detect early signs of malignancy; and (4) Classification, which uses a deep learning model or ensemble of models to categorize the lesion as malignant or benign.
A feedback and continuous learning mechanism enables the system to refine its model over time, adapting to new data and improving diagnostic accuracy. By offering automated, high-accuracy diagnostics, the system aims to make early detection accessible in clinical and remote settings, ultimately enhancing patient outcomes and reducing diagnostic workloads for healthcare providers.
Detailed Description of the Invention
This invention details a sophisticated system for the automated diagnosis of skin cancer, designed specifically to analyze dermoscopic images for early identification and classification of skin cancer types, including melanoma, basal cell carcinoma, and squamous cell carcinoma. The system employs a structured approach, integrating advanced machine learning (ML) and deep learning algorithms with a suite of image processing techniques to achieve precise, reliable, and fast diagnostics.
The system is composed of interconnected modules, each fulfilling a specific function to handle the unique challenges associated with dermoscopic image analysis. The process begins with an Image Acquisition and Pre-processing module. In this stage, high-resolution dermoscopic images of skin lesions are captured by a digital dermoscope and entered into the system. Recognizing the variability inherent in dermoscopic images due to differences in lighting conditions, skin tones, and angles, this module includes a robust pre-processing pipeline. The images undergo normalization to ensure consistent color distribution, noise reduction to minimize visual artifacts, and color correction to standardize the tonal balance across images. These pre-processing steps create a stable input, minimizing potential discrepancies and ensuring uniformity in the images that feed into subsequent analysis.
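As an illustration of this kind of pre-processing pipeline, the sketch below uses OpenCV to denoise, color-correct, and normalize an input image. The choice of a bilateral filter, CLAHE in LAB color space, and the specific parameter values are assumptions made for illustration; the specification does not prescribe particular algorithms or settings.

```python
# Illustrative pre-processing sketch (not the patented implementation):
# noise reduction, color correction, and intensity normalization.
import cv2
import numpy as np


def preprocess(image_bgr: np.ndarray) -> np.ndarray:
    """Standardize an 8-bit BGR dermoscopic image for downstream analysis."""
    # Noise reduction with an edge-preserving bilateral filter
    denoised = cv2.bilateralFilter(image_bgr, d=9, sigmaColor=75, sigmaSpace=75)

    # Color correction: equalize lightness in LAB space so images captured
    # under different lighting conditions have comparable tonal balance
    lab = cv2.cvtColor(denoised, cv2.COLOR_BGR2LAB)
    l_chan, a_chan, b_chan = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    corrected = cv2.cvtColor(cv2.merge((clahe.apply(l_chan), a_chan, b_chan)),
                             cv2.COLOR_LAB2BGR)

    # Intensity normalization to [0, 1] for the segmentation network
    return corrected.astype(np.float32) / 255.0
```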
Following pre-processing, the Lesion Segmentation Module isolates the skin lesion from the surrounding skin area using a convolutional neural network (CNN) specifically trained on annotated dermoscopic datasets. Skin lesions are inherently complex in their shape, size, color, and texture, which poses a challenge for accurate segmentation. To address this, the CNN model in this module is trained with diverse data representing a variety of skin types and lesion patterns, allowing it to generalize effectively across different patient demographics. By precisely outlining lesion boundaries, this module enhances diagnostic accuracy by focusing the analysis exclusively on the area of interest, avoiding the inclusion of surrounding skin tissue which could skew results.
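The segmentation step could be realized with a small encoder-decoder CNN, roughly as in the PyTorch sketch below. The topology, layer widths, and the class name LesionSegmenter are illustrative assumptions; the specification does not fix a particular network architecture.

```python
# Sketch of an encoder-decoder CNN that outputs a per-pixel lesion probability mask.
import torch
import torch.nn as nn


def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class LesionSegmenter(nn.Module):
    """Maps an RGB dermoscopic image to a lesion probability mask of the same size."""

    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(3, 32)
        self.pool = nn.MaxPool2d(2)
        self.enc2 = conv_block(32, 64)
        self.dec = conv_block(64, 32)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.head = nn.Conv2d(32, 1, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.enc1(x)                    # low-level features at full resolution
        x = self.enc2(self.pool(x))         # deeper features at half resolution
        x = self.up(self.dec(x))            # decode and restore resolution
        return torch.sigmoid(self.head(x))  # lesion probability per pixel
```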
Once the lesion is segmented, the Feature Extraction Module identifies and quantifies critical attributes that are most relevant for diagnosing skin cancer. This process involves extracting a range of features from the lesion, each indicative of various skin cancer characteristics. First, color analysis detects asymmetry and variations in pigmentation, key indicators of malignancy. The module examines unevenness in hue distribution, such as areas that are darker or lighter, which can suggest irregular cell growth. Second, texture analysis evaluates the structural patterns within the lesion. It considers granularity, smoothness, and other textural properties that differ between benign and malignant lesions, providing valuable insight into the lesion's pathology. Third, shape and border analysis focuses on the geometric properties of the lesion, measuring aspects like asymmetry, border irregularity, and overall shape. Irregular borders or asymmetrical shapes often correlate with malignancy, making these features critical for accurate classification. Finally, vascular structure identification is included to detect any microvascular patterns associated with skin cancers. Specific vascular formations can be unique to certain cancer types and thus are invaluable for precise classification.
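For concreteness, the sketch below computes a few ABCD-style proxies for these attributes (color variance, border compactness, and a crude left-right asymmetry score) from a segmented lesion using NumPy and scikit-image. These particular descriptors and their definitions are assumptions for illustration, not the exact feature set claimed in the patent.

```python
# Illustrative lesion descriptors computed from an RGB image in [0, 1]
# and a binary lesion mask of the same height and width.
import numpy as np
from skimage import measure


def extract_features(image: np.ndarray, mask: np.ndarray) -> dict:
    lesion_pixels = image[mask > 0]   # N x 3 color samples inside the lesion

    # Color analysis: spread of pigmentation within the lesion
    color_variance = float(lesion_pixels.var(axis=0).mean())

    # Shape/border analysis: compactness of the lesion outline (1.0 for a circle)
    props = measure.regionprops(mask.astype(int))[0]
    compactness = float((props.perimeter ** 2) / (4 * np.pi * props.area))

    # Asymmetry: a crude proxy comparing the mask with its left-right mirror
    mirrored = mask[:, ::-1]
    asymmetry = 1.0 - np.logical_and(mask, mirrored).sum() / np.logical_or(mask, mirrored).sum()

    return {"color_variance": color_variance,
            "compactness": compactness,
            "asymmetry": float(asymmetry)}
```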
With the extracted features in hand, the system moves to the Classification Module, where the diagnostic decision is made. This module employs a deep CNN model, fine-tuned on a large, labeled dataset of dermoscopic images. The CNN model processes the extracted features and calculates a likelihood score for each possible skin cancer type. The model's architecture enables it to recognize complex patterns within the image data, making it capable of accurately differentiating between melanoma, basal cell carcinoma, squamous cell carcinoma, and benign lesions. To enhance classification reliability, the system may deploy an ensemble learning approach, wherein multiple models contribute to the final decision. This ensemble technique combines the strengths of individual models, with each adding its prediction to a weighted vote, thus reducing the likelihood of misclassification and boosting diagnostic accuracy.
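A minimal sketch of the weighted-vote ensemble described above is shown below. The model objects, their predict_proba interface, and the class list are hypothetical placeholders standing in for trained CNN classifiers.

```python
# Weighted-vote ensemble over per-class probability vectors (sketch).
import numpy as np

CLASSES = ["benign", "melanoma", "basal cell carcinoma", "squamous cell carcinoma"]


def ensemble_predict(models, weights, image_tensor):
    """Combine each model's probability vector using a normalized weighted vote."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()

    probs = np.zeros(len(CLASSES))
    for model, w in zip(models, weights):
        probs += w * model.predict_proba(image_tensor)   # hypothetical per-model API

    label = CLASSES[int(np.argmax(probs))]
    return label, dict(zip(CLASSES, probs.round(4).tolist()))
```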
A unique aspect of the invention is the Feedback and Model Improvement Module, designed to enable continuous learning. Each time the system processes a new dermoscopic image, it has the opportunity to refine its diagnostic model. Clinically validated results are periodically fed back into the model as training data, allowing the system to adapt to new lesion patterns and emerging diagnostic nuances. This feedback mechanism ensures that the system evolves over time, improving accuracy and expanding its applicability across diverse populations and clinical contexts.
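One way to realize this feedback loop is periodic fine-tuning on newly validated cases, roughly as in the PyTorch sketch below. The dataset objects, batch size, learning rate, and epoch count are illustrative assumptions.

```python
# Sketch of periodic fine-tuning on clinically validated cases (continuous learning).
import torch
from torch.utils.data import ConcatDataset, DataLoader


def refine_model(model, base_dataset, validated_cases, epochs: int = 2):
    """Briefly re-train the classifier after appending newly validated cases."""
    loader = DataLoader(ConcatDataset([base_dataset, validated_cases]),
                        batch_size=16, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)  # small LR for a gentle update
    criterion = torch.nn.CrossEntropyLoss()

    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model
```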
The operational workflow of the system is both straightforward and efficient, aiming to deliver real-time diagnostics. Once a dermoscopic image is captured, it enters the pre-processing stage for enhancement and standardization. After segmentation and feature extraction, the classification module assesses the lesion based on the previously extracted features, generating a report that includes the probability of malignancy and suggested next steps. The system is designed to provide this information in an easily interpretable format, enabling clinicians to make informed decisions promptly.
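Tying the stages together, a minimal orchestration sketch of this workflow might look as follows. All class and function names here are hypothetical stand-ins rather than components named in the specification.

```python
# End-to-end workflow sketch: pre-process, segment, extract features, classify.
from dataclasses import dataclass

import numpy as np


@dataclass
class Diagnosis:
    label: str            # "benign" or a specific malignant type
    probabilities: dict   # per-type probability scores


class SkinCancerPipeline:
    def __init__(self, preprocessor, segmenter, feature_extractor, classifier):
        self.preprocessor = preprocessor
        self.segmenter = segmenter
        self.feature_extractor = feature_extractor
        self.classifier = classifier

    def diagnose(self, dermoscopic_image: np.ndarray) -> Diagnosis:
        image = self.preprocessor(dermoscopic_image)    # standardize the raw capture
        mask = self.segmenter(image)                    # isolate the lesion
        features = self.feature_extractor(image, mask)  # color/texture/shape/vessel descriptors
        label, probs = self.classifier(features)        # benign/malignant with per-type scores
        return Diagnosis(label=label, probabilities=probs)
```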
This invention holds several key advantages. By automating the skin cancer diagnostic process, it minimizes reliance on specialist dermatological expertise, making early detection more accessible, particularly in resource-constrained settings. The system also reduces diagnostic times, allowing clinicians to focus on high-priority cases and improving overall patient management efficiency. Importantly, the continuous learning mechanism ensures that the system remains at the forefront of diagnostic capabilities, adapting to changes in skin cancer presentation and expanding its diagnostic range to include rare and atypical cases.
The proposed machine learning-based system for skin cancer diagnosis presents a powerful tool in modern dermatology, promising high accuracy, adaptability, and accessibility in clinical and remote settings. It represents a substantial advancement in the use of AI for medical diagnostics, empowering healthcare providers with a reliable, fast, and clinically valuable resource for early skin cancer detection.
The system begins by accepting high-resolution dermoscopic images of skin lesions, typically acquired through a digital dermoscope. These images serve as the foundation for the diagnostic process. Due to the variability of input conditions such as lighting, skin tones, and image quality, the system's first step involves an extensive Pre-processing Phase. In this phase, images are normalized to ensure consistent brightness and contrast across all input data. Color correction techniques adjust the color tones to eliminate any unwanted hues, and noise reduction filters help to remove image artifacts, enhancing the image's clarity and making it optimal for further analysis. Through these pre-processing steps, the system standardizes the input data, which improves the accuracy and reliability of subsequent stages.
Once pre-processed, the images are directed to the Lesion Segmentation Phase, where a convolutional neural network (CNN) is employed to isolate the lesion area from the surrounding skin. This step is crucial, as it ensures that only relevant lesion features are analyzed. To achieve accurate segmentation, the CNN has been trained on a diverse dataset containing a variety of skin lesion types and appearances, allowing it to identify edges and contours of lesions effectively, regardless of variations in skin type or lesion presentation. By accurately distinguishing the lesion from the background, the segmentation process provides a clearly defined region of interest, which is then passed on to the next phase of the system.
Following segmentation, the system enters the Feature Extraction Phase, where it analyzes the segmented lesion for specific characteristics that can aid in distinguishing between benign and malignant lesions. The system evaluates several key features, starting with color distribution within the lesion. It examines any asymmetrical color patterns, including variations in pigment intensity, which are often indicative of malignant growth. Next, it assesses texture features, identifying smoothness or irregularities in the lesion's surface, which further helps to differentiate between benign and malignant conditions. The system also calculates the shape and border characteristics of the lesion, as irregular shapes and jagged, uneven borders are common in malignancies. Additionally, the system examines vascular structures, detecting any unique vascular patterns that may suggest particular types of skin cancer. This extraction of relevant features ensures that the classifier receives a detailed, high-quality representation of the lesion for accurate classification.
In the subsequent Classification Phase, the system leverages deep learning to make diagnostic determinations. A convolutional neural network (CNN), specifically trained on an extensive, labeled dataset of dermoscopic images, processes the extracted features. The CNN architecture allows the system to capture intricate patterns and relationships within the data, which are crucial for distinguishing between different types of skin lesions. The classifier assigns a probability score for each category, determining whether the lesion is likely to be benign or malignant and, if malignant, suggesting a probable type such as melanoma, basal cell carcinoma, or squamous cell carcinoma. The CNN's deep layers facilitate a nuanced understanding of the lesion's visual characteristics, leading to a highly accurate diagnostic output.
To further enhance reliability, the system may use an ensemble learning approach by combining multiple CNN models, each offering an independent assessment of the lesion. This ensemble approach averages the predictions or uses a weighted voting system to reach a final decision, reducing potential biases and increasing diagnostic confidence. By aggregating multiple model outputs, the system achieves a balanced diagnostic result, which is less prone to errors caused by any single model's limitations.
The final diagnostic output is then presented to the user in a comprehensive report format. This report includes the lesion classification, probability scores for each type of skin cancer, and visual markers highlighting significant features observed in the lesion. The system aims to communicate these results in an accessible way, helping clinicians interpret the findings quickly and accurately, thereby enhancing their decision-making process.
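The report described here could be serialized as a simple structured document, for example along the lines of the sketch below. The field names, the 0.5 feature-highlighting threshold, and the recommendation text are illustrative assumptions, not part of the specification.

```python
# Sketch of assembling the diagnostic report as a JSON document.
import json


def build_report(label: str, probabilities: dict, features: dict) -> str:
    report = {
        "classification": label,
        "probability_scores": probabilities,
        # flag descriptors whose (normalized) value exceeds an illustrative threshold
        "highlighted_features": {k: v for k, v in features.items() if v > 0.5},
        "recommendation": ("Refer for specialist review / biopsy"
                           if label != "benign" else "Routine monitoring"),
    }
    return json.dumps(report, indent=2)
```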
One of the unique elements of this invention is its Continuous Learning and Feedback Mechanism. The system is designed to evolve over time by incorporating new, clinically validated images into its training dataset. With each confirmed diagnosis, additional data points are added to the CNN's training regimen, enabling the model to learn from new cases and continuously improve its diagnostic precision. This feedback loop allows the system to remain up-to-date with emerging trends in lesion presentation, adapting to a broader range of skin cancer cases as it accumulates more data over time.
In operation, the system is both efficient and flexible, capable of processing images and delivering diagnostic insights in real time, making it well-suited for clinical and remote applications alike. By streamlining the diagnostic workflow, the system supports dermatologists in managing their workload, as it can serve as an initial triage tool, flagging high-risk lesions for further review. This capability enables healthcare providers to prioritize cases that require urgent attention, thereby improving the overall quality and speed of patient care.
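As one possible realization of this triage behavior, the sketch below flags cases whose estimated malignancy probability exceeds a configurable cut-off and orders them so the highest-risk lesions appear first. The data layout and the 0.3 threshold are assumptions for illustration.

```python
# Threshold-based triage sketch: prioritize lesions with high malignancy probability.
def triage(cases: dict, malignancy_threshold: float = 0.3):
    """cases maps a case ID to its per-class probability dict; returns flagged cases, riskiest first."""
    flagged = [(case_id, probs) for case_id, probs in cases.items()
               if 1.0 - probs.get("benign", 0.0) >= malignancy_threshold]
    return sorted(flagged, key=lambda item: item[1].get("benign", 0.0))
```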
In summary, the invention combines sophisticated machine learning techniques with dermoscopic image analysis to provide a reliable, adaptable, and clinically valuable tool for the automated diagnosis of skin cancer. Through its structured process of image pre-processing, segmentation, feature extraction, and classification, along with continuous learning, the system delivers accurate diagnostics with minimal human intervention, making it a powerful resource in the early detection and management of skin cancer.
Claims:
We Claim:
1. A machine learning-based system for automated diagnosis of skin cancer using dermoscopic images, comprising:
◦ an image acquisition module configured to capture and input high-resolution dermoscopic images of skin lesions;
◦ an image pre-processing module operable to standardize captured images through color normalization, noise reduction, and color correction to enhance image quality;
◦ a lesion segmentation module comprising a convolutional neural network (CNN) trained to isolate the lesion area from surrounding skin, thereby generating a segmented image of the lesion;
◦ a feature extraction module configured to analyze the segmented image and extract one or more characteristics including color distribution, texture, shape, border irregularity, and vascular structure associated with skin cancer diagnosis;
◦ a classification module comprising a deep learning model trained to process the extracted characteristics and classify the lesion as benign or malignant, with an assigned probability score for one or more skin cancer types, including melanoma, basal cell carcinoma, and squamous cell carcinoma; and
◦ a feedback mechanism that updates the deep learning model based on clinically validated inputs, enabling continuous learning to improve diagnostic accuracy over time.

2. The system of claim 1, wherein the image pre-processing module is further configured to enhance images through brightness adjustment and contrast balancing, ensuring uniform visual quality across diverse dermoscopic images.
3. The system of claim 1, wherein the lesion segmentation module employs multiple convolutional neural networks in an ensemble, each trained on a distinct dataset to capture a wider range of lesion presentations, thereby increasing segmentation accuracy.
4. The system of claim 1, wherein the feature extraction module includes a vascular structure detection unit specifically trained to identify microvascular patterns unique to certain skin cancer types, enhancing the specificity of the diagnostic output.
5. The system of claim 1, wherein the classification module further comprises an ensemble of machine learning models that aggregate predictions from each model through a weighted voting system to achieve a robust diagnostic result with minimized bias.
6. The system of claim 1, wherein the feedback mechanism incorporates new image data from clinically validated diagnoses at regular intervals, allowing the model to expand its diagnostic capabilities to include rare and atypical skin cancer presentations.

Documents

Name | Date
202441084169-COMPLETE SPECIFICATION [04-11-2024(online)].pdf | 04/11/2024
202441084169-DECLARATION OF INVENTORSHIP (FORM 5) [04-11-2024(online)].pdf | 04/11/2024
202441084169-FORM 1 [04-11-2024(online)].pdf | 04/11/2024
202441084169-FORM-9 [04-11-2024(online)].pdf | 04/11/2024
202441084169-POWER OF AUTHORITY [04-11-2024(online)].pdf | 04/11/2024
202441084169-REQUEST FOR EARLY PUBLICATION(FORM-9) [04-11-2024(online)].pdf | 04/11/2024
