
DESIGN AUGMENTED INTELLIGENCE BASED SKIN CANCER CLASSIFICATION AND PREDICTION FRAMEWORK ANALYSIS

ORDINARY APPLICATION

Published

Filed on 13 November 2024

Abstract

Skin diseases, disorders, and deficiencies pose fundamental challenges to human health; they have become increasingly prevalent over recent decades and can progress to cancer. This spectrum of ailments, ranging from benign conditions to potentially life-threatening cancers, presents a significant global health challenge. The accurate classification of these diseases and the timely prediction of malignant transformation are critical for effective diagnosis and treatment planning. In this work, we present an approach that integrates multimodal data to enhance the precision of skin disease classification and improve the prediction of cancerous transformation. Utilizing the HAM10000 metadata dataset, which includes diverse skin lesions, together with 100 high-resolution images of Squamous Cell Carcinoma (SCC), we employ pre-trained multi-model CNNs to extract intricate patterns and features. By amalgamating information from these heterogeneous sources and applying advanced machine learning techniques, the results underscore the effectiveness of this integrative methodology.

Patent Information

Application ID: 202441087380
Invention Field: BIO-MEDICAL ENGINEERING
Date of Application: 13/11/2024
Publication Number: 47/2024

Inventors

Name: Mrs. Jujjuru Vijayasree | Address: Assistant Professor, Department of Computer Science and Engineering, Anurag Engineering College, Ananthagiri (V&M), Suryapet - 508206, Telangana, India | Country: India | Nationality: India
Name: Mrs. Nekkanti Mownika | Address: Assistant Professor, Department of Computer Science and Engineering, Anurag Engineering College, Ananthagiri (V&M), Suryapet - 508206, Telangana, India | Country: India | Nationality: India

Applicants

Name: ANURAG ENGINEERING COLLEGE | Address: Ananthagiri (V&M), Suryapet - 508206, Telangana, India | Country: India | Nationality: India

Specification

Description: FIELD OF INVENTION
The invention lies in the field of dermatology and oncology and has the potential to transform both by offering patients with skin diseases accurate, efficient, and early diagnoses. Using this method, it was found that SCC images may warrant higher priority in skin cancer identification, with the approach achieving an accuracy of 92% on the original dataset.
BACKGROUND OF INVENTION
Skin cancer, the most prevalent form of cancer in developing nations, has seen a surge in diagnoses and predictions. With 500,000 new cases reported in the United States, it stands as the 19th most common cancer worldwide. This work introduces a method for classifying and predicting skin cancer that leverages augmented intelligence. The approach is implemented on Kaggle datasets with a ResNet-50 backbone integrated into a deep neural network framework, where ResNet-50 is roughly eight times deeper than VGG networks. The Augmented Deep Neural Networking (AuDNN) technique extracts arbitrary features and identifies Regions of Interest (RoI) for cancer region extraction and clustering. The extracted datasets undergo synchronization with multilayer attribute dependency mapping, enhancing the prediction ratio. Developed in line with Industrial IoT standards and terminology, the proposed technique establishes a reliable communication framework for effective coordination within the global instrumental networking ecosystem, employing a dual cross-reference validation technique. Notably, the technique has surpassed previous methods due to its augmented intelligence-based Deep Neural Network (DNN) multidimensional mapping, achieving 93.26% accuracy in the rational computation of skin cancer classification and prediction.
The patent application number 201921049460 discloses a prediction theory that cancer, asthma, arthritis, diabetes, atherosclerosis, allergic rhinitis, skin allergies/skin inflammatory disorders, and leukoderma are all caused by the protozoan Entamoeba histolytica accompanied by a rise in ESR, and are completely curable by IV metronidazole/tinidazole plus IV ciprofloxacin/levofloxacin, or their oral equivalents, at an effective therapeutic dose.
The patent application number 201941044048 discloses the synthesis of zinc nanoparticles reduced with dried Bikki seed against a skin cancer cell line.
The patent application number 201921032495 discloses a novel topical herbal cream for the treatment of psoriasis and related skin disorders.
The patent application number 201917013575 discloses a patch comprising potassium permanganate for the treatment of skin disorders.
The patent application number 201917013316 discloses the sublingual or buccal administration of DIM for the treatment of skin diseases.
SUMMARY
Skin cancer can develop when skin cells are damaged, often through excessive exposure to the ultraviolet (UV) radiation in sunlight. Despite being a major global health concern, skin cancer diagnoses are not as consistently reported as other types of cancer. However, the World Health Organization (WHO) notes that skin cancer accounts for one-third of all cancer diagnoses. Exploring ways to combat skin cancer using computer vision and deep learning offers significant advantages. These technologies allow for early and precise detection by automating the analysis of extensive medical data, particularly dermoscopic images. Incorporating explainable AI technology alongside advanced computer vision and deep learning techniques enhances interpretability, transparency, and reliability. This research focuses on classifying cancer images accurately while providing a thorough understanding of the model's decision-making process. The study utilized the HAM10000 dataset and applied four pre-trained CNN models (XceptionNet, EfficientNetV2S, EfficientNetV2M, and InceptionResNetV2), achieving an accuracy rate of 88.72%. Image augmentation and various performance measures were employed to assess the models' efficiency, with explainable AI methods such as SmoothGrad and Faster Score-CAM enhancing result interpretability. The research contributes to the development of a skin cancer detection technique, addressing dataset imbalance and utilizing image augmentation for evaluation.
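
As a rough illustration of how the four pre-trained backbones named above can be used for transfer learning, the following sketch loads them from tf.keras.applications with a small softmax head. The input size, frozen ImageNet weights, and seven-class output are assumptions for illustration, not details taken from the specification.

```python
# Sketch only: loading the four pre-trained CNN backbones named above via
# transfer learning. Input size (224x224), frozen ImageNet weights, and a
# seven-class softmax head are illustrative assumptions.
import tensorflow as tf

BACKBONES = {
    "xception": tf.keras.applications.Xception,
    "efficientnetv2s": tf.keras.applications.EfficientNetV2S,
    "efficientnetv2m": tf.keras.applications.EfficientNetV2M,
    "inceptionresnetv2": tf.keras.applications.InceptionResNetV2,
}

def build_classifier(name, input_shape=(224, 224, 3), num_classes=7):
    base = BACKBONES[name](include_top=False, weights="imagenet",
                           input_shape=input_shape, pooling="avg")
    base.trainable = False  # keep ImageNet features fixed during initial training
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(base.output)
    return tf.keras.Model(base.input, outputs)

model = build_classifier("xception")
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```
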
DETAILED DESCRIPTION OF INVENTION
In this work, we present an intelligent system designed to diagnose a variety of skin diseases, including Atopic Dermatitis, Basal Cell Carcinoma, Benign Keratosis-like Lesions, Eczema, Melanocytic Nevi, Melanoma, Psoriasis, Seborrheic Keratoses, Tinea (Ringworm), Warts, and Molluscum. The proposed system differs from existing methods by working with multiple modalities, incorporating both text and image data. While previous systems relied on single-modality, data-driven methods for skin cancer prediction with limited accuracy, our approach uses a model trained on the HAM10000 dataset plus 100 SCC images to predict skin cancer with deep neural networks. The dataset comprises text files providing information about each image, including attributes such as cell type, gender, affected area, image size, image name, and patient age. An Artificial Neural Network (ANN) is used to integrate the two modalities, combining text and image data. Data preprocessing involves resizing images to (100, 125, 3) for efficient processing in the proposed model. Exploratory data analysis is conducted to detect errors, identify outliers, understand relationships, and find patterns within the dataset.
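
A minimal sketch of the preprocessing step described above, assuming the standard HAM10000 metadata CSV and a local image folder (the file names, folder path, and [0, 1] pixel scaling are assumptions); it resizes each lesion image to the (100, 125, 3) shape mentioned in the description.

```python
# Sketch only: reading the HAM10000 metadata and resizing each lesion image
# to (100, 125, 3). The CSV name, image folder, and [0, 1] pixel scaling are
# assumptions for illustration.
import numpy as np
import pandas as pd
from PIL import Image

metadata = pd.read_csv("HAM10000_metadata.csv")  # image_id, dx, age, sex, localization, ...

def load_image(image_id, folder="images"):
    # PIL resize takes (width, height); the target tensor is height 100, width 125
    img = Image.open(f"{folder}/{image_id}.jpg").convert("RGB").resize((125, 100))
    return np.asarray(img, dtype=np.float32) / 255.0

images = np.stack([load_image(i) for i in metadata["image_id"]])
labels = np.asarray(pd.Categorical(metadata["dx"]).codes)  # integer class labels
print(images.shape)  # (num_samples, 100, 125, 3)
```
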
Data augmentation is applied using the ImageDataGenerator class in Python with the TensorFlow library, generating real-time image augmentations during model training. The model includes Conv2D, MaxPool2D, Dropout, Flatten, and Dense layers with ReLU and Softmax activation functions, and the data are split into training and test sets with the train_test_split function using a fixed random_state before feature extraction and training.
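
The following sketch illustrates, under stated assumptions, the layer stack and split described above, reusing the `images` and `labels` arrays from the previous sketch; the filter counts, dropout rate, augmentation ranges, test fraction, and random_state value are illustrative choices, not values given in the specification.

```python
# Sketch only: the Conv2D/MaxPool2D/Dropout/Flatten/Dense stack and the
# train/test split described above, reusing `images` and `labels` from the
# previous sketch. Filter counts, dropout rate, and test fraction are assumptions.
import tensorflow as tf
from sklearn.model_selection import train_test_split

x_train, x_test, y_train, y_test = train_test_split(
    images, labels, test_size=0.2, random_state=42)

# Real-time augmentation during training, as described above
datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    rotation_range=20, horizontal_flip=True, zoom_range=0.1)

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu", input_shape=(100, 125, 3)),
    tf.keras.layers.MaxPool2D((2, 2)),
    tf.keras.layers.Dropout(0.25),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(7, activation="softmax"),  # one unit per lesion class (assumed)
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Epochs and batch size follow the values reported below for the initial model
model.fit(datagen.flow(x_train, y_train, batch_size=33),
          validation_data=(x_test, y_test), epochs=50)
```
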
The accuracy achieved by the initial single-modality model is approximately 71.75% with 50 epochs and a batch size of 33. Subsequently, a Convolutional Neural Network (CNN) is employed for pattern recognition in images, with data augmentation used to prevent overfitting. The CNN model achieves an accuracy of around 75% with 150 epochs and a batch size of 16.
Furthermore, an integrated model is designed, comprising 2 input layers, 1 Conv2D layer, 1 MaxPooling2D layer, 1 Embedding layer, 1 Flatten layer, 1 LSTM layer (64 units), 1 Concatenate layer (Flatten + LSTM), and 2 Dense layers (128 and 20 units). The final model is fitted with 60 epochs and a batch size of 16, achieving an accuracy of around 92%. The integration of text and image data enhances the effectiveness and efficiency of the proposed system for skin disease diagnosis.
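
A minimal sketch of the integrated two-branch model described above, with an image branch (Conv2D, MaxPooling2D, Flatten) and a text branch (Embedding, LSTM) concatenated into the two Dense layers; the vocabulary size, metadata sequence length, and convolution filter count are assumptions for illustration.

```python
# Sketch only: the integrated two-branch model described above. Vocabulary
# size, metadata sequence length, and convolution filter count are assumptions.
import tensorflow as tf
from tensorflow.keras import layers

# Image branch: Conv2D -> MaxPooling2D -> Flatten
img_in = layers.Input(shape=(100, 125, 3), name="image")
x = layers.Conv2D(32, (3, 3), activation="relu")(img_in)
x = layers.MaxPooling2D((2, 2))(x)
x = layers.Flatten()(x)

# Text/metadata branch: Embedding -> LSTM(64)
txt_in = layers.Input(shape=(20,), name="text")  # assumed token sequence length
t = layers.Embedding(input_dim=5000, output_dim=64)(txt_in)
t = layers.LSTM(64)(t)

# Concatenate both branches, then the two Dense layers (128 and 20 units)
merged = layers.Concatenate()([x, t])
h = layers.Dense(128, activation="relu")(merged)
out = layers.Dense(20, activation="softmax")(h)

model = tf.keras.Model(inputs=[img_in, txt_in], outputs=out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit([image_array, text_array], labels, epochs=60, batch_size=16)
```
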

Fig 1. Multi-modality System with LSTM and CNN
The experimental work was carried out on a machine with 25 GB of RAM and TensorFlow 2.14.0 installed. The dataset used for the experiment is the HAM10000 dataset, a well-known collection of high-quality images of common skin lesions. This dataset is widely utilized in dermatology and artificial intelligence research and comprises 10,015 dermoscopy images representing various types of benign and malignant skin lesions. Additionally, 100 images of Squamous Cell Carcinoma (SCC) were added to create a new dataset. This supplementary set was collected independently and combined with the HAM10000 dataset, resulting in a comprehensive dataset of around 10,115 skin images. This amalgamation enhances the diversity and richness of the dataset, making it suitable for developing and testing algorithms for skin lesion classification and diagnosis. The new dataset is visually represented in Figure 2.

Fig. 2: Visualization of skin cancer images from dataset
We utilized the HAM10000 dataset along with an additional 100 SCC images, primarily to address its pronounced class imbalance. Imbalanced data is a challenge when training deep learning models for complex tasks, as it leads to biased or skewed predictions that degrade overall performance. To mitigate this issue, data augmentation becomes crucial: it increases the sample size of under-represented classes, creating a more balanced dataset. The effectiveness of predictions from a supervised deep learning model depends on the diversity and size of the training dataset, making data balancing essential for high performance on complex image classification tasks. The ImageDataGenerator function was employed for dataset augmentation, as depicted in Figure 3. Using augmentation operations such as cropping, horizontal flipping, and rotation at various angles, we increased the dataset size and improved the separation of images into training and test sets for each class. This resulted in a new dataset containing approximately 10,115 images. Subsequently, our proposed model was applied to the training and testing tasks and evaluated for accuracy.
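
One possible way to realize the class balancing described above is to oversample under-represented classes with augmented copies; the sketch below does so with ImageDataGenerator (rotation, shifts, horizontal flips), reusing the `images` and `labels` arrays from the earlier preprocessing sketch. The oversampling loop and the per-class target count are illustrative assumptions, not the exact procedure of the specification.

```python
# Sketch only: oversampling under-represented classes with augmented copies
# (rotation, shifts, horizontal flips). Reuses `images` and `labels` from the
# earlier preprocessing sketch; the per-class target count is an assumed value.
import numpy as np
import tensorflow as tf

augmenter = tf.keras.preprocessing.image.ImageDataGenerator(
    rotation_range=30, horizontal_flip=True,
    width_shift_range=0.1, height_shift_range=0.1)

def oversample(class_images, target_count):
    """Add augmented copies until the class has at least target_count samples."""
    samples = [img for img in class_images]
    flow = augmenter.flow(class_images, batch_size=1, shuffle=True)
    while len(samples) < target_count:
        samples.append(next(flow)[0])
    return np.stack(samples)

balanced = {c: oversample(images[labels == c], target_count=1000)
            for c in np.unique(labels)}
```
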

Fig 3. Data augmentation process with images.
Table: Skin dataset with class names and percentage of images

The model's confusion matrix, depicted in Figure 7, serves as a fundamental tool for evaluating the performance of classification models. It provides a detailed breakdown of the model's predictions, including True Positives (TP), True Negatives (TN), False Positives (FP), and False Negatives (FN). The integrated confusion matrix adds valuable insight into the model's overall performance, enabling the calculation of sensitivity (recall) and specificity. Sensitivity measures the model's ability to correctly identify positive instances, such as potentially cancerous conditions, while specificity gauges its accuracy in identifying negative instances, representing benign conditions. Figure 8 illustrates the analysis of disproportionate class fractions in the dataset, which is crucial in skin disease classification and cancer prediction research. Addressing such disproportions or irregularities is essential, as they may indicate overrepresented instances of certain skin diseases, potentially influencing model training and evaluation. Analysis of the fraction of incorrectly classified samples per skin type revealed that SCC images were correctly classified 50%-60% of the time, a favorable outcome for the initial phase of skin cancer diagnosis. The proposed multi-model deep learning system demonstrated better accuracy on balanced data than on unstructured/imbalanced image data. Feature extraction focused on the features that performed well in image classification, and the model is more flexible than previous versions. During generalization, challenges such as overfitting and underfitting were encountered: the captured data include noise and specific details that may not generalize well to new data, which represents a minor limitation of the proposed system. Despite this, the overall results showcase the effectiveness and improved accuracy achieved through the proposed approach.
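
For reference, sensitivity and specificity can be derived from a confusion matrix as discussed above; the sketch below shows one way to do this with scikit-learn (an assumed dependency), using dummy labels rather than results from the study.

```python
# Sketch only: deriving per-class sensitivity (recall) and specificity from a
# confusion matrix with scikit-learn; the labels here are dummy values, not
# results from the study.
import numpy as np
from sklearn.metrics import confusion_matrix

def sensitivity_specificity(y_true, y_pred, positive_class):
    cm = confusion_matrix(y_true, y_pred)
    tp = cm[positive_class, positive_class]
    fn = cm[positive_class].sum() - tp     # missed positives
    fp = cm[:, positive_class].sum() - tp  # false alarms
    tn = cm.sum() - tp - fn - fp
    sensitivity = tp / (tp + fn)  # ability to catch positive (e.g. malignant) cases
    specificity = tn / (tn + fp)  # ability to rule out negative (benign) cases
    return sensitivity, specificity

y_true = np.array([1, 0, 1, 1, 0, 0, 1])  # 0 = benign, 1 = malignant (dummy)
y_pred = np.array([1, 0, 0, 1, 0, 1, 1])
print(sensitivity_specificity(y_true, y_pred, positive_class=1))  # (0.75, 0.666...)
```
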
DETAILED DESCRIPTION OF DIAGRAM
Fig 1. Multi-modality System with LSTM and CNN
Fig. 2: Visualization of skin cancer images from dataset
Fig 3. Data augmentation process with images.
Claims:
1. Design Augmented Intelligence Based Skin Cancer Classification and Prediction Framework Analysis claims that skin disease classification through image analysis and the prediction of skin cancer have emerged as a pivotal area of research and development in medical science.
2. Covering a wide spectrum from benign conditions to potentially life-threatening cancers, accurate and early diagnosis is crucial for effective treatment planning and patient care. Our research, centered around the integration of multimodal data to enhance skin disease classification and cancer prediction, has yielded promising and impactful results.
3. Utilizing pre-trained multi-model Convolutional Neural Networks (CNNs) on the HAM_10000 Metadata and Squamous Cell Carcinoma (SCC) images, our model demonstrated exceptional performance across key metrics.
4. The achieved accuracy of 92% attests to the model's high precision in correctly predicting instances, showcasing its reliability in clinical applications. A precision rate of 86.66% underscores the accuracy in identifying positive instances, while a recall rate of 95.16% emphasizes its effectiveness in capturing a significant proportion of actual positive cases.
5. These metrics collectively affirm the model's robustness in skin disease classification, particularly in distinguishing between benign and potentially malignant conditions. However, the specificity of 49.90% indicates room for improvement in correctly identifying negative instances, suggesting an area for future optimization to achieve a more balanced performance across positive and negative predictions.
6. The F1 Score of 90.71%, representing a harmonious balance between precision and recall, strengthens the model's potential for real-world clinical applications. Notably, the findings suggest that SCC images might hold a higher priority for skin cancer identification, supported by the impressive accuracy rate of 92% on the original dataset.
7. The integration of multimodal data sources, combined with advanced machine learning techniques, opens avenues for revolutionizing dermatology and oncology.
8. The proposed methodology has the potential to provide accurate, efficient, and early diagnoses for patients with skin diseases, contributing significantly to improved patient care and treatment planning.

Documents

Name | Date
202441087380-COMPLETE SPECIFICATION [13-11-2024(online)].pdf | 13/11/2024
202441087380-DRAWINGS [13-11-2024(online)].pdf | 13/11/2024
202441087380-FORM 1 [13-11-2024(online)].pdf | 13/11/2024
202441087380-FORM-9 [13-11-2024(online)].pdf | 13/11/2024
202441087380-POWER OF AUTHORITY [13-11-2024(online)].pdf | 13/11/2024
