
AI-Enhanced Image Segmentation Method for Real-time Medical Diagnostics Using Machine Learning Algorithms

ORDINARY APPLICATION

Published


Filed on 14 November 2024

Abstract

This invention describes an AI-enhanced image segmentation method for real-time medical diagnostics using machine learning algorithms. The method uses a dual-stage segmentation module comprising CNN and U-Net architectures to achieve accurate, real-time segmentation of medical images. A preprocessing module standardizes input data quality, and an active learning module allows continuous improvement through clinician feedback. The system is adaptable to various imaging modalities, such as MRI, CT, ultrasound, and X-ray, and supports real-time display of segmented images with annotated boundaries around key anatomical structures. This invention provides a robust, efficient solution for medical image segmentation, facilitating faster, more accurate diagnostic decisions. Accompanying Drawing: FIG. 1.

Patent Information

Application ID: 202441088304
Invention Field: COMPUTER SCIENCE
Date of Application: 14/11/2024
Publication Number: 47/2024

Inventors

Name | Designation
Dr. S. Shanthi | Professor & HoD
Dr. S. Rahamat Basha | Associate Professor
Mr. M. Sandeep | Associate Professor
Mr. Manoj Kumar Gottimukkula | Associate Professor
Ms. W. Nirmala | Associate Professor
Ms. P. Honey Diana | Associate Professor
Mr. P. Dastagiri Reddy | Assistant Professor
Mr. N. Siva Kumar | Assistant Professor
Ms. N. Bharathi | Assistant Professor

Address (all inventors): Department of Computer Science and Engineering, Malla Reddy College of Engineering & Technology (UGC-Autonomous), Maisammaguda, Dhulapally, Secunderabad, Telangana, India. Pin Code: 500100
Country: India | Nationality: India

Applicants

Name: Malla Reddy College of Engineering & Technology
Address: Department of Computer Science and Engineering, Malla Reddy College of Engineering & Technology (UGC-Autonomous), Maisammaguda, Dhulapally, Secunderabad, Telangana, India. Pin Code: 500100
Country: India | Nationality: India

Specification

Description:
[001] The present invention relates to the fields of artificial intelligence, machine learning, and medical imaging. Specifically, it provides an AI-enhanced image segmentation method that employs machine learning algorithms to segment medical images in real-time, assisting in faster and more accurate diagnostics. This invention is particularly relevant in fields such as radiology, pathology, and oncology, where precise segmentation of medical images is essential for diagnosis and treatment planning.
BACKGROUND OF THE INVENTION
[002] The following description provides the information that may be useful in understanding the present invention. It is not an admission that any of the information provided herein is prior art or relevant to the presently claimed invention, or that any publication specifically or implicitly referenced is prior art.
[003] Medical imaging technologies, including MRI, CT, and ultrasound, produce complex images that require accurate segmentation to identify and isolate specific structures, such as organs, tumors, and lesions. However, manual segmentation is labor-intensive, prone to inconsistencies, and can be slow, limiting its application in real-time diagnostics. While machine learning has been increasingly used for image segmentation, existing methods may lack the required accuracy or speed, particularly in high-stakes medical applications.
[004] Machine learning algorithms, especially deep learning models like convolutional neural networks (CNNs) and U-Net architectures, have demonstrated potential in image segmentation tasks. However, adapting these models for real-time segmentation in medical diagnostics demands optimization for speed, accuracy, and generalization across diverse imaging modalities and patient demographics. This invention proposes a system that leverages a hybrid of machine learning models, optimized for both computational efficiency and accuracy, to deliver real-time segmentation that meets the standards required in medical diagnostics.
[005] Accordingly, to overcome the aforesaid prior-art limitations, the present invention provides an AI-Enhanced Image Segmentation Method for Real-time Medical Diagnostics Using Machine Learning Algorithms. It would therefore be useful and desirable to have a system, method, and apparatus that meet the above-mentioned needs.

SUMMARY OF THE PRESENT INVENTION
[006] This invention provides an AI-enhanced image segmentation system that uses a hybrid machine learning framework for real-time segmentation of medical images. The system combines convolutional neural networks (CNNs) with U-Net and Transformer models to achieve high-accuracy segmentation while maintaining real-time performance. The framework operates by initially preprocessing the input image to standardize data quality, followed by a dual-stage segmentation process. The first stage performs coarse segmentation to identify general regions of interest (ROIs), and the second stage refines the segmentation at a higher resolution to delineate fine details.

[007] The system is adaptable to various imaging modalities, including MRI, CT, X-ray, and ultrasound, and can be tailored to target different anatomical structures, such as the brain, lungs, liver, and more. Additionally, the system supports active learning, allowing it to improve segmentation accuracy over time by incorporating feedback from clinicians. The result is a robust, high-speed segmentation system that can assist medical professionals by providing segmented images in real time, thus enabling quicker diagnostic decisions.
[008] In this respect, before explaining at least one object of the invention in detail, it is to be understood that the invention is not limited in its application to the details of the set of rules and to the arrangements of the various models set forth in the following description or illustrated in the drawings. The invention is capable of other objects and of being practiced and carried out in various ways, according to the needs of the industry. Also, it is to be understood that the phraseology and terminology employed herein are for the purpose of description and should not be regarded as limiting.
[009] These together with other objects of the invention, along with the various features of novelty which characterize the invention, are pointed out with particularity in the disclosure. For a better understanding of the invention, its operating advantages and the specific objects attained by its uses, reference should be made to the accompanying drawings and descriptive matter in which there are illustrated preferred embodiments of the invention.

BRIEF DESCRIPTION OF THE DRAWINGS
[010] The invention will be better understood and objects other than those set forth above will become apparent when consideration is given to the following detailed description thereof. Such description makes reference to the annexed drawings wherein:
FIG. 1: Block diagram of the AI-enhanced image segmentation system architecture.
FIG. 2: Flowchart illustrating the image preprocessing and data normalization steps.
FIG. 3: Schematic diagram of the dual-stage segmentation process, detailing the CNN-based coarse segmentation followed by U-Net-based fine segmentation.
FIG. 4: Flowchart showing the active learning module for continuous improvement based on clinician feedback.
FIG. 5: Example output image displaying real-time segmentation on an MRI scan, illustrating the system's ability to isolate anatomical structures with high precision.
DETAILED DESCRIPTION OF THE INVENTION
[011] While the present invention is described herein by way of example using embodiments and illustrative drawings, those skilled in the art will recognize that the invention is not limited to the embodiments or drawings described, which are not intended to represent the scale of the various components. Further, some components that may form a part of the invention may not be illustrated in certain figures, for ease of illustration, and such omissions do not limit the embodiments outlined in any way. It should be understood that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed; on the contrary, the invention is to cover all modifications, equivalents, and alternatives falling within the scope of the present invention as defined by the appended claims. As used throughout this description, the word "may" is used in a permissive sense (i.e., meaning having the potential to) rather than the mandatory sense (i.e., meaning must). Further, the words "a" or "an" mean "at least one" and the word "plurality" means "one or more" unless otherwise mentioned. Furthermore, the terminology and phraseology used herein are solely for descriptive purposes and should not be construed as limiting in scope. Language such as "including," "comprising," "having," "containing," or "involving," and variations thereof, is intended to be broad and to encompass the subject matter listed thereafter, equivalents, and additional subject matter not recited, and is not intended to exclude other additives, components, integers, or steps. Likewise, the term "comprising" is considered synonymous with the terms "including" or "containing" for applicable legal purposes. Any discussion of documents, acts, materials, devices, articles and the like is included in the specification solely for the purpose of providing a context for the present invention. It is not suggested or represented that any or all of these matters form part of the prior art base or are common general knowledge in the field relevant to the present invention.
[012] In this disclosure, whenever a composition or an element or a group of elements is preceded by the transitional phrase "comprising", it is understood that we also contemplate the same composition, element, or group of elements preceded by the transitional phrases "consisting of", "consisting", "selected from the group consisting of", "including", or "is", and vice versa.
[013] The present invention is described hereinafter by various embodiments with reference to the accompanying drawings, wherein reference numerals used in the accompanying drawing correspond to the like elements throughout the description. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiment set forth herein. Rather, the embodiment is provided so that this disclosure will be thorough and complete and will fully convey the scope of the invention to those skilled in the art. In the following detailed description, numeric values and ranges are provided for various aspects of the implementations described. These values and ranges are to be treated as examples only and are not intended to limit the scope of the claims. In addition, a number of materials are identified as suitable for various facets of the implementations. These materials are to be treated as exemplary and are not intended to limit the scope of the invention.
[014] This invention presents an AI-enhanced image segmentation system that combines a preprocessing module, a dual-stage segmentation module built on CNN and U-Net (optionally Transformer-enhanced) architectures, and an active learning module driven by clinician feedback. The system first standardizes input image quality, then performs coarse segmentation to locate regions of interest and fine segmentation to delineate their boundaries, and finally presents annotated results in real time. It is suited to applications requiring fast, accurate segmentation across imaging modalities such as MRI, CT, ultrasound, and X-ray.
[015] System Architecture (FIG. 1)
The system architecture consists of an image preprocessing module, a dual-stage segmentation module, and an active learning module. Each module plays a crucial role in enhancing segmentation accuracy and speed, allowing the system to perform real-time segmentation in medical diagnostics.
[016] Image Preprocessing Module (FIG. 2):
I. This module receives raw medical images and performs normalization to standardize pixel intensity, contrast, and resolution. This preprocessing ensures consistency across input data, which is essential for accurate machine learning predictions.
II. Data augmentation techniques, such as rotation, scaling, and mirroring, are applied to increase the model's robustness and generalization capabilities.
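The specification does not include reference code; the following is a minimal Python sketch of this module under stated assumptions: z-score intensity normalization and the rotation/mirroring augmentations named above (rotation restricted to 90-degree steps for simplicity). All function and parameter names are illustrative, not taken from the patent.

```python
# Illustrative sketch of the preprocessing module; names are assumptions.
import numpy as np

def normalize(image: np.ndarray) -> np.ndarray:
    """Z-score normalize pixel intensities so inputs from different
    scanners share a comparable intensity range."""
    mean, std = image.mean(), image.std()
    return (image - mean) / (std + 1e-8)  # epsilon guards against flat images

def augment(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Randomly apply the augmentations named in the specification:
    rotation (in 90-degree steps here) and mirroring."""
    if rng.random() < 0.5:
        image = np.rot90(image, k=int(rng.integers(1, 4)))
    if rng.random() < 0.5:
        image = np.flip(image, axis=int(rng.integers(0, 2)))
    return np.ascontiguousarray(image)

# Usage: prepare a raw scan before it enters the segmentation module.
rng = np.random.default_rng(0)
raw = rng.normal(100.0, 20.0, size=(256, 256))  # stand-in for a raw scan
ready = augment(normalize(raw), rng)
```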
[017] Dual-Stage Segmentation Module (FIG. 3):
I. The dual-stage segmentation module operates in two steps. The first stage employs a convolutional neural network (CNN) for coarse segmentation, providing a broad outline of the target structures within the image.
II. The second stage utilizes a U-Net architecture with attention mechanisms or Transformer layers to refine the segmentation, capturing fine details such as the borders of tumors or organ tissues.
III. This hybrid approach allows the system to segment large, complex images efficiently by first isolating regions of interest (ROIs) and then applying precise segmentation within those regions. This process reduces computational load while maintaining high accuracy.
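To make the coarse-to-fine idea concrete, here is a minimal PyTorch sketch: a small CNN produces a low-resolution mask, a bounding box is taken around the detected region, and a finer network (standing in for the U-Net) re-segments only that crop. The tiny architectures and threshold are illustrative assumptions, not the patent's prescribed models.

```python
# Illustrative coarse-to-fine pipeline; batch size 1 assumed for brevity.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoarseNet(nn.Module):
    """Fast, low-resolution segmenter (stage 1)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, 3, stride=2, padding=1),
        )
    def forward(self, x):                   # x: (B, 1, H, W)
        return torch.sigmoid(self.net(x))   # (B, 1, H/4, W/4) coarse mask

class FineNet(nn.Module):
    """Higher-capacity refinement network (stage 2, U-Net stand-in)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )
    def forward(self, x):
        return torch.sigmoid(self.net(x))

def segment(image, coarse, fine, thresh=0.5):
    """Run stage 1 on the whole image, then stage 2 only inside the ROI."""
    mask_lo = coarse(image)
    mask_hi = F.interpolate(mask_lo, size=image.shape[-2:],
                            mode="bilinear", align_corners=False)
    ys, xs = torch.where(mask_hi[0, 0] > thresh)
    if len(ys) == 0:                        # nothing found: return empty mask
        return torch.zeros_like(image)
    y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
    roi = image[..., y0:y1, x0:x1]
    out = torch.zeros_like(image)
    out[..., y0:y1, x0:x1] = fine(roi)      # fine segmentation only in the ROI
    return out

result = segment(torch.randn(1, 1, 256, 256), CoarseNet(), FineNet())
```

The computational saving comes from the slicing step: the expensive fine model only ever sees the ROI crop rather than the full-resolution image.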
[018] Active Learning Module (FIG. 4):
I. The system incorporates an active learning module that interacts with clinicians to improve segmentation accuracy continuously. Clinicians can provide feedback by adjusting segmented areas, and the model updates its training based on this input.
II. This iterative feedback loop allows the model to learn from clinical expertise, making it more accurate and adaptable to specific use cases over time. Active learning also allows the model to stay up-to-date with changes in imaging protocols and new diagnostic criteria.
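A minimal sketch of this clinician-in-the-loop update follows: each reviewed case whose mask a clinician corrected is collected into a buffer, and the model is periodically fine-tuned on that buffer. The update rule and hyperparameters are illustrative assumptions; the patent does not prescribe a specific training procedure.

```python
# Illustrative active-learning update; assumes the model outputs raw logits.
import torch
import torch.nn as nn

def active_learning_step(model, feedback_buffer, lr=1e-4, epochs=1):
    """Fine-tune on (image, corrected_mask) pairs from clinician feedback."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()  # expects logits, not probabilities
    model.train()
    for _ in range(epochs):
        for image, corrected_mask in feedback_buffer:
            opt.zero_grad()
            loss = loss_fn(model(image), corrected_mask)
            loss.backward()
            opt.step()
    model.eval()
    return model
```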
[019] Real-time Processing and Output (FIG. 5):
I. The final segmented image is displayed to the clinician in real time, enabling prompt diagnostic decisions. Real-time processing is achieved through optimization techniques, such as batch processing and model quantization, which reduce latency without sacrificing segmentation quality.
II. Segmented images are annotated with boundary lines around structures like organs and lesions, highlighting areas of interest for diagnostic purposes.
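Of the optimization techniques named above, model quantization is straightforward to sketch. The snippet below uses PyTorch's dynamic quantization, which mainly accelerates linear/recurrent layers on CPU; a convolution-heavy segmentation model would more likely use static or quantization-aware training, so treat this as an illustrative assumption rather than the patent's method.

```python
# Illustrative latency reduction via int8 dynamic quantization.
import torch

def quantize_for_inference(model: torch.nn.Module) -> torch.nn.Module:
    """Convert eligible layers to int8 for faster CPU inference."""
    return torch.ao.quantization.quantize_dynamic(
        model, {torch.nn.Linear}, dtype=torch.qint8
    )
```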
Workflow
[020] Data Preprocessing:
The input medical image undergoes preprocessing, where intensity normalization, contrast adjustments, and augmentation are applied. This ensures consistency across images from different sources and enhances the model's robustness.
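One common choice consistent with this step (an illustrative assumption; the specification does not fix a formula) is per-image z-score normalization,

I'(x, y) = \frac{I(x, y) - \mu}{\sigma},

where \mu and \sigma are the mean and standard deviation of the input image's pixel intensities, so every scan enters the model with zero mean and unit variance regardless of its source.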
[021] Coarse Segmentation:
I. The first stage of the dual-stage segmentation module uses a CNN to perform a fast, low-resolution segmentation. This identifies the general location of structures within the image, such as major organs, lesions, or bones.
II. Coarse segmentation is particularly useful in large or noisy images, where processing the entire image at full resolution would be computationally intensive.
[022] Fine Segmentation:
I. The second stage performs fine segmentation using a U-Net or Transformer-enhanced U-Net model. This model is trained on high-resolution data and includes attention mechanisms that help focus on relevant features in the image.
II. Fine segmentation captures intricate details, such as the contours of tumors or organ edges, with high precision.
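The patent does not specify the attention formulation; the sketch below models it on the additive attention gate of Attention U-Net (Oktay et al., 2018), a standard way to focus a U-Net's skip connections on relevant features. Channel sizes are illustrative.

```python
# Illustrative additive attention gate for the fine-segmentation stage.
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Weights encoder skip features by relevance to the decoder's gating
    signal, suppressing background and highlighting structure boundaries."""
    def __init__(self, skip_ch: int, gate_ch: int, inter_ch: int):
        super().__init__()
        self.w_skip = nn.Conv2d(skip_ch, inter_ch, kernel_size=1)
        self.w_gate = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)

    def forward(self, skip: torch.Tensor, gate: torch.Tensor) -> torch.Tensor:
        # skip and gate are assumed to share spatial size here for brevity;
        # in a full U-Net the gate is upsampled to match the skip connection.
        attn = self.psi(torch.relu(self.w_skip(skip) + self.w_gate(gate)))
        return skip * torch.sigmoid(attn)  # per-pixel attention weights

gate = AttentionGate(skip_ch=64, gate_ch=128, inter_ch=32)
out = gate(torch.randn(1, 64, 64, 64), torch.randn(1, 128, 64, 64))
```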
[023] Active Learning Integration:
The active learning module allows clinicians to review and adjust the segmented images, providing feedback to the model. This feedback is used to improve the model's performance in future cases, enhancing accuracy and adaptability over time.
[024] Output and Real-time Display:
The segmented image is displayed to the clinician, annotated with clear boundaries around relevant anatomical structures. Real-time processing ensures that clinicians can view and interpret segmentation results without delay, supporting faster decision-making.
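A minimal sketch of this annotation step follows: contours of the binary segmentation mask are traced and drawn onto the displayed scan with OpenCV. The overlay color and thickness are illustrative choices.

```python
# Illustrative boundary-annotation overlay for clinician review.
import cv2
import numpy as np

def annotate(image_gray: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Draw the mask's outer boundaries on a grayscale scan."""
    display = cv2.cvtColor(image_gray, cv2.COLOR_GRAY2BGR)
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cv2.drawContours(display, contours, -1, color=(0, 255, 0), thickness=2)
    return display

scan = np.zeros((256, 256), dtype=np.uint8)     # stand-in for a scan
mask = np.zeros((256, 256), dtype=np.uint8)
mask[80:160, 80:160] = 1                        # stand-in segmented structure
annotated = annotate(scan, mask)
```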
[025] The benefits and advantages which may be provided by the present invention have been described above with regard to specific embodiments. These benefits and advantages, and any elements or limitations that may cause them to occur or to become more pronounced are not to be construed as critical, required, or essential features of any or all of the embodiments.
[026] While the present invention has been described with reference to particular embodiments, it should be understood that the embodiments are illustrative and that the scope of the invention is not limited to these embodiments. Many variations, modifications, additions and improvements to the embodiments described above are possible. It is contemplated that these variations, modifications, additions and improvements fall within the scope of the invention.
Claims:
1. An AI-enhanced image segmentation system for real-time medical diagnostics, comprising a preprocessing module, a dual-stage segmentation module, and an active learning module, wherein the dual-stage segmentation module uses a combination of CNN and U-Net architectures to segment medical images.
2. The system of claim 1, wherein the preprocessing module normalizes pixel intensity, contrast, and resolution in medical images to ensure consistent input data quality.
3. The system of claim 1, wherein the dual-stage segmentation module comprises a CNN for coarse segmentation followed by a U-Net with attention mechanisms for fine segmentation.
4. The system of claim 1, wherein the active learning module receives feedback from clinicians, allowing the model to update its segmentation accuracy based on user-provided adjustments.
5. The system of claim 1, wherein the dual-stage segmentation module adapts to multiple imaging modalities, including MRI, CT, ultrasound, and X-ray.
6. The system of claim 1, further comprising a real-time display unit that presents segmented images to clinicians with annotated boundaries around identified structures.
7. The system of claim 3, wherein the U-Net model is enhanced with Transformer layers to improve feature extraction and segmentation accuracy.
8. The system of claim 1, wherein the preprocessing module applies data augmentation techniques, including rotation, scaling, and mirroring, to enhance model robustness.

Documents

Name | Date
202441088304-COMPLETE SPECIFICATION [14-11-2024(online)].pdf | 14/11/2024
202441088304-DECLARATION OF INVENTORSHIP (FORM 5) [14-11-2024(online)].pdf | 14/11/2024
202441088304-DRAWINGS [14-11-2024(online)].pdf | 14/11/2024
202441088304-FORM 1 [14-11-2024(online)].pdf | 14/11/2024
202441088304-FORM-9 [14-11-2024(online)].pdf | 14/11/2024
202441088304-REQUEST FOR EARLY PUBLICATION(FORM-9) [14-11-2024(online)].pdf | 14/11/2024
