SYSTEM AND METHOD FOR REAL-TIME DETECTION AND CLASSIFICATION OF DIABETIC RETINOPATHY (DR) USING A SMARTPHONE-BASED IMAGING SYSTEM
ORDINARY APPLICATION
Published
Filed on 4 November 2024
Abstract
The disclosure relates to a system and a method for real-time detection and classification of diabetic retinopathy (DR) using a smartphone-based imaging system. The method comprises receiving, from a user device, a fundus image captured by a camera coupled with a lens. The method further comprises analyzing the received image to determine its quality based on predetermined quality parameters. Furthermore, the method comprises processing the analysed image using an algorithm to detect features indicative of DR. Additionally, the method comprises classifying the severity of DR into one or more stages based on the detected features. Finally, the DR classification result is output to the user device.
Patent Information
Field | Value |
---|---|
Application ID | 202411084056 |
Invention Field | COMPUTER SCIENCE |
Date of Application | 04/11/2024 |
Publication Number | 46/2024 |
Inventors
Name | Address | Country | Nationality |
---|---|---|---|
Ms. Huma Naz | UPES, Energy acres, Bidholi Campus (248007), Dehradun, Uttarakhand. | India | India |
Dr Neelu Jyothi Ahuja | UPES, Energy acres, Bidholi Campus (248007), Dehradun, Uttarakhand. | India | India |
Applicants
Name | Address | Country | Nationality |
---|---|---|---|
UNIVERSITY OF PETROLEUM AND ENERGY STUDIES, DEHRADUN | University of Petroleum and Energy Studies, Bidholi Campus, Via Prem Nagar, Dehradun, Uttarakhand, India 248007. | India | India |
Specification
Description:
FIELD OF INVENTION
[001] The present invention relates to the field of medical imaging and diagnostic systems, specifically to systems and methods for the real-time detection and classification of Diabetic Retinopathy (DR).
BACKGROUND OF THE INVENTION
[002] Diabetic Retinopathy (DR) is a severe complication of diabetes, affecting a significant proportion of the global population. According to the 10th report of the International Diabetes Federation (IDF), 1 in 5 diabetes patients experience some degree of DR, with 103 million adults globally affected in 2020. Projections suggest this number will rise significantly, with approximately 3.2 million people expected to suffer from DR by 2030. This alarming increase, coupled with the limited healthcare resources available in many regions, underscores the critical need for early identification and management of DR to prevent progression to vision-threatening stages.
[003] Diabetic Retinopathy develops progressively, starting with non-proliferative diabetic retinopathy (NPDR), which can advance to the more severe proliferative diabetic retinopathy (PDR) if left untreated. Early detection and ongoing monitoring are essential in the NPDR phase to prevent irreversible damage. However, research indicates that a significant number of diabetes patients neglect annual eye examinations, often due to lengthy examination times, lack of symptoms, and insufficient access to retinal specialists. This issue is particularly pronounced in under-resourced regions, where retinal specialists are scarce, and patients face significant barriers to accessing healthcare services.
[004] In recent years, Artificial Intelligence (AI) and Deep Learning (DL) technologies have transformed healthcare, offering innovative solutions for improving diagnostic accuracy and efficiency in various medical domains, including ophthalmology. These AI-powered systems have demonstrated significant promise in detecting and grading DR, allowing for earlier intervention and better patient outcomes. Despite these advancements, most existing algorithms rely heavily on labeled datasets and are not optimized for real-time application in remote or resource-constrained environments. As a result, there is a pressing need for a solution that facilitates the real-time detection and grading of DR, particularly in underserved areas where access to healthcare professionals is limited.
[005] This invention addresses these challenges by introducing a novel AI-driven system designed specifically for real-time, automated grading of DR. The system leverages advanced deep learning techniques to streamline the diagnostic process, enabling timely and accurate identification of DR stages. This innovation is particularly crucial for remote areas, where it can reduce examination time, improve accessibility, and alleviate the burden on healthcare professionals.
SUMMARY OF THE INVENTION
[006] The present invention relates to a system and a method for real-time detection and classification of Diabetic Retinopathy (DR) performed by a server. The method comprises receiving, from a user device, a fundus image captured by a camera coupled with a lens. Further, the method comprises analyzing the received image to determine its quality based on predetermined quality parameters. The method further comprises processing the analysed image using an algorithm to detect features indicative of DR. Furthermore, the method comprises classifying the severity of DR into one or more stages based on the detected features. Additionally, the method comprises outputting the DR classification result to the user device.
[007] Several conventional algorithms exist for Diabetic Retinopathy (DR) detection, but the global healthcare system remains fragile, requiring extensive manual effort to label data for supervised learning processes. Additionally, the stringent regulations surrounding data sharing for medical imaging and the limited availability of experts for labeling further complicate the use of these algorithms. Many current DR detection algorithms rely on supervised learning, which could be improved with more labeled data and better real-life application. However, to date, no algorithm has been effectively deployed in real-life settings for real-time DR detection, especially in resource-limited environments where such tools are most needed. This lack of real-time solutions hinders the advancement of healthcare interventions for individuals at risk of DR.
[008] The proposed Diabetic Retinopathy (DR) detection system overcomes these challenges by providing several key advantages. Leveraging deep learning technology, the system offers a reliable second opinion through mobile devices, making it highly accessible and user-friendly. Its compatibility with any mobile phone ensures that it can be widely deployed, even in remote or resource-constrained environments. This capability addresses the growing shortage of ophthalmologists in underserved regions, enabling timely detection and intervention for patients at risk of DR.
OBJECTIVE OF THE INVENTION
[009] To combine optimized Fuzzy C-means and Convolutional Neural Networks to improve accuracy in DR detection.
[0010] To utilize a smartphone-compatible application for real-time detection.
[0011] To employ the MIIRet camera with a 20D Volk lens for fundus imaging, offering a cost-effective alternative to standard fundus cameras.
[0012] To provide a detailed classification of DR severity from Mild NPDR to PDR.
[0013] To design the system specifically for deployment in remote areas.
[0014] To implement innovative techniques to minimize the need for extensive labelled datasets.
BRIEF DESCRIPTION OF DRAWINGS
[0015] The foregoing and other features of this disclosure will become more fully apparent from the following description and appended claims, taken in conjunction with the accompanying drawings. Understanding that these drawings depict only several embodiments in accordance with the disclosure and are, therefore, not to be considered limiting of its scope, the disclosure will be described with additional specificity and detail through use of the accompanying drawings, in which:
[0016] FIG. 1 illustrates a block diagram of a system for real-time detection and classification of diabetic retinopathy, in accordance with an embodiment of the present disclosure.
[0017] FIG. 2 illustrates an exemplary diagram of the Retina Care login screen of the system, in accordance with an embodiment of the present disclosure.
[0018] FIG. 3 illustrates an exemplary diagram of the Retina Care image capture and control screen of the system, in accordance with an embodiment of the present disclosure.
[0019] FIG. 4 illustrates an exemplary diagram of a Ret Cam equipped with an iPhone and a 20D lens, and an MII Ret Cam, in accordance with an embodiment of the present disclosure.
[0020] FIG. 5 illustrates the systemic operation of the proposed unsupervised DL algorithm (FCMNN), in accordance with an embodiment of the present disclosure.
[0021] FIGs. 6A and 6B illustrate DR fundus image testing with moderate and severe grading, in accordance with an embodiment of the present disclosure.
[0022] FIG. 7 illustrates a flow diagram of the method for real-time detection and classification of diabetic retinopathy, in accordance with an embodiment of the present disclosure.
DETAILED DESCRIPTION OF DRAWINGS
[0023] As used herein, the singular forms "a", "an" and "the" include plural referents unless the context clearly dictates otherwise. In this specification, the terms "comprising", "including" and the like should not be construed as necessarily limited to the various elements or steps described in the specification; additional components or steps may be further included. Also, the terms "part", "module" and the like described in the specification mean units for processing at least one function or operation, which may be implemented in hardware or software or a combination of hardware and software.
[0024] The following description, along with the accompanying drawings, sets forth certain specific details in order to provide a thorough understanding of various disclosed embodiments. However, one skilled in the relevant art will recognize that the disclosed embodiments may be practiced in various combinations, without one or more of these specific details. In other instances, well-known structures or components that are associated with the environment of the present disclosure, including but not limited to communication systems and networks, have not been shown or described in order to avoid unnecessarily obscuring descriptions of the embodiments. Additionally, the various embodiments may be methods, systems, media, or devices. Accordingly, the various embodiments may be entirely hardware embodiments, entirely software embodiments, or embodiments combining software and hardware aspects.
[0025] Retina Care utilizes a hybrid deep learning approach, combining Optimized Fuzzy C-Means (OFCM) and Convolutional Neural Networks (CNNs) to accurately and efficiently detect Diabetic Retinopathy (DR) from unlabeled fundus images. This unsupervised deep clustering algorithm enhances detection precision, especially when labeled images are scarce and supervised learning is impractical. Unlike traditional systems that rely on conventional image processing and supervised machine learning, Retina Care's unsupervised AI-driven solution delivers superior accuracy in DR detection.
[0026] Most existing Diabetic Retinopathy (DR) detection tools rely on costly table-top fundus cameras or specialized equipment, making them difficult to access in remote or underserved areas. Retina Care offers a cost-effective, smartphone-based solution that enables on-site DR screening, providing patients in rural regions with crucial access to ophthalmic care. By pairing the MIIRet Cam with a 20D Volk lens, the system captures high-quality fundus images, effectively addressing the limitations of standard mobile cameras.
[0027] A major challenge in ophthalmic imaging is the high cost of professional-grade equipment. Retina Care addresses this issue by utilizing the MIIRet camera paired with a 20D Volk lens, offering an affordable alternative to traditional fundus cameras. This solution enables users to capture high-quality retinal images at a significantly lower cost. With its real-time image-capturing capability in a portable format, Retina Care helps clinics in remote locations overcome resource constraints.
[0028] Many existing solutions require specialized skills for operating complex imaging devices. In contrast, Retina Care has been designed with a simple, intuitive interface that allows non-specialists to adjust brightness, contrast, and image orientation directly within the mobile app, ensuring that the captured image is optimized for analysis before triggering the classification process. This feature improves usability and reduces the need for re-takes or image manipulation by skilled professionals.
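By way of illustration only, a minimal sketch of such an in-app adjustment step is given below. It assumes Python with the Pillow library; the function name and default enhancement factors are hypothetical and not taken from the patent.

```python
# Illustrative sketch: pre-analysis image adjustment, as a non-specialist
# might perform through the app's controls. Pillow's ImageEnhance is assumed;
# the factor values are hypothetical defaults, not specified by the patent.
from PIL import Image, ImageEnhance

def adjust_for_analysis(path: str, brightness: float = 1.1,
                        contrast: float = 1.2, rotate_deg: float = 0.0) -> Image.Image:
    """Apply simple brightness/contrast/orientation corrections before classification."""
    img = Image.open(path).convert("RGB")
    img = ImageEnhance.Brightness(img).enhance(brightness)  # >1.0 brightens
    img = ImageEnhance.Contrast(img).enhance(contrast)      # >1.0 adds contrast
    if rotate_deg:
        img = img.rotate(rotate_deg, expand=True)           # correct orientation
    return img
```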
[0029] While many DR detection tools require the patient to visit a well-equipped facility, Retina Care can be deployed in remote, resource-constrained environments. The mobile-based application is highly accessible, allowing healthcare workers to use it in the field and upload results in real-time. This solution bridges the gap in regions with a shortage of ophthalmologists, providing immediate support for early DR detection and guiding patients toward further medical consultation when needed.
[0030] Figure 1 illustrates a block diagram of a system (100) for real-time detection and classification of diabetic retinopathy, according to an embodiment. The diabetic retinopathy classification and real-time detection system (100) may be referred to as the "system". The system (100) is a comprehensive system designed to detect and classify the stages of Diabetic Retinopathy from retinal (fundus) images in real-time. It enables accessible, cost-effective, and efficient screening of Diabetic Retinopathy, particularly in remote or underserved locations where specialized equipment is scarce. The user device (102) in the system (100) may be configured to capture high-quality retinal images using a smartphone camera or a specialized imaging device such as the MIIRet camera combined with a lens (104). The lens may include a 20D Volk lens. It may be noted that the user device may connect wirelessly to the network (106) and may be a computing device, mobile phone, laptop, or similar device. In an additional embodiment, a plurality of user devices (not shown) may be connected to the network (106). The plurality of user devices may be represented as a first user device, a second user device, and an Nth user device.
[0031] In an embodiment, the user device (102) may be connected to a server (108) via the network (106). The server (108) may receive, via the user device (102), a set of instructions along with the fundus image captured by the camera coupled with the lens (104). Further, the server (108), at least based on the user instructions, may operate different modules of the system. The server (108) is a computer or software system that provides resources, data, services, or programs to other computers, known as clients, over the network (106). It may function as a central hub or facilitator within a networked environment, performing various roles to enhance communication, data processing, and resource management. The server (108) can take the form of different types, such as a web server, file server, database server, email server, or application server. Additionally, the network (106) may be part of or accessible through a local area network (LAN), a mobile communications network, a satellite communication network, the Internet, a public or private cloud, a hybrid cloud, a server farm, or any combination of these.
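For illustration, a minimal sketch of this server-side receive path might look as follows. Flask is an assumption (the patent names no web framework), and the /classify route, form field name, and placeholder result are hypothetical.

```python
# Hypothetical sketch of the server (108) receiving a fundus image from a
# user device (102) over the network (106). Flask is assumed; the patent
# does not specify a framework, route, or response schema.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/classify", methods=["POST"])
def classify():
    # The user device uploads the captured fundus image as multipart form data.
    file = request.files.get("fundus_image")
    if file is None:
        return jsonify({"error": "no fundus image supplied"}), 400
    image_bytes = file.read()
    # Downstream modules (quality check, feature detection, grading) would
    # run here; a fixed placeholder result stands in for them.
    result = {"grade": "Moderate NPDR", "confidence": 0.87}
    return jsonify(result)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```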
[0032] In an embodiment, the server (108) may include a communication module (110). The communication module (110) may be configured to communicate with the user device (102) via the network (106). The communication module (110) may be configured to receive a fundus image from the user device (102) and to transmit the result back to the user device (102). The communication module (110) accesses the network (106) via a wireless and/or wired connection. In some embodiments, the communication module (110) may be configured to use Frequency Division Multiple Access (FDMA), Single Carrier FDMA (SC-FDMA), Time Division Multiple Access (TDMA), Code Division Multiple Access (CDMA), Orthogonal Frequency Division Multiplexing (OFDM), Orthogonal Frequency Division Multiple Access (OFDMA), Global System for Mobile (GSM) communications, General Packet Radio Service (GPRS), Universal Mobile Telecommunications System (UMTS), cdma2000, Wideband CDMA (WCDMA), High-Speed Downlink Packet Access (HSDPA), High-Speed Uplink Packet Access (HSUPA), High-Speed Packet Access (HSPA), Long Term Evolution (LTE), LTE Advanced (LTE-A), 802.11x, Wi-Fi, Zigbee, Ultra-WideBand (UWB), 802.16x, 802.15, Home Node-B (HnB), Bluetooth, Radio Frequency Identification (RFID), Infrared Data Association (IrDA), Near-Field Communications (NFC), fifth generation (5G), New Radio (NR), any combination thereof, and/or any other currently existing or future-implemented communications standard and/or protocol without deviating from the scope of the invention. In some embodiments, the communication module (110) may include one or more antennas that are singular, arrayed, phased, switched, beamforming, beam steering, a combination thereof, and/or any other antenna configuration without deviating from the scope of the invention.
[0033] In an embodiment, an I/O (Input/Output) interface (114) of the system (100) refers to the mechanism or point through which the system exchanges data with external entities, such as users, other systems, devices, or networks. The I/O interface (114) handles the input data or signals received by the system (100) and the output data or signals transmitted from the system (100), and ensures proper communication between the system's internal components and external peripherals.
[0034] Further, the communication module (110) may be coupled with a bus (116). The bus (116) may be configured to electronically couple the different components of the server (108). The server (108) may include a memory (112) and a processor (118). In an embodiment, the bus (116) may allow the memory (112) and the processor (118) to electronically exchange the instruction set for transmitting the results back to the user device (102).
[0035] The memory (112) may store a plurality of instructions to be executed by the processor (118). In an embodiment, the memory (112) may include a dataset to store the received fundus images, both in their raw form (as captured by the user device) and processed form (after quality assessment and feature detection). In an exemplary embodiment, the stored instructions may include instructions to classify the severity of DR based on detected features. The memory (112) may be comprised of any combination of Random Access Memory (RAM), Read Only Memory (ROM), flash memory, cache, static storage such as a magnetic or optical disk, or any other types of non-transitory computer-readable media or combinations thereof. Non-transitory computer-readable media may be any available media that can be accessed by the processor(s) (118) and may include volatile media, non-volatile media, or both. The media may also be removable, non-removable, or both.
[0036] The processor (118) may be coupled to the bus (116). The processor (118) may execute the stored instruction set to carry out the logic for detection and classification of DR. The processor (118) may be any type of general or specific purpose processor, including a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), a Graphics Processing Unit (GPU), multiple instances thereof, and/or any combination thereof. The processor(s) (118) may also have multiple processing cores, and at least some of the cores may be configured to perform specific functions. Multiple parallel processing may be used in some embodiments. In certain embodiments, at least one of the processor(s) (118) may be a neuromorphic circuit that includes processing elements that mimic biological neurons. In some embodiments, neuromorphic circuits may not require the typical components of a Von Neumann computing architecture.
[0037] In an embodiment, the input module (120) may be electronically coupled to the processor (118). The processor (118) may execute an instruction set designed to enable the server (108) to receive high-quality retinal images from a smartphone camera or a specialized imaging device. The input module (120) serves as the interface between the external image-capturing device (e.g., smartphone camera or specialized imaging device) and the system's internal components. It receives the raw retinal images from the external device and passes them on to the processor (118) for further handling. When a retinal image is captured, the input module (120) transmits the image data to the processor (118) in real-time. In an embodiment, the input module (120) may receive a fundus image captured by a camera coupled with a lens (104) from a user device (102). In an exemplary embodiment, the fundus image captures the interior surface of the eye, specifically the retina, optic disc, macula, fovea, and posterior pole. It provides detailed information essential for diagnosing various eye conditions, such as Diabetic Retinopathy (DR).
[0038] In an embodiment, the determination module (122) may analyse the received image to determine its quality based on predetermined quality parameters. The determination module (122) evaluates the quality of the received retinal image. In an embodiment, the determination module (122) may also make automated adjustments, such as enhancing brightness, contrast, or sharpness. In an embodiment, the predetermined quality parameters include at least one of brightness, contrast, sharpness, or focus of the received fundus image. If minor improvements may make the image suitable for analysis, the system (100) applies these changes before passing the image to the processing stage. The determination module (122) works in conjunction with the processor (118) and the input module (120). After the input module (120) captures the image and transmits it to the system (100), the processor (118) uses the instructions provided by the determination module (122) to evaluate the image quality. Further, the processor (118) processes the analysed image using an algorithm to detect features indicative of DR. In an exemplary embodiment, CNNs are widely used in medical image analysis. For DR detection, CNNs process the retinal image in multiple layers, where each layer extracts increasingly complex features, starting with basic structures (e.g., edges) and moving on to detect more specific patterns like microaneurysms, hemorrhages, or neovascularization. Optimized Fuzzy C-Means clustering is an unsupervised machine learning algorithm that segments the retinal image by grouping pixels with similar properties (e.g., color, brightness) into clusters. This helps in identifying abnormal regions that may correspond to DR-related features, and is particularly useful when labeled datasets are unavailable. In an embodiment, the processor (118) may be configured to alert the user when a DR classification exceeds a predefined severity threshold. The processor (118) continuously compares the DR classification against a predefined severity threshold, which could be determined based on system (100) parameters or user-specific settings (e.g., age, medical history, or activity level). If the classification exceeds this threshold, the processor triggers an alert. This alert may be communicated to the user through various means, including but not limited to audible notifications via speakers, visual notifications via a display, haptic feedback such as vibrations from a wearable device, or push notifications to a connected device such as a smartphone.
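As an illustrative sketch only, the quality parameters named above could be estimated as follows. OpenCV and NumPy are assumed, and the acceptance thresholds are hypothetical; the patent does not specify numeric values.

```python
# Minimal sketch of the quality check described in [0038]: brightness,
# contrast, and sharpness/focus estimated with OpenCV and NumPy.
# All thresholds are hypothetical placeholders.
import cv2
import numpy as np

def assess_quality(path: str) -> dict:
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(path)
    brightness = float(gray.mean())                           # mean intensity, 0-255
    contrast = float(gray.std())                              # intensity spread
    sharpness = float(cv2.Laplacian(gray, cv2.CV_64F).var())  # variance of Laplacian as a focus proxy
    acceptable = 40 <= brightness <= 220 and contrast >= 25 and sharpness >= 100
    return {"brightness": brightness, "contrast": contrast,
            "sharpness": sharpness, "acceptable": acceptable}
```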
[0039] In an embodiment, the classification module (124) may classify the severity of DR into one or more stages based on the detected features. In an embodiment, the detected features indicative of DR and labelled fundus images are used to train a deep learning-based classification algorithm. Based on the type, quantity, and distribution of the detected features, the classification module evaluates the overall condition of the retina and assigns a severity level to the DR. In some cases, DR may be classified across multiple stages, as some features may represent early-stage DR while others indicate progression. The classification module (124) accounts for this and provides a comprehensive assessment of the retina's condition. The classification stages of DR comprise Mild NPDR, Moderate NPDR, Severe NPDR, and Proliferative Diabetic Retinopathy (PDR).
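Purely to make the staging concrete, here is a hypothetical rule-based stand-in for the learned grading step. The lesion-count thresholds are invented for illustration and are not the patent's classification logic.

```python
# Hypothetical mapping from detected lesion counts to the four DR stages
# named in [0039]. In the actual system the grading is learned; these
# counting rules are illustrative placeholders only.
def grade_dr(features: dict) -> str:
    """features: counts of detected lesions per type."""
    if features.get("neovascularization", 0) > 0:
        return "Proliferative Diabetic Retinopathy (PDR)"
    haemorrhages = features.get("haemorrhages", 0)
    microaneurysms = features.get("microaneurysms", 0)
    if haemorrhages >= 20:                      # hypothetical threshold
        return "Severe NPDR"
    if haemorrhages > 0 or microaneurysms >= 5:  # hypothetical threshold
        return "Moderate NPDR"
    if microaneurysms > 0:
        return "Mild NPDR"
    return "No apparent DR"
```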
[0040] In an embodiment, the output module (126) may output the DR classification result to the user device (102). The output module (126) may deliver the DR classification result in multiple forms depending on the user device's capabilities. Multiple forms may include a visual representation of the classification (e.g., charts, graphs, severity levels), text-based notifications, audio cues or spoken alerts if the user device (102) has text-to-speech capabilities, vibration or haptic feedback on wearables to signify important or critical alerts. In an embodiment, the processor (118) may be configured to apply data anonymization techniques to the received fundus image to ensure patient privacy before processing and classification.
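One common anonymization step consistent with the paragraph above is stripping embedded metadata before processing; a minimal Pillow-based sketch follows. This is an assumed technique, not one the patent specifies.

```python
# Sketch of one possible anonymization step mentioned in [0040]: dropping
# embedded metadata (EXIF, GPS, device identifiers) while keeping pixels.
# Pillow is assumed; this is not presented as the patented technique.
from PIL import Image

def anonymize(path_in: str, path_out: str) -> None:
    img = Image.open(path_in)
    clean = Image.new(img.mode, img.size)
    clean.putdata(list(img.getdata()))  # copy pixel data only, discarding metadata
    clean.save(path_out)
```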
[0041] In an embodiment, an exemplary diagram (200) of the Retina Care login screen is provided. In this embodiment, the system (100) incorporates a "retina care" feature designed to protect the user's eyes by optimizing the visual output, especially on devices with high-resolution displays (often referred to as "Retina displays" on devices like smartphones, tablets, or computers). The login screen of the system (100) is the primary access point for users and is designed to authenticate the user's identity before granting access to sensitive system data or functionalities.
[0042] In an embodiment, an exemplary diagram (300) of the Retina Care image capture and control screen is provided as part of the system (100). In an embodiment, the retina care image capture functionality ensures that the system (100) optimizes both the display and the process of capturing images, taking into account the health and comfort of the user's eyes. In this embodiment, the control screen allows users to manage the settings related to image capture, display output, and overall system functionality, with a focus on retina care.
[0043] In an embodiment, an exemplary diagram (400) is provided, showing a Ret Cam equipped with an iPhone and a 20D lens, and an MII Ret Cam, as part of the system. In an embodiment, the system (100) incorporates these two specialized components, both of which are designed for retinal imaging, offering high-quality eye health monitoring and diagnostic capabilities. These devices are used for capturing detailed images of the retina, providing crucial insights into the user's eye health as part of the overall retina care system.
[0044] In an embodiment, an exemplary diagram (500) illustrating the systemic operation of the proposed unsupervised DL algorithm (FCMNN) is provided. In an embodiment, the FCMNN (Fuzzy C-Means Neural Network) combines deep learning techniques with an enhanced version of the Fuzzy C-Means (FCM) algorithm. In this approach, a Convolutional Neural Network (CNN) is used to improve feature extraction by utilizing sub-sampling layers, convolutional filters, and pooling stages. The FCMNN is developed within the CNN framework, where modifications are applied specifically to the convolution layer, and the standard classification section is omitted. A CNN typically consists of two critical components: the convolutional layer and the fully connected layer. These components work together to extract meaningful features from the input data and enable precise classifications. However, in the creation of FCMNN, the focus is on enhancing the convolutional layer while bypassing the classification stage. Instead of classification, the clustering process is implemented using a Modified Fuzzy C-Means (FCM) algorithm, which is an improved version of the original FCM technique. The input data, whether in 1D, 2D, or 3D dimensions, is fed into the convolutional layer of the CNN, where convolutional filtering takes place. The filter size (e.g., 3×3) is chosen based on the nature of the input data. After each pooling stage, the ConNet produces gradient results using multiple convolutional filter banks. These resulting features are then passed to the FCM process. For example, if each ConNet stage generates six feature slices and there are four stages in total, the FCM process will generate a total of 24 feature slices. The number of stages in the ConNet is directly correlated with the number of convolutional filters used in the network. After gathering these feature slices, the FCM clustering proceeds as outlined below.
$$F_{VAF} = \sum_{j \in N_i,\, i \neq j} \frac{\mathrm{Mean}(W_{ij})}{\mathrm{Mean}(W_{ij}) + 1} \,\bigl\| D_{x_i} - C_j \bigr\|^2$$
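For orientation, a compact sketch of plain fuzzy C-means clustering over flattened feature slices follows, using NumPy. It omits the patent's neighbourhood weight term Mean(W_ij)/(Mean(W_ij)+1) from the objective above, so it illustrates only the base FCM step, not the modified FCM of the FCMNN.

```python
# Standard fuzzy C-means over feature-slice vectors (NumPy only).
# The patent's Modified FCM adds a neighbourhood weighting to the
# objective; this plain variant is illustrative of the base step.
import numpy as np

def fuzzy_c_means(X, n_clusters=5, m=2.0, n_iter=100, seed=0):
    """X: (n_samples, n_features) array of flattened feature slices."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], n_clusters))
    U /= U.sum(axis=1, keepdims=True)                  # memberships sum to 1 per sample
    for _ in range(n_iter):
        Um = U ** m                                    # fuzzified memberships
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]  # membership-weighted centroids
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-10
        U = 1.0 / (d ** (2.0 / (m - 1.0)))             # inverse-distance update
        U /= U.sum(axis=1, keepdims=True)              # renormalize memberships
    return centers, U
```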
[0045] In an embodiment, FIGs. 6A and 6B are provided, in accordance with an embodiment of the disclosure. In FIG. 6A, DR fundus image testing with moderate and severe grading (600A) is provided as part of the system (100). The system (100) is equipped to test fundus images for signs of diabetic retinopathy, an eye condition that affects the blood vessels of the retina and can lead to vision loss if left untreated. The testing process in FIG. 6A specifically handles images that have been graded as moderate and severe, indicating different levels of disease progression.
[0046] In FIG. 6B, a fundus image captured with the MIIRet Cam with moderate grading (600B) is provided as part of the system (100). The moderate grading in this context refers to fundus images where diabetic retinopathy is present but not yet at a critical stage. The captured image shows features such as microaneurysms, small hemorrhages, or mild retinal swelling, which are early indicators of DR progression. These features are used by the system to assign the moderate grading (600B) to the image.
[0047] Figure 7 illustrates a flow diagram of a method 700 in accordance with an implementation of the system as described in FIGs. 1, 2, 3, 4, 5 and 6. The method 700 is adapted to provide flexibility by using one or more modules such as the input module (120), the determination module (122), the classification module (124), and the output module (126).
[0048] At step 702, the method comprises receiving a fundus image captured by a camera coupled with a lens from a user device.
[0049] At step 704, the method comprises analyzing the received image to determine its quality based on predetermined quality parameters.
[0050] At step 706, the method comprises processing the analysed image using an algorithm to detect features indicative of DR.
[0051] At step 708, the method comprises classifying the severity of DR into one or more stages based on the detected features.
[0052] At step 710, the method comprises outputting the DR classification result to the user device.
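Tying the steps together, a minimal end-to-end sketch of method 700 follows. It reuses the hypothetical assess_quality and grade_dr helpers from the earlier sketches and adds a stub feature detector; none of this is the patent's reference implementation.

```python
# Hypothetical end-to-end driver for method 700 (steps 702-710), built on
# the earlier illustrative helpers. detect_dr_features is a stub standing
# in for the CNN + OFCM feature-detection step.
def detect_dr_features(image_path: str) -> dict:
    # Stub: a real system would run the FCMNN pipeline here (step 706).
    return {"microaneurysms": 3, "haemorrhages": 0, "neovascularization": 0}

def method_700(image_path: str) -> dict:
    # Step 702: the fundus image has been received and saved at image_path.
    quality = assess_quality(image_path)             # step 704: quality check
    if not quality["acceptable"]:
        return {"status": "retake", "quality": quality}
    features = detect_dr_features(image_path)        # step 706: feature detection
    grade = grade_dr(features)                       # step 708: severity staging
    return {"status": "ok", "grade": grade}          # step 710: result to device
```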
[0053] While certain features of the described implementations have been illustrated as described herein, many modifications, substitutions, changes and equivalents will now occur to those skilled in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the scope of the implementations. It should be understood that they have been presented by way of example only, not limitation, and various changes in form and details may be made. Any portion of the apparatus and/or methods described herein may be combined in any combination, except mutually exclusive combinations. The implementations described herein can include various combinations and/or sub-combinations of the functions, components and/or features of the different implementations described.
Claims:
1. A method for real-time detection and classification of Diabetic Retinopathy (DR) performed by a server, the method comprising:
receiving a fundus image captured by a camera coupled with a lens from a user device;
analyzing the received image to determine its quality based on predetermined quality parameters;
processing the analysed image using an algorithm to detect features indicative of DR;
classifying the severity of DR into one or more stages based on the detected features;
outputting the DR classification result to the user device.
2. The method as claimed in claim 1, further comprising detecting features indicative of DR and training a labeled fundus image based on a deep learning-based classification algorithm.
3. The method as claimed in claim 1, wherein the predetermined quality parameters include at least one of brightness, contrast, sharpness, or focus of the received fundus image.
4. The method as claimed in claim 1, further comprising storing the received image and corresponding DR classification result in a database for future reference or model retraining.
5. The method as claimed in claim 1, wherein the fundus image is captured using a smartphone-based camera system equipped with a 20D lens for real-time image acquisition.
6. The method as claimed in claim 1, further comprising alerting the user when a DR classification exceeds a predefined severity threshold.
7. The method as claimed in claim 1, wherein the classification stages of DR comprise Mild NPDR, Moderate NPDR, Severe NPDR, and Proliferative Diabetic Retinopathy (PDR).
8. The method as claimed in claim 1, further comprising applying data anonymization techniques to the received fundus image to ensure patient privacy before processing and classification.
9. A system for real-time detection and classification of Diabetic Retinopathy (DR) performed by a server, wherein the system comprises a processor for executing the method steps comprising:
receiving a fundus image captured by a camera coupled with a lens from a user device;
analyzing the received image to determine its quality based on predetermined quality parameters;
processing the analysed image using an algorithm to detect features indicative of DR;
classifying the severity of DR into one or more stages based on the detected features.
Documents
Name | Date |
---|---|
202411084056-FORM 18 [05-11-2024(online)].pdf | 05/11/2024 |
202411084056-FORM-9 [05-11-2024(online)].pdf | 05/11/2024 |
202411084056-COMPLETE SPECIFICATION [04-11-2024(online)].pdf | 04/11/2024 |
202411084056-DECLARATION OF INVENTORSHIP (FORM 5) [04-11-2024(online)].pdf | 04/11/2024 |
202411084056-DRAWINGS [04-11-2024(online)].pdf | 04/11/2024 |
202411084056-EDUCATIONAL INSTITUTION(S) [04-11-2024(online)].pdf | 04/11/2024 |
202411084056-EVIDENCE FOR REGISTRATION UNDER SSI [04-11-2024(online)].pdf | 04/11/2024 |
202411084056-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [04-11-2024(online)].pdf | 04/11/2024 |
202411084056-FORM 1 [04-11-2024(online)].pdf | 04/11/2024 |
202411084056-FORM FOR SMALL ENTITY(FORM-28) [04-11-2024(online)].pdf | 04/11/2024 |
202411084056-POWER OF AUTHORITY [04-11-2024(online)].pdf | 04/11/2024 |
202411084056-PROOF OF RIGHT [04-11-2024(online)].pdf | 04/11/2024 |