A SYSTEM AND A METHOD FOR REAL-TIME PNEUMONIA DIAGNOSIS ON A RESOURCE-CONSTRAINED HARDWARE PLATFORM


ORDINARY APPLICATION

Published

Filed on 5 November 2024

Abstract

A SYSTEM AND A METHOD FOR REAL-TIME PNEUMONIA DIAGNOSIS ON A RESOURCE-CONSTRAINED HARDWARE PLATFORM
The present disclosure discloses a system (100) for real-time pneumonia diagnosis on a resource-constrained hardware platform. The system (100) comprises a microcontroller (102) configured to run a deep learning neural network model (104) and connected to a peripheral device (106) to receive instructions from an operational device (108). An image acquisition module (110) featuring a parallel imaging interface (112) captures real-time chest images in accordance with the given instructions. A pre-processing module (114), connected to the image acquisition module (110), generates a standardized pre-processed image. A feature extraction module (116) extracts relevant features from this image, which are then classified by a classification module (118) utilizing the optimized neural network model (104). The results are displayed in real time on a touch-enabled display (120). A communication interface (122) transmits diagnostic data, including the chest image and classification results, to external devices for further analysis and record-keeping, ensuring efficient and accurate pneumonia diagnosis.

Patent Information

Application ID: 202441084727
Invention Field: COMPUTER SCIENCE
Date of Application: 05/11/2024
Publication Number: 47/2024

Inventors

Name | Address | Country | Nationality
POOLA, RAHUL GOWTHAM | SRM University-AP, Neerukonda, Mangalagiri mandal, Guntur-522502, Andhra Pradesh, India | India | India
PUCHAKAYALA, LAHARI LOKESH | SRM University-AP, Neerukonda, Mangalagiri mandal, Guntur-522502, Andhra Pradesh, India | India | India
YELLAMPALLI, SIVA SANKAR | SRM University-AP, Neerukonda, Mangalagiri mandal, Guntur-522502, Andhra Pradesh, India | India | India

Applicants

Name | Address | Country | Nationality
SRM UNIVERSITY | Amaravati, Mangalagiri, Andhra Pradesh-522502, India | India | India

Specification

Description:
FIELD
The present disclosure is related primarily to medical diagnostics. More particularly, the present disclosure focuses on real-time pneumonia detection using deep learning techniques on resource-constrained hardware. It integrates edge computing with medical imaging technologies.
BACKGROUND
Conventionally, pneumonia is diagnosed using multiple techniques that involve gathering medical history, conducting a physical examination, and performing diagnostic tests such as chest X-rays, blood tests, or sputum analysis to detect signs of infection and inflammation in the lungs. The goal of the diagnosis is to determine the cause, severity, and appropriate treatment for pneumonia.
Traditionally, pneumonia diagnosis relies on high-performance computational infrastructure or cloud-based servers to run deep-learning algorithms on medical images, such as chest X-rays. Even though traditional techniques achieve high accuracy, they are impractical for deployment in remote areas where access to such computing power is limited or non-existent.
Further, existing pneumonia diagnosis solutions require advanced infrastructure or rely on limited local hardware, resulting in slow and inefficient outcomes. Lack of reliable internet connectivity further restricts cloud-based systems. Thus, there is a need for efficient diagnosis techniques that work in low-resource environments without advanced infrastructure or internet access, ensuring timely and accurate results.
Therefore, there is a need for a system and a method for real-time pneumonia diagnosis on a resource-constrained hardware platform that alleviates the aforementioned drawbacks.
OBJECTS
Some of the objects of the present disclosure, which at least one embodiment herein satisfies, are as follows:
It is an object of the present disclosure to ameliorate one or more problems of the prior art or to at least provide a useful alternative.
An object of the present disclosure is to provide a system that uses a deep learning model for pneumonia diagnosis.
Another object of the present disclosure is to provide a system that allows real-time diagnosis without the need for high-power computing resources.
Yet another object of the present disclosure is to provide a system that ensures timely and accurate pneumonia diagnosis.
Another object of the present disclosure is to provide a system that classifies chest images using standardized, pre-processed images.
Still another object of the present disclosure is to provide a system that reduces the overall cost of diagnostics.
Yet another object of the present disclosure is to provide a system that deploys an AI-based diagnostic system to improve the speed and efficiency of medical care in time-sensitive situations.
Still another object of the present disclosure is to provide a system that enhances portability and accessibility for diagnosing pneumonia.
Yet another object of the present disclosure is to provide a system that has a user-friendly interface.
Another object of the present disclosure is to provide a method to diagnose pneumonia in real-time using a deep learning model.
Yet another object of the present disclosure is to provide an efficient method for training a neural network model.
Still another object of the present disclosure is to provide a method that ensures high classification accuracy and low latency.
Other objects and advantages of the present disclosure will be more apparent from the following description, which is not intended to limit the scope of the present disclosure.
SUMMARY
The present disclosure envisages a system and a method for real-time pneumonia diagnosis on a resource-constrained hardware platform.
The system includes a microcontroller, a neural network model, a peripheral device, an operational device, an image acquisition module, a parallel imaging interface, a pre-processing module, a feature extraction module, a classification module, a touch-enabled display, and a communication interface.
The microcontroller is configured to process and execute a deep learning neural network model and is connected with a peripheral device to receive a set of instructions from an operational device of a user.
The image acquisition module comprising a parallel imaging interface is configured to capture at least one real-time chest image of a patient in accordance with the received set of instructions.
The pre-processing module is configured to cooperate with the image acquisition module to receive and process the chest image by means of a set of pre-processing techniques to generate a standardized pre-processed image.
The feature extraction module is configured to receive the standardized pre-processed image and extract relevant features from the standardized pre-processed image by means of a set of feature extraction techniques.
The classification module is configured to apply the extracted features on said deep learning neural network model to generate a classification result which classifies the chest image as either normal or pneumonia-affected, the deep learning neural network model being optimized for operation of the microcontroller that is resource-constrained.
The touch-enabled display is configured to display the classification result in real-time.
The communication interface is configured to transmit diagnostic data, including the chest image and its corresponding classification result, to external devices or systems for further analysis or record-keeping.
In an embodiment, the microcontroller is a MAX78000 microcontroller optimized for deep learning inference on resource-constrained devices.
In an embodiment, the microcontroller is configured to operate with low power consumption, making the system suitable for deployment in areas with limited access to electrical power.
In an embodiment, the touch-enabled display is configured to show diagnostic results, including the classification confidence level of the pneumonia diagnosis, in an intuitive user interface.
In an embodiment, the deep learning neural network model is a convolutional neural network (CNN) model specifically trained for pneumonia diagnosis using a dataset of chest images.
In an embodiment, the deep learning neural network model performs real-time inference, providing diagnostic results within seconds of image capture.
In an embodiment, the diagnostic data is processed locally on the microcontroller ensuring enhanced data security and patient privacy by eliminating the need to transmit sensitive medical information to external servers.
In an embodiment, the pneumonia diagnosis is implemented in a device that is compact and portable, designed for point-of-care diagnostics in remote or under-resourced areas, where access to advanced diagnostic equipment is limited.
In an embodiment, the peripheral device is an Olimex ARM-USB-OCD-H, configured to debug the microcontroller and enable precise control and diagnosis of the standardized image.
In an embodiment, the imaging interface comprises a camera, sensor, or device capable of capturing chest images.
In an embodiment, the preprocessing module is further configured to enhance standardized images, wherein the enhancement includes noise reduction, contrast adjustment, and feature extraction to improve the accuracy of pneumonia detection.
In an embodiment, the set of pre-processing techniques includes contrast enhancement and noise reduction techniques to improve the quality of the chest images for accurate classification.
In an embodiment, the standardized image maintains the essential features of the input, wherein the essential features comprise edges, textures, shapes of organs (lungs, heart), areas of opacity or density, and abnormalities like lung infiltrates.
In an embodiment, the deep learning neural network model is trained using deep learning techniques and fine-tuned to optimize performance for the microcontroller, wherein the deep learning techniques include convolutional neural networks (CNNs) for feature extraction, max-pooling for dimensionality reduction, activation functions (e.g., ReLU) for non-linearity, and SoftMax for class probability distribution.
In an embodiment, the deep learning neural network model is configured to:
• receive an input image by an input layer with dimensions of 128x128 pixels and 3 channels;
• apply a filter by a series of convolutional layers to extract features from the input image;
• introduce non-linearity by a rectified linear unit (ReLU) layer connected to each convolutional layer which outputs a further input image if positive and zero otherwise;
• down-sample the further input image by one or more max-pooling layers to reduce its dimensionality; and
• convert an output of the max-pooling layers, by the SoftMax layer, to probabilities of each class representing the network's prediction.
The present disclosure also envisages a method for real-time pneumonia diagnosis on a resource-constrained hardware platform. The method comprises the following steps:
• receiving, by a microcontroller, a set of images from an operational device of a user via a peripheral device;
• capturing, by an inputting module comprising a parallel imaging interface, at least one real-time chest image of a patient;
• processing, by a pre-processing module, the captured image by means of a set of pre-processing techniques to generate a standardized pre-processed image;
• extracting, by a feature extraction module, one or more relevant features from the standardized pre-processed image by means of a set of feature extraction techniques;
• classifying, by a classification module, one or more relevant features by implementing a deep learning neural network model to generate a classification result that classifies the chest image as either normal or pneumonia-affected, the deep learning neural network model being optimized for operation of the microcontroller that is resource-constrained;
• displaying, by a touch-enabled display, the classification result in real-time; and
• transmitting, by a communication interface, diagnostic data, including the chest image and its corresponding classification result, to external devices or systems for further analysis or record-keeping.
BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWING
A system and a method for real-time pneumonia diagnosis on a resource-constrained hardware platform of the present disclosure will now be described with the help of the accompanying drawings, in which:
Figure 1 illustrates a system for real-time pneumonia diagnosis on a resource-constrained hardware platform, in accordance with the present disclosure;
Figure 2A and Figure 2B illustrate a flow chart depicting the steps involved in a method for real-time pneumonia diagnosis on a resource-constrained hardware platform in accordance with an embodiment of the present disclosure;
Figure 3A - Figure 3C illustrate the hardware setup and implementation of a system and a method for real-time pneumonia diagnosis on a resource-constrained hardware platform, in accordance with the present disclosure;
Figure 4A - Figure 4F illustrate the diagnostic prediction accuracies of test X-ray images in real-time, in accordance with the present disclosure; and
Figure 5 illustrates test x-ray images for real-time MAX78000 microcontroller implementation, in accordance with the present disclosure.
LIST OF REFERENCE NUMERALS
100 - System
102 - Microcontroller
104 - Neural network model
106 - Peripheral device
108 - Operational device
110 - Image acquisition module
112 - Parallel imaging interface
114 - Preprocessing module
116 - Feature extraction module
118 - Classification module
120 - Touch-enabled display
122 - Communication interface
DETAILED DESCRIPTION
Embodiments of the present disclosure will now be described with reference to the accompanying drawings.
Embodiments are provided so as to thoroughly and fully convey the scope of the present disclosure to the person skilled in the art. Numerous details are set forth, relating to specific components and methods, to provide a complete understanding of embodiments of the present disclosure. It will be apparent to the person skilled in the art that the details provided in the embodiments should not be construed to limit the scope of the present disclosure. In some embodiments, well-known processes, well-known apparatus structures, and well-known techniques are not described in detail.
The terminology used, in the present disclosure, is only for the purpose of explaining a particular embodiment and such terminology shall not be considered to limit the scope of the present disclosure. As used in the present disclosure, the forms "a," "an," and "the" may be intended to include the plural forms as well, unless the context clearly suggests otherwise. The terms "including," and "having," are open ended transitional phrases and therefore specify the presence of stated features, integers, steps, operations, elements and/or components, but do not forbid the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The particular order of steps disclosed in the method and process of the present disclosure is not to be construed as necessarily requiring their performance as described or illustrated. It is also to be understood that additional or alternative steps may be employed.
When an element is referred to as being "engaged to," "connected to," or "coupled to" another element, it may be directly engaged, connected or coupled to the other element. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed elements.
Conventional pneumonia diagnosis techniques involve gathering medical history, conducting a physical examination, and performing diagnostic tests such as chest X-rays, blood tests, or sputum analysis to detect signs of infection and inflammation in the lungs. The goal is to determine the cause, severity, and appropriate treatment for pneumonia.
Traditionally pneumonia diagnosis relies on high-performance computational infrastructure or cloud-based servers to run deep-learning algorithms on medical images, such as chest X-rays. Even though traditional techniques achieve high accuracy, they are impractical for deployment in remote areas where access to such computing power is limited or non-existent.
Existing pneumonia diagnosis solutions often require advanced infrastructure or rely on limited local hardware, resulting in slow and inefficient outcomes. Lack of reliable internet connectivity further restricts cloud-based systems. Thus, there is a need for efficient diagnosis techniques that work in low-resource environments without advanced infrastructure or internet, ensuring timely and accurate results.
To address the aforementioned problems, the present disclosure envisages a system (hereinafter referred to as "system 100") for real-time pneumonia diagnosis on a resource-constrained hardware platform and a method thereof (hereinafter referred to as "method 200"). The system 100 will be now described with reference to Figure 1 and the method 200 will be described with reference to Figure 2.
Referring to Figure 1, the system 100 comprises a microcontroller 102, a deep learning neural network model 104, a peripheral device 106, an operational device 108, an image acquisition module 110, a parallel imaging interface 112, a preprocessing module 114, a feature extraction module 116, a classification module 118, a touch-enabled display 120, and a communication interface 122.
The microcontroller 102 is configured to process and execute the deep learning neural network model 104 and is connected with a peripheral device 106 to receive a set of instructions from an operational device 108 of a user, wherein the microcontroller 102 is a MAX78000 microcontroller optimized for deep learning inference on resource-constrained devices.
The microcontroller 102 is designed to operate with low power consumption, making the system suitable for deployment in areas with limited access to electrical power.
The peripheral device 106 is an Olimex ARM-USB-OCD-H, configured to debug the microcontroller 102 and enable precise control and diagnosis of the standardized image.
The image acquisition module 110 comprises a parallel imaging interface 112 configured to capture at least one real-time chest image of a patient in accordance with the received set of instructions, wherein the imaging interface comprises a camera, sensor, or device capable of capturing chest images.
The pre-processing module 114 is configured to cooperate with the image acquisition module 110 to receive and process the chest image by means of a set of pre-processing techniques to generate a standardized pre-processed image.
The preprocessing module 114 is further configured to enhance standardized images, wherein the enhancement includes noise reduction, contrast adjustment, and feature extraction to improve the accuracy of pneumonia detection.
In an embodiment, the set of pre-processing techniques includes contrast enhancement and noise reduction techniques to improve the quality of the chest images for accurate classification, wherein the standardized image maintains the essential features of the input, wherein the essential features comprise edges, textures, shapes of organs (lungs, heart), areas of opacity or density, and abnormalities like lung infiltrates.
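The disclosure does not tie the pre-processing stage to any particular library or parameter values. Purely as a non-authoritative illustration, a minimal sketch of such a stage is given below, assuming OpenCV, median filtering for the noise reduction, CLAHE for the contrast enhancement, and the 128x128, 3-channel input size described later for the network; these specific choices are assumptions, not part of the disclosure.

```python
import cv2
import numpy as np

def preprocess_chest_image(img: np.ndarray, size: int = 128) -> np.ndarray:
    """Illustrative pre-processing: standardize size, reduce noise, adjust contrast.

    The specific techniques (median blur, CLAHE) are assumptions of this sketch;
    the disclosure only requires noise reduction and contrast adjustment that
    yield a standardized pre-processed image preserving edges, textures, organ
    shapes, and opacities.
    """
    if img.ndim == 3:                                       # accept colour or grayscale captures
        img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    img = cv2.resize(img, (size, size))                     # standardize spatial dimensions
    img = cv2.medianBlur(img, 3)                            # simple noise reduction
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    img = clahe.apply(img)                                  # local contrast enhancement
    img = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB)             # 3 channels, matching the model input
    return img.astype(np.float32) / 255.0                   # scale to [0, 1]
```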
The feature extraction module 116 is configured to receive the standardized pre-processed image and extract relevant features from the standardized pre-processed image by means of a set of feature extraction techniques.
The classification module 118 is configured to apply the extracted features on the deep learning neural network model 104 to generate a classification result that classifies the chest image as either normal or pneumonia-affected, said deep learning neural network model being optimized for operation of the microcontroller 102 that is resource-constrained.
In an embodiment, the deep learning neural network model 104 is a convolutional neural network (CNN) model specifically trained for pneumonia diagnosis using a dataset of chest images and performs real-time inference, providing diagnostic results within seconds of image capture, wherein all diagnostic data is processed locally on the microcontroller, ensuring enhanced data security and patient privacy by eliminating the need to transmit sensitive medical information to external servers.
The pneumonia diagnosis is implemented in a device that is compact and portable, designed for point-of-care diagnostics in remote or under-resourced areas, where access to advanced diagnostic equipment is limited.
Further, the deep learning neural network model 104 is trained using deep learning techniques and fine-tuned to optimize performance for the microcontroller, wherein the deep learning techniques include convolutional neural networks (CNNs) for feature extraction, max-pooling for dimensionality reduction, activation functions (e.g., ReLU) for non-linearity, and SoftMax for class probability distribution.
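The disclosure does not name a training framework or hyper-parameters. As a hedged sketch only, a conventional PyTorch-style training loop for a two-class (normal / pneumonia) model such as the PneumoniaCNN sketched after the layer list below could look as follows; the optimizer, learning rate, and loss function are assumptions, and the subsequent fine-tuning for the MAX78000 target is not shown.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

def train(model: nn.Module, train_loader: DataLoader, epochs: int = 10, lr: float = 1e-3) -> None:
    """Illustrative training loop for a two-class chest X-ray classifier (sketch only)."""
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model.to(device)
    criterion = nn.CrossEntropyLoss()                    # expects raw class scores (logits)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for epoch in range(epochs):
        model.train()
        running_loss = 0.0
        for images, labels in train_loader:              # images: (N, 3, 128, 128); labels: 0 = normal, 1 = pneumonia
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
            running_loss += loss.item()
        print(f"epoch {epoch + 1}: mean loss {running_loss / max(len(train_loader), 1):.4f}")
```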
The deep learning neural network model is configured to:
• receive an input image by an input layer with dimensions of 128x128 pixels and 3 channels;
• apply a filter by a series of convolutional layers to extract features from the input image;
• introduce non-linearity by a rectified linear unit (ReLU) layer connected to each convolutional layer which outputs a further input image if positive and zero otherwise;
• down-sample the further input image by one or more max-pooling layers to reduce its dimensionality; and
• convert an output of the max-pooling layers, by the SoftMax layer, to probabilities of each class representing the network's prediction.
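A minimal PyTorch sketch of a network with this layer sequence is given below. The number of convolutional filters, the kernel sizes, the flatten-and-linear bridge before the SoftMax, and the two-class output are assumptions made for illustration; the disclosure itself specifies only the 128x128x3 input, convolution plus ReLU blocks, max-pooling, and a SoftMax output.

```python
import torch
import torch.nn as nn

class PneumoniaCNN(nn.Module):
    """Illustrative CNN following the described layer sequence (a sketch, not the patented model)."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # input: 3 x 128 x 128
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> 16 x 64 x 64
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> 32 x 32 x 32
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> 64 x 16 x 16
        )
        # The flatten + linear bridge between the pooled feature maps and the
        # per-class scores is an assumption of this sketch.
        self.classifier = nn.Linear(64 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        x = torch.flatten(x, 1)
        return self.classifier(x)                         # raw scores; SoftMax applied at inference

# SoftMax converts the scores into class probabilities ("normal" vs "pneumonia-affected").
probs = torch.softmax(PneumoniaCNN()(torch.rand(1, 3, 128, 128)), dim=1)
```

Keeping the SoftMax outside the forward pass is a common convention so that a standard cross-entropy loss can be applied to raw scores during training; it does not change the probabilities reported at inference.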
The convolutional layers comprise weights and biases associated with convolutional filters that are adjusted during training to optimize feature extraction from the input image.
The max-pooling layers downsample the input image by selecting the maximum value from a defined window, thereby reducing the spatial dimensions of the feature map while retaining the most important features.
The SoftMax layer is configured to convert an output vector into a probability distribution over multiple classes, indicating the likelihood of the image belonging to each class.
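For concreteness, the following small numerical illustration shows both operations, assuming a 2x2 pooling window and a two-class score vector (neither of which is fixed by the disclosure).

```python
import numpy as np

# Max-pooling: keep the maximum of each non-overlapping 2x2 window of a 4x4 feature map.
feature_map = np.array([[1, 3, 2, 0],
                        [4, 6, 1, 2],
                        [7, 2, 9, 5],
                        [3, 1, 4, 8]], dtype=float)
pooled = feature_map.reshape(2, 2, 2, 2).max(axis=(1, 3))
# pooled == [[6., 2.], [7., 9.]] -- spatial size halves while the strongest responses are kept.

# SoftMax: turn a two-class score vector into a probability distribution that sums to 1.
scores = np.array([0.4, 1.6])                        # hypothetical ("normal", "pneumonia") scores
probs = np.exp(scores) / np.exp(scores).sum()        # approx. [0.23, 0.77]
```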
The touch-enabled display 120 is configured to display the classification result in real-time, wherein the touch-enabled display is to show diagnostic results, including the classification confidence level of the pneumonia diagnosis, in an intuitive user interface.
The communication interface 122 is configured to transmit diagnostic data, including the chest image and its corresponding classification result, to external devices or systems for further analysis or record-keeping.
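The disclosure leaves the transport medium and data format of the communication interface 122 open. Purely as an illustration, one way to package a diagnostic record (chest image plus classification result) before sending it to an external device is sketched below; the JSON layout, field names, and base64 image encoding are assumptions, not part of the disclosure.

```python
import base64
import json
import time

def build_diagnostic_record(image_bytes: bytes, label: str, confidence: float) -> str:
    """Package a chest image and its classification result for transmission (illustrative only)."""
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "classification": label,                      # "normal" or "pneumonia"
        "confidence": round(confidence, 2),           # e.g. 0.88
        "image_b64": base64.b64encode(image_bytes).decode("ascii"),
    }
    return json.dumps(record)

# Example: payload = build_diagnostic_record(open("chest.jpg", "rb").read(), "pneumonia", 0.88)
```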
Figure 2A and Figure 2B illustrate a flow chart depicting the steps involved in a method for real-time pneumonia diagnosis on a resource-constrained hardware platform in accordance with an embodiment of the present disclosure. The order in which method 200 is described is not intended to be construed as a limitation, and any number of the described method steps may be combined in any order to implement method 200, or an alternative method. Furthermore, method 200 may be implemented by processing resource or computing device(s) through any suitable hardware, non-transitory machine-readable medium/instructions, or a combination thereof. The method 200 comprises the following steps:
At step 202, method 200 includes receiving, by a microcontroller 102, a set of images from an operational device 108 of a user via a peripheral device 106.
At step 204, method 200 includes capturing, by an inputting module 110 comprising a parallel imaging interface 112, at least one real-time chest image of a patient.
At step 206, method 200 includes processing, by a pre-processing module 114, the captured image by means of a set of pre-processing techniques to generate a standardized pre-processed image.
At step 208, method 200 includes extracting, by a feature extraction module 116, one or more relevant features from the standardized pre-processed image by means of a set of feature extraction techniques.
At step 210, method 200 includes classifying, by a classification module 118, the one or more relevant features by implementing the deep learning neural network model 104 to generate a classification result that classifies said chest image as either normal or pneumonia-affected, the deep learning neural network model 104 being optimized for operation of the microcontroller 102 that is resource-constrained.
At step 212, method 200 includes displaying, by a touch-enabled display 120, the classification result in real-time.
At step 214, method 200 includes transmitting, by a communication interface 122, diagnostic data, including the chest image and its corresponding classification result, to external devices or systems for further analysis or record-keeping.
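Tying steps 202-214 together, a host-side sketch of one diagnostic pass is shown below. The capture_image, display, and transmit callables are hypothetical stand-ins for the image acquisition module 110, touch-enabled display 120, and communication interface 122, and the pre-processing function and model refer to the illustrative sketches given earlier; none of these names comes from the disclosure.

```python
import numpy as np
import torch

# preprocess_chest_image and PneumoniaCNN are the illustrative sketches defined earlier.

def diagnose(capture_image, display, transmit, model) -> dict:
    """One illustrative pass through steps 202-214 (a sketch under the stated assumptions)."""
    raw = capture_image()                                      # step 204: real-time chest image (H x W array)
    img = preprocess_chest_image(raw)                          # step 206: standardized 128 x 128 x 3 image
    x = torch.from_numpy(np.transpose(img, (2, 0, 1)))[None]   # HWC -> CHW, add batch dimension
    with torch.no_grad():                                      # steps 208-210: feature extraction + classification
        probs = torch.softmax(model(x), dim=1)[0]
    label = "pneumonia" if probs[1] > probs[0] else "normal"
    confidence = float(probs.max())
    display(f"{label}: {confidence:.0%}")                      # step 212: show the result in real time
    transmit(raw, label, confidence)                           # step 214: forward the diagnostic data
    return {"label": label, "confidence": confidence}
```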
Figure 3A - Figure 3C illustrate the hardware setup and implementation of a system and a method for real-time pneumonia diagnosis on a resource-constrained hardware platform, in accordance with the present disclosure. The hardware setup for a real-time pneumonia diagnosis system is implemented for efficiency on a resource-constrained platform. At its core is a microcontroller 102 optimized for real-time image processing and executing deep learning models for diagnostic purposes. An onboard parallel imaging interface 112 captures X-ray images directly from an X-ray source, allowing for immediate analysis.
The system 100 includes a touch-enabled display 120 that visualizes the captured X-ray data, providing user interaction for diagnostic evaluation, such as displaying a probability score of pneumonia (68%). An Olimex ARM-USB-OCD-H debugging tool facilitates the programming and troubleshooting of the microcontroller 102. Additionally, a communication interface 122 transmits diagnostic data, including chest images and classification results, to external devices for further analysis. Various cables and connectors enhance the system's functionality by ensuring seamless power and data transmission. Overall, this hardware configuration is specifically used for real-time analysis and immediate feedback, making it well-suited for pneumonia diagnostics.
Figure 4A - Figure 4F illustrate the diagnostic prediction accuracies of test X-ray images in real-time, in accordance with the present disclosure. The touch-enabled display 120 shows the X-ray image along with the classification results, determining whether the image indicates pneumonia or a healthy lung. For each X-ray, the system 100 provides a prediction label ("Pneumonia" or "Normal") and an accuracy percentage, reflecting the deep learning neural network model 104 certainty in the diagnosis. For X-rays showing pneumonia, the system 100 gives prediction accuracies of 88%, 80%, and 66%, indicating varying levels of certainty in detecting the disease. In contrast, for healthy lung X-rays, the system achieves higher prediction accuracies of 97%, 94%, and 88%, demonstrating strong reliability in identifying normal cases. Each prediction is processed in real-time for analysis of medical images. The system 100 integrates both image processing and classification, making it a powerful tool for real-time pneumonia diagnosis on a resource-constrained hardware platform.
Figure 5 illustrates test x-ray images for real-time MAX78000 microcontroller implementation, in accordance with the present disclosure. The normal class contains three X-ray images (Test Images 1, 2, and 3) showing healthy lungs without any signs of pneumonia. In the bottom row, labeled as the Pneumonia Class, three X-ray images (Test Images 4, 5, and 6) exhibit symptoms of pneumonia, with visible signs of lung infection. These images are likely part of a dataset utilized to evaluate the performance of a diagnostic model running on the MAX78000 microcontroller, specifically designed for real-time image classification.
The performance evaluation of the real-time MAX78000 microcontroller implementation is shown in the table below:

Test Image | Algorithm | Classification Result | Prediction Accuracy
Test Image 1 | CNN | Normal | 97%
Test Image 2 | CNN | Normal | 94%
Test Image 3 | CNN | Normal | 88%
Test Image 4 | CNN | Pneumonia | 88%
Test Image 5 | CNN | Pneumonia | 80%
Test Image 6 | CNN | Pneumonia | 66%

This table contains the results of the deep learning neural network model 104 classifying chest X-rays as either "normal" or "pneumonia." The table lists the image, the algorithm used, the classification result, and the prediction accuracy. The neural network model correctly classified the first three images as "normal" with accuracies of 97%, 94%, and 88%, respectively, and correctly classified the fourth and fifth images as "pneumonia" with accuracies of 88% and 80%, respectively. However, it had a lower accuracy of 66% for the sixth image, which was also classified as "pneumonia." These results suggest that the neural network model is able to distinguish between normal and pneumonia chest X-rays with a relatively high degree of accuracy.
In an operative configuration, the system 100 comprises a microcontroller 102, a deep learning neural network model 104, a peripheral device 106, an operational device 108, an image acquisition module 110, a parallel imaging interface 112, a preprocessing module 114, a feature extraction module 116, a classification module 118, a touch-enabled display 120, and a communication interface 122. The microcontroller 102, specifically a MAX78000 microcontroller optimized for deep learning inference on resource-constrained devices, processes and executes the deep learning neural network model 104. It connects with the peripheral device 106, such as an Olimex ARM-USB-OCD-H, to receive instructions from an operational device 108, providing precise control and debugging capabilities. The system captures real-time chest images of patients via the image acquisition module 110, which includes a parallel imaging interface 112 consisting of a camera or sensor.
The preprocessing module 114 receives and processes the chest image using noise reduction and contrast adjustment techniques to generate a standardized pre-processed image, which retains essential features such as edges, textures, organ shapes (lungs, heart), and abnormalities like lung infiltrates. The pre-processed image is then sent to the feature extraction module 116, which applies feature extraction techniques to identify relevant characteristics. These features are subsequently used by the classification module 118, which utilizes the deep learning neural network model 104, a convolutional neural network (CNN) specifically trained for pneumonia diagnosis, to classify the chest image as either normal or pneumonia-affected.
The deep learning neural network model 104 is optimized to operate on the microcontroller 102, enabling real-time inference and providing diagnostic results within seconds, while ensuring data security by processing all data locally. The system is compact and portable, making it suitable for point-of-care diagnostics in remote or under-resourced areas, where advanced diagnostic equipment may not be accessible. The deep learning neural network model 104 uses CNNs for feature extraction, max-pooling for dimensionality reduction, ReLU for non-linearity, and SoftMax for a class probability distribution. It processes input images (128x128 pixels with 3 channels) through several convolutional layers, applies non-linearity via ReLU layers, downsamples via max-pooling layers, and generates a class prediction output through SoftMax.
Advantageously, the system 100 provides a real-time pneumonia diagnosis on resource-constrained hardware. Utilizing a low-power MAX78000 microcontroller, it is suitable for deployment in remote areas with limited access to electricity and advanced equipment. Its compact design enables point-of-care diagnostics, and local data processing enhances patient privacy by eliminating reliance on external servers. Additionally, the deep learning neural network model delivers fast and accurate diagnostic results within seconds, facilitating timely treatment decisions.
The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Other examples and implementations are within the scope and spirit of the disclosure and appended claims. For example, due to the nature of software, functions described above can be implemented using software executed by a processor, hardware, firmware, hardwiring, or combinations of any of these. Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations.
The foregoing description of the embodiments has been provided for purposes of illustration and is not intended to limit the scope of the present disclosure. Individual components of a particular embodiment are generally not limited to that particular embodiment, but, are interchangeable. Such variations are not to be regarded as a departure from the present disclosure, and all such modifications are considered to be within the scope of the present disclosure.
TECHNICAL ADVANCEMENTS
The present disclosure described herein above has several technical advantages including, but not limited to, the realization of a system and a method for real-time pneumonia diagnosis on a resource-constrained hardware platform that:
• performs pneumonia diagnosis in real-time;
• uses a deep learning model for pneumonia diagnosis;
• is energy-efficient;
• eliminates latency and the need for internet connectivity;
• reduces the cost of diagnosis; and
• provides a user-friendly interface.
The aspect herein and the various features and advantageous details thereof are explained with reference to the non-limiting embodiments in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
The foregoing description of the specific embodiments so fully reveals the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the embodiments as described herein.
The use of the expression "at least" or "at least one" suggests the use of one or more elements or ingredients or quantities, as the use may be in the embodiment of the disclosure to achieve one or more of the desired objects or results.
Any discussion of devices, articles or the like that has been included in this specification is solely for the purpose of providing a context for the disclosure. It is not to be taken as an admission that any or all of these matters form a part of the prior art base or were common general knowledge in the field relevant to the disclosure as it existed anywhere before the priority date of this application.
While considerable emphasis has been placed herein on the components and component parts of the preferred embodiments, it will be appreciated that many embodiments can be made and that many changes can be made in the preferred embodiments without departing from the principles of the disclosure. These and other changes in the preferred embodiment as well as other embodiments of the disclosure will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the disclosure and not as a limitation.
Claims:
WE CLAIM:
1. A system (100) for real-time pneumonia diagnosis on a resource-constrained hardware platform, said system (100) comprising:
• a microcontroller (102) configured to process and execute a deep learning neural network model (104), and connected with a peripheral device (106) to receive a set of instructions from an operational device (108) of a user;
• an image acquisition module (110) comprising a parallel imaging interface (112) configured to capture at least one real time chest image of a patient in accordance with the received set of instructions;
• a pre-processing module (114) configured to cooperate with said image acquisition module (110) to receive and process said chest image by means of a set of pre-processing techniques to generate a standardized pre-processed image;
• a feature extraction module (116) configured to receive said standardized pre-processed image and extract relevant features from the standardized pre-processed image by means of a set of feature extraction techniques;
• a classification module (118) configured to apply the extracted features on said deep learning neural network model (104) to generate a classification result that classifies said chest image as either normal or pneumonia-affected, said deep learning neural network model (104) being optimized for operation of the microcontroller (102) that is resource-constrained;
• a touch-enabled display (120) configured to display the classification result in real-time; and
• a communication interface (122) configured to transmit diagnostic data, including the chest image and its corresponding classification result, to external devices or systems for further analysis or record-keeping.
2. The system (100) as claimed in claim 1, wherein the microcontroller (102) is a MAX78000 microcontroller optimized for deep learning inference on resource-constrained devices.
3. The system (100) as claimed in claim 1, wherein the microcontroller (102) is designed to operate with low power consumption, making the system suitable for deployment in areas with limited access to electrical power.
4. The system (100) as claimed in claim 1, wherein the touch-enabled display (120) is configured to show diagnostic results, including the classification confidence level of the pneumonia diagnosis, in an intuitive user interface.
5. The system (100) as claimed in claim 1, wherein the deep learning neural network model (104) is a convolutional neural network (CNN) model specifically trained for pneumonia diagnosis using a dataset of chest images.
6. The system (100) as claimed in claim 1, wherein the deep learning neural network model (104) performs real-time inference, providing diagnostic results within seconds of image capture.
7. The system (100) as claimed in claim 1, wherein all diagnostic data is processed locally on the microcontroller (102), ensuring enhanced data security and patient privacy by eliminating the need to transmit sensitive medical information to external servers.
8. The system (100) as claimed in claim 1, wherein the system (100) is implemented in a device that is compact and portable, designed for point-of-care diagnostics in remote or under-resourced areas, where access to advanced diagnostic equipment is limited.
9. The system (100) as claimed in claim 1, wherein the peripheral device (106) is an Olimex ARM-USB-OCD-H, configured to debug the microcontroller (102) and enable precise control and diagnosis of the standardized image.
10. The system (100) as claimed in claim 1, wherein the imaging interface (112) comprises a camera, sensor, or device capable of capturing chest images.
11. The system (100) as claimed in claim 1, wherein the preprocessing module (114) is further configured to enhance standardized images, wherein the enhanced image includes noise reduction, contrast adjustment, and feature extraction to improve the accuracy of pneumonia detection.
12. The system (100) as claimed in claim 1, wherein the set of pre-processing techniques includes contrast enhancement and noise reduction techniques to improve the quality of the chest images for accurate classification.
13. The system (100) as claimed in claim 1, wherein the standardized image maintains the essential features of the input, wherein the essential features comprise edges, textures, shapes of organs (lungs, heart), areas of opacity or density, and abnormalities like lung infiltrates.
14. The system (100) as claimed in claim 1, wherein the deep learning neural network model (104) is trained using deep learning techniques and fine-tuned to optimize performance for the microcontroller (102), wherein the deep learning techniques include convolutional neural networks (CNNs) for feature extraction, max-pooling for dimensionality reduction, activation functions (e.g., ReLU) for non-linearity, and SoftMax for a class probability distribution.
15. The system (100) as claimed in claim 1, wherein said deep learning neural network model (104) is configured to:
• receive an input image by an input layer with dimensions of 128x128 pixels and 3 channels;
• apply a filter by a series of convolutional layers to extract features from the input image;
• introduce non-linearity by a rectified linear unit (ReLU) layer connected to each convolutional layer which outputs a further input image if positive and zero otherwise;
• down-sample the further input image by one or more max-pooling layers to reduce its dimensionality; and
• convert an output of the max-pooling layers, by the SoftMax layer, to probabilities of each class representing the network's prediction.
16. The system (100) as claimed in claim 15, wherein the convolutional layers comprise weights and biases associated with convolutional filters that are adjusted during training to optimize feature extraction from the input image.
17. The system (100) as claimed in claim 15, wherein the max-pooling layers down-sample the input image by selecting the maximum value from a defined window, thereby reducing the spatial dimensions of the feature map while retaining the most important features.
18. The system (100) as claimed in claim 15, wherein the SoftMax layer is configured to convert an output vector into a probability distribution over multiple classes, indicating the likelihood of the image belonging to each class.
19. A method (200) for real-time pneumonia diagnosis on a resource-constrained hardware platform, said method (200) comprising:
• receiving, by a microcontroller (102), a set of images from an operational device (108) of a user via a peripheral device (106);
• capturing, by an inputting module (110) comprising a parallel imaging interface (112), at least one real-time chest image of a patient;
• processing, by a pre-processing module (114), said captured image by means of a set of pre-processing techniques to generate a standardized pre-processed image;
• extracting, by a feature extraction module (116), one or more relevant features from the standardized pre-processed image by means of a set of feature extraction techniques;
• classifying, by a classification module (118), the one or more relevant features by implementing a deep learning neural network model (104) to generate a classification result which classifies said chest image as either normal or pneumonia-affected, said deep learning neural network model (104) being optimized for operation of the microcontroller (102) that is resource-constrained;
• displaying, by a touch-enabled display (120), the classification result in real-time; and
• transmitting, by a communication interface (122), diagnostic data, including the chest image and its corresponding classification result, to external devices or systems for further analysis or record-keeping.
Dated this 05th Day of November, 2024

_______________________________
MOHAN RAJKUMAR DEWAN, IN/PA - 25
OF R. K. DEWAN & CO.
AUTHORIZED AGENT OF APPLICANT

TO,
THE CONTROLLER OF PATENTS
THE PATENT OFFICE, AT CHENNAI

Documents

Name | Date
202441084727-FORM-26 [06-11-2024(online)].pdf | 06/11/2024
202441084727-COMPLETE SPECIFICATION [05-11-2024(online)].pdf | 05/11/2024
202441084727-DECLARATION OF INVENTORSHIP (FORM 5) [05-11-2024(online)].pdf | 05/11/2024
202441084727-DRAWINGS [05-11-2024(online)].pdf | 05/11/2024
202441084727-EDUCATIONAL INSTITUTION(S) [05-11-2024(online)].pdf | 05/11/2024
202441084727-EVIDENCE FOR REGISTRATION UNDER SSI [05-11-2024(online)].pdf | 05/11/2024
202441084727-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [05-11-2024(online)].pdf | 05/11/2024
202441084727-FORM 1 [05-11-2024(online)].pdf | 05/11/2024
202441084727-FORM 18 [05-11-2024(online)].pdf | 05/11/2024
202441084727-FORM FOR SMALL ENTITY(FORM-28) [05-11-2024(online)].pdf | 05/11/2024
202441084727-FORM-9 [05-11-2024(online)].pdf | 05/11/2024
202441084727-PROOF OF RIGHT [05-11-2024(online)].pdf | 05/11/2024
202441084727-REQUEST FOR EARLY PUBLICATION(FORM-9) [05-11-2024(online)].pdf | 05/11/2024
202441084727-REQUEST FOR EXAMINATION (FORM-18) [05-11-2024(online)].pdf | 05/11/2024
