A preclinical screening system for anemia detection based on eye conjunctiva images and method thereof
ORDINARY APPLICATION
Published
Filed on 16 November 2024
Abstract
TITLE: A preclinical screening system for anemia detection based on eye conjunctiva images and method thereof. The present invention (100) presents a system and method for preclinical screening for anemia detection based on images of the eye conjunctiva. The invention focuses on a non-invasive approach to estimating the anemic condition of an individual. The first step in the approach is the collection of eye conjunctiva region images through an image acquisition module (105): a mobile phone camera fitted with a 3D-printed spacer (102) and a macro lens (103). The eye dataset is analyzed using an object detection module to detect the conjunctiva. The conjunctiva region of the image is then extracted using the Segment Anything Model (SAM). Color and statistical features are extracted from the conjunctiva region and given to an MLP classifier model. The trained MLP classifier model is embedded in an interactive display module that captures and processes a patient's conjunctiva image. Main Illustrative: Figure 1
Patent Information
Field | Value |
---|---|
Application ID | 202441088606 |
Invention Field | BIO-MEDICAL ENGINEERING |
Date of Application | 16/11/2024 |
Publication Number | 47/2024 |
Inventors
Name | Address | Country | Nationality |
---|---|---|---|
Dr. M. Sasikala | Professor, Department of Biomedical Engineering, Anna University Chennai - 600025 | India | India |
A. M. Arunnagiri | Research Scholar, Department of Electronics and Communication Engineering, Anna University Chennai - 600025 | India | India |
Dr. N. Ramadass | Professor, Department of Electronics and Communication Engineering, Anna University Chennai - 600025 | India | India |
Ramya G | Student, Department of Electronics and Communication Engineering, Anna University Chennai - 600025 | India | India |
Applicants
Name | Address | Country | Nationality |
---|---|---|---|
ANNA UNIVERSITY, CHENNAI | The Director, Centre for Intellectual Property Rights (CIPR), CPDE Building, College of Engineering Guindy, Anna University, Chennai – 600 025 | India | India |
Specification
Description: TITLE: A preclinical screening system for anemia detection based on eye conjunctiva images and method thereof
FIELD OF THE INVENTION
The present invention relates to the field of biomedical instrumentation. More particularly, it relates to a 3D-printed spacer design that standardizes the distance between the eye and the camera. The present invention focuses on the development of a preclinical screening tool for anemia detection based on eye conjunctiva images captured through digital devices such as smartphones.
BACKGROUND OF THE INVENTION
Anemia is a medical condition characterized by a deficiency in the number or quality of red blood cells (RBCs), which impairs the oxygen-carrying capacity of the blood. This condition affects millions of people worldwide, with health impacts ranging from fatigue and weakness to severe complications in chronic cases. Traditional methods of detecting and classifying anemia involve manual examination of peripheral blood smears under a microscope, a time-consuming and labor-intensive process prone to human error. To address these challenges, recent research has focused on developing non-invasive approaches to screening for anemia using eye conjunctiva images.
In one existing prior art (Mobscope), the blood smear on a glass slide is placed on the microscope stage and a smartphone is mounted to the viewing port of the light microscope using a smartphone support. The magnification factor of the system is stated as 1000 (100x * 10x * 1.4x). The image of the blood smear is acquired with the smartphone camera and passed to the processing phase. One constraint of Mobscope is the need for immersion oil to be applied to the smear; another is that only one slide can be studied at a time.
One prior art, US20230360220 A1, titled "Method and system for imaging eye blood vessels", describes a method of diagnosing a condition of a subject comprising: receiving image data of an anterior of an eye of the subject, and analyzing the image data to detect at least one of: flow of individual blood cells in limbal or conjunctival blood vessels of the eye, and morphology of limbal or conjunctival blood vessels. The method also comprises determining the condition of the subject based on the detection(s).
In another prior art titled "Detection of anemia using conjunctiva images: A smartphone application approach", Peter Appiahene et al. focus on pallor analysis and use images of the conjunctiva of the eyes to detect anemia using machine learning techniques. The study used a publicly available dataset of 710 images of the conjunctiva of the eyes acquired with a unique tool that eliminates any interference from ambient light. The authors combined Convolutional Neural Networks, Logistic Regression, and a Gaussian Blur algorithm to develop a conjunctiva detection model and an anemia detection model running on a FastAPI server connected to a frontend mobile app built with React Native. The developed model was embedded into a smartphone application that can detect anemia by capturing and processing a patient's conjunctiva with a sensitivity of 90%, a specificity of 95%, and an accuracy of 92.50% on average, in about 50 s.
In yet another prior art titled "Detection of Anemia from Image of the Anterior Conjunctiva of the Eye by Image Processing and Thresholding", Azwad Tamir et al. disclose a mechanism for the automated detection of anemia through a non-invasive visual method. The process detects anemia by analyzing the anterior conjunctival pallor of the eye. It operates by quantifying the conjunctival color from digital photographs of the eye taken with a smartphone camera of appropriate resolution under adequate lighting conditions, with the help of an Android application the authors devised. These images are processed to obtain the red and green component spectra of the conjunctiva color, which are compared against a threshold to determine whether the patient is anemic or not. The authors employed their method on 19 test subjects with known hemoglobin levels; the results agreed with the patients' blood reports in 15 out of the 19 cases, which translates to an accuracy of 78.9 percent.
A further prior art, patent number IN202341012964, titled "Non-invasive detection of anemia using AI techniques", discloses a system capable of the non-invasive and automated diagnosis of anemia using visual methods. The treatment of this disease might benefit from non-invasive technologies for monitoring and recognizing possible risks of anemia, and from smartphone-based devices capable of performing this task. The above-mentioned prior art takes account of the results of a few well-known previous investigations into the subject matter.
Another prior art, patent number IN202341014923, titled "Framework for design and development of non-invasive method of anemia diagnosis in rural areas", discloses a non-invasive technique for anemia detection using smartphones. The proposed approach uses an artificial neural network to detect anemic patients from images of the tongue and fingernails. To overcome the limitations of small datasets, 15 image augmentation techniques are used to increase the number of available training images. Computer vision algorithms are used for preprocessing and feature extraction to standardize the non-invasive method.
The conventional invasive approach of blood-collection-based anemia detection is time-consuming and not user-friendly. A non-invasive approach to anemia detection based on eye conjunctiva images enables easy diagnosis and mass screening of anemia. An existing constraint in eye-conjunctiva-based screening is the non-uniform distance between the lens and the eye. Therefore, there is a need for a high-performance tool for preclinical screening of anemia.
OBJECTIVES OF THE INVENTION
It is an object of the present invention to develop a preclinical screening system to detect anemia using eye conjunctiva images.
It is an object of the present invention to design a 3D-printed spacer integrated with a macro lens and smartphone camera for image acquisition.
It is another object of the present invention to localize the eye conjunctiva region using the You Only Look Once (YOLO) object detection method.
It is a further object of the present invention to embed a trained model in the mobile application to screen for anemia subjects.
SUMMARY OF THE INVENTION
It is an aspect of the present invention to obtain a preclinical screening system to detect anemia using eye conjunctiva images through a non-invasive technique.
It is another aspect of the present invention to develop a spacer assembly having a 3D-printed spacer integrated with a macro lens, in which the spacer is attachable to the camera of a smart device through a clip arrangement, thereby forming an image acquisition module that performs image acquisition of the human eye conjunctiva with finer details.
It is another aspect of the present invention to develop an interactive display module integrated with a microprocessor and a memory module, in which the display module receives the images from the image acquisition module. The image acquisition module is integrated with an object detection means that performs localization using the YOLO object detection method to detect the conjunctiva region of the human eye from the localized region. This is followed by segmentation and feature extraction using statistical and colour-based features. The object detection means is embedded with a trained Multi-Layer Perceptron (MLP) classifier to classify the anemic condition. The present system yields an accuracy of 98.3% without any invasive technique.
It is another aspect of the present invention to develop a preclinical screening system in which the spacer is portable and standardizes the distance between the human eye and the camera at about 4.5 cm, and the macro lens has a magnification factor of about 20 times.
It is another aspect of the present invention to develop a preclinical screening system with a spacer fabricated from PETG filament material, which helps in image acquisition irrespective of the lighting conditions.
It is another aspect of the present invention to develop a preclinical screening system in which segmentation of the conjunctiva region of the human eye is performed using the Segment Anything Model (SAM).
It is another aspect of the present invention to develop a preclinical screening system with a Multi-Layer Perceptron (MLP) classifier that classifies the anemic condition as normal, moderate anemic, or anemic.
It is another aspect of the present invention to develop a preclinical screening system with a display device in which the results with anemic classification are geo-tagged to maintain the patient data, helping to identify anemic clusters.
It is another aspect of the present invention to develop a method for detecting anemia using the spacer assembly with the following steps: obtaining the image of the eye using the 3D-printed spacer assembly; extracting the region of interest (ROI) using the YOLO object detection method, which detects the conjunctiva region of the human eye; segmenting the extracted image using the Segment Anything Model (SAM); extracting features from the segmented image, the features comprising statistical features such as the mean of the red and green channels, the standard deviation of the red channel, and the entropy of the red channel, and color-based features such as the percentage of red pixels and the high hue ratio; classifying the extracted features using a Multi-Layer Perceptron (MLP); and transferring the result to a digital module where the classified images are identified as anemic, moderate anemic, or normal.
BRIEF DESCRIPTION OF DRAWINGS:
For a more complete understanding of the present invention, reference is now made to the following descriptions taken in conjunction with the accompanying drawing, in which:
Figure 1: Illustrates the block diagram of the present invention for detecting anemia using the conjunctiva region of the eye.
Figure 2: Illustrates (a) 3D-printed Spacer using PETG Filament (b) Side View representation of 3D-printed Spacer and (c) Top View representation of 3D-printed Spacer used in the present invention.
Figure 3: Illustrates the image acquisition setup using the spacer assembly, depicting the smart device, clip-attachable macro lens, 3D-printed spacer, and human eye.
Figure 4: Illustrates (a) extraction of the coordinates of the bounding boxes and (b) segmentation of the conjunctiva using the Segment Anything Model in the present invention.
Figure 5: Illustrates (a) the confusion matrix of mobile phone camera images using SAM and (b) the confusion matrix of the present invention (i.e., 3D-printed-spacer-fitted mobile phone camera images using SAM).
DETAILED DESCRIPTION OF INVENTION:
The following detailed description illustrates by way of example and not by way of limitations.
The image is acquired using a 3D-printed spacer integrated with a macro lens and smartphone camera. The 3D-printed spacer is designed with two openings: one opening with a diameter of 30 mm, to which the macro lens is easily attached, and another opening with a diameter of 54.62 mm, placed around the area of the eye so that the acquired image precisely focuses on the conjunctiva region. The distance between the lens and the subject's eye is thus fixed at 45 mm. This setup allows capturing the finest details with remarkable clarity and precision. The captured image is analyzed using the You Only Look Once (YOLO) object detection method to detect the conjunctiva region, and the conjunctiva region is then segmented using the Segment Anything Model (SAM). Statistical and color-based features are extracted from the segmented eye conjunctiva region and fed to a Multi-Layer Perceptron (MLP) for classification as Anemic, Moderate Anemic, or Normal. The trained model is embedded in the mobile application to screen for anemia subjects in the geographical region. Thus, nutritional deficiency can be easily addressed by health professionals through appropriate actions.
IMAGE ACQUISITION DEVICE
The image acquisition device used in this work consists of a mobile phone camera with an IMX586 sensor capturing images at a resolution of 3000 x 4000 pixels in RAW image mode. The image acquisition setup is a 3D-printed spacer and macro lens fitted to a smartphone camera; it comprises a mobile phone, a 20x macro lens, and the designed 3D-printed spacer. The macro lens is attached to the mobile camera and assembled into the 3D-printed spacer, positioned 45 mm from the subject's eye. This setup allows capturing close-up photos of the eye, enabling the visualization of tiny details while maintaining a fixed focal length. The captured images are saved in PNG format. The 3D spacer, as shown in Fig. 2(a), was printed using polyethylene terephthalate glycol (PETG) material on a 3D printer.
The 3D spacer is fabricated on a Dreamer 3D printer using a thermoplastic material called polyethylene terephthalate glycol (PETG). PETG is resistant to heat, impact, and solvents; it is commonly used in advertising displays and electronic insulators. It is easily recyclable, which limits environmental pollution. The 3D-printed PETG spacer is designed so that the macro lens can be fitted at the smaller end of the spacer, making it portable across all smart devices, including phone cameras.
ROI EXTRACTION
The extraction of the Region of Interest (ROI) in an image is a crucial step in image processing applications, enabling enhanced feature extraction. By isolating the ROI, the analysis can concentrate on the most pertinent parts of the image, improving the accuracy of detection, classification, and other processing tasks. A YOLO (You Only Look Once) based real-time object detection module is utilized in the present invention; it detects the conjunctiva region of the eye using a bounding box, as sketched below.
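The patent does not publish its detection code; the following is a minimal sketch of bounding-box detection with a YOLO model using the ultralytics package, where the weights file `conjunctiva_yolo.pt` and the input filename are hypothetical stand-ins for a custom-trained conjunctiva detector.

```python
# Minimal sketch (not the authors' code): conjunctiva detection with YOLO.
from ultralytics import YOLO

model = YOLO("conjunctiva_yolo.pt")   # hypothetical custom-trained weights
results = model("eye_image.png")      # run detection on one captured image

for box in results[0].boxes:
    x1, y1, x2, y2 = box.xyxy[0].tolist()   # bounding-box corners (pixels)
    conf = float(box.conf[0])               # detection confidence score
    print(f"conjunctiva at ({x1:.0f},{y1:.0f})-({x2:.0f},{y2:.0f}), conf={conf:.2f}")
```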
SEGMENTATION
Segmentation can be implemented using both supervised and unsupervised learning techniques. Here, the supervised Segment Anything Model (SAM) is utilized.
COLOR BASED FEATURES
Extraction of a*Layer
The input image in the RGB color space is converted to the CIE Lab color model (L*a*b*). The L*a*b* space consists of a luminosity layer L*, a chromaticity layer a*, which indicates where the color falls along the red-green axis, and a chromaticity layer b*, which indicates where the color falls along the blue-yellow axis. The a* layer contains the color information required for classification of the eye conjunctiva.
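A minimal sketch of this conversion with OpenCV; note that OpenCV stores the 8-bit a* and b* channels offset by +128, so neutral gray sits at 128 rather than 0.

```python
# Sketch: convert an RGB conjunctiva crop to CIE L*a*b* and isolate a*.
import cv2

bgr = cv2.imread("conjunctiva_crop.png")      # OpenCV loads images as BGR
lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)    # convert to L*a*b*
L, a, b = cv2.split(lab)                      # a* carries the red-green axis
```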
SEGMENT ANYTHING MODEL
The Segment Anything Model (SAM) is a network architecture that uses prompts to generate segmentation masks from images. SAM has three components: an image encoder, a flexible prompt encoder, and a fast mask decoder.
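A hedged sketch of box-prompted SAM segmentation using Meta's segment-anything package and a ViT-B checkpoint; the box coordinates stand in for the YOLO detection from the previous step and are purely illustrative.

```python
# Sketch: extract the conjunctiva by prompting SAM with a bounding box.
import numpy as np
import cv2
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("eye_image.png"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

box = np.array([210, 320, 540, 460])  # hypothetical YOLO box (x1, y1, x2, y2)
masks, scores, _ = predictor.predict(box=box, multimask_output=False)
conjunctiva = image * masks[0][..., None]  # zero out everything but the ROI
```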
STATISTICAL FEATURES
Mean: The mean represents the average pixel intensity in an image, calculated by summing the intensities and dividing by the total number of pixels, indicating brightness and central tendency. The mean is given by equation (1).
$$\mu = \frac{1}{n}\sum_{i=1}^{n} x_i \qquad (1)$$
Standard Deviation: Standard deviation measures the variation in pixel intensity around the mean, indicating the extent to which each value differs from the average. A lower standard deviation suggests that the values are closely clustered, while a higher standard deviation indicates a wider range. The mathematical equation is given by equation (2).
$$\sigma = \sqrt{\frac{1}{N-1}\sum_{i=1}^{N}\left(x_i - \bar{x}\right)^2} \qquad (2)$$
Entropy: Entropy measures the uncertainty in pixel intensities in an image, providing details on the intricacy of the distribution. It is often estimated using the histogram of pixel intensities, with higher entropy values indicating a more complicated distribution and lower entropy values indicating a more uniform distribution. The mathematical equation is given by equation (3).

$$H(x) = -\sum_{i=1}^{n} p(x_i)\log_2 p(x_i) \qquad (3)$$
Ratio of Mean Red and Mean Green channels: The ratio of the mean red and green channels in an image represents the average intensity or brightness relationship between the red and green color channels within an image. The equation is given by equation (4).
$$\text{Ratio} = \frac{\mu_R}{\mu_G} \qquad (4)$$
Difference between Mean red and mean green channels: The difference between the mean red and green channels in an image represents the difference in intensity or brightness between the red and green color channels within an image. The equation is given by equation (5).
$$\text{Difference} = \mu_R - \mu_G \qquad (5)$$
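A sketch of equations (1)-(5) over the segmented conjunctiva pixels, assuming `conjunctiva` is the masked RGB array produced in the SAM step and that all-zero background pixels are excluded before computing statistics.

```python
# Sketch: statistical features (eqs. 1-5) from the masked conjunctiva region.
import numpy as np
from scipy.stats import entropy

def statistical_features(conjunctiva: np.ndarray) -> dict:
    mask = conjunctiva.sum(axis=-1) > 0            # keep only ROI pixels
    red = conjunctiva[..., 0][mask].astype(float)
    green = conjunctiva[..., 1][mask].astype(float)

    hist, _ = np.histogram(red, bins=256, range=(0, 256))  # intensity histogram
    return {
        "mean_red": red.mean(),                            # eq. (1)
        "mean_green": green.mean(),
        "std_red": red.std(ddof=1),                        # eq. (2), N-1 divisor
        "entropy_red": entropy(hist, base=2),              # eq. (3); scipy normalizes
        "ratio_rg": red.mean() / green.mean(),             # eq. (4)
        "diff_rg": red.mean() - green.mean(),              # eq. (5)
    }
```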
COLOR-BASED FEATURES
High Hue Ratio (HHR): The RGB image is converted into the HSI color space. In the HSI color model, H, S, and I represent hue, saturation, and intensity, respectively. A feature called the high hue ratio (HHR) is then extracted, which is the proportion of pixels with high values in the hue component of the image. Equation (6) provides the formula for calculating the HHR.
$$\text{HHR} = \frac{n_H}{N} \qquad (6)$$

where $n_H$ is the number of high-hue pixels in the ROI and $N$ is the total number of pixels.
Red pixel Percentage: An RGB image can be segmented to extract red pixels by first separating the red channel and converting it to grayscale. The next step is to apply Otsu's threshold to the grayscale image. By counting the number of red pixels and calculating the total number of pixels in the image, the percentage of red pixels can be determined using equation (7).
$$\text{Red pixel \%} = \frac{\text{Red Pixels}}{\text{Total Pixels}} \times 100 \qquad (7)$$
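A sketch of equations (6) and (7) under the same assumptions as above; the high-hue threshold is an illustrative value (OpenCV's 8-bit hue channel spans 0-179), and Otsu's threshold separates the red pixels as the description suggests.

```python
# Sketch: color-based features (eqs. 6-7) from the masked conjunctiva region.
import cv2
import numpy as np

def color_features(conjunctiva: np.ndarray, hue_thresh: int = 150) -> dict:
    mask = conjunctiva.sum(axis=-1) > 0
    hsv = cv2.cvtColor(conjunctiva, cv2.COLOR_RGB2HSV)
    hue = hsv[..., 0][mask]
    hhr = np.count_nonzero(hue > hue_thresh) / hue.size        # eq. (6)

    red = conjunctiva[..., 0][mask]
    # Otsu's threshold on the red channel separates "red" pixels (eq. 7)
    thresh, _ = cv2.threshold(red.reshape(-1, 1), 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    red_pct = 100.0 * np.count_nonzero(red > thresh) / red.size
    return {"high_hue_ratio": hhr, "red_pixel_pct": red_pct}
```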
CLASSIFICATION
Multi-Layer Perceptron
A multi-layer perceptron is a fully connected dense network that can map any input dimension to the required output dimension. A neural network with hidden layers is called a multi-layer perceptron; in this work, the network uses a single fully connected hidden layer. The network is built from a large number of processing elements with adjustable weights for each input. These processing elements are typically arranged layer by layer, with full or partial connections between the layers. In general, there are three or more layers: an input layer where data are fed to the network via an input buffer, an output layer with a buffer containing the output response to a given input, and one or more intermediate or hidden layers. Table 2 lists the overall model parameters used in the study.
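The patent does not name a training framework; the following is a minimal sketch using scikit-learn's MLPClassifier with the Table 2 parameters (one hidden layer of 25 units, ReLU activation, 1000-iteration limit, 90/10 split). `X` and `y` are assumed to be the extracted feature matrix and the three-class labels.

```python
# Sketch: MLP classifier with the Table 2 hyperparameters.
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.10, stratify=y, random_state=0)  # 90/10 split (Table 2)

clf = MLPClassifier(hidden_layer_sizes=(25,),  # one hidden layer, 25 units
                    activation="relu",         # ReLU activation (Table 2)
                    max_iter=1000)             # iteration limit (Table 2)
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))  # labels: anemic / moderate anemic / normal
```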
RESULTS AND DISCUSSIONS
YOLO OBJECT DETECTION
The YOLO pipeline is deployed to detect the region of interest with a bounding box and an associated confidence score. It is observed that the eye conjunctiva region is detected properly, making it easier to extract using segmentation techniques.
The performance of the YOLO model is evaluated based on the box loss, mAP50(B), and mAP50-95(B). Box loss quantifies how well the model predicts the location and size of the bounding boxes around the objects; training and validation box losses of 0.96 and 0.67, respectively, are obtained. A lower box loss means the model is better at predicting where objects are and their scale. mAP50(B) is the mean average precision calculated at an intersection over union (IoU) threshold of 0.50. mAP50-95(B) is the mean average precision averaged over IoU thresholds from 50% to 95%, providing a comprehensive view of the model's performance across different levels of detection difficulty.
SEGMENT ANYTHING MODEL
In the Segment Anything Model process, the YOLO object detection algorithm is utilized to identify the region of interest, providing a bounding box and confidence score for the detected objects. These coordinates are then used to create a segmentation mask through the Segment Anything Model, which removes the background from the input image and extracts the conjunctiva region, as shown in Fig. 4(b).
DIFFERENT WHITE BALANCE CONDITIONS
After segmenting the conjunctiva region, applying different lighting conditions helps in developing a model that is better suited to practical environmental conditions. The segmented image is rendered under different lighting conditions: cloudy, daylight, flash, fluorescent, shade, and tungsten. Image parameters may vary when the lighting conditions change, which can result in inaccurate predictions if the parameters of a single colour space alone are considered. Thus, images acquired using the camera are augmented and saved under six different lighting conditions, as sketched below.
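The patent does not specify how the six white-balance variants are generated; one common way to simulate them is per-channel gain scaling, sketched here with illustrative (not patent-specified) gain values.

```python
# Sketch: simulate white-balance presets by scaling RGB channel gains.
import numpy as np

WB_GAINS = {  # hypothetical per-channel (R, G, B) gains for the six presets
    "cloudy":      (1.10, 1.00, 0.90),
    "daylight":    (1.00, 1.00, 1.00),
    "flash":       (1.05, 1.00, 0.95),
    "fluorescent": (0.95, 1.00, 1.10),
    "shade":       (1.15, 1.00, 0.85),
    "tungsten":    (0.85, 1.00, 1.20),
}

def apply_white_balance(rgb: np.ndarray, preset: str) -> np.ndarray:
    gains = np.array(WB_GAINS[preset])
    return np.clip(rgb.astype(float) * gains, 0, 255).astype(np.uint8)
```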
FEATURE EXTRACTION
The extraction of features from the segmented eye region is crucial for evaluating the classifier model's performance. The features include color-based and statistical features, such as the high hue ratio (HHR), red pixel percentage, and the mean and standard deviation of the red, green, and blue channels in the RGB color space, as well as the L*, a*, and b* channels in the Lab color space and the H, S, and V channels in the HSV color space. Additionally, combined features, namely the ratio of the mean red to mean green channels and the difference between the mean red and mean green channels, enhance the performance of the classifier model.
The features extracted from images captured with the 3D-printed-spacer macro-lens-fitted mobile phone camera and segmented with the Segment Anything Model are shown in Table 1; the feature values are calculated using equations (1)-(7).
Table 1
The trained classifier model is embedded into a display module with a microprocessor and memory module. In one embodiment, the present invention can be developed as an application; an Android mobile application was chosen because of its wide usage across the world, so that data can be communicated easily among individuals. Data collected from the subject, namely eye conjunctiva images captured with the 3D-printed-spacer macro-lens-fitted mobile phone camera, is loaded into the interactive display module (106) for prediction using the model already embedded in the application. Based on the information given by the user, the geographical location of the subject is geo-tagged. The locations of the subjects are shared with healthcare workers, making it easier to screen geographical clusters with suspected anemia. In an embodiment of the present invention, the information is highly secure, as a password is required to access the information and all information is stored. Therefore, the present invention can be used for preclinical screening of anemia in different geographical locations.
The present invention is an efficient, cost-effective solution for capturing the eye conjunctiva region by designing a 3D-printed spacer and macro lens fitted to a mobile phone camera. Images were captured from two camera modalities: a plain mobile phone camera, and a mobile phone camera with the 3D-printed spacer assembly. In particular, the YOLO real-time object detection algorithm precisely detects the conjunctiva region in a given image using a bounding box. The Segment Anything Model (SAM) has been used as the segmentation algorithm to extract the eye conjunctiva region, from which statistical and color-based features are extracted.
Performance metrics
The performance of the classifier is evaluated using metrics such as accuracy, precision, sensitivity, and specificity for the anemic, moderate anemic, and normal classifications; these metrics are defined in equations (8) to (12), where TP denotes true positives, FP false positives, TN true negatives, and FN false negatives. The confusion matrix depicts the actual class versus the predicted class, with diagonal elements denoting accurate predictions and non-diagonal elements denoting inaccurate ones. The performance of classification models is estimated using these metrics, most of which are based on the model's predictions over the dataset.
The model parameters of the MLP classifier are provided in Table 2.
Model parameters | Values of parameters |
---|---|
Layer size | 25 |
Activation function | ReLU |
Iteration limit | 1000 |
No. of fully connected layers | 1 |
Training/testing split | 90% / 10% |
Validation method | Resubstitution |

Table 2
AUC stands for the area under the ROC curve. The main feature of the AUC lies in its ability to measure the quality of a predictor irrespective of the decision threshold. It summarizes the trade-off between the true positive rate (TPR) and false positive rate (FPR) across all possible thresholds, thus providing a comprehensive metric for model performance.
$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \times 100 \qquad (8)$$

$$\text{Precision} = \frac{TP}{TP + FP} \times 100 \qquad (9)$$

$$\text{Sensitivity} = \frac{TP}{TP + FN} \times 100 \qquad (10)$$

$$\text{Specificity} = \frac{TN}{TN + FP} \times 100 \qquad (11)$$

$$\text{F1-score} = \frac{2 \cdot TP}{2 \cdot TP + FP + FN} \times 100 \qquad (12)$$
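Equations (8)-(12) can be computed from the classifier's predictions; a sketch with scikit-learn follows. Specificity is derived from the confusion matrix, since scikit-learn provides no direct specificity scorer, and macro averaging over the three classes is an assumption on our part.

```python
# Sketch: evaluation metrics (eqs. 8-12) from the trained classifier.
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score, confusion_matrix)

y_pred = clf.predict(X_test)
print("Accuracy:", 100 * accuracy_score(y_test, y_pred))                      # (8)
print("Precision:", 100 * precision_score(y_test, y_pred, average="macro"))  # (9)
print("Sensitivity:", 100 * recall_score(y_test, y_pred, average="macro"))   # (10)

cm = confusion_matrix(y_test, y_pred)
# Per-class specificity TN/(TN+FP), averaged over the three classes (11)
spec = np.mean([(cm.sum() - cm[i].sum() - cm[:, i].sum() + cm[i, i])
                / (cm.sum() - cm[i].sum()) for i in range(cm.shape[0])])
print("Specificity:", 100 * spec)
print("F1-score:", 100 * f1_score(y_test, y_pred, average="macro"))           # (12)
```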
The performance metrics for the two image acquisition types, the smart device alone and the present invention, are tabulated in Table 3 and pictorially represented in Fig. 5(a-b).
Image acquisition type | Segmentation method | Accuracy (%) | Sensitivity (%) | Specificity (%) | Precision (%) | F1-score (%) |
---|---|---|---|---|---|---|
Smart device (mobile phone) camera | YOLO with SAM | 95 | 96 | 98 | 95 | 95.49 |
3D-printed spacer macro lens fitted with mobile phone camera | YOLO with SAM | 98.3 | 98 | 99 | 98 | 98 |

Table 3
Advantages
1. The proposed screening tool places the macro lens at the end of the spacer, unlike inside the spacer as in the existing image acquisition system. The clip of the macro lens joins the smartphone and the anterior opening of the spacer, making the tool portable across all types of smartphones.
2. The 3D-printed spacer is designed using polyethylene terephthalate glycol (PETG), which is easy to thermoform and chemically resistant. The material is thus well suited to medical device manufacturing.
3. Because the PETG material is transparent, natural light can be utilized for image capture.
Claims: 1. A preclinical screening system (100) for anemia detection comprising:
a. a spacer assembly (101) comprising a 3D-printed spacer (102) integrated with a Macro Lens (103), wherein the spacer is attachable to camera of smart device (104) through clip arrangement thereby forming an image acquisition module (105) which performs image acquisition of human eye conjunctiva with finer details;
b. an interactive display module (106) integrated with a microprocessor and a memory module, wherein the display module receives the images from the image acquisition module, which is integrated with an object detection means subjected to localization using the YOLO object detection method that detects the conjunctiva region of the human eye from the localized region, followed by segmentation and feature extraction using statistical and colour-based features, wherein the object detection means is embedded with a trained Multi-Layer Perceptron (MLP) classifier to classify the anemic condition, thereby the system yielding an accuracy of 98.3% without any invasive technique.
2. The preclinical screening system for anemia detection as claimed in claim 1, wherein the spacer is portable and standardizes the distance between the human eye and the camera at about 4.5 cm, and the macro lens has a magnification factor of about 20 times.
3. The preclinical screening system for anemia detection as claimed in claim 1, wherein the spacer is fabricated with PETG filament material, which helps in image acquisition irrespective of the lighting conditions.
4. The preclinical screening system for anemia detection as claimed in claim 1, wherein the segmentation of conjunctiva region of human eye is performed by segmentation method using Segment Anything Model (SAM).
5. The preclinical screening system for anemia detection as claimed in claim 1, wherein the Multi-Layer Perceptron (MLP) classifier classifies the anemic condition as normal, moderate anemic, and anemic.
6. The preclinical screening system for anemia detection as claimed in claim 1, wherein the results with anemic classification are geo-tagged to maintain the patient data, thereby helping to identify anemic clusters.
7. A method for detecting anemia using spacer assembly comprising the following steps:
a. obtaining the image of eye using 3D printed spacer assembly;
b. extracting the region of interest (ROI) using YOLO object detection method that detects conjunctiva region of human eye;
c. segmenting the extracted image using Segment Anything Model (SAM);
d. extracting the features from the segmented image obtained in the previous step, wherein the features comprise statistical features such as the mean of the red and green channels, the standard deviation of the red channel, and the entropy of the red channel, and color-based features such as the percentage of red pixels and the high hue ratio;
e. classifying the extracted features using a Multi-Layer Perceptron (MLP); and
f. transferring to a digital module where the classified images are identified to be anemic, moderate anemic or normal.
Documents
Name | Date |
---|---|
202441088606-FORM-26 [18-11-2024(online)].pdf | 18/11/2024 |
202441088606-COMPLETE SPECIFICATION [16-11-2024(online)].pdf | 16/11/2024 |
202441088606-DRAWINGS [16-11-2024(online)].pdf | 16/11/2024 |
202441088606-EDUCATIONAL INSTITUTION(S) [16-11-2024(online)].pdf | 16/11/2024 |
202441088606-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [16-11-2024(online)].pdf | 16/11/2024 |
202441088606-EVIDENCE OF ELIGIBILTY RULE 24C1f [16-11-2024(online)].pdf | 16/11/2024 |
202441088606-FORM 1 [16-11-2024(online)].pdf | 16/11/2024 |
202441088606-FORM 18 [16-11-2024(online)].pdf | 16/11/2024 |
202441088606-FORM 18A [16-11-2024(online)].pdf | 16/11/2024 |
202441088606-FORM 3 [16-11-2024(online)].pdf | 16/11/2024 |
202441088606-FORM FOR SMALL ENTITY(FORM-28) [16-11-2024(online)].pdf | 16/11/2024 |
202441088606-FORM-5 [16-11-2024(online)].pdf | 16/11/2024 |
202441088606-FORM-9 [16-11-2024(online)].pdf | 16/11/2024 |
202441088606-OTHERS [16-11-2024(online)].pdf | 16/11/2024 |