ALZCARE: PERSONALIZED VOICE ASSISTANT FOR ALZHEIMER PATIENTS LEVERAGING DEEP LEARNING AND AZURE IOT


ORDINARY APPLICATION

Published

Filed on 19 November 2024

Abstract

Alzheimer’s disease (AD) significantly impairs cognitive functions, posing challenges in memory, communication, and daily tasks for patients and their caregivers. To address these challenges, ALZCARE introduces a personalized voice assistant designed specifically for Alzheimer’s patients, powered by deep learning and Microsoft Azure IoT. This innovative system leverages natural language processing (NLP) for adaptive, empathetic interactions tailored to individual needs and speech patterns. ALZCARE provides memory support through reminders for daily routines, medication, and personal anecdotes, promoting comfort and familiarity. Integrated with Azure IoT, the system monitors vital health parameters in real time, offering caregivers actionable insights via a secure cloud interface. Safety features, including fall detection and location tracking, ensure timely emergency assistance. Built on Azure’s scalable infrastructure, ALZCARE dynamically learns and evolves to improve care delivery. By combining advanced technology with user-centric design, it empowers patients, relieves caregivers, and advances personalized healthcare for neurodegenerative diseases.

Patent Information

Application ID: 202441089436
Invention Field: BIO-MEDICAL ENGINEERING
Date of Application: 19/11/2024
Publication Number: 48/2024

Inventors

Name | Address | Country | Nationality
Dr. S. Umarani | Professor, Department of Computer Science and Applications (BCA), Faculty of Science and Humanities, SRM Institute of Science and Technology, Ramapuram, Chennai - 600089, Tamil Nadu, India | India | India
Mr. S. Karthik Rajan | UG Student, Department of Computer Science and Engineering, Easwari Engineering College, Ramapuram, Chennai - 600089, Tamil Nadu, India | India | India
Mr. M. Bhuvanesh | Solutions Architect, Virtusa Corporation, Tampa, FL 33619, Florida, USA | India | India
Dr. C. Sundar | Professor & Dean, Faculty of Management, SRM Institute of Science and Technology, Ramapuram, Chennai - 600089, Tamil Nadu, India | India | India
Dr. N. Revathi | Assistant Professor, Department of Computer Science and Applications (BCA), Faculty of Science and Humanities, SRM Institute of Science and Technology, Ramapuram, Chennai - 600089, Tamil Nadu, India | India | India

Applicants

Name | Address | Country | Nationality
Dr. S. Umarani | Professor, Department of Computer Science and Applications (BCA), Faculty of Science and Humanities, SRM Institute of Science and Technology, Ramapuram, Chennai - 600089, Tamil Nadu, India | India | India
Mr. S. Karthik Rajan | UG Student, Department of Computer Science and Engineering, Easwari Engineering College, Ramapuram, Chennai - 600089, Tamil Nadu, India | India | India
Mr. M. Bhuvanesh | Solutions Architect, Virtusa Corporation, Tampa, FL 33619, Florida, USA | U.S.A. | India
Dr. C. Sundar | Professor & Dean, Faculty of Management, SRM Institute of Science and Technology, Ramapuram, Chennai - 600089, Tamil Nadu, India | India | India
Dr. N. Revathi | Assistant Professor, Department of Computer Science and Applications (BCA), Faculty of Science and Humanities, SRM Institute of Science and Technology, Ramapuram, Chennai - 600089, Tamil Nadu, India | India | India

Specification

Description:
FIELD OF INVENTION
ALZCARE revolutionizes Alzheimer's care by integrating a personalized voice assistant powered by deep learning and Azure IoT. This innovative system enhances patient support, enabling real-time health monitoring, cognitive engagement, and personalized reminders. It fosters independence, ensures safety, and streamlines communication between patients, caregivers, and healthcare providers, setting a new standard in intelligent, compassionate Alzheimer's care through cutting-edge technology.
BACKGROUND OF INVENTION
Alzheimer's disease, a progressive neurodegenerative disorder, significantly impairs memory, cognitive function, and daily living abilities, placing an immense emotional and physical burden on patients and caregivers. The rising prevalence of Alzheimer's globally underscores the urgent need for innovative, scalable solutions that can assist patients in maintaining their independence while alleviating caregiver strain. Traditional approaches often fall short in addressing the dynamic and personalized care needs of Alzheimer's patients.
ALZCARE emerges as a groundbreaking invention at the intersection of healthcare and technology, leveraging the power of deep learning and Azure IoT to revolutionize Alzheimer's care. This personalized voice assistant is designed to provide tailored cognitive support, real-time health monitoring, and an intelligent interface for reminders, routines, and emergency assistance. By employing advanced natural language processing, ALZCARE engages patients in meaningful conversations, helping to preserve cognitive functions while fostering a sense of companionship.
Integrated with Azure IoT, ALZCARE seamlessly connects to wearable and smart home devices, enabling proactive health tracking, fall detection, and environment control. Real-time data analytics and predictive insights empower caregivers and healthcare professionals to make informed decisions, ensuring timely interventions and improved outcomes.
This innovation goes beyond caregiving; it transforms lives. By combining cutting-edge artificial intelligence with compassionate design, ALZCARE bridges the gap between technological advancements and human-centric care, setting a new benchmark in Alzheimer's management. It heralds a future where technology serves as an empathetic partner in the journey of Alzheimer's patients and their caregivers.
The patent application number 201917050306 discloses a method of treating traumatic brain injury.
The patent application number 201921044794 discloses a brain activities analysis & notification sending by using EEG signals for stress recognition.
The patent application number 201911035498 discloses a brain wave controlled electric skateboard.
The patent application number 201947018029 discloses a novel anti-human transferrin receptor antibody capable of penetrating blood-brain barrier.
The patent application number 201911005112 discloses a drug delivery system and method thereof for treatment of paraplegias, stroke and brain death.
SUMMARY
ALZCARE represents a revolutionary leap in Alzheimer's care, blending the transformative capabilities of deep learning with the advanced connectivity of Azure IoT to create a personalized voice assistant tailored to the unique needs of Alzheimer's patients. Addressing the challenges posed by this neurodegenerative disorder, ALZCARE is designed to enhance daily living, foster cognitive engagement, and provide peace of mind for caregivers.
At its core, ALZCARE harnesses advanced natural language processing to engage patients in meaningful, empathetic conversations, offering personalized cognitive stimulation that slows memory decline and improves mental well-being. The voice assistant acts as a reliable companion, providing reminders for medications, appointments, and daily tasks, helping patients maintain independence and routine in their lives.
The integration of Azure IoT elevates ALZCARE's functionality by seamlessly connecting it to wearable and smart home devices. This connectivity enables real-time health monitoring, such as tracking vital signs and detecting emergencies like falls. Predictive analytics offer proactive insights, empowering caregivers and healthcare professionals to intervene at critical moments, ensuring timely and effective care.
ALZCARE's intelligent interface also bridges communication gaps, offering a secure platform for caregivers and medical teams to stay informed and connected to the patient's well-being. Its robust data analytics provide valuable insights, driving precision and personalization in Alzheimer's management.
By combining technological innovation with human-centric design, ALZCARE transforms how Alzheimer's care is delivered, fostering dignity, safety, and support for patients while reducing the burden on caregivers, paving the way for a brighter future in dementia care.
Objectives
The main objective of this research is to reduce the uncertainty and overfitting issues, focusing on biomarker region preservation and identification so that the biomarker information of the AD stages can be trained. Meeting this objective fills the research gaps and improves the classification model's performance in AD stage detection. The classification model's performance is compared with existing traditional and recent CNN models, and its efficiency is tested with various accuracy metrics. The subsequent section discusses the methodologies used to process the images for detecting AD stages.


DETAILED DESCRIPTION OF INVENTION
Most of the research papers discussed in this section utilized CNN-based models for detecting the various AD stages, and a few investigated learning concepts with CNN models to improve prediction accuracy. These CNN classifiers perform well, especially on image datasets, but all the AD classification models discussed here perform poorly on uncertain image samples, and very little of this research included preprocessing phases such as rescaling and enhancement. The present framework uses a sliding-window adaptive histogram equalization approach for image enhancement; unlike conventional MRI image equalization, it does not use a single histogram to redistribute the image's lightness values, so the edges of the brain MRI slices are enhanced, making it ideal for local contrast. Most existing approaches also fail to focus on the exact biomarker regions and their feature information, and early CNN models suffer from overfitting because of excessive image feature information. According to earlier studies, no prior work addresses these issues together; therefore, a strong classification model is required to detect the various AD stages without the issues mentioned above. In this research, the following methodologies are incorporated into the classification framework to close these gaps: rescaling, adaptive filtering, and adaptive histogram equalization enhance image quality, reduce the automation system's storage requirements, and preserve biomarker features; the VBM technique segments the exact biomarker regions of Alzheimer's disease from the MRI image; biomarker feature extraction combined with irrelevant-feature reduction identifies the significant biomarker features, reducing the overfitting problem during model training; and the following behaviour of the fish swarm optimizer in the DSNN network helps resolve the uncertainty issues.

METHODOLOGIES OF FSODSNN BASED AD STAGES DETECTION
Figure 1 illustrates the AD detection process. Initially, 3D MRI brain images are collected from patients at scan centres and taken as input to the detection system. In the second stage, all input MRI images are resized to a common size. In the first two convolution layers, filtering and image enhancement techniques are applied to improve image quality. The third convolution layer performs VBM-based biomarker region extraction from the enhanced image, the fourth convolution layer processes the biomarker features, and the fifth convolution layer selects the significant biomarkers. The consolidated convolution features (the constructed feature vector) in the deep, fully connected layers are used to train the DSNN model. Next, the loss is calculated, the network is backpropagated based on the loss value, and the weights are updated using FSO. Finally, the biomarker features are trained to create patterns and generate prediction reports (detect AD stages).

Table 1: Related study on AD stage detection

Figure 1: General structure of FSODSNN based AD stages detection
3D MRI data acquisition
Generally, the input image is collected from a 3 Tesla T1-weighted MRI scanner. In this research, the 3D MRI baseline images (Wee et al., 2019) are taken from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database, obtained as 3 Tesla T1-weighted images; structural T1-weighted MRI scans are acquired using 1.5 T or 3 T scanners. The typical 1.5 T acquisition parameters are repetition time (RT) = 2400 ms, minimum full echo time (ET), inversion time (IT) = 1000 ms, flip angle = 8, field of view (FOV) = 240 × 240 mm², and acquisition matrix = 256 × 256 × 170 in the x, y, and z dimensions, capturing a voxel size of 1.25 × 1.25 × 1.2 mm³. The acquisition parameters for the 3 T scans are RT = 2300 ms, minimum full ET, IT = 900 ms, flip angle = 8, FOV = 260 × 260 mm², and acquisition matrix = 256 × 256 × 170, capturing a voxel size of 1.0 × 1.0 × 1.2 mm³. The overall AD dataset is separated into test and training sets and contains four classes: non-demented/CN, very mild demented/SMC, mild demented/MCI, and moderate demented/AD. These MRI images are collected from various databases, discussed later in this section; Table 2 lists the overall images collected from data sources such as ADNI, AIBL, and OASIS. The Alzheimer's Disease Neuroimaging Initiative (ADNI) database (ADNI, n.d.) is categorized into four datasets: ADNI-1, ADNI-GO (Grand Opportunities), ADNI-2, and ADNI-3. ADNI-1 and ADNI-GO jointly contain 400 SMC, 400 MCI, and 200 AD images; ADNI-2 contains 150 ND, 150 SMC, 150 MCI, and 150 AD; and ADNI-3 contains 133 ND, 151 MCI, and 87 AD.
The Australian imaging Biomarker & Lifestyle Flagship Study of Aging (AIBL) (AIBL, n.d.) database contains more than 1000 participants. The dataset contains the images of AD, MCI and CN. Open Access Serious of Imaging Studies (OASIS) dataset (OASIS, n.d.) collected from nearly 1000 participants. It contains 609 CN, 489 MCI patients MRI images. The collected MRI images are utilized for the analysis and detection of stages of AD.

Table 2: Overall images collected from the various data sources
Resizing
The input MRI images are resized to reduce memory usage and improve classification performance. All input images are normalized to a common size during resizing: 240 × 256 × 176-voxel images are resized after pre-processing to 96 × 96 × 64, i.e., the resizing range [96 96 64]. The voxel intensities are rescaled into the normalization range [0, 1], with 0 the minimum and 1 the maximum, and the quality of the rescaled image is subsequently enhanced for better prediction. This process reduces the memory utilization of the AD stages detection system.
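A minimal sketch of this resize-and-normalize step follows. Nearest-neighbour downsampling is an assumption for illustration; the text does not specify which interpolation is used.

```python
import numpy as np

def resize_nearest(volume, out_shape):
    """Nearest-neighbour resize of a 3D volume (illustrative stand-in for
    the resizing step; the interpolation method is an assumption)."""
    idx = [np.minimum(np.arange(o) * s // o, s - 1)
           for o, s in zip(out_shape, volume.shape)]
    return volume[np.ix_(*idx)]

def minmax_normalize(volume):
    """Rescale voxel intensities into the normalization range [0, 1]."""
    v = volume.astype(np.float64)
    vmin, vmax = v.min(), v.max()
    return (v - vmin) / (vmax - vmin) if vmax > vmin else np.zeros_like(v)

# 240 x 256 x 176 voxels in, 96 x 96 x 64 out, intensities in [0, 1].
vol = np.random.default_rng(0).integers(0, 4096, size=(240, 256, 176))
small = minmax_normalize(resize_nearest(vol, (96, 96, 64)))
```

Resizing before normalizing keeps the memory footprint low, matching the stated goal of reducing the system's storage requirements.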
Adaptive filtering
Noise removal is an essential preprocessing step for preserving the MRI images' biomarkers (the edges of brain neurons), and it helps prediction performance: Gaussian (white) noise in an MRI image reduces prediction accuracy. The adaptive filtering approach produces better filtering results than linear filtering because it preserves edges and other high-frequency parts of an MRI image. The mathematical derivation of the adaptive filter is as follows,

The following three conditions filter noise while preserving biomarker edges, where σ²η is the noise variance, σ²L the local variance, g(x, y) the observed voxel value, and mL the local arithmetic mean:

Condition 1. If σ²η = 0, return simply the value of g(x, y) (zero noise, nothing to filter).
Condition 2. If σ²L ≫ σ²η, return a value close to g(x, y) (the high local variance associated with edges is preserved).
Condition 3. If σ²L = σ²η, return the arithmetic mean mL.

Noise in MRI imaging is a common issue, and it is filtered using Equation (1), which satisfies the three conditions above while preserving biomarker edges. The next essential stage of the convolution layer is image enhancement.
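The three conditions match the classic adaptive local noise-reduction filter f(x, y) = g(x, y) − (σ²η/σ²L)(g(x, y) − mL). Since Equation (1) itself is not reproduced in this text, that standard form is an assumption; a hedged NumPy sketch:

```python
import numpy as np

def adaptive_local_noise_filter(g, noise_var, win=3):
    """Adaptive local noise-reduction filter satisfying the three conditions:
    f = g - (noise_var / local_var) * (g - local_mean), ratio clipped to 1.
    noise_var = 0          -> returns g unchanged          (Condition 1)
    local_var >> noise_var -> returns a value close to g   (Condition 2)
    local_var == noise_var -> returns the local mean mL    (Condition 3)"""
    pad = win // 2
    gp = np.pad(np.asarray(g, dtype=np.float64), pad, mode='reflect')
    win_view = np.lib.stride_tricks.sliding_window_view(gp, (win, win))
    local_mean = win_view.mean(axis=(-1, -2))
    local_var = win_view.var(axis=(-1, -2))
    ratio = np.minimum(noise_var / np.maximum(local_var, 1e-12), 1.0)
    return g - ratio * (g - local_mean)

img = np.random.default_rng(1).normal(100.0, 5.0, size=(8, 8))
denoised = adaptive_local_noise_filter(img, noise_var=25.0)
```

Clipping the variance ratio at 1 ensures the filter never moves a voxel past its local mean, which keeps edge voxels (high local variance) close to their observed values.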
Adaptive histogram equalization
Sliding-window adaptive histogram equalization (AHE) is used in the framework for image enhancement. Unlike other equalization approaches, which employ a single histogram to redistribute the lightness values of the whole image, it creates several histograms, each corresponding to a distinct section of the MRI image, and uses them to redistribute the MRI's lightness values. It is therefore well suited to local contrast and enhances the edges in the brain MRI slices. Tiling the image amounts to sliding the rectangle one voxel at a time and updating the histogram incrementally for each voxel, adding the new voxel row and subtracting the row left behind; this condenses the histogram calculation's computational complexity from O(N²) to O(N), where N is the width of the surrounding rectangle. Adaptive histogram equalization transforms each voxel with a transformation function derived from its neighbourhood region; equivalently, each voxel is transformed based on the histogram of the square surrounding it. The resulting enhanced MRI image is taken as input for noise filtering. According to the performance evaluation, the new adaptive filter outperforms the comparison filtering approaches and is therefore suitable for biomedical image reconstruction; extensive ultrasound scans were used to demonstrate the efficacy of a related framework for robust contrast enhancement and multiplicative noise suppression. Figure 2a,b show, at the left, the sample MRI brain images and, at the right, the normalized histograms of both image samples. The AHE approach thus facilitates enhancing image quality.
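The per-voxel transformation described above can be sketched naively as follows; the incremental row update in the text is a speed optimization that does not change the result, so this sketch omits it.

```python
import numpy as np

def adaptive_hist_eq(img, win=7):
    """Naive per-pixel adaptive histogram equalization: each pixel is mapped
    to its rank (local CDF value) within the surrounding win x win window,
    i.e. the transformation function derived from its neighbourhood."""
    pad = win // 2
    p = np.pad(img, pad, mode='reflect')
    out = np.empty(img.shape, dtype=np.float64)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            region = p[y:y + win, x:x + win]
            # Fraction of neighbours not exceeding the centre pixel.
            out[y, x] = (region <= img[y, x]).mean()
    return out

grad = np.tile(np.arange(16), (16, 1))  # simple horizontal gradient image
eq = adaptive_hist_eq(grad, win=5)
```

Because each output value depends only on the local window, low-contrast regions get stretched independently, which is exactly the local-contrast behaviour the text attributes to AHE.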
Voxel-based morphometry (VBM)
The adaptive histogram equalization based enhanced images are used for ROI segmentation. The VBM approach segments the ROI of an image based on three voxel classes: white matter (WM), grey matter (GM), and cerebrospinal fluid (CSF). The significance level of the grey matter cluster is p < 0.05 (corrected). The local maxima of the various biomarker regions and their voxel coordinates are the right cerebellum (20, -62, -64), entorhinal area (27, 0, -20), amygdalae (-24, -2, -18), right posterior insula (38, -6, -2), right inferior temporal gyrus (57, -63, -15), and right inferior occipital gyrus (44, -78, -12). The biomarker regions are segmented using the significant grey-matter value (p) from the local maxima of the biomarker regions mentioned above. Figure 3 illustrates sample pre-processed and VBM-segmented MRI brain images for the four stages of AD; the MCI, AD, CN, and SMC stages are displayed in the first, second, third, and fourth rows. The features of the segmented biomarker regions are then taken for feature extraction.

Figure 2: Sample adaptive histogram equalization (AHE) results. (a) Sample image 1 with histogram. (b) Sample image 2 with histogram

Figure 3: Segmented sample biomarkers of CN, SMC, MCI, and AD regions
Feature extraction
Different feature extraction methods are applied to extract the brain's segmented biomarker regions' MRI biomarker features. This research applied grey-level co-occurrence matrices (GLCM), Gabor, and wavelet features to extract the MRI image's biomarker information. GLCM feature extracts the numerical features using spatial relationships of similar grey tones.



Equation (4) is used to estimate the entropy value of an MRI image, where GM[r, c] is the grey-tone spatial-dependence matrix, r and c denote the row and column indices, and Ngl is the number of distinct grey levels in the quantized image.

Equations (5), (6) and (7) are used to calculate the cluster prominence, cluster shade and cluster tendency values of the segmented biomarker regions, where μx denotes the row mean and μy the column mean. The GLCM-based statistical relationships of the various texture features of the biomarker textures are extracted to train the model.
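Since the GLCM equations themselves are not reproduced in this text, here is a minimal, hedged sketch of how a normalised co-occurrence matrix and its entropy (the Equation (4)-style feature) can be computed:

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Grey-level co-occurrence matrix GM[r, c] for the offset (dx, dy),
    counting co-occurring grey-tone pairs and normalising to sum 1."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def glcm_entropy(m):
    """Entropy over the non-zero entries of the normalised GLCM."""
    p = m[m > 0]
    return float(-(p * np.log2(p)).sum())

patch = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 2, 2, 2],
                  [2, 2, 3, 3]])
gm = glcm(patch, levels=4)
```

Cluster prominence, shade, and tendency are weighted moments of the same matrix around (μx, μy), so they can be derived from `gm` in the same way.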
The mathematic derivation of Gabor filter-based feature extraction has been represented as follows,


Equation (8) gives the classical Gabor-filter-based texture feature: the energy Enr_k, k = 1, 2, in the form of the l1-norm and l2-norm, where r and c are the dimensions of the sub-band intensity. The Gabor energy-based texture features of the biomarker textures are extracted to train the model.
Wavelet
The fundamental idea of the DWT is to deliver a time-frequency representation. The 2D-DWT represents an image in terms of a set of shifted and dilated wavelet functions ωLH, ωHL, ωHH and scaling functions ϕ that form an orthonormal basis for L²(ℝ²). Given a J-scale DWT, an M × M image is decomposed as follows,

where M_p = M/2^p, Equation (9) is used for the decomposition of the image x(r, c), and Equation (10) represents the derivative of the scaling function. In this research, LH, HL, and HH are called the wavelet or DWT sub-bands; the scaling and wavelet coefficients are taken at scale p and sub-band B. The derivations in Equations (9) and (10) facilitate extracting the wavelet feature information to train the model. The feature extraction techniques discussed above are used in this research to extract the biomarker feature information from the MRI images.
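As the decomposition equations are not reproduced here, a one-level 2D Haar DWT (the wavelet family is an assumption for illustration; the text does not name the one used) shows the LL/LH/HL/HH sub-band split:

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2D Haar DWT returning the LL, LH, HL, HH sub-bands.
    The orthonormal 1/sqrt(2) scaling preserves the total image energy."""
    lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)   # row lowpass
    hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)   # row highpass
    ll = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2)
    lh = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2)
    hl = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2)
    hh = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2)
    return ll, lh, hl, hh

img = np.random.default_rng(2).normal(size=(8, 8))
bands = haar_dwt2(img)
```

Each sub-band is M/2 × M/2, matching M_p = M/2^p at p = 1, and the orthonormal basis means the sub-band energies sum to the original image energy.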
Hilbert Schmidt independence criteria lasso HSICL based feature selection
The HSICL approach performs well on both high- and low-dimensional samples. The framework therefore uses the HSICL method to discard irrelevant features and retain the significant features among the extracted MRI biomarker features. The HSICL optimization is given as follows
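The HSICL objective itself is not reproduced in this text. As background, the empirical HSIC score on which HSIC Lasso is built (a kernel measure of feature-label dependence, which the lasso variant combines with an l1 penalty) can be sketched as follows; the Gaussian kernel and bandwidth are assumptions:

```python
import numpy as np

def rbf_gram(x, sigma=1.0):
    """Gaussian (RBF) Gram matrix of a 1-D sample."""
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def hsic(x, y, sigma=1.0):
    """Empirical Hilbert-Schmidt independence criterion between two 1-D
    samples: tr(K H L H) / (n - 1)^2 with centring matrix H."""
    n = len(x)
    H = np.eye(n) - np.ones((n, n)) / n
    K, L = rbf_gram(x, sigma), rbf_gram(y, sigma)
    return float(np.trace(K @ H @ L @ H) / (n - 1) ** 2)

feat = np.linspace(-1.0, 1.0, 8)
```

A feature strongly dependent on the class label receives a high HSIC score (a constant, uninformative feature scores zero), which is how the method separates significant from irrelevant biomarker features.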

FSODSNN based classifier
Triple ranking loss
Traditionally, a CNN model is trained to predict multiple AD classes. This creates problems when a new class of AD stages needs to be added to or removed from the dataset, because the network model must then be retrained. The DSNN, however, learns a similarity function and can therefore judge whether two images are identical, which lets it classify new classes of data without retraining. The DSNN architecture contains two sub-networks with the same configuration, parameters, and weights, and the parameter-updating process is parallelized across both sub-networks. Learning in the DSNN can be done with a triple (triplet) loss or a contrastive loss; in this research, the triple loss function is used to compute the loss value. The selected features from the biomarker feature vectors are taken as input to train the DSNN model. During model training, the DSNN uses the triple ranking loss function to calculate the loss: a Training Sample image/input (TS) of AD is compared with an Actual Positive image/input (AP) and an Actual Negative/false image/input (AN). The calculated distance between the training sample and the positive sample must be minimized, and the distance between the training sample and the negative sample must be maximized. The mathematical form of the triple loss is represented as follows,

Equation (13) is used to calculate the loss value during model training: the distance between the training and positive samples is reduced while the distance between the training and negative samples is maximized. Here m denotes the margin (maximum distance threshold), and the same network computes the representation for the three triplet elements TS, AP, and AN. The maximum- and minimum-distance-based loss of the triple loss function helps predict the loss during backpropagation, and it is used to discover the similarity among stages (MCI/AD/CN/SMC) by comparing feature vectors. The ReLU activation function performs well with DSNN models, so ReLU is taken as the state activation function during model training.
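Equation (13) has the standard triplet-ranking form max(d(TS, AP) − d(TS, AN) + m, 0); a minimal sketch, with Euclidean embedding distance assumed:

```python
import numpy as np

def triple_ranking_loss(ts, ap, an, margin=1.0):
    """Triple (triplet) ranking loss: max(d(TS, AP) - d(TS, AN) + margin, 0).
    The loss is zero once the negative is farther from TS than the positive
    by at least the margin m."""
    d_pos = float(np.linalg.norm(ts - ap))
    d_neg = float(np.linalg.norm(ts - an))
    return max(d_pos - d_neg + margin, 0.0)

ts = np.array([0.0, 0.0])
easy = triple_ranking_loss(ts, np.array([0.0, 1.0]), np.array([5.0, 0.0]))
hard = triple_ranking_loss(ts, np.array([0.0, 1.0]), np.array([0.0, 1.5]))
```

In the `easy` case the negative is already well separated, so the gradient vanishes; in the `hard` case the positive margin drives the update, which is what pulls same-stage embeddings together and pushes different stages apart.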
Relu-activation function
ReLU is less computationally complex than the sigmoid and tangent activation functions and helps avoid vanishing-gradient problems; therefore, ReLU is used as the state activation function in the DSNN model. The Rectified Linear Unit function ReLU(x) = max(0, x) and its derivative ReLU′ are denoted as follows


The state activation conditions of ReLU are given in Equations (14) and (15). During model training, ReLU handles the framework's state activation for each node's prediction loss and hyperparameter values. The ReLU function activates a node whenever its input is positive.
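The activation and its derivative can be stated in two lines:

```python
import numpy as np

def relu(x):
    """ReLU(x) = max(0, x): the node activates only for positive input."""
    return np.maximum(x, 0.0)

def relu_grad(x):
    """Derivative ReLU'(x): 1 for x > 0, else 0 (0 chosen at x = 0)."""
    return (x > 0).astype(np.float64)

z = np.array([-2.0, 0.0, 3.0])
```

The 0/1 derivative is why ReLU is cheap and avoids the vanishing gradients of sigmoid and tanh: active units pass the gradient through unchanged.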
Optimizer
The fish swarm optimizer (FSO) is incorporated into the DSNN model to identify new MRI (AD class) samples; it updates the weights of the DSNN's network-node parameters during model training and prediction. The multi-objective behaviours of the fish swarm, namely target behaviour, random behaviour, teeming behaviour, and following-based convergence, are used to optimize the network nodes' parameter weights during the hyperparameter-update process. The position of the input is represented as the 3D MRI image feature vector (attribute) A

In Equation (16), the DSNN model's convolution layers construct the input feature vector A, and the fitness function of the target (B) is denoted as Am.

Hence, in Equation (17), the derivation of dmn is used to calculate the distance between n and m for the specific attribute. The target-searching behaviour contains two conditions: in the first, if the neighbouring state improves the fitness, the move is considered convergence and the fish steps from Am towards An; otherwise, it randomly chooses the next state An.

Equation (20) calculates the following-based convergence function: a fish selects its position within its visual range, and likewise the DSNN classifier's weight-update position is chosen from the local minima of a neighbour, taken as the convergence value. Whenever a new biomarker pattern of AD stages arrives for training or testing, these direction-finding behaviours of the fish swarm optimizer in the DSNN classifier help predict the possible AD stages. In this research, the image-processing methodologies discussed above and their features are combined to create the FSODSNN model for classifying T1 MRI brain images with respect to Alzheimer's disease. Pseudocode 1 above gives the overall functionality of the FSODSNN-based AD stages detection framework; the collective benefit of the features discussed earlier helps this system detect AD stages better. The performance of the FSODSNN model is discussed in the subsequent section.
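The following behaviour can be sketched as below. This is a hedged simplification of a generic artificial fish swarm step (the patent's exact update rules in Equations (16)-(20) are not reproduced in this text); the visual range, step size, and random fallback are illustrative assumptions.

```python
import numpy as np

def fso_following_step(positions, fitness, visual=1.0, step=0.3, rng=None):
    """One 'following' move of a simplified fish swarm optimizer: each fish
    moves a fixed step towards the best-scoring neighbour inside its visual
    range, or takes a small random step when no neighbour is better."""
    rng = rng or np.random.default_rng(0)
    scores = np.array([fitness(p) for p in positions])
    new = positions.astype(np.float64).copy()
    for i, p in enumerate(positions):
        dists = np.linalg.norm(positions - p, axis=1)
        nbrs = [j for j in np.where((dists > 0) & (dists < visual))[0]
                if scores[j] < scores[i]]
        if nbrs:
            best = min(nbrs, key=lambda j: scores[j])  # convergence target
            d = positions[best] - p
            new[i] = p + step * d / (np.linalg.norm(d) + 1e-12)
        else:
            new[i] = p + step * rng.uniform(-1.0, 1.0, size=p.shape)
    return new

# Two fish minimizing distance to the origin: the worse fish follows the better.
swarm = np.array([[0.0, 0.0], [0.5, 0.0]])
moved = fso_following_step(swarm, lambda p: float(np.linalg.norm(p)))
```

In the DSNN setting the "positions" would be candidate weight vectors and the fitness the training loss, so a following step pulls weights towards the best neighbouring candidate.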










DETAILED DESCRIPTION OF DIAGRAM

Figure 1: General structure of FSODSNN based AD stages detection
Figure 2: Sample adaptive histogram equalization (AHE) results. (a) Sample image 1 with histogram. (b) Sample image 2 with histogram
Figure 3: Segmented sample biomarkers of CN, SMC, MCI, and AD regions

Claims:

1. AlzCare: Personalized Voice Assistant for Alzheimer Patients Leveraging Deep Learning & Azure IoT claims that the evaluation report in the results and discussion section shows that the new convolution-based AD detection framework outperforms the comparison approaches on all evaluation metrics.
2. It clearly shows that the new sample training feature of FSO in DSNN helps resolve the uncertainty issues during the AD stages detection.
3. The HSICL based feature reduction technique reduces unwanted feature information during the model training, reducing over-fitting problems. So in AD stages, the detection framework predicts the stages with less error than comparison algorithms.
4. The suitable biomarker feature enhancement and biomarker region identification help train the classification model efficiently, improving accuracy and reducing error.
5. This research's main objective is to resolve the uncertainty, overfitting, biomarker region preserving and extraction issues.
6. The overall evaluation result shows that the new AD detection model has reliably achieved the research objective.
7. In this research, AD stages detection attained a maximum accuracy of 99.89% and an error rate of 0.11%. Since the current research achieves the highest accuracy rate among AD stages detection systems, future research will explore a heuristic, pixel-examination-based approach to detect AD stages with the maximum forecast rate.
8. The primary goal of this research is to address the challenges of uncertainty, overfitting, biomarker preservation, and extraction. Overall, the results suggest that the novel AD detection model met the study's accuracy goal.
9. Researchers are currently working to improve AD stage detection systems, and future studies will use the heuristic pixel-examination approach to detect AD stages with the greatest forecast rate.

Documents

Name | Date
202441089436-COMPLETE SPECIFICATION [19-11-2024(online)].pdf | 19/11/2024
202441089436-DRAWINGS [19-11-2024(online)].pdf | 19/11/2024
202441089436-FORM 1 [19-11-2024(online)].pdf | 19/11/2024
202441089436-FORM-9 [19-11-2024(online)].pdf | 19/11/2024
202441089436-POWER OF AUTHORITY [19-11-2024(online)].pdf | 19/11/2024
