ADAPTIVE HYBRID META HEURISTIC-CNN FRAMEWORK FOR AUTOMATED DETECTION OF MANGO LEAF DISEASES
ORDINARY APPLICATION
Published
Filed on 14 November 2024
Abstract
ABSTRACT “ADAPTIVE HYBRID META HEURISTIC-CNN FRAMEWORK FOR AUTOMATED DETECTION OF MANGO LEAF DISEASES” The present invention provides an adaptive hybrid meta-heuristic-CNN framework for automated detection of mango leaf diseases that introduces a novel mango leaf disease prediction pipeline with four key steps: pre-processing, image segmentation, feature extraction, and disease prediction. The gathered raw picture is first pre-processed using contrast enhancement and histogram equalisation to eliminate noise and other undesirable artefacts and to improve the image quality. Then, using a geometric mean based neutrosophic with fuzzy c-means method, the pre-processed images are segmented. Following that, the most important features, such as the Upgraded Local Binary Pattern (ULBP), colour features, and pixel features, are retrieved from the segmented images. These features are fed into the disease detection phase, which is modeled using a Convolutional Neural Network (CNN) deep learning model. (Figure 1)
Patent Information
Application ID | 202431088193 |
Invention Field | COMPUTER SCIENCE |
Date of Application | 14/11/2024 |
Publication Number | 47/2024 |
Inventors
Name | Address | Country | Nationality |
---|---|---|---|
Pradeep Kumar Mallick | School of Computer Engineering, Kalinga Institute of Industrial Technology (Deemed to be University), Patia Bhubaneswar Odisha India 751024 | India | India |
Abanikanta Pattanayak | Gandhi Institute of Excellence Technocrats Bhubaneswar Odisha India 751024 | India | India |
Tamasa Priyadarsin | Gandhi Institute of Excellence Technocrats Bhubaneswar Odisha India 751024 | India | India |
Laxmiparbati Das | Gandhi Institute of Excellence Technocrats Bhubaneswar Odisha India 751024 | India | India |
Applicants
Name | Address | Country | Nationality |
---|---|---|---|
Kalinga Institute of Industrial Technology (Deemed to be University) | Patia Bhubaneswar Odisha India 751024 | India | India |
Specification
Description: TECHNICAL FIELD
[0001] The present invention relates to the field of agricultural science, and more particularly to an adaptive hybrid meta-heuristic-CNN framework for automated detection of mango leaf diseases.
BACKGROUND ART
[0002] The following discussion of the background of the invention is intended to facilitate an understanding of the present invention. However, it should be appreciated that the discussion is not an acknowledgment or admission that any of the material referred to was published, known, or part of the common general knowledge in any jurisdiction as of the application's priority date. Any publication details provided in this background are taken only as references for describing the problems of the associated prior art, in the general terminologies or principles of science and technology, or both.
[0003] Mango, also recognized as "The King of Fruits," is a major fruit crop cultivated in several countries around the globe. India is the largest producer of mangoes, producing 40% of the world's mangoes. Pests and diseases damage crop production, destroying 30 to 40 percent of the crop yield. Unaided eye vision is used to identify mango plant pathogens, which has low precision. Farmers are unaware of the various diseases that affect mango plants, resulting in lower mango fruit yield. Various diseases wreak havoc on the mango harvest. Unevenly coloured black spots appear as a result of disease. These patches occur on the leaf surface or on young fruits. They start out small but soon spread to the whole fruit or leaf, causing the fruit to rot. These illnesses must be diagnosed and monitored within a specific time frame while they are still in their early stages. They are caused by pathogens such as fungi, bacteria, and viruses, which can result in plant death.
[0004] Identifying plant diseases traditionally requires agricultural experts to inspect each plant on a regular basis. Farmers must actively monitor the plants for this, which is a time-consuming process. Early identification of plant disease requires different techniques. Early recognition of disease in the field is the initial step in managing the spread of mango diseases. Traditional disease detection strategies rely on the help of agricultural organizations, but these methods are limited due to a lack of logistical and human resources.
[0005] Increasing internet access, mobile phones, and UAVs have provided new instruments for disease detection that rely on automatic image recognition to aid large-scale detection. With the introduction of computer vision (CV), machine learning (ML), and AI technologies, progress has been made in building automated models that enable accurate and timely diagnosis of plant leaf disease. With the advent of a variety of high-performance computer processors and devices in the previous decade, AI and machine learning technologies have attracted a lot of interest. Deep learning (DL) has been widely acknowledged as being extensively employed in agriculture in recent years. This idea is crucial when it comes to establishing, regulating, sustaining, and improving agricultural productivity. It is at the core of smart farming techniques, which incorporate new technologies, algorithms, and devices into farming. Deep learning using neural networks is a part of machine learning. The advancement of such computer technologies will assist farmers in monitoring and controlling plant diseases. Previous research has shown that image recognition can be used to recognize plant disease in maize, apples, and other plants, both healthy and diseased. The detection of mango leaf diseases using automatic image recognition and attribute extraction has shown positive results. However, extracting characteristics is computationally intensive and requires solid domain expertise. Therefore, optimized deep learning models are suggested as a promising solution.
[0006] In light of the foregoing, there is a need for an adaptive hybrid meta-heuristic-CNN framework for automated detection of mango leaf diseases that overcomes the problems prevalent in the prior art associated with traditionally available methods or systems, and that can be used with the presently disclosed technique with or without modification.
[0007] All publications herein are incorporated by reference to the same extent as if each individual publication or patent application were specifically and individually indicated to be incorporated by reference. Where a definition or use of a term in an incorporated reference is inconsistent or contrary to the definition of that term provided herein, the definition of that term provided herein applies, and the definition of that term in the reference does not apply.
OBJECTS OF THE INVENTION
[0008] The principal object of the present invention is to overcome the disadvantages of the prior art by providing an adaptive hybrid meta-heuristic-CNN framework for automated detection of mango leaf diseases.
[0009] Another object of the present invention is to provide an adaptive hybrid meta-heuristic-CNN framework for automated detection of mango leaf diseases that introduces a new geometric mean based neutrosophic with fuzzy c-means method for segmenting the diseased regions from the normal leaf regions.
[0010] Another object of the present invention is to provide an adaptive hybrid meta-heuristic-CNN framework for automated detection of mango leaf diseases that extracts the Upgraded Local Binary Pattern (ULBP), an enhancement of the texture feature, to train the detection model precisely.
[0011] Another object of the present invention is to provide an adaptive hybrid meta-heuristic-CNN framework for automated detection of mango leaf diseases that introduces a new optimized CNN model to detect the presence or absence of leaf disease in mango trees.
[0012] Another object of the present invention is to provide an adaptive hybrid meta-heuristic-CNN framework for automated detection of mango leaf diseases that introduces a new hybrid meta-heuristic optimization model, referred to as the Cat Swarm Updated Black Widow Model (CSUBW), to optimize the CNN.
[0013] The foregoing and other objects of the present invention will become readily apparent upon further review of the following detailed description of the embodiments as illustrated in the accompanying drawings.
SUMMARY OF THE INVENTION
[0014] The present invention relates to adaptive hybrid Meta heuristic-CNN framework for automated detection of mango leaf diseases.
[0015] Mango is a renowned, tasty fruit that also serves as a source of income. When diseases strike the mango plant, output drops dramatically, making it difficult for growers to sell their harvest. This dilemma has led researchers to develop new ways of detecting and diagnosing mango plant diseases, as well as expert systems to avoid them. It is critical to control all such hazardous illnesses at an early stage in order to enhance production and quality. The following four key steps are used to introduce a novel mango leaf disease prediction in this study: pre-processing, image segmentation, feature extraction, and disease prediction. The gathered raw picture is first pre-processed using contrast enhancement and histogram equalisation to eliminate noise and other undesirable artefacts and to improve the image quality. Then, using a geometric mean based neutrosophic with fuzzy c-means method, the pre-processed images are segmented. Following that, the most important features, such as the Upgraded Local Binary Pattern (ULBP), colour features, and pixel features, are retrieved from the segmented images. These features are fed into the disease detection phase, which is modeled using a Convolutional Neural Network (CNN) deep learning model. Further, to enhance the classification accuracy of the CNN, its weights are fine-tuned using a new hybrid optimization model referred to as the Cat Swarm Updated Black Widow Model (CSUBW). The new hybrid optimization model is developed by hybridizing the standard Cat Swarm Optimization Algorithm (CSO) and Black Widow Optimization Algorithm (BWO). Finally, a performance evaluation is undertaken to validate the efficiency of the projected model.
[0016] In this research work, a new mango leaf disease prediction is introduced by following 4 major phases: (a) pre-processing, (b) image segmentation, (c) feature extraction, (d) disease prediction. The architecture of the proposed work is manifested in Fig. 1. Initially, the collected raw image is pre-processed via "contrast enhancement and histogram equalization" to remove noise as well as other unwanted artifacts and to enhance the quality of the image. Then, the pre-processed images are segmented via the proposed geometric mean based neutrosophic with fuzzy c-means method. Subsequently, the most relevant features, namely ULBP (texture feature), colour features, and pixel features, are extracted from the segmented images. These features are fed as input to the detection phase, which is modeled using a CNN (deep learning model) for disease identification. Further, to enhance the classification accuracy of the CNN, its weights are fine-tuned by a new hybrid optimization model, developed by hybridizing the standard CSO and BWO.
[0017] This work proposes a new automatic mango leaf disease prediction model following 4 major phases: (a) pre-processing, (b) image segmentation, (c) feature extraction, (d) disease prediction. The acquired raw image is first pre-processed using contrast enhancement and histogram equalization to eliminate noise and other undesirable artefacts, as well as to improve the image quality. Then, the pre-processed images are segmented via geometric mean based neutrosophic with fuzzy c-means segmentation. Following that, the most important features are retrieved from the segmented pictures, including texture features (ULBP), colour features, and pixel features. These features are fed into the detection phase, which is based on a CNN model for disease detection. Further, to enhance the classification accuracy of the CNN, its weights are fine-tuned using the CSUBW model, which is developed by hybridizing the standard CSO and BWO algorithms. The overall performance has been recorded for both the proposed and existing models at 70% LR. The accuracy of the proposed work is 0.912, which is 24.5%, 23.6%, 23.6%, 23.6%, 33.3%, 52.8%, 24.56% and 30.7% better than the existing models CSO, BWO, WOA, EHO, SVM, NN, NB and RF, respectively. Thus, from the overall evaluation, it is clear that the proposed work attains the maximal performance and is therefore well suited for mango leaf disease detection.
[0018] While the invention has been described and shown with reference to the preferred embodiment, it will be apparent that variations might be possible that would fall within the scope of the present invention.
BRIEF DESCRIPTION OF DRAWINGS
[0019] So that the manner in which the above-recited features of the present invention can be understood in detail, a more particular description of the invention, briefly summarized above, may have been referred by embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of this invention and are therefore not to be considered limiting of its scope, for the invention may admit to other equally effective embodiments.
[0020] These and other features, benefits, and advantages of the present invention will become apparent by reference to the following text figure, with like reference numbers referring to like structures across the views, wherein:
[0021] Figure 1 shows the architecture of the proposed work, in accordance with an exemplary embodiment of the present invention;
[0022] Figure 2 shows the pre-processing phase;
[0023] Figure 3 shows the feature extraction phase; and
[0024] Figure 4 shows the solution encoding.
DETAILED DESCRIPTION OF THE INVENTION
[0025] While the present invention is described herein by way of example using embodiments and illustrative drawings, those skilled in the art will recognize that the invention is not limited to the embodiments of drawing or drawings described and are not intended to represent the scale of the various components. Further, some components that may form a part of the invention may not be illustrated in certain figures, for ease of illustration, and such omissions do not limit the embodiments outlined in any way. It should be understood that the drawings and the detailed description thereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the invention is to cover all modifications, equivalents, and alternatives falling within the scope of the present invention as defined by the appended claim.
[0026] As used throughout this description, the word "may" is used in a permissive sense (i.e. meaning having the potential to), rather than the mandatory sense, (i.e. meaning must). Further, the words "a" or "an" mean "at least one" and the word "plurality" means "one or more" unless otherwise mentioned. Furthermore, the terminology and phraseology used herein are solely used for descriptive purposes and should not be construed as limiting in scope. Language such as "including," "comprising," "having," "containing," or "involving," and variations thereof, is intended to be broad and encompass the subject matter listed thereafter, equivalents, and additional subject matter not recited, and is not intended to exclude other additives, components, integers, or steps. Likewise, the term "comprising" is considered synonymous with the terms "including" or "containing" for applicable legal purposes. Any discussion of documents, acts, materials, devices, articles, and the like are included in the specification solely for the purpose of providing a context for the present invention. It is not suggested or represented that any or all these matters form part of the prior art base or were common general knowledge in the field relevant to the present invention.
[0027] In this disclosure, whenever a composition or an element or a group of elements is preceded by the transitional phrase "comprising", it is understood that we also contemplate the same composition, element, or group of elements with the transitional phrases "consisting of", "consisting", "selected from the group consisting of", "including", or "is" preceding the recitation of the composition, element or group of elements, and vice versa.
[0028] The present invention is described hereinafter by various embodiments with reference to the accompanying drawing, wherein reference numerals used in the accompanying drawing correspond to the like elements throughout the description. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiment set forth herein. Rather, the embodiment is provided so that this disclosure will be thorough and complete and will fully convey the scope of the invention to those skilled in the art. In the following detailed description, numeric values and ranges are provided for various aspects of the implementations described. These values and ranges are to be treated as examples only and are not intended to limit the scope of the claims. In addition, several materials are identified as suitable for various facets of the implementations. These materials are to be treated as exemplary and are not intended to limit the scope of the invention.
[0029] The present invention relates to adaptive hybrid Meta heuristic-CNN framework for automated detection of mango leaf diseases.
[0030] The phrase "image pre-processing" refers to operations on images at the most fundamental level. If entropy is taken as an information metric, these techniques do not enhance picture information content but rather reduce it. The goal of pre-processing is to enhance the picture data by suppressing unwanted distortions or enhancing specific visual characteristics that are important for subsequent analysis and processing. In this research work, the collected input image is pre-processed via contrast enhancement and histogram equalization models. The pre-processing phases are diagrammatically shown in Fig. 2.
[0031] Contrast Enhancement: The quality of the input image is enhanced by applying the contrast enhancement technique [38]. Initially, the RGB colour channels are converted into HSI. This approach is centred on the intensity component and preserves the hue and saturation values. The intensity values are then separated into two parameter groups, low and high, about a threshold intensity value. Mathematically, these two groups can be expressed as per Eq. (1) and Eq. (2), respectively.
(1)
(2)
Further, the enhanced intensity is computed by using Eq. (3).
(3)
[0032] Here, the enhancement mapping uses the cumulative density computed from the histogram. To reduce inaccuracy, the mean brightness and the input brightness are compared, and this procedure is repeated until appropriate enhanced intensity values are found. The result is created by combining the enhanced intensity with the original hue and saturation values and converting them back to the RGB colour channels.
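The bodies of Eq. (1)-Eq. (3) are not reproduced in the text available here. As an illustration only, the following sketch implements one plausible reading of paragraphs [0031]-[0032]: the intensity channel is split into a low and a high group about a threshold, and each group is remapped through its own cumulative density. The function and variable names are hypothetical, and this is not the patented formulation.

```python
import numpy as np

def enhance_intensity(intensity, threshold=128):
    """Illustrative contrast enhancement on an 8-bit intensity channel.

    The intensities are split into a low and a high group about a threshold
    (cf. Eq. (1)/(2)) and each group is remapped through its own cumulative
    density (cf. Eq. (3)). A sketch of the idea in [0031]-[0032] only.
    """
    intensity = intensity.astype(np.uint8)
    out = np.zeros_like(intensity)
    low_mask = intensity <= threshold
    for mask, lo, hi in ((low_mask, 0, threshold), (~low_mask, threshold + 1, 255)):
        values = intensity[mask]
        if values.size == 0:
            continue
        hist, _ = np.histogram(values, bins=256, range=(0, 256))
        cdf = np.cumsum(hist) / values.size            # per-group cumulative density
        out[mask] = (lo + cdf[values] * (hi - lo)).astype(np.uint8)
    return out

# Example on a random 8-bit intensity image
demo = (np.random.rand(64, 64) * 255).astype(np.uint8)
enhanced = enhance_intensity(demo)
```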
[0033] Histogram Equalization: The intensity distribution of an image is graphically represented by a histogram [36]. It essentially indicates the number of pixels at every pixel intensity that is taken into account. Histogram equalization is a contrast-enhancing image processing method. It works by spreading out the most common intensity values, i.e. widening the image's intensity range. When the useful data is represented by nearby contrast values, this approach generally boosts the global contrast of images, enabling regions with poor local contrast to gain contrast. The pixel values within each colour plane are indicated by the colour histogram of an image. Histogram equalization cannot be applied independently to the image's red, green, and blue components, since this causes drastic colour shifts. If the picture is first transformed to another colour space, such as HSL/HSV, the method can be applied to the luminance or value channel without changing the image's hue or saturation.
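A minimal sketch of the histogram-equalization step as described in [0033], assuming OpenCV is available and that equalization is applied to the value channel of HSV so hue and saturation are preserved; the patent does not mandate any particular library.

```python
import cv2
import numpy as np

def equalize_value_channel(rgb_image):
    """Histogram equalization applied to the HSV value channel only,
    so hue and saturation are preserved (see [0033])."""
    hsv = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2HSV)
    h, s, v = cv2.split(hsv)
    v_eq = cv2.equalizeHist(v)                      # spreads out the most common intensities
    return cv2.cvtColor(cv2.merge((h, s, v_eq)), cv2.COLOR_HSV2RGB)

# Example on a random RGB image
img = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)
equalized = equalize_value_channel(img)
```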
[0034] The pre-processed image acquired at the end of histogram equalization is then subjected to the proposed geometric mean with modified fuzzy C-means based neutrosophic segmentation phase.
Proposed Image Segmentation Phase: Geometric Mean with Modified Fuzzy C-Means Based Neutrosophic Segmentation
[0035] The pre-processed image is segmented via geometric mean with fuzzy C-means based neutrosophic segmentation. Over the universe of discourse, three membership sets, namely true, indeterminacy and false, are used to characterize the neutrosophic image. Each pixel of the image set is converted from the image domain into the neutrosophic domain, as shown in Eq. (4).
(4)
[0036] The membership values are mathematically given in Eq. (5), Eq. (7) and Eq. (9), respectively. They are defined in terms of the pixel intensity value, its local mean value, and the absolute value of the difference between the intensity and its local mean value.
(5)
(6)
(7)
(8)
(9)
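Eq. (4)-Eq. (9) are likewise not reproduced in the text available here. The sketch below follows the commonly used neutrosophic-image construction (true membership from the normalised local mean, indeterminacy from the normalised deviation of a pixel from its local mean, falsity as the complement of truth), which uses the same quantities named in [0035]-[0036]; it is an assumption, not necessarily the patent's exact definitions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def to_neutrosophic(gray, window=3):
    """Map a grayscale image into (T, I, F) membership maps.

    T: normalised local-mean intensity; I: normalised absolute deviation of a
    pixel from its local mean; F = 1 - T. This is the standard neutrosophic
    image construction; the patent's Eq. (4)-(9) may differ in detail."""
    g = gray.astype(np.float64)
    local_mean = uniform_filter(g, size=window)                     # local mean over a small window
    T = (local_mean - local_mean.min()) / (local_mean.max() - local_mean.min() + 1e-12)
    delta = np.abs(g - local_mean)                                  # |intensity - local mean|
    I = (delta - delta.min()) / (delta.max() - delta.min() + 1e-12)
    F = 1.0 - T
    return T, I, F

T, I, F = to_neutrosophic((np.random.rand(64, 64) * 255).astype(np.uint8))
```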
[0037] Geometric-mean operation: the indeterminacy is computed using the local variation of the intensity. In order to correlate the indeterminacy with the true membership, modifications to the indeterminacy should affect the influence of the element and its entropy. For the grey-space image, the geometric-mean operation is performed rather than the existing mean operation.
(10)
The geometric-mean operation is computed as per Eq. (11)–Eq. (18), respectively.
(11)
(12)
(13)
(14)
(15)
(16)
(17)
(18)
Here, the absolute difference between the mean intensity and its mean value is used.
[0038] The difference between the mean intensity and its mean value, after performing the geometric-mean operation, is taken as an absolute value. The multi-features (attributes) are extracted from the produced neutrosophic image. Moreover, to alleviate segmentation errors, the modified fuzzy C-means (MFCM) segmentation algorithm is introduced in this research work.
[0039] The segmentation error can be mathematically given as per Eq. (19).
(19)
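Eq. (19) is not reproduced in the available text. For orientation, the standard fuzzy C-means objective that a modified FCM typically builds on is the textbook form below; the patent's modified criterion may differ.

\[
J_m(U, C) \;=\; \sum_{i=1}^{N} \sum_{j=1}^{k} u_{ij}^{\,m}\, \lVert x_i - c_j \rVert^{2},
\qquad \text{subject to } \sum_{j=1}^{k} u_{ij} = 1 \ \text{for each pixel } i,
\]

where \(u_{ij}\) is the fuzzy membership of pixel \(x_i\) in cluster centre \(c_j\) and \(m > 1\) is the fuzzifier; the algorithm alternates membership and centre updates until \(J_m\) stops decreasing.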
[0040] The segmented image is acquired from the geometric mean with fuzzy C-means based neutrosophic segmentation. Then, from the segmented image, the multi-features, namely ULBP, colour features and pixel features, are extracted.
[0041] Proposed Feature Extraction Phase: Upgraded LBP, Color Feature and Pixel Feature
[0042] The feature extraction is the major phase, in which the most relevant features like ULBP, color features and pixel features are extracted. All these extracted features are together used to train the deep learning classifier in the disease detection phase. The extracted features are shown in Fig.3.
[0043] Upgraded LBP (ULBP): LBP [37] is a simple and effective texture operator that labels the pixels of an image by thresholding each pixel's neighbourhood and interpreting the outcome as a binary number. The LBP is easy to implement and has a superior discriminative capability. The LBP operator labels the image pixels with decimal numbers. During the labelling phase, each image pixel is compared with its neighbours by subtracting the centre pixel value. The resulting negative values are encoded as 0, whereas the positive and zero values are encoded as 1. To obtain a binary number, all of the binary codes are concatenated clockwise from the top-left, and these binary numbers are known as LBP codes. The texture descriptor is used to create a global description made up of many local descriptions, and its discriminative capability is used to extract characteristics from these texture objects. The LBP is applied in a 3×3 block, with the centre pixel used as the threshold; the neighbouring pixels within a given radius are compared against it using their intensities. The newly introduced ULBP model is mathematically shown in Eq. (20).
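Eq. (20), the upgraded operator itself, is not reproduced in the available text; the sketch below implements only the standard 3×3 LBP computation described in [0043] (centre pixel as threshold, neighbours read clockwise from the top-left), assuming 8-bit grayscale input.

```python
import numpy as np

def lbp_code(patch):
    """Basic LBP code for a 3x3 patch: threshold the 8 neighbours against the
    centre (>= centre -> 1, otherwise 0) and read the bits clockwise from the
    top-left, as described in [0043]."""
    center = patch[1, 1]
    # clockwise order starting at the top-left neighbour
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    bits = [1 if n >= center else 0 for n in neighbours]
    return sum(b << i for i, b in enumerate(reversed(bits)))

def lbp_image(gray):
    """Apply the 3x3 LBP operator over the whole image (borders skipped)."""
    gray = gray.astype(np.int32)
    out = np.zeros_like(gray)
    for r in range(1, gray.shape[0] - 1):
        for c in range(1, gray.shape[1] - 1):
            out[r, c] = lbp_code(gray[r - 1:r + 2, c - 1:c + 2])
    return out

codes = lbp_image((np.random.rand(32, 32) * 255).astype(np.uint8))
```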
[0044] Colour Feature: The colour features extracted from the segmented image are: the R channel of RGB, the H channel of HSV and the L channel of LUV images. The RGB colour model is an additive colour model [1] wherein red, green, and blue light are combined in different ways to generate a wide variety of colours. If an RGB picture is 24-bit, every channel for red, green, and blue contains 8 bits; in other words, the image is made up of 3 images (one for each channel), each of which may hold discrete pixels with standard brightness levels of 0 to 255. Each channel of a 48-bit RGB picture (extremely high colour depth) is made up of 16-bit images. In the red channel, the red areas of the image appear significantly brighter than the others. The HSV (or HSL) colour space is a colour representation paradigm based on (human) colour perception. Hue describes the colour itself (mainly red, yellow, green, cyan, blue or magenta); it is usual to arrange the hues around a circle and give the hue's magnitude in degrees (over 360 degrees). In the LUV space, L stands for luminance, whereas U and V represent the chromaticity values of colour images. The extracted colour features form part of the feature set.
[0045] Pixel Feature: Brightness [35] is a relative term, which refers to the intensity of one pixel compared to another. As an image pixel feature, the brightness of each pixel is calculated. The overall feature set is formed by combining the extracted ULBP, colour and pixel features, and is used to train the optimized CNN.
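A small sketch of the colour and pixel features named in [0044]-[0045], assuming OpenCV for the colour-space conversions and taking the per-pixel mean of the RGB channels as one common definition of brightness; the flattened concatenation at the end is an illustrative choice, not a requirement of the specification.

```python
import cv2
import numpy as np

def colour_and_pixel_features(rgb_image):
    """Extract the channels named in [0044] plus a brightness map ([0045])."""
    r_channel = rgb_image[:, :, 0]                                       # R of RGB
    h_channel = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2HSV)[:, :, 0]      # H of HSV
    l_channel = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2LUV)[:, :, 0]      # L of LUV
    brightness = rgb_image.astype(np.float64).mean(axis=2)               # per-pixel brightness
    # Flatten and concatenate into one feature vector per image
    return np.concatenate([f.ravel() for f in (r_channel, h_channel, l_channel, brightness)])

features = colour_and_pixel_features((np.random.rand(64, 64, 3) * 255).astype(np.uint8))
```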
[0046] The extracted features are given as input to the optimized CNN [31][32][33] for detecting the presence or absence of mango leaf diseases. The CNN is made up of three major layers: convolutional, pooling, and fully-connected layers. For the inputs, feature representations are learnt in the convolutional layer, which is made up of several convolution kernels. Diverse feature maps are computed using these convolution kernels. In a feature map, every neuron in the current layer is connected to neighbouring neurons of the previous layer, and this region is denoted as the neuron's receptive field. Moreover, the input is convolved with the learned kernel and an element-wise non-linear activation function is applied in order to obtain a new feature map. Several different kernels are applied to create the complete set of feature maps. The feature value residing at each location of a feature map in a given layer can be computed using Eq. (23).
[0047] Here, the weight vector and bias term correspond to the feature value residing at each location of the feature map, and the extracted feature comes as input to the CNN at that location in the corresponding layer and map. Moreover, this weight function is fine-tuned using the new CSUBW model, considering the objective of minimizing the loss (error) during detection (diseased or non-diseased). Within the CNN, non-linearity is introduced by the activation function. For a convolutional feature, the activation function is computed using Eq. (24).
(24)
[0048] The typical activation functions are sigmoid, tanh and ReLU. Shift-invariance of the feature maps is achieved by reducing their resolution in the pooling layer. The pooled output can be mathematically given as per Eq. (25), where the pooling operates over a local neighbourhood localized around each location. The CNN's final layer is the output layer with one or more fully-connected layers.
(25)
[0049] The training set consists of a number of input-output relations, in which each input datum is paired with a targeted label (presence/absence of disease in the mango leaf) and the CNN produces a corresponding output.
[0050] In the CNN, Eq. (26) determines the loss function, which needs to be minimized. In this research work, the loss function is minimized by fine-tuning the weights of the CNN using the newly developed hybrid optimization model (CSUBW). This is shown in Eq. (27). The solution fed as input to the CSUBW model is manifested in Fig. 4.
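A minimal CNN of the kind outlined in [0046]-[0050] (convolution, ReLU activation, pooling, fully-connected output for the diseased/healthy decision), written here with PyTorch purely as an illustration; the patent does not fix the architecture, layer sizes or framework, and Eq. (23)-(26) are only loosely mirrored by the comments.

```python
import torch
import torch.nn as nn

class LeafCNN(nn.Module):
    """Convolution -> ReLU -> pooling -> fully-connected output, as in [0046]-[0048].
    Layer sizes are illustrative; the specification does not fix them."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # learned convolution kernels
            nn.ReLU(),                                    # non-linear activation (cf. Eq. (24))
            nn.MaxPool2d(2),                              # pooling for shift-invariance (cf. Eq. (25))
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)   # fully-connected output layer

    def forward(self, x):                                 # x: (batch, 3, 64, 64)
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = LeafCNN()
logits = model(torch.randn(4, 3, 64, 64))                 # 4 example images
loss = nn.CrossEntropyLoss()(logits, torch.tensor([0, 1, 0, 1]))   # loss to be minimized (cf. Eq. (26))
```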
[0051] CNN Training by the Proposed CSUBW: The CSO was developed with inspiration from the behaviours of cats. The CSO model solves complex optimization problems with high convergence. The two major behaviours of cats are the seeking mode and the tracing mode. In addition, the BWO was developed with inspiration from the unique mating behaviour of black widow spiders. The BWO model is also good at solving complex optimization problems, and here too the solutions are found to be highly convergent; the search agents find global solutions within the search space. In the literature, the convergence behaviour of hybrid optimization models is reported to be better than that of conventional algorithms [24][25][26][27][28][29][30]. In this research work, we introduce the BWO within the CSO model, and hence name the proposed hybrid algorithm the Cat Swarm Updated Black Widow (CSUBW) model. The steps followed in the CSUBW model are depicted below:
[0052] Initialize the population of search agents in the dimensional space; each search agent has a velocity and a position.
[0053] In the dimensional space, the cats are sprinkled randomly and their velocity values are selected randomly within the maximum and minimum velocity bounds.
[0054] As per the "mixture ratio (MR)", a number of cats is selected and set into the tracing mode. The rest of the cats are set into the seeking mode.
- (a) Seeking Mode: for the present cat, a number of copies equal to the seeking memory pool (SMP) is made. If the self-position-considering (SPC) value is true, then the number of copies is set to (SMP−1) and the present cat itself is retained as one of the candidates.
- (b) As per the counts of dimensions to change (CDC), the seeking range of the selected dimension (SRD) values are randomly added or subtracted, and the old position values are then replaced with the new ones.
- (c) For all the candidate points, the fitness function is computed using Eq. (27).
- (d) When the fitness values are not all equal, the selecting probability is computed for every candidate point using Eq. (28). When the fitness is equal for every candidate point, the selecting probability is set to 1 for each candidate point.
- (e) The point to move to is randomly picked from among the candidate points, and the position of the cat is replaced accordingly, as sketched below.
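The concrete CSUBW update rules (Eq. (27)-Eq. (28) and the black-widow procreation/mutation steps) are not reproduced in the available text. The sketch below only illustrates the overall hybrid structure of [0051]-[0054]: a population split by the mixture ratio into seeking and tracing modes, with a black-widow-style mutation injected into each update. All constants, the mutation rule and the toy fitness function are illustrative assumptions, not the patented algorithm.

```python
import numpy as np

def csubw_optimize(fitness, dim=10, pop=20, iters=100, mr=0.3, smp=5, srd=0.2, seed=0):
    """Toy hybrid of cat swarm (seeking/tracing modes) with a black-widow-style
    mutation. Structure only -- not the patented update equations."""
    rng = np.random.default_rng(seed)
    cats = rng.uniform(-1, 1, (pop, dim))          # positions of the search agents
    vel = np.zeros((pop, dim))                     # velocities
    best = min(cats, key=fitness).copy()
    for _ in range(iters):
        tracing = rng.random(pop) < mr             # mixture ratio selects tracing-mode cats
        for i in range(pop):
            if tracing[i]:                         # tracing mode: move toward the best cat
                vel[i] += rng.random(dim) * (best - cats[i])
                cats[i] += vel[i]
            else:                                  # seeking mode: perturbed copies, keep the fittest
                copies = cats[i] + rng.uniform(-srd, srd, (smp, dim)) * cats[i]
                cats[i] = min(copies, key=fitness)
            # black-widow-style mutation on one random dimension
            j = rng.integers(dim)
            cats[i, j] += rng.normal(scale=0.1)
        best = min(list(cats) + [best], key=fitness).copy()
    return best

# Example: minimise a simple sphere function standing in for the CNN loss
best_weights = csubw_optimize(lambda w: float(np.sum(w ** 2)))
```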
[0055] Simulation Setup: The proposed automatic mango leaf disease detection model was implemented in MATLAB. The proposed work has been evaluated with the data collected from [34]. The sample images acquired after the pre-processing phase are shown in Fig. 5, and the segmented images acquired after the proposed geometric mean with modified fuzzy C-means based neutrosophic segmentation are shown in Fig. 6. The proposed work has been compared with the existing models CSO, BWO, WOA, EHO, SVM, NN, NB and RF. The performance measures "Accuracy, Sensitivity, Specificity, Precision, Negative Predictive Value (NPV), F1-Score, Matthews correlation coefficient (MCC), False Positive Rate (FPR), False Negative Rate (FNR), and False Discovery Rate (FDR)" are computed. The proposed model was trained with 70% of the data, and the remaining 30% was used for testing. Within this 70% (considered as 100%), 70%, 80% and 90% of the training data are used in turn and the results acquired are recorded.
[0056] The performance of the proposed work (CSUBW+CNN) is compared with the existing models CSO, BWO, WOA, EHO, SVM, NN, NB and RF in terms of positive measures (accuracy, sensitivity, specificity, precision), negative measures (FPR, FNR and FDR) and other measures (F1-Score and MCC). For the proposed model to achieve the best performance, its positive and other measures need to be higher and its negative measures need to be lower. The proposed work attains the best performance under all the computed measures. These improvements are owing to two major reasons: (a) extraction of the most relevant features, rather than using the existing ones, and (b) fine-tuning the parameters of the detection framework (CNN) via the newly introduced hybrid optimization model. In the newly introduced optimization model, we consider four major parameters: the inertia weight, a random velocity uniformly distributed in the interval [0,1], and the controlling parameters. Together, these aid in boosting the convergence performance of the projected model. The positive performance of CSUBW+CNN is manifested in Fig. 8. All these evaluations are performed by varying the learning rate (LR) over 70%, 80% and 90%, respectively. On observing the outcomes, CSUBW+CNN attains the maximal accuracy under all three variations in LR. When LR=70, CSUBW+CNN attains a maximal accuracy of 93%, while the existing models record lower accuracies: CSO=0.65, BWO=0.68, WOA=0.69, EHO=0.7, SVM=0.6, NN=0.4, NB=0.7 and RF=0.62. Moreover, the specificity, sensitivity and precision of CSUBW+CNN are also found to be higher under all variations in LR. At LR=90, the precision of CSUBW+CNN achieves the maximal value of 100%, which is the most optimal score. The sensitivity of CSUBW+CNN at LR=90 is also 100%, which is likewise a best score. On the other hand, the FDR, FNR and FPR performance of the proposed model is shown in Fig. 9. The FDR of CSUBW+CNN attains the least value, below 0.06, for every variation in LR. At LR=90, CSUBW+CNN attains the least FDR of 0.04, which is 90%, 90.2%, 90.4%, 92.4%, 88.5%, 60%, 89.4% and 92% better than the existing models CSO, BWO, WOA, EHO, SVM, NN, NB and RF, respectively. The FPR of CSUBW+CNN also attains the least value for every variation in LR. In addition, the other performance measures, MCC and F1-score, are computed to validate the efficiency of CSUBW+CNN. On observing the outcomes in Fig. 10, CSUBW+CNN attains the maximal value for every variation in LR. The F1-Score of CSUBW+CNN at LR=70 is 92%, which is better than the existing models: CSO=70%, BWO=71%, WOA=71.5%, EHO=72%, SVM=75%, NN=10%, NB=72% and RF=70%. Moreover, the MCC of CSUBW+CNN is also higher for every variation in LR. Thus, from the evaluation it is evident that the proposed work attains the most favourable performance and is therefore well suited for detecting mango leaf disease.
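All of the measures listed in [0055]-[0056] follow from the binary confusion matrix under their usual definitions; a small helper is sketched below for reference (the example counts are arbitrary and do not reproduce the reported results).

```python
import numpy as np

def binary_metrics(tp, fp, tn, fn):
    """Standard definitions of the measures listed in [0055]."""
    eps = 1e-12
    acc  = (tp + tn) / (tp + tn + fp + fn + eps)
    sens = tp / (tp + fn + eps)                 # sensitivity / recall
    spec = tn / (tn + fp + eps)                 # specificity
    prec = tp / (tp + fp + eps)                 # precision
    npv  = tn / (tn + fn + eps)                 # negative predictive value
    f1   = 2 * prec * sens / (prec + sens + eps)
    mcc  = (tp * tn - fp * fn) / (np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) + eps)
    fpr, fnr, fdr = fp / (fp + tn + eps), fn / (fn + tp + eps), fp / (fp + tp + eps)
    return dict(accuracy=acc, sensitivity=sens, specificity=spec, precision=prec,
                NPV=npv, F1=f1, MCC=mcc, FPR=fpr, FNR=fnr, FDR=fdr)

print(binary_metrics(tp=45, fp=3, tn=48, fn=4))   # illustrative counts only
```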
[0057] Various modifications to these embodiments will be apparent to those skilled in the art from the description and the accompanying drawings. The principles associated with the various embodiments described herein may be applied to other embodiments. Therefore, the description is not intended to be limited to the embodiments shown along with the accompanying drawings but is to be accorded the broadest scope consistent with the principles and the novel and inventive features disclosed or suggested herein. Accordingly, the invention is anticipated to embrace all such alternatives, modifications, and variations that fall within the scope of the present invention and the appended claims.
Claims: CLAIMS
We Claim:
1) A method for detecting mango leaf diseases using an adaptive hybrid Meta heuristic-Convolutional Neural Network (CNN) framework, the method comprising the steps of:
- Pre-processing the input image through contrast enhancement and histogram equalization techniques to improve the image quality and highlight relevant features;
- Segmenting the pre-processed image using a geometric mean with modified fuzzy C-means based neutrosophic segmentation approach to identify regions of interest.
2) The method as claimed in claim 1, wherein the pre-processing involves converting the RGB color channels into the HSI color space and enhancing the intensity values to improve image contrast.
3) The method as claimed in claim 1, wherein the segmentation phase utilizes neutrosophic logic to classify image pixels into three membership sets: true, indeterminate, and false, based on pixel intensity values and local mean intensity.
4) A system for automated mango leaf disease detection, the system comprising:
- A processor configured to execute the image pre-processing, segmentation, feature extraction, and CNN-based classification phases;
- A memory unit for storing pre-processed images, segmented images, and extracted features;
- A feature extraction module that computes upgraded Local Binary Pattern (ULBP), color features, and pixel features from the segmented image for disease detection.
5) The system as claimed in claim 4, the system further comprising a hybrid optimization model (CSUBW) for fine-tuning the CNN weights, where the hybrid model combines the Cat Swarm Optimization (CSO) and Black Widow Optimization (BWO) techniques to improve the convergence and accuracy of the disease detection model.
Documents
Name | Date |
---|---|
202431088193-COMPLETE SPECIFICATION [14-11-2024(online)].pdf | 14/11/2024 |
202431088193-DECLARATION OF INVENTORSHIP (FORM 5) [14-11-2024(online)].pdf | 14/11/2024 |
202431088193-DRAWINGS [14-11-2024(online)].pdf | 14/11/2024 |
202431088193-EDUCATIONAL INSTITUTION(S) [14-11-2024(online)].pdf | 14/11/2024 |
202431088193-EVIDENCE FOR REGISTRATION UNDER SSI [14-11-2024(online)].pdf | 14/11/2024 |
202431088193-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [14-11-2024(online)].pdf | 14/11/2024 |
202431088193-FORM 1 [14-11-2024(online)].pdf | 14/11/2024 |
202431088193-FORM FOR SMALL ENTITY(FORM-28) [14-11-2024(online)].pdf | 14/11/2024 |
202431088193-FORM-9 [14-11-2024(online)].pdf | 14/11/2024 |
202431088193-POWER OF AUTHORITY [14-11-2024(online)].pdf | 14/11/2024 |
202431088193-REQUEST FOR EARLY PUBLICATION(FORM-9) [14-11-2024(online)].pdf | 14/11/2024 |