Manuscript accepted on : 15-11-2024
Published online on: 22-11-2024
Plagiarism Check: Yes
Reviewed by: Dr. Sonam Sneha
Second Review by: Dr. Sudhanshu Kumar Bharti
Final Approval by: Dr. Wagih Ghannam
Nithyanandh Selvam1* and Eldho Konnammanayil Joy2
1Department of MCA, PSG College of Arts and Science, Coimbatore, Tamilnadu, India.
2Department of Computer Science, Mary Matha Govt Aided Arts and Science College, Mananthavady, Kerala, India.
Corresponding Author E-mail: nknithu@gmail.com
ABSTRACT: Objectives: To propose a new AI-based model that detects plant leaf diseases at an early stage in order to maximize crop yield. Deep learning with a multivariable feature selection method is employed to boost the accuracy of detection and classification. Methods: An artificial intelligence-based Mask R-CNN is utilized to extract the multivariable features of plant leaves and predict the type of disease at an early stage. A DAEN is employed to denoise the images and learn leaf image representations from unlabeled data to improve classification accuracy. Combinational methods, namely local binary patterns, color histograms, and shape descriptors, are employed to identify the local and global features of the plant leaf. The PLANT-DOC dataset is used for this research study; it includes 2,590 leaf images and 17 disease classes with the target attributes of healthy and diseased leaves. The LeafNET architecture is used for pre-processing and to analyze the significant spots in leaf images. To evaluate the performance of the proposed AI-based Mask R-CNN, a MATLAB tool is used, and the results are compared with prevailing models such as ACO-CNN, I-SVM, KNN, and DL-RPN. Findings: The suggested AI Mask R-CNN model outperforms the existing methods with 95.06% accuracy, 96.7% sensitivity, 97.24% specificity, 96.4% TPR, 95.91% TNR, 9% FPR, 8.4% FNR, 94.57% F1 score, and 94.96% detection speed, which clearly shows that the model can classify and detect leaf disease in a robust manner. Novelty: The AI Mask R-CNN model improves the accuracy of plant leaf disease detection and classification at an early stage, which helps the agricultural sector maximize crop yields. In terms of performance, the proposed model overcomes the shortcomings of existing leaf disease detection models such as ACO-CNN, I-SVM, KNN, and DL-RPN.
KEYWORDS: Auto Encoder Networks; Artificial Intelligence; Classification; Data Processing; Image Segmentation; Mask R-CNN; Plant Leaf Disease Detection
Introduction
Plant leaf disease is a common agricultural problem caused by various environmental factors such as climatic change, pests, bacteria, and fungi. Agriculture is a mainstay of the Indian economy, and farmers worldwide face significant losses in crop yield due to plant disease. Manual identification of disease is a complex process that requires considerable human effort. To overcome this, automated CAD systems have been designed to detect and classify disease at an early stage and improve production. Various ML models have been introduced to improve disease detection and classification accuracy by focusing on object detection, but the existing systems have drawbacks such as low accuracy, poor early detection and classification, difficulty handling complex data, and limited ability to learn intrinsic features from the image. To overcome these limitations, a new AI-based DAEN with Mask R-CNN model is proposed in this research work to extract multivariable features from the given input and to detect and classify disease at an early occurrence for a smooth diagnosis. Deep segmentation, root and pixel masking, extraction of local and global features, and identification of significant spots using the Leaf-Net architecture are some of the methods followed in this study to enhance its robustness. The PLANT-DOC dataset is utilized for this research study, and the same data is used for training and testing purposes. The features are reconstructed, and variations are recorded in every iteration during the implementation process. The DL models are trained to provide an overall solution for plant leaf disease detection and classification with minimal false rates.
Early apple leaf disease detection1 has been achieved by employing IFBA, MC-SVM, and SVI models, where genetic algorithms are used for feature extraction and pattern learning to compare against real-time images. The authors combined basic machine learning and deep learning methods to achieve their objectives; the model could have been improved by employing a loss function to increase the TPR and TNR, which would have enhanced the accuracy on complex datasets. The authors2 introduced a deep learning-based LW-CNN for ALD classification and prediction to overcome the drawbacks of ML models. In this model, pattern matching is performed across three layers: CNN, max-pool, and CFF layers are used to classify five types of diseases at multiple scales, and multiscale FE and FS techniques are followed to perform four convolutions as output values. Alex-Net, Res-Net, VGG-Net, and similar architectures are used for comparative analysis. This model achieved 89% accuracy but had shortcomings such as annotation errors and checkpoint errors during iterations. The performance of DL and ML3 was compared by feeding the input values into the VGG-Net 120, Res-Net, and Leaf-Net architectures to review the accuracy, TPR, TNR, FPR, FNR, AUC-ROC, and related metrics; the authors gave a clear review of the drawbacks and merits of ML and DL models for disease detection and classification, in combination with mathematical models that help the algorithms perform efficiently. The SSMD computational model4 was proposed to replace VGG-Net and Res-Net for object detection by concatenating feature maps to enhance performance. In this model, the object layers are closely mapped to the input image to learn dynamic and deep spots of the objects and detect variations and structural regions up to 90%, generating the output in a robust manner. The model4 has the limitation that the network can only be trained on 512×512-pixel inputs and cannot be used for 3D images. The QOSIWD-ARP5 model was proposed to optimize the input data used to train a machine learning model for detecting dynamic link failures in a network; during real-time operation, dynamic failures are spotted and recovered by employing RWS methods, which helps the model enhance its accuracy. A software defect prediction model6 with unbalanced data classification was presented, showing that unlabeled data has to be denoised before being used in auto-encoder networks for training and testing purposes. Collaborative ML/DL combinational methods and a review of PLD with ML models were proposed7-8, in which a scientometric analysis was carried out over 215 documents on how such models perform on leaf disease detection and classification at early occurrences. Many ML models, such as LR, RAL, and supervised learning, were reviewed to obtain a comparative analysis of plant leaf disease detection and classification with sensitivity, specificity, and F1-measure analysis. All of these models projected results with shortcomings in areas such as multivariable feature learning and pattern matching, which motivates the development of a new model for PLD. Sleep scheduling algorithms were introduced to optimize the network9 for data transmission from source to destination with the help of a firefly bio-inspired algorithm that operates on AI sensors in the network for automation.
CV and ML models such as I-SVM and KNN10-11 have been proposed for plant leaf disease detection with efficient object detection using a deep segmentation process and the Res-Net architecture. The process is carried out in three layers: layer 1 takes the input image, layer 2 performs filtering, and layer 3 decodes the image and outputs a compressed or reconstructed image to train the model. 90% accuracy is achieved, but the model does not work for 3D images, which is its main drawback. The optimized DL ACO-CNN model works on complex data to learn intrinsic global and local features from input images with high pixel values; the TPR and TNR rates were maximized, and this is reflected in the AUC-ROC region. The DL-RPN12 model was proposed as a smart decision system to recognize and localize input images in complex data. The Chan-Vese technique is employed to learn features from the leaf both in real time and during training for target detection, and the plant diseases are identified by a transfer learning model in which the network is trained with the help of a CV image segmentation process. 91% accuracy is achieved, but early prediction is not, which is a notable flaw. Instance segmentation13-14 and a review of plant disease and pest detection with DL methods were presented, where Mask R-CNN was employed to detect cotton leaf disease in real time. The CNN was trained with multiple layers in which the input images were normalized with the help of Alex-Net, Google-Net, and V-Net classifications. Pest detection was achieved by masking the reconstructed gray-scale image into various dimensions, and the result was used as input for implementation. 91% accuracy is achieved with DL methods, but handling complex data slows down the process, which calls for an additional method for automatic detection and classification. A mobile-based CNN leaf disease detection model and a deep learning model review15-16 were proposed, in which the model classified 38 types of diseases in real time by capturing plant leaves. 96,000 images of both healthy and infected leaves were tested in real time to check the accuracy rate, and an Android system was used by farmers to capture both healthy and diseased leaves for training and testing purposes. The automatic segmentation process was carried out, and the system efficiently classified healthy and diseased leaves in a robust way. The drawback of this system is the compatibility and computational complexity of mobile operating systems when capturing images of various sizes to train the model. The RVSRP optimized model, data augmentation for a mask-based leaf segmentation model, a review of DL models, and tomato leaf disease detection methods17-20 were introduced, employing CNN and RNN techniques in which all layers are mapped and concatenated to generate the output value during real-time processing. These methods are a subset of AI where automation is carried out with a few limitations; in particular, the real-time capture mechanism is not implemented effectively. HP-PLD, HCNN, CRFs, AX-Retina Net, Residual U-Net, and MC-PLD segmentation21-23 were developed as problem-solving methods to classify a variety of diseases in plant leaves. These models work well in all layers except the encoding and decoding processes, which are lacking in all existing methods25-28. Overall, 91% accuracy with 29% false rates is recorded.
All existing models have shown predicted outcomes along with shortcomings. To overcome those shortcomings, a new AI-based DAEN with Mask R-CNN model is proposed as a problem-solving approach to identify PLD by extracting multivariable features in a robust manner.
The core objectives of DAEN with Mask R-CNN are: i) plant leaf disease detection with the PLANT-DOC input images; ii) classification of the type of disease; iii) extraction of multivariable features; iv) active usage of the PLANT-DOC leaf dataset24; v) employing the Leaf-Net architecture; and vi) enhancing the prediction and classification accuracy rate. The major steps followed in DAEN with Mask R-CNN are:
Deep Layer Segmentation
The deep layer segmentation process is carried out to enhance the image quality for training and testing.
Pre-Processing, Filtering, and Denoising
Smoothing, sharpening, automatic filtering, and denoising processes are done to train the network model to learn the latent representations in a dynamic manner.
Normalizing the PLANT-DOC image
Normalization is done to remove noise from the images after deploying the Leaf-Net architecture.
Root and Pixel Masking
Mask R-CNN is used for root and pixel level masking to boost the classification and accuracy rate in order to coordinate with DAEN to detect the significant spots.
Capturing Intrinsic Features
Images are compressed on a gray scale to capture the intrinsic features for pattern matching, which supplements the DAEN to predict the disease with high accuracy.
Materials and Methods
The suggested AI-based automatic deep learning DAEN with Mask R-CNN plant leaf disease detection and classification model focuses mainly on the automatic capture of diseased or masked regions in the given input image with the help of robust feature learning methods. Common features such as shape, color, pixel value, depth ratio, boundary, region curve, gray scale, edge points, thickness, and layer levels are extracted from the PLANT-DOC dataset. The local and global features are also extracted with the help of LBP, CH, and SD to supplement the auto-encoder in learning leaf patterns and boosting prediction and classification accuracy. The images are denoised, and the depth ratio and leaf representations are learned from unlabeled data in the dataset. The Leaf-Net architecture is employed for pre-processing and to identify the significant spots in the images. This method enhances multivariable feature selection and also boosts the accuracy of PLD detection and classification. The performance is compared with baseline models such as I-SVM10, KNN10, ACO-CNN11 and DL-RPN12.
Multivariable Feature Learning
The AI-based DAEN with Mask R-CNN model is specifically designed for multivariable feature learning on the given image, with deep identification of local and global spots, to improve the accuracy of detection and classification of plant leaf disease. As the first step, data pre-processing is performed on the input PLANT-DOC image, which is denoised using auto-encoders, and the image is annotated for further processing. Once pre-processing is complete, the model is trained to learn latent representations from the input images in order to capture the features and match the significant patterns for efficient detection and classification. Dimensionality reduction is performed based on intrinsic feature detection and segmentation by the AEN. Mask R-CNN is then applied to partition the input image into regions and provide bounding boxes, the classification type, and a mask value for each detected object or feature in the image. The equations for BB, CP, and MP are derived as follows.
Let us assume that the input PLANT-DOC image is X and that E(X) is the output, a compressed leaf image with extracted features. The output of Mask R-CNN comprises BB, which gives the coordinates of the diseased region in the image, CP, which represents the healthy or diseased class of each region, and MP, which represents the highlighted region mask. All the features learned by the model are then contrasted through pattern matching to boost the detection and classification accuracy. A loss function is used to train the Mask R-CNN model, which includes C-Loss, BB-Loss, and MP-Loss. The equation for the loss function is derived as follows:
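In the standard Mask R-CNN multi-task form (assumed here, consistent with the three loss terms named above), the total loss is the sum of the classification, bounding-box, and mask terms:

L_total = L_C + L_BB + L_MP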
where, during implementation, the cross-entropy loss, the L1 loss, and the binary cross-entropy loss may be involved for accurate prediction and classification. An AUC-ROC curve is generated to identify the TPR and TNR rates over all iterations. As the DAEN with Mask R-CNN model uses the Leaf-Net architecture, deep segmentation is performed and the significant spots are masked to learn the deep (diseased) patterns in the form of shapes, colors, descriptors, histograms, layer thickness, bounding sizes, and so on.
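To make the BB, CP, and MP outputs concrete, the following is a minimal sketch using an off-the-shelf Mask R-CNN from torchvision. It is illustrative only: the pre-trained weights, score threshold, and file name are assumptions, not the exact pipeline used in this study.

    import torch
    import torchvision
    from torchvision.transforms import functional as TF
    from PIL import Image

    # Generic pre-trained Mask R-CNN; in practice the network would be fine-tuned
    # on the annotated PLANT-DOC images (the weights here are an assumption).
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    # Hypothetical leaf image from the PLANT-DOC dataset.
    image = Image.open("plantdoc_leaf_001.jpg").convert("RGB")
    x = TF.to_tensor(image)  # HxWxC uint8 -> CxHxW float in [0, 1]

    with torch.no_grad():
        output = model([x])[0]

    bb = output["boxes"]    # BB: bounding boxes around detected regions
    cp = output["labels"]   # CP: predicted class of each region
    mp = output["masks"]    # MP: per-pixel mask probabilities for each region
    scores = output["scores"]

    keep = scores > 0.5     # keep confident detections for pattern matching
    print(bb[keep].shape, cp[keep], mp[keep].shape)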
PLANT-DOC Data Attainment, Acquisition and Pre-Processing
This proposed AI-based plant leaf disease detection and classification model with DAEN and Mask R-CNN uses the PLANT-DOC leaf dataset, which consists of 2,590 leaf images of 13 plant species with a classification of 17 types of plant diseases. The prediction and classification results are attributed to 0 and 1, representing healthy and diseased leaves, respectively. The pixel value of each image is 2576×1934, with clear digital segmentation layers. Table 1 shows the type of leaf, intensity level, pixel value in points, depth ratio, and values of LBP, CH, and SD with the detection and classification results; the yielded sample results are provided in the last column. For this research study, 80% of the data is used for training and validation, and 20% is used as the testing set. Figure 1 shows sample leaves in row 1 and contour-masked images in row 2, for which the values are projected in the table.
Table 1: PLANT-DOC dataset of 2590 images and contour values
Type | Int-L | DR (1-10) | PLD | LBP | CH | SD | Class
ABR-L0 | 1.2 | 1 | 1 | 0.44 | -1.06 | 30.19 | 1
BPB-L0 | 2.6 | 3 | 0 | -2.06 | 0.06 | 177.19 | 1
BBL-L0 | 4.1 | 3 | 1 | -1.06 | 0.06 | 180.19 | 2
CPO-L0 | 2.6 | 2 | 1 | -0.31 | -1.81 | 85.19 | 4
GSC-L0 | 2.6 | 1 | 1 | -1.31 | -0.31 | 5.19 | 4
GBR-L0 | 4.1 | 4 | 0 | -1.06 | 0.19 | 0.19 | 5
PEB-L0 | 7.9 | 5 | 1 | 1.69 | -1.56 | 85.19 | 7
ABR-L1 | 7.2 | 6 | 1 | -0.56 | -1.06 | 101.19 | 6
BPB-L1 | 7.0 | 1 | 1 | 2.44 | -1.31 | 105.19 | 4
BBL-L1 | 5.8 | 4 | 1 | 1.69 | -2.31 | 85.19 | 3
CPO-L1 | 8.9 | 5 | 1 | -0.31 | -1.81 | 100.19 | 2
GSC-L1 | 7.2 | 4 | 1 | 2.69 | -0.81 | 70.19 | 4
GBR-L1 | 4.7 | 3 | 0 | 0.94 | -1.56 | 90.19 | 2
PEB-L1 | 6.9 | 6 | 1 | 1.19 | -1.31 | 95.19 | 5
BPB-L2 | 2.2 | 7 | 0 | -0.06 | 0.19 | 0.19 | 1
BBL-L2 | 4.7 | 5 | 1 | -0.31 | -1.31 | 88.19 | 2
CPO-L2 | 6.9 | 6 | 1 | 1.19 | -1.31 | 85.19 | 4
GSC-L2 | 2.2 | 4 | 0 | -0.06 | -0.31 | 0.19 | 1
P-Val (all rows): 2596 x 1934 pixel value. Yielded results (whole dataset): Total: 2590; Classes: 17; Healthy: 1340; PLD: 691; Trained: 80%; Tested: 20%.
(Sample images and values from the PLANT-DOC dataset)
Figure 1: PLANT-DOC sample images (Normal: 1st row / PLD detection & masking: 2nd row). The masked image results show that the leaves are marked with colours, shapes, and patterns to identify the diseases by matching significant spots.
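A minimal sketch of the 80/20 training/validation and testing partition described above (the variable names and the stratified split are illustrative assumptions):

    from sklearn.model_selection import train_test_split

    # images: pre-processed PLANT-DOC arrays; labels: 0 = healthy, 1 = diseased
    # (hypothetical variables produced earlier in the pipeline).
    train_imgs, test_imgs, train_labels, test_labels = train_test_split(
        images, labels, test_size=0.20, stratify=labels, random_state=42)
    # 80% (train_imgs) is used for training and validation, 20% (test_imgs) for testing.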
Extraction of Local and Global Features using LBP, CH and SD
The local and global features in the PLANT-DOC images are extracted by employing methods called local binary patterns, color histograms, and shape descriptors, which work together with the DAEN to enhance the accuracy of prediction and classification of leaf disease. LBP, CH, and SD methods are widely used to improve dimensionality reduction and robust feature selection, extraction, and learning from any complex database. LBP focuses mainly on local patterns of the image pixels and their intensities to capture the depth ratio; the output value of LBP serves as an input or supplementary value for the AEN when training the proposed model. The colour channels of the leaf images are recorded by CH, which allows the model to learn the complete leaf colour distribution patterns in the PLANT-DOC dataset; the AEN compresses the colour markings into a compact form during training. The geometric properties of the images are recorded by SD, which helps the AEN encode and decode the features during testing. The encoder output equation is derived as follows,
E(X) = σ(W_e X + b_e)
where X represents the input image (the concatenation of the LBP, CH, and SD features), E(X) denotes the encoder output obtained over a series of iterations, W_e and b_e represent the AEN encoder weights and biases, and σ is the activation function. The decoder output equation is derived as follows,
X̂ = D(E(X)) = σ(W_d E(X) + b_d)
where X̂ is the reconstructed input, D is the decoder output, and W_d and b_d are the decoder weights and biases. Normally, the AEN concatenates the LBP, CH, and SD features in some form of fusion before the PLANT-DOC images are fed into the network for training and testing purposes.
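The following is a minimal sketch of how the LBP, CH, and SD values could be computed and concatenated into the vector X that is fed to the AEN; the bin counts, LBP radius, and the use of Hu moments as the shape descriptor are illustrative assumptions.

    import cv2
    import numpy as np
    from skimage.feature import local_binary_pattern

    def extract_lbp_ch_sd(bgr_image, lbp_points=8, lbp_radius=1):
        # bgr_image: uint8 leaf image as read by cv2.imread
        gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)

        # LBP: local texture patterns, summarised as a normalised histogram
        lbp = local_binary_pattern(gray, lbp_points, lbp_radius, method="uniform")
        lbp_hist, _ = np.histogram(lbp, bins=lbp_points + 2,
                                   range=(0, lbp_points + 2), density=True)

        # CH: per-channel colour histograms capturing the global colour distribution
        ch = np.concatenate([cv2.calcHist([bgr_image], [c], None, [32], [0, 256]).ravel()
                             for c in range(3)])
        ch = ch / (ch.sum() + 1e-8)

        # SD: shape descriptors (Hu moments) describing leaf/lesion geometry
        sd = cv2.HuMoments(cv2.moments(gray)).ravel()

        # Concatenated feature vector X supplied to the auto-encoder network
        return np.concatenate([lbp_hist, ch, sd])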
AI Mask R-CNN with DAEN Architecture Diagram
The architecture diagram of DAEN with Mask R-CNN shows the pre-processing and segmentation process at the initial level, followed by encoding and decoding with the help of the auto-encoders; the deep spots of the infected region are masked using Mask R-CNN, followed by conversion mapping (16×16×16) during the implementation process for PLD detection and classification.
Figure 2: Mask R-CNN with DAEN Architecture
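A compact denoising auto-encoder of the kind described above could be sketched as follows in PyTorch; the layer sizes (including a 16x16x16 bottleneck for a 64x64 input) and the Gaussian corruption are illustrative assumptions, not the exact configuration used in this study.

    import torch
    import torch.nn as nn

    class DenoisingAutoEncoder(nn.Module):
        # Corrupt the input, encode it, and reconstruct the clean image.
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
                nn.Conv2d(16, 16, 3, stride=2, padding=1), nn.ReLU())  # 32x32 -> 16x16 bottleneck (16x16x16)
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(16, 16, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid())

        def forward(self, x, noise_std=0.1):
            noisy = (x + noise_std * torch.randn_like(x)).clamp(0.0, 1.0)  # denoising corruption
            return self.decoder(self.encoder(noisy))

    # Training step sketch: minimise reconstruction loss against the clean image.
    model = DenoisingAutoEncoder()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.rand(8, 3, 64, 64)          # placeholder batch of normalised leaf patches
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(x), x)
    loss.backward()
    optimizer.step()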
AI-Based Mask R-CNN with DAEN Process
Input: MATLAB settings with PLANT-DOC dataset
Begin: Load the plant leaf (PL) images from the PLANT-DOC dataset
Perform pre-processing (OpenCV-style helper):
    def preprocess_images(images, target_size):
        preprocessed = []
        for image in images:
            # Min-max normalise pixel intensities to [0, 1]
            normalized = cv2.normalize(image.astype("float32"), None, alpha=0,
                                       beta=1, norm_type=cv2.NORM_MINMAX)
            # Resize to the fixed input size expected by the network
            preprocessed.append(cv2.resize(normalized, target_size))
        return preprocessed
    preprocessed_images = preprocess_images(input_images, target_size)
Train with DAEN
Train the classifier
R-CNN segmentation using Leaf-Net
Local feature selection with LBP, CH and SD
Shape identification and marking with Mask R-CNN
Disease prediction
Generate PLD output (0, 1)
Output: Leaf disease detection & classification
End
PLANT-DOC Image Segmentation using LeafNet
The Leaf-Net architecture is used for pre-processing, which includes partitioning the infected regions from the image after masking and identifying the significant spots in the captured region. After this process, during implementation, the model learns the input image and matches the pattern to the specific region in order to detect and classify leaf disease at its early occurrence. A loss function is used along with Leaf-Net to measure the dissimilarity between the predicted leaf disease and the ground truth (a minimal sketch is given after the two steps below). Two major steps are followed for effective deployment of the model:
With Leaf-Net, identify the significant spots like lesions, spots, discoloration, blisters, mottling, rust, target patterns, streaks, holes, galls, etc.
Target and position the restrained changes in PLANT-DOC dataset.
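As the loss sketch referenced above, one common choice (an assumption, since the exact form is not specified here) is a per-pixel binary cross-entropy between the predicted spot mask and its ground-truth annotation:

    import torch.nn.functional as F

    # pred_mask: sigmoid output of the segmentation head, shape (N, 1, H, W)
    # true_mask: ground-truth annotation of significant spots, same shape, values in {0, 1}
    def leaf_spot_loss(pred_mask, true_mask):
        return F.binary_cross_entropy(pred_mask, true_mask)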
Real-Time Implementation Process
During the real-time implementation process, 128-bit transformational PLANT-DOC leaf images are utilized with the incorporation of 3D spatial information. The steps below implement the AI-based Mask R-CNN and DAEN; a minimal conversion sketch follows these steps.
After data collection, convert the images to 16 bits with 3 spatial coordinates for each pixel.
Pre-process the 16-bit converted PLANT-DOC images and annotate them for detailed feature extraction and region marking.
Train on the annotated PLANT-DOC leaf dataset and integrate Mask R-CNN and DAEN to concatenate the extracted output features and visualize the results.
Repeat the iterations by capturing live 128-bit images and converting them into 16-bit with 3 spatial coordinates.
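A minimal sketch of the first step above, converting a captured image to 16-bit and attaching three spatial coordinates per pixel; the (row, column, intensity) encoding is an assumption, since the exact coordinate layout is not specified.

    import cv2
    import numpy as np

    def to_16bit_with_coords(image_path):
        # Hypothetical path to a captured leaf image; read as grayscale.
        img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)

        # Rescale 8-bit intensities to the 16-bit range.
        img16 = (img.astype(np.float32) * (65535.0 / 255.0)).astype(np.uint16)

        # Attach 3 spatial coordinates per pixel: (row, column, intensity).
        rows, cols = np.indices(img16.shape)
        return np.stack([rows, cols, img16], axis=-1)   # shape (H, W, 3)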
Performance Evaluation Metrics of Mask R-CNN with DAEN
The AI-based Mask R-CNN with DAEN is implemented and compared against prevailing ML and DL baseline models such as ACO-CNN, I-SVM, KNN, and DL-RPN. The MATLAB simulation tool is used to evaluate the performance of the suggested AI-based model for leaf disease detection and classification; the tool provides various built-in functions, visualization options, and statistical analysis. A 2×2 confusion matrix analysis was performed to break down the classification results based on the given PLANT-DOC dataset values.
The Mask R-CNN with DAEN model is tested for various iterations, and the performance results are generated and compared against the existing methods. Predicted results and ground-truth annotations of leaf images are clearly visualized during the simulation process. The strengths and limitations of the models were assessed with the help of the F1 score. The following are the evaluation metrics used in this research study.
where PLD denotes plant leaf disease detection and the performance evaluation metric (PEM) is derived as PEM_PLD = T1 / √(T2 × T3 × T4 × T5), with T1 = (TPR×TNR − FPR×FNR), T2 = (TPR+FPR), T3 = (TPR+FNR), T4 = (TNR+FPR), and T5 = (TNR+FNR).
Sensitivity and Specificity
Sensitivity is the proportion of +ve instances correctly identified by Mask R-CNN with DAEN, and specificity is the proportion of -ve instances correctly identified.
Accuracy in Detection and Classification
The overall accuracy of the classification and detection of leaf diseases is measured as the percentage of correctly classified samples against the entire PLANT-DOC dataset.
TPR and TNR
The proportion of TPs appropriately detected as +ve instances is called the TPR, and the proportion of TNs suitably detected as -ve instances is called the TNR.
FPR and FNR
The proportion of FPs inaccurately detected as +ve instances is called the FPR, and the proportion of FNs inaccurately detected as -ve instances is called the FNR.
Detection Speed
Compute the epoch processing time during the implementation based on the number of input data points.
F1 Score
The F1 score is the harmonic mean of precision and recall, balancing performance across the +ve and -ve samples in the given PLANT-DOC dataset.
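The metrics above can be computed from the 2×2 confusion matrix as in the following sketch (scikit-learn based and illustrative; the study itself uses MATLAB):

    from sklearn.metrics import confusion_matrix

    def evaluate_pld(y_true, y_pred):
        # 2x2 confusion matrix for the healthy (0) / diseased (1) target attribute
        tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()

        sensitivity = tp / (tp + fn)                 # TPR (recall)
        specificity = tn / (tn + fp)                 # TNR
        accuracy = (tp + tn) / (tp + tn + fp + fn)
        fpr = fp / (fp + tn)
        fnr = fn / (fn + tp)
        precision = tp / (tp + fp)
        f1 = 2 * precision * sensitivity / (precision + sensitivity)

        return {"sensitivity": sensitivity, "specificity": specificity,
                "accuracy": accuracy, "fpr": fpr, "fnr": fnr, "f1": f1}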
Results and Discussions
This section presents the findings and comparative analysis of the proposed AI-based Mask R-CNN with DAEN for robust prediction and classification of plant leaf diseases at an early occurrence. The suggested model is compared with prevailing ML and bio-inspired approaches such as I-SVM10, KNN10, ACO-CNN11 and DL-RPN12, and the Mask R-CNN with DAEN overcomes their limitations in terms of deep segmentation, detection, and robust classification. The unique feature of the proposed model is that the leaf image is masked and the annotated image is taken as input for pattern matching to identify the target attributes. The findings are shown below with output values and MATLAB graphs in Figures 3-8, plotted with X and Y axes that indicate the performance metrics and percentage analysis.
Sensitivity and Specificity Comparative Analysis
The sensitivity and specificity comparative analysis of Mask R-CNN with DAEN is showcased in Figure 3. As the new AEN with the masked-image method extracts all the multivariable features from the annotated PLANT-DOC input image, the filtering and segmentation processes are highly effective, and rigorous image denoising is carried out by the DAEN method. It is noteworthy that the new method outperforms the existing models in terms of early detection and classification of disease in real time. 96.70% sensitivity and 97.24% specificity have been achieved, which is comparatively higher than the baseline versions, as shown in Table 2.
Table 2: Analysis of Sensitivity and Specificity
Metrics / Schemes | I-SVM10 | KNN10 | ACO-CNN11 | DL-RPN12 | Mask R-CNN with D-AEN |
Sensitivity | 75.6% | 80.45% | 84.37% | 90.12% | 96.70% |
Specificity | 76.1% | 82.31% | 85.71% | 91.23% | 97.24% |
Figure 3: Sensitivity and Specificity
Accuracy Comparative Analysis
Accuracy analysis of leaf disease detection and classification is presented in Figure 4. The novel AI-based Mask R-CNN with DAEN requires less training time and fewer parameters than the prevailing methods. Robust computational methods such as LBP, CH, and SD are employed to discover the local and global features in the given PLANT-DOC leaf image, so the performance of the Mask R-CNN with DAEN is improved with less time and computational complexity. A 95.06% accuracy rate is achieved, which is relatively high compared to the existing models. Table 3 shows clearly that the suggested model has finer capability in the detection and classification of leaf disease.
Table 3: Analysis of Accuracy Rate
Metrics / Schemes | I-SVM10 | KNN10 | ACO-CNN11 | DL-RPN12 | Mask R-CNN with D-AEN |
Accuracy (It-1) | 71.31% | 80.56% | 83.71% | 91.15% | 94.61% |
Accuracy (It-N) | 73.16% | 82.45% | 85.87% | 92.56% | 95.06% |
Figure 4: Accuracy
TPR and TNR Comparative Analysis
Figure 5 shows the TPR and TNR rates obtained on the actual PLANT-DOC test set by the new Mask R-CNN with DAEN method. As the method interrogates leaf disease with the LeafNET architecture, deep segmentation is performed with clear identification of the significant diseased spots in the image, and the prediction and classification ratings have increased substantially. The model was also experimented with on color, grayscale, and deep-segmented PLANT-DOC images, and it is noted that the novel method outperforms the baselines for all three categories during implementation. Table 4 portrays the results of the DAEN, where 96.40% TPR and 95.91% TNR are recorded after 50 epochs with a batch size of 40, which is higher than the baseline models.
Table 4: Analysis of TPR and TNR Rate
Metrics / Schemes | I-SVM10 | KNN10 | ACO-CNN11 | DL-RPN12 | Mask R-CNN with D-AEN |
TPR Rate | 65.70% | 72.64% | 79.14% | 84.76% | 96.40% |
TNR Rate | 68.91% | 74.56% | 81.46% | 87.51% | 95.91% |
Figure 5: True Positive and True Negative
FPR and FNR Comparative Analysis
The FPR and FNR rates under various threshold settings are demonstrated in Figure 6. The classifier recognizes the disease with the lowest false and error rates and maximizes the prediction accuracy. The suggested model is compared against existing models such as I-SVM10, KNN10, ACO-CNN11 and DL-RPN12. Seven classification types are used to carry out the experimental analysis in the LeafNET architecture to evaluate its performance. The contour images are extracted by the DAEN and utilized for pattern classification during the testing process. 9.0% FPR and 8.4% FNR are achieved, which is more remarkable than the other machine learning approaches considered. The process of image denoising and segmentation, which works on every iteration to minimize false rates, is clearly portrayed in Table 5.
Table 5: Analysis of FPR and FNR Rate
Metrics / Schemes | I-SVM10 | KNN10 | ACO-CNN11 | DL-RPN12 | Mask R-CNN with D-AEN |
FPR Rate | 34.5% | 28.75% | 21.65% | 18.91% | 9.0% |
FNR Rate | 38.67% | 29.08% | 21.45% | 17.53% | 8.4% |
Figure 6: False Positive and False Negative
Detection Speed Comparative Analysis
Figure 7 showcases the detection and classification speed analysis of the proposed Mask R-CNN with DAEN model. As the new method has lower time and computational complexity, the training time is remarkably reduced, which maximizes the prediction and classification speed. Normally, for large datasets, more than 80 epochs are needed to achieve good accuracy, but in this model 50 epochs are sufficient to achieve better performance, with a detection speed of 94.96%, as shown in Table 6. This is comparatively higher than the prevailing approaches I-SVM10, KNN10, ACO-CNN11 and DL-RPN12.
Table 6: Analysis of Disease Detection Speed
Metrics / Schemes | I-SVM10 | KNN10 | ACO-CNN11 | DL-RPN12 | Mask R-CNN with D-AEN |
D-Speed (It-1) | 75.19% | 81.47% | 84.78% | 89.63% | 93.87% |
D-Speed (It-N) | 76.15% | 82.35% | 87.61% | 90.02% | 94.96% |
F1 Score Comparative Analysis
Figure 8 presents the comparative analysis of the F1 score of the suggested AI-based Mask R-CNN with DAEN leaf disease detection model. The F1 score strikes a balance between the precision and recall metrics, reflecting the minimized error and false rates during the testing and implementation process. The F1 score, 2 × precision × recall / (precision + recall), is calculated on a 0 to 1 scale, and the output is reported in percentages. Table 7 shows the promising result of a 94.57% (0.9) F1 score, which is higher than the other baseline machine learning models.
Table 7: Analysis of F1 Score
Metrics / Schemes | I-SVM10 | KNN10 | ACO-CNN11 | DL-RPN12 | Mask R-CNN with D-AEN |
F1 Score (It-1) | 76.11% | 82.40% | 83.57% | 88.31% | 92.76% |
F1 Score (It-N) | 77.37% | 83.41% | 85.81% | 89.13% | 94.57% |
Figure 7: Detection Speed
Figure 8: F1 Score
Conclusion
The novel AI and deep learning-based Mask R-CNN with DAEN method is introduced for efficient plant leaf disease detection and classification. The automated method detects leaf diseases with the help of pre-trained feature selection methods. The PLANT-DOC leaf disease dataset is used in this research work: 2,590 leaf images were taken, of which 80% were used for training and validation and 20% for testing. Multivariable features are extracted by Mask R-CNN, and image denoising is done by the DAEN. LBP, CH, and SD methods are employed to identify the local and global features in the PLANT-DOC leaf dataset, and all the leaf patterns are recorded and matched during the comparative analysis. Deep pre-processing and identification of significant spots are carried out by the LeafNET architecture. Based on the image representations, diseases are classified at an early stage. After 50 epochs with a batch size of 40, the findings are recorded with the promising results of 95.06% accuracy, 96.7% sensitivity, 97.24% specificity, 96.4% TPR, 95.91% TNR, 9% FPR, 8.4% FNR, 94.96% detection speed, and a 94.57% F1 score. The demonstrated results show that the proposed deep learning method outperforms existing models such as ACO-CNN, I-SVM, KNN, and DL-RPN.
The limitations of AI Mask R-CNN with DAEN are: i) computational complexity, as the multi-stage process is slow; ii) a high requirement for annotated datasets; iii) lack of ability to capture 3D representations of leaf images; iv) only a fixed input data size is permitted, so the model cannot handle variable input dimensions; and v) lower localization accuracy. This model may be improved to incorporate 3D feature sets and focus more on the leaf disease depth ratio and intrinsic spot regions to boost automation.
Acknowledgement
The authors would like to express their sincere gratitude to their respective institutions, PSG College of Arts & Science, Coimbatore, Tamil Nadu and Mary Matha Government Aided Arts & Science College, Mananthavady, Kerala for their support throughout this research.
Funding Sources
The author(s) received no financial support for the research, authorship, and/or publication of this article.
Conflict of Interest
The authors do not have any conflict of interest.
Data Availability Statement
This statement does not apply to this article.
Ethics Statement
This research did not involve human participants, animal subjects, or any material that requires ethical approval.
Informed Consent Statement
This study did not involve human participants, and therefore, informed consent was not required.
Author Contributions
Each author has significantly contributed to this research.
Nithyanandh Selvam led the research work, supervised the application of the multivariable feature selection methodology using the DAEN and Mask R-CNN model, guided the analysis of the PLANT-DOC dataset, and played a key role in refining the manuscript as the corresponding author.
Eldho Konnammanayil Joy carried out thorough data collection and provided crucial insights for interpreting the results in MATLAB. Both authors collaborated in writing and revising the paper, ensuring intellectual rigor, and have approved its publication.
References
- Jose D, Santhi K. Early Detection and Classification of Apple Leaf Diseases by utilizing IFPA Genetic Algorithm with MC-SVM, SVI and Deep Learning Methods. Indian Journal of Science and Technology. 2022;15(29):1440-1450.
- Fu L, Li S, Sun Y, Mu Y, Hu T, Gong H. Lightweight-Convolutional Neural Network for Apple Leaf Disease Identification. Frontiers in Plant Science. 2022;13:831219.
- Sujatha R, Chatterjee JM, Jhanjhi N, Brohi SN. Performance of deep learning vs machine learning in plant leaf disease detection. Microprocessors and Microsystems. 2020;80:103615.
- Jeong J, Park H, Kwak N. Enhancement of SSD by concatenating feature maps for object detection. arXiv (Cornell University). 2017;1705.09587.
- Nithyanandh S, Jaiganesh V. Quality of service enabled intelligent water drop algorithm based routing protocol for dynamic link failure detection in wireless sensor network. Indian Journal of Science and Technology. 2020;13(16):1641-1647.
- Eldho KJ, Nithyanandh S. Lung Cancer Detection and Severity Analysis with a 3D Deep Learning CNN Model Using CT-DICOM Clinical Dataset. Indian Journal of Science and Technology. 2024;17(10):899-910.
- Bonkra A, Bhatt PK, Rosak-Szyrocka J, Muduli K, Pilar L, Kaur A, Chahal N, Rana AK. Apple Leave Disease Detection Using Collaborative ML/DL and Artificial Intelligence Methods: Scientometric Analysis. International Journal of Environmental Research and Public Health. 2023;20(4):3222.
- Annabel LSP, Annapoorani T, Deepalakshmi P. Machine Learning for Plant Leaf Disease Detection and Classification – A Review. 2019 International Conference on Communication and Signal Processing (ICCSP). 2019:0538-0542.
- Nithyanandh S, Omprakash S, Megala D, Karthikeyan MP. Energy Aware Adaptive Sleep Scheduling and Secured Data Transmission Protocol to enhance QoS in IoT Networks using Improvised Firefly Bio-Inspired Algorithm (EAP-IFBA). Indian Journal of Science and Technology. 2023;16(34):2753-2766.
- Harakannanavar SS, Rudagi JM, Puranikmath VI, Siddiqua A, Pramodhini R. Plant leaf disease detection using computer vision and machine learning algorithms. Global Transitions Proceedings. 2022;3(1):305-310.
- Algani YMA, Caro OJM, Bravo LMR, Kaur C, Ansari MSA, Bala BK. Leaf disease identification and classification using optimized deep learning. Measurement: Sensors. 2022;25:100643.
- Guo Y, Zhang J, Yin C, Hu X, Zou Y, Xue Z, Wang W. Plant Disease Identification Based on Deep Learning Algorithm in Smart Farming. Discrete Dynamics in Nature and Society. 2020;2020:1-11.
- Udawant P, Srinath P. Cotton Leaf Disease Detection Using Instance Segmentation. Journal of Cases on Information Technology. 2022;24(4):1-10.
- Liu J, Wang X. Plant diseases and pests detection based on deep learning: a review. Plant Methods. 2021;17(1).
- Ahmed AA, Reddy GH. A Mobile-Based System for Detecting Plant Leaf Diseases Using Deep Learning. AgriEngineering. 2021;3(3):478-493.
- Shoaib M, Shah B, El-Sappagh S, Ali A, Ullah A, Alenezi F, Gechev T, Hussain T, Ali F. An advanced deep learning models-based plant disease detection: A review of recent research. Frontiers in Plant Science. 2023;14.
- Nithyanandh S, Jaiganesh V. Dynamic Link Failure Detection using Robust Virus Swarm Routing Protocol in Wireless Sensor Network. International Journal of Recent Technology and Engineering (IJRTE). 2019;8(2):1574-1579.
- Barreto A, Reifenrath L, Vogg R, Sinz F, Mahlein AK. Data Augmentation for Mask-Based Leaf Segmentation of UAV-Images as a Basis to Extract Leaf-Based Phenotyping Parameters. KI – Künstliche Intelligenz. 2023;37(2-4):143-156.
- Li L, Zhang S, Wang B. Plant Disease Detection and Classification by Deep Learning—A Review. IEEE Access. 2021;9:56683-56698.
- Jeong S, Bong J. Detection of Tomato Leaf Miner Using Deep Neural Network. Sensors. 2022;22(24):9959.
- Liu Y, Liu J, Cheng W, Chen Z, Zhou J, Cheng H, Lv C. A High-Precision Plant Disease Detection Method Based on a Dynamic Pruning Gate Friendly to Low-Computing Platforms. Plants. 2023;12(11):2073.
- Rezk NG, Attia AF, El-Rashidy MA, El-Sayed A, Hemdan EED. An Efficient Plant Disease Recognition System Using Hybrid Convolutional Neural Networks (CNNs) and Conditional Random Fields (CRFs) for Smart IoT Applications in Agriculture. International Journal of Computational Intelligence Systems. 2022;15(1).
- Bao W, Fan T, Hu G, Liang D, Li H. Detection and identification of tea leaf diseases based on AX-RetinaNet. Scientific Reports. 2022;12(1).
- Abinaya S, Kumar KU, Alphonse AS. Cascading Autoencoder With Attention Residual U-Net for Multi-Class Plant Leaf Disease Segmentation and Classification. IEEE Access. 2023;11:98153-98170.
- Singh D, Jain N, Jain P, Kayal P, Kumawat S, Batra N. PlantDoc: A Dataset for Visual Plant Disease Detection. In: Proceedings of the 7th ACM IKDD CoDS and 25th COMAD (CoDS COMAD 2020). Association for Computing Machinery, New York, NY, USA. 2020:249-253.
- Nithyanandh S, Jaiganesh V. Reconnaissance Artificial Bee Colony Routing Protocol to Detect Dynamic Link Failure in Wireless Sensor Network. International Journal of Scientific & Technology Research. 2019;10(10):3244-3251.
- Arularasan R, Balaji D, Garugu S, Jallepalli VR, Nithyanandh S, Singaram G. Enhancing Sign Language Recognition for Hearing-Impaired Individuals Using Deep Learning. 2024 International Conference on Data Science and Network Security, Tiptur, India. 2024;10690989:1-6.
- Devi PA, Megala D, Paviyasre N, Nithyanandh S. Robust AI Based Bio Inspired Protocol using GANs for Secure and Efficient Data Transmission in IoT to Minimize Data Loss. Indian Journal of Science and Technology. 2024;17(35):3609-3622.
- Nithyanandh S, Jaiganesh V. Dynamic Link Failure Detection using Robust Virus Swarm Routing Protocol in Wireless Sensor Network. International Journal of Recent Technology and Engineering. 2019;8(2):1574-1578.
This work is licensed under a Creative Commons Attribution 4.0 International License.