1. Introduction
Agriculture is one of the main pillars of the Indian economy. Like human beings, plants also suffer from diseases, which affect their normal growth [1]. Diseases can occur in any part of the plant, including the leaf, flower, fruit and root. Due to the complexity and huge number of crops and cultivated plants, the number of diseases is also large [2]. Thus, a pathologist may often fail to diagnose a disease accurately. The precise and timely diagnosis of plant diseases protects crops from quantitative and qualitative loss [3,4,5]. Most farmers lack knowledge about the effective detection of plant diseases [6]. The identification of plant diseases by the naked eye is also time-consuming, requires continuous monitoring and is comparatively inaccurate. Automated identification of diseases reduces human effort and also provides accurate results [7]. Automated plant disease detection is therefore highly beneficial to farmers, who often have limited expertise in plant pathology.
In the current era, much research is ongoing in the domain of machine learning, which can be applied effectively in fields such as health monitoring and the identification of diseases in plants. Such systems provide reliable, precise results and reduce the time, cost and manpower needed for maintaining and ensuring quality in real-time applications. In the field of agriculture, there are many opportunities for researchers to apply machine learning techniques to aspects such as the identification of plants, the early detection of diseases, and pesticide and nutrition requirements. In this paper, we consider the diseases which occur on the leaves of the plant. Several machine learning techniques proposed by different researchers, based on color, shape and texture features as well as deep learning models, are discussed for detecting diseases in plant leaves.
The automated detection of diseases in plants has been studied extensively in recent times. The identification of diseases in plants requires accurate and precise information regarding the quantitative measurement of diseases [8]. In [9,10], the authors studied potato and tomato diseases and showed how these crops were affected by viruses. In [11], the authors surveyed several papers on the classification of rice diseases, considering criteria such as the dataset used, disease classes, preprocessing and segmentation techniques, and the classifier used. Prajapati et al. [12] conducted a survey on the classification of cotton plant diseases using machine learning techniques. Iqbal et al. [13] surveyed the classification of citrus plant diseases using image processing. Kaur et al. [14] conducted a survey on the identification and classification of plant diseases through leaf images. The studies discussed in [11,12,13,14] are based on handcrafted features. To classify diseases using handcrafted features, the images must be preprocessed, segmented and have features extracted from them, which is laborious and time-consuming.
With recent technological advancements, machine-learning-based artificial intelligence has gained a lot of attention in the development of new techniques and models in computer vision [15]. Deep learning models are used in fields such as image recognition [16], voice recognition [17,18] and other complex applications such as self-driving cars and machine translation. The application of deep learning in agriculture [19], and particularly in the plant disease detection domain [20], is relatively new and limited. In [21], the authors surveyed the identification of plant diseases based on deep learning techniques, focusing on the data sources, models and preprocessing techniques used in the proposed CNN models. In [22], the authors reviewed research works on the identification of diseases using several types of deep learning techniques. In these papers, the authors mainly discussed the different CNN models used in plant disease identification. However, the comparative advantages and disadvantages were not clearly highlighted in these works.
In this work, we survey the different methodologies for the identification of plant diseases based on both handcrafted and deep-learning features. We also discuss several segmentation techniques used in the identification of plant diseases along with their advantages and disadvantages. This paper aims to address the drawbacks of the existing works on the identification of diseases based on both handcrafted features and deep learning approaches. We also consider recent works on the identification of plant diseases that are based on deep learning models. We point out some of the challenging issues in the identification of diseases, along with the advantages and disadvantages of using deep learning models.
This paper is organized as follows: Section 2 provides the basic steps in the identification of plant diseases from leaf images. Section 3 presents a comprehensive review of the identification of plant diseases along with the relative advantages and disadvantages of each approach. In Section 4, we discuss the different techniques and the advantages of deep-learning-based over handcrafted-features-based approaches. The challenges faced during the identification of diseases and the areas that need further attention are discussed in Section 5. Finally, Section 6 provides the conclusion and future directions in the classification of plant diseases.
2. Basic Steps in Identification of Diseases from Leaf Images
For the effective identification of plant diseases from the leaves of a plant, several steps are required; among these, data collection and preprocessing come first. After preprocessing, the next step in the identification of diseases is the extraction of features. Finally, the features are fed into different classifiers for classification.
2.1. Data Collection
The first step in plant disease identification is the collection of image data. Several standard plant disease datasets are available online, such as the PlantVillage dataset [23], Cassava dataset [24,25], Hops dataset [26], Cotton disease dataset [27] and Rice disease dataset [28,29]. The PlantVillage dataset consists of 38 different classes of 14 different plant species (vegetables and fruits) such as apple, blueberry, cherry, corn, grape, orange, peach, pepper, raspberry, potato, soybean, squash, strawberry and tomato. The diseases include apple scab, apple black rot, cedar apple rust, apple healthy, blueberry healthy, cherry healthy, cherry powdery mildew, corn gray leaf spot, corn healthy, corn northern leaf blight, grape black rot, grape black measles, grape healthy, grape leaf blight, orange huanglongbing, peach bacterial spot, peach healthy, pepper/bell bacterial spot, pepper/bell healthy, potato early blight, potato healthy, potato late blight, raspberry healthy, soybean healthy, squash powdery mildew, strawberry healthy, strawberry leaf scorch, tomato bacterial spot, tomato early blight, tomato healthy, tomato late blight, tomato leaf mold, tomato septoria leaf spot, tomato spider mites, tomato target spot, tomato mosaic virus and tomato yellow leaf curl virus. All the images were taken under laboratory conditions. The Cassava disease dataset consists of five different classes, and the images are real-time field-captured images. Diseases in the Cassava dataset include cassava mosaic disease, cassava bacterial blight, cassava brown streak disease, cassava green mite and cassava healthy. The Hops dataset consists of five different classes with nonuniform background conditions, namely, downy mildew, powdery mildew, healthy, nutrient and pest classes. The Cotton dataset consists of healthy and diseased cotton leaves and plants. The Rice disease dataset consists of four different classes of diseases captured in field conditions.
Diseases in the Rice disease dataset are bacterial blight, blast, brown spot and tungro. Some researchers built their own disease datasets for their work. Table 1 shows the available standard datasets of plant disease images along with the imaging environment.
2.2. Preprocessing
Preprocessing is one of the most important steps in the identification of plant diseases. Several preprocessing steps exist, such as resizing the images to fit the model, noise removal, color transformation, morphological operations and the segmentation of the diseased region.
Different filtering techniques, such as the Wiener filter, median filter [30] and Gaussian filter [31], are used to remove noise in the disease-affected image. Different color spaces are used in image processing, such as RGB, HSV, CIEL*a*b* [32] and YCbCr. To find the region of interest (ROI)/diseased area in the leaf images, different segmentation techniques are used, such as color thresholding [33,34], the Sobel edge detector [35], Otsu's segmentation [36,37] and K-means clustering [38,39,40].
2.3. Feature Extraction
Features play an important role in machine learning. Features describe the disease information in mathematical form, which makes classification easier. For an effective classification, a feature should contain the information necessary to differentiate the classes. Different types of features are used for the identification of diseases, and they can be classified as color features, shape features [31,41], texture features [35,41,42] and deep-learning-based features. Color features describe the different color values of the diseased region. The area, perimeter, minor/major axis length, eccentricity, etc., are some of the shape features. Texture-based features such as the local binary pattern (LBP) [43], gray-level co-occurrence matrix (GLCM) [36], gray-level run-length method (GLRLM) and Gabor texture features [32] are used for the identification of diseases. Figure 1 shows some of the features that are used in the classification of plant diseases.
2.4. Classification
Classification is the numerical analysis of various image features, and it organizes the leaf image data into disease categories. Classification can be supervised or unsupervised. Some of the commonly used classification techniques are K-nearest neighbor (KNN) [32], support vector machine (SVM) [34,36,44], logistic regression (LR), random forest (RF), decision tree (DT) [37], naive Bayes (NB), artificial neural network (ANN) [43] and probabilistic neural network (PNN) [43].
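Once a feature vector is available, supervised classification follows the usual train/test pattern. A minimal sketch with scikit-learn on synthetic features (the feature dimension and number of classes here are illustrative, not taken from any surveyed work):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for extracted leaf features: 3 "disease" classes.
X, y = make_classification(n_samples=200, n_features=18, n_informative=10,
                           n_classes=3, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel='rbf', C=10).fit(Xtr, ytr)   # RBF-kernel SVM, common in the surveyed works
acc = clf.score(Xte, yte)                      # held-out accuracy
```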
3. Different Existing Machine Learning Based Techniques for Plant Disease Detection
Numerous works related to the identification of plant diseases have been conducted to date. In this section, we discuss different methodologies that have been proposed by researchers for the detection of different plant diseases. Machine-learning-based disease detection techniques can be classified as based on color, shape or texture features, or on deep learning models. Figure 2 shows the basic steps in the identification of plant diseases.
3.1. Color-Features-Based Disease Detection
Disease detection by extracting color features was performed by Chaudhary et al. [45]. In their approach, they applied the YCbCr, HSI and CIE L*a*b* color models to a leaf image to extract the color features of a diseased leaf and then compared these methods. Of these color models, they chose the a* component of the CIE L*a*b* model for the initial segmentation of the diseased leaf. A median filter was used for preprocessing; hence, the method was less affected by noise from different sources. Finally, the diseased area of the leaf was segmented by applying Otsu's threshold to the a* component of the color space.
Singh [46] used a color slicing technique to detect the blast disease of paddy. In this method, the RGB image was first converted to HSI, and color slicing was used to extract the diseased area and neutralize the rest of the image. The technique was compared with disease boundary detection using the Sobel and Canny methods and obtained an accuracy rate of 96.6%.
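Color slicing of this kind keeps pixels whose hue falls in a chosen band and neutralizes the rest. A minimal sketch (using HSV rather than HSI, and an assumed hue band for brownish lesions; the original work's exact ranges are not given):

```python
import numpy as np
from skimage.color import rgb2hsv

def color_slice(rgb, h_lo, h_hi):
    """Keep pixels whose hue lies in [h_lo, h_hi]; neutralize the rest."""
    hsv = rgb2hsv(rgb)
    mask = (hsv[..., 0] >= h_lo) & (hsv[..., 0] <= h_hi)
    out = rgb.copy()
    out[~mask] = 0.0    # neutralized (blacked-out) pixels
    return out, mask

# Synthetic leaf: green tissue with one brownish lesion patch.
img = np.zeros((32, 32, 3))
img[..., 1] = 0.8                    # green everywhere
img[8:16, 8:16] = [0.6, 0.3, 0.1]    # brownish lesion
sliced, mask = color_slice(img, 0.0, 0.15)   # assumed brown/red hue band
```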
Sghair et al. [47] used different color models to identify diseases in plants. Firstly, they transformed the images into different color models, followed by noise reduction using a median filtering technique, and finally segmented the diseased region from the leaf. They used three different color models: YCbCr, HSI and CIELAB. In their approach, they applied Kapur's threshold to the Cr component of the YCbCr model, the H component of the HSI model and the a* component of the CIELAB model to segment the diseased spot.
Husin et al. [48] identified chili leaf diseases using different color features. They extracted the yellow, green and cyan color components from the leaves and used color matching techniques to identify the diseased and healthy leaves.
Pugoy et al. [49] identified two different rice diseases (brown spot, leaf scald) using a color analysis of the image. Threshold segmentation was used to segment the abnormalities, followed by a histogram intersection to isolate the segmented region. K-means clustering was used to assign the pixels to different clusters based on their R, G and B color values. The classification of disease was done by comparing and matching the color values. Majid et al. [50] used fuzzy entropy to identify four different paddy (rice) diseases. A PNN was used as the classifier, and it obtained an accuracy rate of 91.1%. One of the major issues in their approach was that, in the preprocessing step, the diseased region had to be cropped manually before extracting the features.
Shrivastava et al. [51] identified four different rice plant diseases using 172 color features. In their approach, the authors used 13 different color spaces and, from each color channel, extracted color features such as the mean, standard deviation, kurtosis and skewness. Seven different classifiers, namely, SVM, discriminant analysis (DC), KNN, NB, DT, RF and LR, were used to classify the diseases and compare performance. Among all the classifiers, SVM gave the best performance, with an accuracy of 94.65%. The main issue in their work was that the images were already preprocessed and had a uniform black background.
The main disadvantage of disease detection using color values is that a color component alone is not sufficient to detect all types of diseases. Table 2 summarizes the methods along with the segmentation techniques and features used in the detection of diseases based on color features.
3.2. Shape- and Texture-Based Disease Detection
Diseases in plants can also be detected by extracting the shape features of leaves. Dey et al. [52] used the number of pixels in the disease-affected area to detect the rot disease in betel leaf. Firstly, they converted the acquired RGB image into the HSV color space. Then, a threshold value was calculated by applying Otsu's method to the "H" component of the HSV color space for segmentation. The segmented binary image contained the rotten area as white pixels. They calculated the total number of pixels in this rotten portion to detect the disease.
Phadikar et al. [53] used color and shape features for the detection of rice diseases such as leaf brown spot, rice blast, sheath rot and bacterial blight. For segmentation, they used a Fermi-energy-based technique, and a genetic algorithm (GA) was used for the extraction of the diseased leaf's shape features. Fermi-energy-based region extraction had the advantage of not requiring the selection of a proper threshold value. For classification, they used a rule generation technique. The advantages of the implemented method included a smaller computational complexity, as it did not require a gain calculation for the rules.
Yao et al. [36] used both shape and texture features for detecting diseases in the rice plant. After segmentation, shape features such as the area, perimeter, long axis length and width, along with texture features including the contrast, uniformity, entropy, inverse difference, linearity and correlation, were extracted from the GLCM of each leaf image at every orientation angle. For classification, they used an SVM classifier and achieved an accuracy rate of 97.2%. Though this method was able to identify the diseases effectively, it failed to correctly identify diseases having a similar texture, and thus performance decreased.
The method proposed by Islam et al. [33] aimed to detect and recognize potato diseases (late blight, early blight, healthy leaf). First, they masked out the background as well as the green region of the image using thresholding, by analyzing the color and luminosity components of different regions in the L*a*b* color space. Then, they extracted the ROI, which contained the disease-affected region. They used the GLCM to extract texture features such as the correlation, contrast, homogeneity and energy. The mean, entropy, standard deviation, skew and energy were calculated from the histograms of the color planes. A multiclass SVM was used for the classification of potato diseases from the PlantVillage dataset. Setting the threshold value for segmentation was difficult in their method, and the number of training images was also small. Another disadvantage was that the images in the dataset were captured on a uniform background.
Camargo et al. [54] developed a disease segmentation algorithm to identify the diseased area by using the distribution of intensities in the histogram and finding the threshold according to the position in the histogram. Furthermore, they [31] identified the diseases in cotton plants using an SVM classifier. A set of features such as shape, color, texture and fractional dimension was extracted from the diseased area, and they obtained an accuracy rate of 93.1%. One of the disadvantages was that the extraction of features was time-consuming and identifying the proper set of features from a set of 54 features was challenging. The number of images in the dataset was small.
In [55,56], the authors identified different cotton plant diseases using color, shape and texture features. Chaudhari et al. [55] used K-means clustering for the segmentation and a wavelet transform for the feature extraction. To reduce the number of features and speed up the computation, a PCA was used as a feature reduction technique. A backpropagation neural network was used for the classification, and it obtained an accuracy rate of 97%. The advantage of using a wavelet transform was that it worked well on low-frequency components and also in high-frequency transients. Bhimte et al. [56] identified three different types of cotton plant diseases using image processing. A K-means clustering technique was used to segment the images into three clusters as background, foreground and diseased regions. Different color, shape and texture features were extracted from the segmented images and an SVM was used to classify the images and it achieved an accuracy rate of 98.46%. The number of images used to train and test the model was small. The selection of a proper set of features by the classifier was an important issue in the identification.
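The K-means segmentation used in [55,56] and in several other works in this section clusters pixels by color so that diseased tissue, healthy tissue and background fall into separate clusters. A minimal sketch with scikit-learn on a synthetic three-region image:

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_segment(rgb, k=3, seed=0):
    """Cluster pixels by color; returns a label image with k regions."""
    h, w, _ = rgb.shape
    pixels = rgb.reshape(-1, 3)
    labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(pixels)
    return labels.reshape(h, w)

# Synthetic image with three color regions: healthy tissue, lesion, background.
rng = np.random.default_rng(1)
img = np.zeros((24, 24, 3))
img[:, :8] = [0.1, 0.6, 0.1]     # healthy green
img[:, 8:16] = [0.5, 0.4, 0.1]   # yellowish lesion
img[:, 16:] = [0.9, 0.9, 0.9]    # background
img += rng.normal(0.0, 0.02, img.shape)   # mild sensor noise
seg = kmeans_segment(img, k=3)
```

In the surveyed works, the cluster containing the lesion is then selected (manually or by a color heuristic) and passed to the feature extraction stage.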
Wang et al. [57] identified two different grape diseases (grape downy mildew and grape powdery mildew) and two different wheat diseases (wheat stripe rust and wheat leaf rust) using a backpropagation (BP) network. In their approach, the K-means clustering algorithm was used to segment the images, and 21 color features, 4 shape features and 25 texture features were extracted from the segmented image for classification. One of the major issues in their paper was that the images were captured under fixed conditions. Seven different groups of feature combinations were used for the identification; therefore, finding the proper feature combination to obtain the optimal result was an important challenge.
Two different diseases in grapes, namely, downy and powdery mildew, were identified by Padol et al. [41]. In that paper, the diseased region was first extracted using the K-means clustering technique with three clusters. After the segmentation, nine color features and nine texture features were extracted from each cluster, and an SVM was used for the classification. To train and test the model, they used 137 captured grape leaf images and achieved an accuracy rate of 88.89%. Later on, the authors [42] extended their work by identifying the same diseases using both SVM and ANN classifiers. In order to improve the classification, a fusion classification was performed, which created an ensemble of classifiers from the SVM and ANN and reported a recognition accuracy of 100% for both downy and powdery mildew diseases. The feature set used for the classification consisted of nine color and nine texture features, as used in [41]. One of the issues was that only two different disease categories were considered, and the dataset used was too small. The extraction of features was time-consuming, as the system used 54 different features for the identification.
Es-saady et al. [58] identified three types of pest insect damage (leaf miners, thrips and Tuta absoluta) and three pathogen symptoms (early blight, late blight and powdery mildew) using a serial combination of SVM classifiers. In that paper, the authors used a K-means clustering technique for segmenting the diseased area. From the segmented image, they extracted 18 color, 30 texture and 11 shape features. In their system, they used two SVM classifiers, where the first SVM took the color features and the second SVM took the texture and shape features to classify the diseases. To train and test the system, they used 284 images captured on a uniform background and achieved an accuracy rate of 87.8%. The issues with their model were that the images were captured on a uniform background and the dataset size was very small.
In [59,60], the authors used a genetic algorithm (GA) to segment the diseased area from the leaf image. They showed that the GA had advantages over a simple thresholding-based or K-means clustering segmentation technique, as the GA did not require user input or the number of clusters at the time of segmentation. In [60], the authors identified five different plant diseases using the minimum distance criterion (MDC) and an SVM classifier. Firstly, the classification was achieved using the MDC with K-means clustering, and they obtained an accuracy rate of 86.54%; the performance accuracy improved to 93.63% using the GA. Secondly, the classification was done using an SVM, where the performance improved to 95.71%.
Two tomato diseases (TSWV, TYLCV) were identified in [61] with the help of some geometric and histogram-based features; the authors classified a diseased image using an SVM classifier with different kernel values. The dataset consisted of 200 captured images of both diseased and healthy leaves, and they obtained an accuracy rate of 90%. One disadvantage of their approach was that images containing more than one leaf needed to be cropped manually to extract a single leaf image, which made the process complex. In [37], Sarbol et al. identified six different types of tomato diseases from images of the leaf and the stem. Using Otsu's segmentation technique, they extracted the diseased area and different color, shape and texture features. They classified the leaves using a decision tree classifier and obtained a classification accuracy rate of 97.3%.
Six different tomato diseases were identified by Hlaing et al. [30] using model-based statistical features. From the preprocessed images, they extracted SIFT features and, to reduce the computational time and complexity, reduced the dimensions using a generalized extreme value (GEV) distribution. They used a 10-fold cross-validation to analyze the performance and reported an accuracy rate of 84.7% using a quadratic SVM. Furthermore, their proposed method took 56.832 s to train the classifier and achieved 12,000 predictions per second. The authors extended their work from [30] and identified the diseases using SIFT features with a Johnson SB distribution as the dimension reduction technique [62]. They achieved a better performance accuracy of 85.1%, and it took 33.889 s to train the classifier. In both papers, they used tomato disease images from the PlantVillage dataset, where the images were captured under laboratory conditions. To avoid losing information useful for classification, they did not perform segmentation. A further advantage was that the implemented model was robust to various resolutions.
Later on, Titan et al. [63] developed an SVM-based multiple-classifier system (MCS) to improve the accuracy of classification, using color, shape and texture features for the identification of leaf diseases in the wheat plant. They achieved an accuracy rate of 96.1%. The selection of the appropriate features, among all extracted features, giving the best classification accuracy was an important challenge in their approach.
Chouhan et al. [1] identified and classified plant diseases using a bacterial-foraging-optimization-based radial basis function neural network (BRBFNN). They used the region-growing algorithm for searching and grouping seeded regions having common attributes, which were used for the feature extraction. They worked on fungal diseases such as common rust, cedar apple rust, late blight, leaf curl, leaf spot and early blight. The advantage of using region growing was the grouping of seed points having similar attributes, which increased the efficiency of the network. Bacterial foraging optimization assigned the optimal weights to the radial basis network, which made the network faster and also increased its accuracy. The main issue in their approach was that it worked only for fungal diseases and could not be extrapolated to identify other diseases in plants.
Kaur et al. [44] used color features, texture features and a combination of color and texture features for detecting several diseases of soybean, such as downy mildew, frog eye and septoria leaf blight. They used three clustering techniques for training. In the first cluster, they determined whether the leaf was diseased or healthy by calculating the number of connected components; a larger number of connected components indicated an unhealthy leaf. They used an SVM classifier for classification. Training the model with real-field images remained an important task to be done.
Masazhar et al. [64] identified palm oil leaf diseases by extracting 13 texture features from the GLCM, and a multiclass SVM was used for classification. First, they converted the RGB image to the L*a*b* color space; for segmentation, they used the K-means clustering algorithm and computed the features from the segmented image. The main issues with their method were that it was limited to palm oil leaf diseases only and the dataset used was very small.
Chuanlei et al. [34] used three different types of features, namely, shape, color and texture features, for the detection of diseases in apple leaves. Firstly, they removed the background using a histogram, and region growing was applied to separate the diseased leaf spots from the leaf image. They extracted 38 different features and reduced the dimensionality of the feature space by selecting the most valuable features using a combination of a GA and correlation-based feature selection (CFS). Diseases were classified using an SVM classifier, and they obtained an accuracy rate above 94%. One advantage of the model was that the dimensionality reduction reduced its time complexity.
Pujari et al. [43] detected different fungal diseases that occur in different crops such as fruit, vegetables and commercial plants. For each category, they used different segmentation techniques to identify the diseased area, extract different features and also to identify the diseases. For fruit crops, they used K-means clustering, for vegetables, they used Chan–Vese and for commercial crops, they used a GrabCut segmentation technique. In the case of fruit crops, they used the GLCM and GLRLM for feature extraction. For classification, they used a nearest neighbor classifier and obtained an accuracy rate of 91.3%. For the identification of diseases in vegetable crops, they extracted LBP features from the disease-affected leaves and used an ANN for the classification; they obtained 95.1% accuracy. In their method, performance decreased in the case of a high variability among the diseases.
Zhang et al. [65] designed an automatic system to identify and classify cucumber leaf diseases. Firstly, a superpixel operation was performed to divide the images into several compact regions. Secondly, a logarithmic-frequency pyramid histogram of orientation gradients (PHOG) was extracted as a feature from the segmented lesion image, which was obtained using an expectation maximization (EM) segmentation algorithm. They achieved an accuracy rate of 91.48% using an SVM classifier. Later on [66], the authors extended their work to identify the diseases through the Internet of things (IoT). In that work, the authors combined superpixel clustering and the K-means clustering algorithm to segment the diseased regions. PHOG-based image descriptors were used, and they reported an accuracy rate of 92.15%. The running time of the implemented model was low, since the original image was divided into many small compact regions using superpixels and the number of extracted features was reduced using PCA. Zhang et al. [67] proposed another work to segment the diseased region from the image using superpixels and EM techniques.
Dandawate et al. [68] identified soybean plant diseases from mobile-captured images. In their paper, a color- and cluster-based segmentation was used, and SIFT features were extracted. They recorded an accuracy rate of 93.79% using an SVM classifier. Several identification challenges, such as background clutter, illumination, shadow, scale and orientation, were addressed. One issue in their paper was that they did not consider individual disease classes; they considered only two classes, healthy and diseased.
Prajapati et al. [69] identified three different rice plant diseases using color, shape and texture features. Before extracting the features, they used different background removal and segmentation techniques; for the classification, they used an SVM and achieved accuracy rates of 83.80% and 88.57% using 5- and 10-fold cross-validation, respectively. Three different segmentation techniques were used, namely, Otsu's segmentation, LAB-color-space-based K-means clustering and HSV-color-space-based K-means clustering. Among these, the HSV-color-space-based K-means clustering performed best. To evaluate the model performance, they extracted 88 features and built three different models with different feature combinations.
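The k-fold cross-validation used here partitions the data into k folds and averages the accuracy over k train/test rounds. A minimal scikit-learn sketch (with the Iris dataset standing in for leaf features, purely for illustration):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
scores = cross_val_score(SVC(), X, y, cv=5)   # one held-out accuracy per fold
mean_acc = scores.mean()                       # the figure usually reported
```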
Prasad et al. [32] proposed a novel approach to identify plant diseases using mobile devices. They used a CIE L*a*b*-color-based unsupervised segmentation technique. A Gabor wavelet transform (GWT) and the GLCM were used to represent the image mathematically, and they achieved a maximum accuracy of 96.3% using a KNN classifier. One issue with their paper was that they considered only uniform-background images; segmenting leaf images captured against a complex background with different lighting conditions may be challenging for their method. Moreover, the feature vector used was very dense, and the computation cost was high.
Singh et al. [60] identified the diseases in five steps as follows. First, they took the input image and preprocessed it, followed by masking the green pixels; then, they segmented the diseased area using a GA; finally, they computed the features using the co-occurrence matrix and classified them using an SVM classifier. We summarize the shape- and texture-based methods as follows:
Before the extraction of features, a lot of preprocessing is required, which makes the model complex.
Segmenting the diseased region from the images with a background object is challenging.
The extraction of features and the selection of the proper feature set that gives the optimal result is an important issue.
The dataset used in the majority of the papers is small and only a few disease categories are considered.
The extraction of features in a large dataset is time-consuming as well as a laborious task.
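As an illustration of the texture pipeline used in several of the above papers, the gray-level co-occurrence matrix (GLCM) and a couple of Haralick-style statistics can be computed in a few lines of numpy. In practice, `skimage.feature.graycomatrix` is the usual tool; the tiny quantized image below is purely illustrative:

```python
import numpy as np

def glcm(img, levels):
    """Co-occurrence matrix for horizontal neighbor pairs (offset (0, 1)),
    normalized to joint probabilities."""
    m = np.zeros((levels, levels))
    for i, j in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        m[i, j] += 1
    return m / m.sum()

def texture_features(p):
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)             # local intensity variation
    homogeneity = np.sum(p / (1.0 + (i - j) ** 2))  # closeness to the diagonal
    energy = np.sqrt(np.sum(p ** 2))                # textural uniformity
    return contrast, homogeneity, energy

# A 4 x 4 toy image quantized to 4 gray levels.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
contrast, homogeneity, energy = texture_features(glcm(img, levels=4))
print(round(contrast, 4))  # 7/12 ≈ 0.5833 for this image
```

Full implementations aggregate such statistics over several offsets and angles, which is exactly the laborious, per-image feature engineering that the deep learning methods in the next section avoid.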
Table 3 summarizes the different segmentation techniques that were used to segment the diseased region, along with their advantages and disadvantages. Table 4 summarizes the detection of diseases in plants using shape- and texture-based features with the preprocessing techniques, classifier and dataset used.
3.3. Deep-Learning-Based Identification of Diseases
Recently, deep learning (DL) has grown rapidly in the field of computer vision, spanning tasks such as object detection, pattern recognition, classification and biometry. DL models exhibit outstanding performance in image recognition tasks such as the ImageNet challenge. This success has been extended to agricultural tasks such as plant identification [71], disease detection [2,72,73,74], pest recognition [75,76], fruit identification [77,78] and weed detection [79]. In DL, there is no need for segmentation and feature extraction, as a DL model has the ability to learn the features automatically from the input images.
Kawasaki et al. [80] identified two different types of cucumber diseases, i.e., melon yellow spot virus (MYSV) and zucchini yellow mosaic virus (ZYMV), using a CNN. In their paper, they used image rotation to increase the data size and showed that increasing the number of images increased the performance. The accuracy obtained with the proposed DL model was 94.9%. In [81], the authors extended their previous work to identify seven different types of cucumber diseases using two CNN architectures with a large dataset. In the dataset, they considered a variety of image aspects such as the distance, angle, background and nonuniform lighting conditions. Three data augmentation techniques, namely, shifting, rotation and image mirroring, were used to increase the size of the dataset. The CNN configuration was based on VGG Net [82] and the Caffe framework [83]. The performance of CNN-1 decreased more under bad image conditions than that of CNN-2, as the CNN-2 model was trained with both good- and bad-condition images. They attained an accuracy rate of 82.3% with fourfold cross-validation.
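The augmentation scheme described above (shifting, rotation and mirroring) can be sketched with plain numpy; real training pipelines usually rely on the framework's own augmentation utilities, and the image here is a random stand-in for a leaf photograph:

```python
import numpy as np

def augment(img, rng):
    """Return the original image plus shifted, rotated and mirrored copies."""
    out = [img]
    out.append(np.roll(img, shift=rng.integers(-5, 6), axis=1))  # horizontal shift
    out.append(np.rot90(img, k=rng.integers(1, 4)))              # 90/180/270 degree rotation
    out.append(np.fliplr(img))                                   # mirroring
    return out

rng = np.random.default_rng(0)
leaf = rng.random((64, 64, 3))  # stand-in for one RGB leaf image
dataset = [img for _ in range(10) for img in augment(leaf, rng)]
print(len(dataset))  # 40: each source image yields 4 training samples
```

Label-preserving transforms like these multiply the effective dataset size at negligible cost, which is why they appear in almost every paper surveyed in this section.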
Sladojevic et al. [84] used a pretrained, fine-tuned CaffeNet model for the identification of 13 different plant diseases. To train and evaluate the performance of the model, they used 4483 internet-downloaded images and a data augmentation technique to increase the data size. A 10-fold cross-validation technique was used to evaluate the performance of the model and they achieved an accuracy rate of 96.3%. Pretrained AlexNet and VGG16 net were used by Rangarajan et al. [85] to identify six different tomato leaf diseases. The classification accuracy obtained was 97.29% for VGG16 net and 97.49% for the AlexNet architecture.
Mohanty et al. [72] used two well-established CNN architectures, AlexNet and GoogLeNet, to classify 26 different diseases in 14 crop species using 54,306 images. To monitor model overfitting, they split the dataset into different training and testing ratios. Three different image types (color, grayscale, segmented) were used and the highest training accuracy of 99.35% was achieved on RGB images using GoogLeNet. Several issues existed in their method, including that the entire process was done exclusively on laboratory-setup images, not on real-time images from cultivation fields. Performance decreased to 31% when the model was tested on images from different sources. One more limitation was that these models were based on the classification of front-facing single leaves on homogeneous background images.
Multiple CNN architectures were used by Nachtigall et al. [86] to identify six different apple leaf diseases. The best results were obtained using the AlexNet [87] architecture and an accuracy rate of 97.3% was achieved. To compare the CNN results, they used a multilayer perceptron (MLP), as an MLP can achieve a high recognition accuracy in image classification [88]. The Caffe [83] and DIGITS tools were used to design the CNN architecture. A LeNet-based [89] deep CNN architecture was used by Amara et al. [90] to identify banana leaf diseases. Their proposed approach was effective under challenging conditions such as illumination, a complex background and the different resolutions, sizes, poses and orientations of real-scene images. To evaluate the performance, they used both color and grayscale images and recorded accuracy rates of 98.61% and 94.44%, respectively.
The fusion of shape- and texture-based features, such as Hu's moments [91] and Zernike moments [92], with a CNN was used in [93] to identify olive leaf diseases. After 300 epochs, the authors of [93] reported an accuracy rate of 98.61%. A fine-tuned AlexNet architecture was used by Atole et al. [94] to identify rice plant diseases and they achieved an accuracy rate of 91.23%.
Ferentinos et al. [2] used five different deep CNN architectures (AlexNet, AlexNetOWTBn, GoogLeNet, Overfeat and VGG) for the identification of plant diseases through leaf images of healthy and diseased leaves. The dataset used consisted of 58 different classes of images and they achieved the highest accuracy rate of 99.53% using the VGG [82] architecture. The dataset was the largest available plant disease dataset and included partial shading on leaves and images with different objects such as hands, fingers, shoes, etc. The average time required to train these models was considerable. One open issue is expanding the existing database to incorporate a wider variety of plant species and diseases. Another issue was that the testing dataset used for the classification was part of the same database that constituted the training set.
Five different eggplant diseases were identified by Rangarajan et al. [95] using a pretrained VGG16 network. To evaluate the result, they also used three different color spaces of the images, namely, HSV, YCbCr and grayscale. They obtained an accuracy rate of 99.4% with the RGB and YCbCr images. The images used to train and test the network were captured by mobile devices in both laboratory and field conditions. Six different pretrained networks were used in [96] to identify 10 categories of diseases across several crops (eggplant, beans, lime, ladies finger). Among all the models, VGG16 gave the highest accuracy of 90%. We summarize the above papers as follows:
The use of a pretrained deep learning model eliminates the preprocessing and feature extraction in the identification of disease.
A fine-tuned and transfer-learning approach where the model is pretrained with a large dataset performs better than learning from scratch.
RGB images give better performance accuracies than other formats of images.
The number of parameters used in LeNet, AlexNet, VGG and GoogLeNet is large and hence the computation takes longer.
The required training time is much longer in these models and requires high-power GPUs to train the model.
Lee et al. [97] used the VGG16, InceptionV3 and GoogLeNetBN architectures to identify diseases in plants. In their paper, they examined and compared the performances of these models based on a transfer-learning approach. They showed that disease detection using a pretrained model reduced the impact of overfitting. Picon et al. [98] extended the work from [99] for the mobile-device-based identification of three different wheat diseases. A modified ResNet50 architecture was used in that paper. Firstly, a convolution layer of ResNet50 was replaced by two consecutive convolution layers. Secondly, the dense layer with a softmax activation function was replaced by a sigmoid function, which was able to detect multiple diseases on the same leaf. They obtained an improved accuracy rate of 87% by using a superpixel segmentation approach and including artificial-background images in training.
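The softmax-to-sigmoid swap in [98] matters because softmax outputs compete (they must sum to 1), whereas independent sigmoids can score several diseases above threshold at once. A numpy sketch with hypothetical logits (the class names and values are illustrative, not from the paper):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical logits for three wheat diseases on one leaf that
# actually shows the first two diseases simultaneously.
logits = np.array([2.1, 1.8, -3.0])

p_soft = softmax(logits)  # probabilities compete and sum to 1
p_sig = sigmoid(logits)   # each disease is scored independently

print((p_soft > 0.5).sum())  # softmax can exceed 0.5 for at most one class
print((p_sig > 0.5).sum())   # sigmoid flags both co-occurring diseases
```

Training then uses a per-class binary cross-entropy loss instead of the single categorical cross-entropy, which is the standard recipe for multi-label classification.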
Fuentes et al. [73] designed a practical and applicable solution for the real-time detection of tomato diseases and pests using a robust deep-learning-based method. Furthermore, their model was able to deal with some of the complexities such as illumination conditions, object dimension and background variation. In their approach, they used CNN meta-architectures, namely, the Faster R-CNN, SSD and R-FCN models; for the extraction of features, the VGG16 [82], ResNet-50 [100] and ResNet-101 models were used. A smooth L1 loss function was also used. They obtained an accuracy rate of 83%. Their model also reported the stage of a disease on the leaves and the position where it occurred.
Ramcharan et al. [24] used an image-based DL method for the identification of diseases in cassava plants. They used a transfer-learning approach on the InceptionV3 model to train the network for the identification of three diseases and two pest damages. They used two different datasets, namely, the original cassava dataset consisting of multiple leaves on a single image and the leaflet cassava dataset consisting of single leaf images. There was an improvement in the accuracy for the leaflet dataset in comparison with the original cassava dataset. To analyze the performance, they used a softmax layer, SVM and KNN classifiers and obtained the maximum accuracy of 98% using an SVM classifier.
Ahmad et al. [101] identified four different tomato diseases using pretrained deep learning models, namely, VGG16, VGG19, ResNet and InceptionV3. They also fine-tuned the networks to obtain the optimal result. The authors used two datasets, one of images in laboratory conditions and another of self-collected field images; they observed that the laboratory images performed better. Among the DL models, InceptionV3 gave the best performance accuracy of 99.60% and 93.70% on laboratory and field images, respectively, after fine-tuning the parameters.
Oyewola et al. [25] identified five different cassava plant diseases using a plain convolution neural network (PCNN) and a deep residual network (DRNN) and showed that the DRNN outperformed the PCNN by a margin of 9.25%. A MobileNet CNN-based model was used by Elhassouny et al. [102] to identify the 10 most common types of tomato diseases and they obtained an accuracy rate of 90.3%.
Li et al. [103] identified ginkgo leaf diseases using VGG16 and InceptionV3 with images under both laboratory and field conditions. Between the two models, InceptionV3 gave a better performance accuracy on field images and VGG16 gave a better performance accuracy on laboratory images. The InceptionResNetV2 architecture was used by Yong Ai et al. [104] to identify different crop diseases and insect pests. They built an Internet of things (IoT) platform in remote areas, such as mountains, and identified diseases and insect pests with an accuracy rate of 86.1%. We summarize the above papers as follows:
Extracting multiple features from different filter sizes in parallel improves the model performance.
A CNN with residual connection can train a large model without increasing the error rate.
A residual connection handles the vanishing gradient issue using identity mapping.
A pipeline CNN architecture was used by DeChant et al. [105] to identify maize plant diseases. Three stages of CNNs were trained: firstly, several CNNs were trained to classify small regions of images that contain the lesions; secondly, the predictions of the first stage were combined into a separate heat map; and finally, the heat map was fed into a CNN to classify whether the plant was affected by the disease or not. A Faster R-CNN was used by Ozguven et al. [106] to identify sugar beet diseases and a correct classification rate of 95.48% was obtained.
Oppenheim et al. [107] detected four different potato diseases using a deep CNN. In that paper, the database of images used contained potatoes of different shapes and sizes and the images were labeled manually by experts. Several dropout layers were used to deal with the problem of overfitting, and the dataset was split into different training and testing ratios; the best accuracy of 96% was achieved with a 90%–10% training and testing ratio.
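Dropout, used above to curb overfitting, randomly silences units during training and rescales the survivors so that expected activations are unchanged; this is the "inverted dropout" form most frameworks implement internally. A minimal numpy sketch:

```python
import numpy as np

def dropout(x, rate, rng, training=True):
    """Inverted dropout: zero a fraction `rate` of units and rescale the
    rest so the expected activation is unchanged; a no-op at test time."""
    if not training:
        return x
    keep = rng.random(x.shape) >= rate
    return x * keep / (1.0 - rate)

rng = np.random.default_rng(0)
acts = np.ones(1000)
dropped = dropout(acts, rate=0.5, rng=rng)

print((dropped == 0).mean())  # roughly half the units are silenced
print(dropped.mean())         # close to 1.0: expectation preserved by rescaling
```

Because a different random mask is drawn each forward pass, the network cannot rely on any single unit, which is the regularizing effect exploited in [107].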
A nine-layer deep convolution neural network was used by Geetharamani et al. [108] to identify different plant diseases. In that paper, they showed that increasing the size of the dataset using data augmentation techniques such as image flipping, noise injection, gamma correction, color augmentation, scaling and rotation increased the validation accuracy from 91.43% to 96.46%. They also compared their results with traditional machine-learning-based approaches and showed that a deep-learning-based approach outperformed traditional approaches.
Wang et al. [109] used a deep convolution network fine-tuned by transfer learning for the detection of apple leaf diseases. They compared two approaches, namely, building a shallow network from scratch and transfer learning by fine-tuning. The shallow network consisted of a few convolution layers, two fully connected layers and one softmax layer to predict the output. Transfer learning is a useful approach to build a powerful classification network using few data by fine-tuning the parameters of a network pretrained on a large dataset, such as ImageNet [110].
The CNN-based architectures GoogLeNet and Cifar10 were used in [111] to identify nine different types of maize leaf diseases. In that paper, the authors used data augmentation techniques to increase the data size and improved some hyperparameters by changing the pooling combinations, adding dropout operations and rectified linear unit functions. In the GoogLeNet model, the average identification accuracy obtained was 98.9%, and the Cifar10 [112] model achieved an average accuracy of 98.8%.
A novel deep convolution network based on the AlexNet and GoogLeNet architectures was used by Liu et al. [113] to identify four different apple leaf diseases using leaf images. In their model, they replaced the fully connected layer of AlexNet by a convolution layer and an inception layer, which reduced the model parameter by a large number with a higher accuracy rate of 97.62%. The optimizer used in their paper was Nesterov’s accelerated gradient (NAG).
Later on, Ramcharan et al. [114] extended the work in [24] to mobile-based cassava disease detection. They utilized the single-shot multibox detector (SSD) model with MobileNet as the detector and classifier, pretrained on the COCO dataset [115]. To evaluate the performance, they used both images and videos of diseased leaves. In total, 2415 images of six different diseases were used to train the CNN network and they obtained accuracies of 80.6% and 70.4% for images and videos, respectively.
Toda et al. [116] designed a deep CNN based on the InceptionV3 [117] architecture. In their approach, they used an attention map to identify and remove several layers which were not contributing to the identification. The removal of these layers reduced the number of parameters by 75% without affecting the classification accuracy, and a top accuracy of 97.1% was achieved.
Lu et al. [118] proposed a novel deep CNN method inspired by LeNet and AlexNet to identify 10 different rice diseases. To achieve the optimal result, different convolution filter sizes and pooling operations were tried. The maximum accuracy obtained was 95.48% using stochastic pooling and 93.29% using a convolutional filter. Stochastic pooling has the advantages of max-pooling and also prevents the model from overfitting. One advantage of these models is their lower computation time, since fewer layers are used.
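Stochastic pooling, as used above, samples one activation from each pooling window with probability proportional to its (nonnegative, post-ReLU) value: a dominant activation usually wins, as in max-pooling, but the sampling noise acts as a regularizer during training. A single-window numpy sketch with illustrative values:

```python
import numpy as np

def stochastic_pool(window, rng):
    """Sample one activation with probability proportional to its value;
    reduces toward max-pooling when one activation dominates."""
    w = window.ravel()
    p = w / w.sum() if w.sum() > 0 else np.full(w.size, 1.0 / w.size)
    return rng.choice(w, p=p)

rng = np.random.default_rng(0)
window = np.array([[0.1, 0.2],
                   [0.1, 1.6]])  # activations in one 2x2 pooling region

samples = [stochastic_pool(window, rng) for _ in range(1000)]
print(np.mean(np.array(samples) == 1.6))  # dominant unit picked ~80% of the time
```

At test time, implementations typically replace the sampling with the probability-weighted average of the window, so inference stays deterministic.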
A SqueezeNet [119] architecture was used by Durmus et al. [120] to identify tomato leaf diseases. They used a robot to detect the diseases on the plants autonomously in the field or in the greenhouse and obtained an accuracy rate of 94.3% using the SqueezeNet architecture. They compared their performance with that of the AlexNet architecture, whose size is 227.6 MB, while the size of SqueezeNet is 2.9 MB.
A modified Cifar10 quick CNN model was used by Gensheng et al. [121] to identify four different tea leaf diseases. In their paper, the standard convolution was replaced by a depthwise separable convolution, which reduced the number of parameters. They also compared their results with traditional machine-learning-based techniques and some classical CNN models such as LeNet-5 [122], AlexNet and VGG16 [109], and achieved an improved accuracy rate of 92.5%.
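The parameter saving from replacing a standard convolution with a depthwise separable one, as in [121], follows from simple counting (bias terms omitted; the layer sizes below are arbitrary examples):

```python
# Parameter counts for a k x k convolution mapping c_in channels to c_out channels.
def standard_conv_params(k, c_in, c_out):
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    depthwise = k * k * c_in  # one k x k filter per input channel
    pointwise = c_in * c_out  # a 1 x 1 convolution then mixes channels
    return depthwise + pointwise

k, c_in, c_out = 3, 64, 128
std = standard_conv_params(k, c_in, c_out)        # 73,728 parameters
sep = depthwise_separable_params(k, c_in, c_out)  # 8,768 parameters
print(std, sep, round(std / sep, 1))              # roughly an 8x reduction here
```

The ratio is approximately 1/c_out + 1/k², so the saving grows with the number of output channels, which is why the trick is central to MobileNet-style lightweight architectures.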
Bi et al. [123] proposed a low-cost, mobile-deployable model to identify two common types of apple leaf diseases. They used the MobileNet deep learning model and compared its performance with that of ResNet152 and InceptionV3. The dataset used was collected by agricultural experts. The authors achieved accuracy rates of 73.50%, 75.59% and 77.65% for MobileNet, InceptionV3 and ResNet152, respectively. The average handling time of MobileNet was much lower than that of InceptionV3.
Rice and maize leaf diseases were identified by Chen et al. [74] using the INC-VGGN method. In their approach, they replaced the last convolution layer of VGG19 with two inception layers and one global average pooling layer. In their model, basic features were extracted using a pretrained model and high-dimensional features by the inception layer. They obtained an accuracy rate of 92% and 80.38% in rice and maize, respectively.
Atila et al. [124] used an EfficientNet architecture to identify different diseases in plants. The performance of their model was compared with that of other CNN models such as AlexNet, ResNet50, VGG16 and InceptionV3, and EfficientNet outperformed the other CNN models. The highest accuracy rates obtained were 99.91% using EfficientNetB5 and 99.97% using EfficientNetB4 on the original dataset. The number of parameters in the EfficientNet models was much smaller than that of the other deep learning models and hence less time was required to train the network.
A. Tuncer [125] used a hybrid CNN approach to identify plant leaf diseases. In that paper, the author used an inception network with a depthwise separable convolution, which reduced the number of parameters and computational cost of the model. Using a k-fold cross-validation, the model achieved a maximum accuracy of 99.27% and an average accuracy of 99% on the PlantVillage dataset. We summarize the above papers as follows:
Removing convolution layers, changing the filter sizes and replacing the standard convolution by a depthwise separable convolution reduce the number of parameters.
An attention network which focuses on a particular region reduces the complexity of the network.
The time required to train the network is much less.
It is easy to implement on small devices and the computation time is reduced.
In [126], the authors used CNN, VGG and Inception architectures to identify plant leaf diseases. In their approach, they used 15% of the images of the PlantVillage dataset and some real-time captured images to evaluate the accuracy and obtained accuracy rates of 98% and 95%, respectively, with the CNN architecture. Pretrained AlexNet and GoogLeNet [127] were used in [128] to detect three different soybean diseases from healthy leaf images with some modified hyperparameters such as the minibatch size, max epoch and bias learning rate. In [129], the authors classified maize leaf diseases from healthy leaves using deep forest techniques. In their approach, they varied the hyperparameters of the deep forest, such as the number of trees, number of forests and number of grains, and compared their results with traditional machine learning models such as SVM, RF, LR and KNN. The deep forest model achieved an accuracy rate of 96% and a maximum F1 score of 0.96 among all classifiers.
Using the principles of deep learning, a fully connected CNN model was built by Sibiya et al. [130] to classify maize leaf diseases. The model was able to recognize three different types of maize diseases with an accuracy rate of 92.85%. A multilayer CNN was used by Singh [131] to identify mango leaf diseases and obtained an accuracy rate of 97.13%.
Different CNN architectures, such as AlexNet, VGG16, VGG19 and ResNet, were used in [132] to identify diseases in plants. In their approach, the authors used camera-captured images of eight different diseases to train the model. For feature extraction, they used these CNN models and for classification purposes, they used different classifiers such as KNN, SVM and the extreme learning machine (ELM). They achieved a maximum accuracy rate of 97.86% using the ResNet architecture. A NASNet-based deep CNN architecture was used in [133] to identify leaf diseases in plants and an accuracy rate of 93.82% was obtained.
A shallow CNN (SCNN) was used by Yang Li et al. [134] for the identification of maize, apple and grape diseases. First, they extracted the features from the CNN and then classified the diseases using SVM and RF classifiers. In their approach, they claimed that the combination of a shallow CNN and classic machine learning classification had a good ability to identify plant diseases and the kernel SVM and random forest had the ability to overcome overfitting. A number of deep CNN architectures were used by Sethi et al. [29] for the identification of four different rice diseases. In their approach, they extracted the features from the deep learning model and classified the diseases using an SVM classifier; they showed that the SVM performed better compared with the deep learning classifier. We summarize the above papers as follows:
The extraction of features using a CNN model and the classification using different machine learning classifiers also give higher performance accuracies.
A CNN model extracts better features, which make a classifier such as an SVM or RF give better performance results.
An SVM and RF can tackle the overfitting issues.
A CNN model is used only for extracting the features and hence the training of the model is not required.
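The decoupled pipeline summarized above can be sketched end to end. Here a fixed random projection stands in for a frozen pretrained CNN and a nearest-centroid rule stands in for the SVM/RF stage; both substitutions, and the toy brightness-separated classes, are illustrative assumptions rather than any surveyed paper's actual method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen, pretrained CNN: a fixed projection from raw
# pixels to a 32-dimensional feature vector (a real pipeline would take
# the penultimate-layer activations of a pretrained network).
W = rng.normal(size=(64 * 64, 32))

def extract_features(img):
    return np.maximum(0, img.ravel() @ W)  # frozen weights, never trained

# Toy two-class data: "healthy" and "diseased" leaves differ in brightness.
healthy = [rng.random((64, 64)) * 0.5 for _ in range(20)]
diseased = [0.5 + rng.random((64, 64)) * 0.5 for _ in range(20)]
X = np.array([extract_features(im) for im in healthy + diseased])
y = np.array([0] * 20 + [1] * 20)

# Nearest-centroid classifier as a minimal stand-in for the SVM/RF stage.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(np.linalg.norm(X[:, None] - centroids[None], axis=2), axis=1)
print((pred == y).mean())  # training accuracy of the decoupled pipeline
```

The design choice is the point: because the extractor is frozen, only the small classifier is fit to the labeled data, which is cheap and, as noted above, lets margin- or ensemble-based classifiers resist overfitting on small disease datasets.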
Table 5 summarizes the DL models along with the dataset used to identify the diseases in plants with their classes and labels. Table 6 shows the limitations of some of the implemented deep-learning-based techniques for the identification of plant diseases. From Table 6, it is seen that most researchers used the same dataset to train and test the DL models. It is also seen that the number of works addressing multiple diseases on a single leaf is relatively small.
4. Discussion
From our survey, we showed that deep-learning-based techniques outperformed traditional classification approaches such as KNN, SVM, RF, LR, ANN and others. In deep learning, features are learned automatically by the network, which is more effective and gives more accurate classification results than traditional feature extraction approaches relying on color, shape, SIFT, texture, GLCM, histogram, Fourier-descriptor and similar features. A large number of deep learning architectures are used for the identification of plant diseases. We summarized these deep learning models in Figure 3. From Figure 3, it is seen that AlexNet, VGG16, GoogLeNet and InceptionV3 are the most frequently used DL models. VGG19 and ResNet50 are the next most used DL models. A summary of different DL models along with the number of layers, number of parameters and the size of each DL model is shown in Table 7. Figure 4 gives the number of research papers with respect to individual plant classes. Multiple plants are included in the PlantVillage dataset, which consists of 14 different plant species and 38 categories of diseases and is the most frequently used plant dataset by researchers. Rice and tomato are the next most-used plants in the plant disease identification area. Figure 5 shows the histogram representation of the discussed papers published from 2009 to 2021 in the field of the identification of plant diseases. From Figure 5, it is seen that the identification of plant diseases has gained much attention after 2016. There are several advantages of using a DL model over a handcrafted-features-based approach. The extraction of hand-engineered traditional features requires extra effort and is time-consuming; moreover, searching for the features that give the most precise results is not an easy task. DL-based features reduce this effort and give better results [109].
A DL model is robust to some challenging issues such as a complex background, illumination, size and orientation [73]. DL models are robust in scenarios containing challenging images and substantial intra- and interclass variations, and they have the ability to deal with complex scenarios from a plant's surrounding area.
5. Challenges
The identification of diseases in plants from the leaf image faces some challenges. Resolving these challenges and issues is a key point to design a practical plant disease identification system on real-time images with diverse field conditions. In this section, we discuss some of the unresolved issues in the identification of diseases in plants.
5.1. Dataset of Insufficient Size and Variety
In many papers and articles, the main limitation is the dataset used to train the CNN, which leads to a lower performance accuracy in the identification of diseases. DL needs a large dataset with a wide variety of images. The PlantVillage [23] and Image Database of Plant Disease Symptoms (PDDB) [137] datasets are the only freely available large disease datasets at the present time. The images available are from a laboratory setup and were captured with a uniform background. However, the collection of images from the field is expensive and requires agricultural expertise for the accurate identification of diseases.
5.2. Image Segmentation
Segmentation consists of finding the region of interest in the image. Two approaches exist in segmentation: a traditional approach and a soft-computing-based approach. K-means clustering and color thresholding are traditional techniques, while fuzzy logic, artificial neural networks and region growing are soft-computing-based segmentation techniques. Segmenting a leaf image from a complex background is a challenging issue in the identification of diseases. The segmentation of the leaf region can improve the performance accuracy. Images with many extraneous elements often cause difficulties in the identification.
5.3. Identification of Diseases with Visually Similar Symptoms
Some of the diseases have similar symptoms, which even experts often fail to distinguish properly by the naked eye. Sometimes one disease symptom may vary due to geographic locations, crop development stage and weather condition. Until now, no work has been found in the literature that incorporates these issues in the identification of plant diseases.
5.4. Simultaneous Occurrence of Multiple Diseases
Most plant disease identification models assume that there is only one type of disease in the image. However, multiple diseases as well as some nutritional disorders may occur simultaneously, which can affect the identification of diseases. From the survey, we can see that few works exist that identify multiple diseases; only Fuentes et al. [73] considered the identification of multiple diseases in tomato leaves.
5.5. Identification of Diseases from Real-Time Images
From the literature, we observed that most papers are based on the identification of diseases using laboratory images. The performance of a model decreases in the case of a real-time identification of diseases. In [72], the authors obtained an accuracy rate of 99.35% on the PlantVillage dataset and the model performance decreased to 31% when the model was tested with a different dataset. In [2], the authors recorded an accuracy rate of 99.53% on a wide variety of datasets. When the model was trained solely on laboratory images and identified field-captured images, the success rate decreased to 66%. Therefore, the effective identification of diseases in real-time field images is an important challenging issue.
5.6. Designing a Lightweight Deep Learning Model
Most of the deep learning architectures implemented in the literature are based on AlexNet, VGG, GoogleNet, ResNet, DenseNet and InceptionV3. Deep learning requires high-performance computing devices, expensive GPUs and hundreds of machines, which increases the cost to users. Small CNN models are highly desirable, especially in embedded, robotic and mobile applications where real-time performance and a low computational cost are required. Deep learning also requires a very large quantity of data in order to perform better than other techniques and is extremely expensive to train due to complex data models.
6. Conclusions and Future Directions
In this paper, we presented a survey of different machine learning approaches for the identification of plant diseases using leaf images. As in humans, plants suffer from different diseases which affect their normal growth. This survey covered the identification of diseases using handcrafted-features-based methods and DL-based methods. We compared the performance in terms of the preprocessing and segmentation techniques used and the features used to classify the diseases, along with the dataset used in each paper. Through the survey of the identification of diseases using shape- and texture-based features, we can conclude that preprocessing and segmentation techniques play a major role in increasing accuracy. The SVM was the most widely used classification technique for the identification of diseases. From the survey, it was observed that deep learning models outperformed traditional handcrafted-features-based techniques. From the accuracy of different deep learning models, we can say that the ResNet50, InceptionV3 and DenseNet201 architectures are suitable for the identification of plant diseases. MobileNetV2 and SqueezeNet are suitable architectures for lightweight devices such as mobile phones.
The early detection of diseases would help farmers to improve crop yield and reduce dependence on expensive domain experts. Several gaps exist in the literature and some are highlighted here as future research directions for the identification of diseases in plants. The collection of large datasets with a wide variety of images from different geographical locations is an important research issue. From the survey, we also conclude that if the disease symptoms change significantly during different stages of infection, then the reliability of detecting diseases will be lower. Future work includes developing a reliable lightweight deep CNN model and adapting these models for mobile devices.
Conceptualization, S.M.H., A.K.M. and K.A.; methodology, S.M.H. and A.K.M.; software, S.M.H. and A.K.M.; validation, A.K.M., K.A. and T.N.; formal analysis, S.M.H. and A.K.M.; investigation, S.M.H., A.K.M. and M.J.; writing—original draft preparation, S.M.H.; writing—review and editing, A.K.M., M.J. and E.J.; supervision, A.K.M., K.A., M.J., Z.L. and E.J.; funding acquisition, T.N.; project administration, A.K.M. All authors have read and agreed to the published version of the manuscript.
Limited data are available on request due to the large size of the data.
The authors declare no conflict of interest.
Details of the datasets used.
Dataset Description | Image Environment | Link |
---|---|---|
PlantVillage dataset: 54,304 images of … | Captured in laboratory | … |
Rice leaf diseases: 120 images of three … | Captured on uniform … | … |
Rice disease dataset: 5477 images of 3 … | Captured on white … | … |
Rice disease dataset: 5932 images of … | Field images | … |
Cassava dataset: 24,395 images of 5 different … | Field images with … | … |
Hops disease dataset: 1101 images of 5 different … | Field images with … | … |
Cucumber disease dataset: 695 images of disease-… | Field images | … |
Cotton disease dataset: 2310 images of healthy … | Field images | … |
Corn disease dataset: 4188 images of four … | Laboratory condition | … |
Plant Disease dataset: 125,000 images of 10 different … | Laboratory condition | … |
New Plant Diseases dataset (Augmented): 87,000 … | Laboratory condition | … |
Summary of implemented methods based on color features.
Author | Plant/Disease | Segmentation | Feature Extraction | Dataset | Accuracy |
---|---|---|---|---|---|
Pugoy et al. | Rice | Thresholding | R, G, B color values | NA | NA |
Chaudhary et al. | Disease-affected … | Otsu's … | Different color … | NA | NA |
Husin et al. | Chili leaf | Color … | Yellow, green and … | 107 captured … | NA |
Majid et al. | Rice | NA | Fuzzy entropy with … | NA | 91.4 |
Sghair et al. | NA | Kapur's … | NA | NA | NA |
Singh et al. | Blast disease | Thresholding | Different color values | 100 captured … | 96.6 |
Shrivastava et al. | Rice | NA | 172 color features | 619 captured … | 94.6 |
In this table, NA indicates that the information is not available.
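Most of the color-based methods above reduce a leaf image to per-channel color statistics before classification. The following is a minimal sketch of such a feature extractor, using a synthetic RGB array in place of a real leaf photograph; the specific features (per-channel mean and standard deviation) are illustrative of this family of methods, not any single surveyed paper.

```python
import numpy as np

def color_features(img):
    """Per-channel mean and standard deviation of an RGB image.

    img: H x W x 3 array with values in [0, 255].
    Returns a 6-element vector [mean_R, mean_G, mean_B,
    std_R, std_G, std_B], a common starting point for the
    color-based classifiers surveyed above.
    """
    img = np.asarray(img, dtype=float)
    means = img.reshape(-1, 3).mean(axis=0)
    stds = img.reshape(-1, 3).std(axis=0)
    return np.concatenate([means, stds])

# Synthetic "leaf": mostly green with a brown lesion patch.
leaf = np.zeros((64, 64, 3))
leaf[..., 1] = 180                  # green background
leaf[20:30, 20:30] = (120, 70, 20)  # brown spot
feats = color_features(leaf)
print(feats)  # R, G, B means followed by R, G, B standard deviations
```

A disease spot shifts these statistics (here, the green mean drops and the red mean rises relative to a healthy leaf), which is what the surveyed color-based classifiers exploit.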
Summary of different image segmentation techniques.
Segmentation | Type | Complexity | Advantages | Disadvantages |
---|---|---|---|---|
Color thresholding | Thresholding | Medium | Simple and powerful technique, … | Difficult to set the threshold value, … |
K-means clustering | Clustering | Low | Suitable for a large number of … | Need to specify the number of clusters (K) in advance, … |
Sobel edge detection | Edge-based | Low | Simple and can detect the edges, … | For multiple edges, it does not give … |
Otsu's segmentation | Thresholding | High | Suited to two-class problems such as foreground … | It considers only two classes in the histogram, … |
Genetic algorithm | Stochastic | High | Supports multiobjective optimization, … | Time-consuming, … |
Fermi energy based | Thresholding | Low | Separates the infected and uninfected … | Calculating the energy value at each pixel … |
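Otsu's method, listed above, selects the gray-level threshold that maximizes the between-class variance of the image histogram. A self-contained numpy sketch of the idea follows; practical pipelines would typically call an image-processing library (e.g. OpenCV's Otsu flag) rather than the explicit loop shown here.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold maximizing between-class variance.

    gray: 2-D array of integer gray levels in [0, 255].
    Pixels below the returned threshold form one class
    (e.g. lesion), the rest form the other (e.g. healthy leaf).
    """
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()   # class weights
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0        # class means
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

# Bimodal toy image: dark lesion pixels (~40) on a bright leaf (~200).
rng = np.random.default_rng(0)
img = np.full((64, 64), 200) + rng.integers(-10, 10, (64, 64))
img[10:20, 10:20] = 40 + rng.integers(-10, 10, (10, 10))
t = otsu_threshold(img.astype(int))
print(t)  # lands in the gap between the two modes
```

This also illustrates the table's "two classes only" caveat: with more than two gray-level modes (e.g. lesion, healthy tissue and soil background), a single Otsu threshold cannot separate all of them.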
Summary of implemented methods based on shape and texture.
Author | Preprocessing | Features | Classifier | Dataset | Accuracy (%) |
---|---|---|---|---|---|
Qing et al. | Resizing, Otsu's … | Area, perimeter, … | SVM | 216 … | 97.2 |
Camargo et al. | Color transformation, … | Color features, … | SVM | 117 … | 93.1 |
Anthonys et al. | Thresholding, … | Color differences, area, … | Membership … | 50 … | 70 |
Bashish et al. | Color transformation, … | Angular moment, … | ANN | 192 … | 93 (precision) |
Tian et al. | Thresholding-based … | Color features, … | NA | 200 … | 95.16 |
Arivazhagan et al. | Color transformation, … | GLCM … | MDC … | 500 … | 94 |
Phadikar et al. | Fermi energy … | Color features, … | Rule … | 500 … | 94.21 |
Chaudhari et al. | Resizing, … | Wavelet transform | BP | NA | 97 |
Mokhtar et al. | Resizing, K-means-… | Geometric features, … | SVM | 200 captured … | 90 (SVM) |
Dandawate et al. | Resizing, color … | SIFT features | SVM | 120 captured … | 93.79 |
Pujari et al. | K-means clustering for … | GLCM, … | SVM … | Not … | Fruit … |
Singh et al. | Filtering, contrast … | Entropy, … | SVM | IRRI … | 82 |
Anand et al. | Histogram equalization, … | GLCM, … | ANN | NA | NA |
Prasad et al. | Color space transform, … | Gabor wavelet transform … | KNN | NA | 93 |
Es-saady et al. | Resizing, filtering … | Color features, … | Two SVMs | 284 captured … | 87.80 |
Padol et al. | Resizing, Gaussian … | Shape features, … | SVM | 137 captured … | 88.89 |
Padol et al. | Resizing, Gaussian … | Shape features, … | SVM … | 137 captured … | 88.33 (SVM) |
Sabrol et al. | Otsu's segmentation … | Shape features, … | Decision … | 383 captured … | 97.3 |
Hlaing et al. | Color-thresholding-… | Color statistics features, … | SVM | 3474 images … | 84.7 |
Mishra et al. | Remove distortion, … | Texture features | MDC … | NA | 93.63 (MDC) |
Monzurul et al. | Masking, color … | GLCM, … | SVM | 300 … | 95 |
Prajapati et al. | Cropping, resizing, image … | Color features, … | SVM | 120 captured … | 88.57 |
Zhang et al. | Superpixel, … | Pyramid of … | SVM | 300 captured … | 51.83 |
Chuanlei et al. | Color transformation, … | Color features, … | SVM | 90 … | 90 |
Zhang et al. | Superpixel clustering, … | PHOG | SVM | 150 apple, … | 85.64 (apple) |
Bhimte et al. | Cropping, resizing, … | GLCM | SVM | 130 captured … | 98.46 |
Hlaing et al. | Color-thresholding-… | Color statistics features, … | SVM | 3535 images … | 85.1 |
Kaur et al. | Resize, color … | Color features, … | SVM | 4775 … | 90 |
In this table, NA denotes that the information is not available. Accuracy is defined as the ratio of the number of correct predictions to the total number of images in the dataset.
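GLCM texture features (contrast, angular second moment, entropy) recur throughout the table above. The following is a minimal numpy implementation for a single pixel offset, written out explicitly for illustration; the surveyed papers would typically use a library routine (e.g. scikit-image's gray-level co-occurrence functions) and average over several offsets and angles.

```python
import numpy as np

def glcm_features(gray, levels=8):
    """Contrast, angular second moment (energy) and entropy from a
    gray-level co-occurrence matrix with offset (0, 1), i.e. each
    pixel paired with its right-hand neighbour.

    gray: 2-D array of integer gray levels in [0, levels).
    """
    g = np.asarray(gray)
    glcm = np.zeros((levels, levels), dtype=float)
    for i, j in zip(g[:, :-1].ravel(), g[:, 1:].ravel()):
        glcm[i, j] += 1
    glcm /= glcm.sum()                      # normalize to probabilities
    idx = np.arange(levels)
    di, dj = np.meshgrid(idx, idx, indexing="ij")
    contrast = ((di - dj) ** 2 * glcm).sum()
    asm = (glcm ** 2).sum()                 # angular second moment
    nz = glcm[glcm > 0]
    entropy = -(nz * np.log2(nz)).sum()
    return contrast, asm, entropy

# A uniform patch has zero contrast and maximal energy...
flat = np.zeros((16, 16), dtype=int)
# ...while a checkerboard of levels 0 and 7 is high-contrast.
check = np.indices((16, 16)).sum(axis=0) % 2 * 7
print(glcm_features(flat))
print(glcm_features(check))
```

The resulting feature vector is what the SVM, ANN or MDC classifiers in the table consume, usually concatenated with the color and shape features listed alongside it.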
Summary of deep-learning-based implemented methods.
Author | Plant/Disease | Model | Dataset | Class | Accuracy |
---|---|---|---|---|---|
Mohanty et al. | Multiple | AlexNet, … | 54,306 images … | 38 | 99.35 |
Sladojevic et al. | Apple, … | Fine-tuned … | 4483 internet-… | 4 | 96.3 |
Nachtigall et al. | Apple | AlexNet | 1450 captured … | 6 | 97.3 |
Wang et al. | Apple | VGG16 with … | 2086 images … | 1 | 90.4 |
Fuentes et al. | Tomato | Faster R-CNN … | 5000 images … | 10 | 83 |
Durmus et al. | Tomato | AlexNet, … | Images of tomato … | 10 | 95.65 (AlexNet), … |
Lu et al. | Rice | Multistage CNN | 500 captured … | 10 | 95.48 |
Cruz et al. | Olive | LeNet hybridized … | 299 captured … | 3 | 98.60 |
DeChant et al. | Maize | Layers of CNN … | 1796 captured … | 2 | 96.7 |
Amara et al. | Banana | LeNet | 3700 captured … | 3 | 98.61 (color image), … |
Ramcharan et al. | Cassava | InceptionV3 … | 2756 … | 6 | 98 |
Ferentinos et al. | Multiple | AlexNetOWTBn, … | 87,848 … | 58 | 99.49 (AlexNet), … |
Atole et al. | Rice | AlexNet | 857 captured … | 3 | 91.23 |
Rangarajan et al. | Tomato | AlexNet, … | 13,262 images … | 6 | 97.29 (AlexNet), … |
Liu et al. | Apple | AlexNet with … | 13,689 captured … | 4 | 97.62 |
Ramcharan et al. | Cassava | MobileNet | 2415 … | 7 | 80.6 |
Adedoja et al. | Multiple | NASNet | 54,306 images … | 38 | 93.8 |
Turkoglu et al. | 8 different plant … | Different DL models … | 1965 captured … | 8 | 95.5 (AlexNet), … |
Ozguven et al. | Beet | Faster R-CNN | 155 captured … | 4 | 95.48 |
Gensheng et al. | Tea | Modified Cifar10 | 134 captured … | 4 | 92.5 |
Singh et al. | Mango | Multilayer CNN | 1070 captured … | 2 | 97.13 |
Elhassouny et al. | Tomato | MobileNet | 7176 images … | 10 | 90.3 |
Arora et al. | Maize | Deep Forest | 400 images | 4 | 96.25 |
Lee et al. | Multiple | VGG16, … | 54,306 images … | 38 | 99.09 (GoogLeNetBN), … |
Zeng et al. | Rice, … | SACNN | AES-CD9214, … | 6 | 95.33 (AES-CD9214), … |
Chen et al. | Rice, … | INC-VGGN | 500 rice images, … | 9 | 92.00 |
Li et al. | Cotton pest | CNN | NBAIR | 50 | 95.4 |
Sethy et al. | Rice | Different DL models … | 5932 | 4 | 98.38 (F1-score) |
Li et al. | Maize | Shallow CNN with … | 2000 images … | 4 | 94 |
Ahmad et al. | Tomato | VGG16, … | 2364 laboratory … | 6 | 93.40 (lab), … |
Bi et al. | Apple | MobileNet | 334 captured … | 2 | 73.50 |
Atila et al. | Multiple | EfficientNet | 55,448 images … | 39 | 99.91 |
Oyewola et al. | Cassava | DRNN | 5656 images of … | 5 | 96.75 |
Tuncer et al. | Multiple | Hybrid CNN | 50,136 images … | 30 | 99 |
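Several entries above (e.g. Sethy et al.) classify deep CNN features with a conventional SVM rather than a softmax output layer: a frozen pretrained backbone produces feature vectors, and only the SVM is trained. A sketch of that final stage with scikit-learn follows; the random vectors stand in for real penultimate-layer CNN activations, and the dimensions and class separation are illustrative assumptions, not values from any surveyed paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(42)

# Stand-ins for 2048-D ResNet50 penultimate-layer features of two
# disease classes; in practice these come from a frozen backbone.
n, d = 200, 2048
healthy = rng.normal(0.0, 1.0, (n, d))
diseased = rng.normal(0.5, 1.0, (n, d))   # shifted class mean
X = np.vstack([healthy, diseased])
y = np.array([0] * n + [1] * n)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# Only this linear SVM is trained; the CNN stays untouched.
clf = SVC(kernel="linear").fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"test accuracy: {acc:.2f}")
```

This split of labor is attractive when labeled images are scarce: training an SVM on a few hundred feature vectors is far cheaper than fine-tuning millions of CNN weights.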
Limitations of deep learning models for the identification of plant diseases.
Author | Large … | Large … | Accuracy … | Multiple … | Consider … | Train/Test … |
---|---|---|---|---|---|---|
Mohanty et al. | yes | yes | low | × | × | × |
Ferentinos et al. | yes | yes | low | × | × | × |
Liu et al. | × | × | × | × | × | × |
Amara et al. | × | × | × | × | × | × |
Fuentes et al. | × | × | yes | yes | yes | × |
Geetharamani et al. | yes | yes | × | × | yes | × |
Barbedo et al. | yes | yes | low | yes | × | × |
Cruz et al. | × | × | × | × | × | × |
Sladojevic et al. | × | yes | × | × | yes | × |
Brahimi et al. | yes | yes | × | × | yes | × |
Ozguven et al. | × | × | × | × | × | × |
Wang et al. | × | × | × | × | × | × |
Lee et al. | yes | yes | yes | × | yes | × |
DeChant et al. | × | × | × | × | × | × |
Ramcharan et al. | × | yes | yes | × | yes | yes |
Oyewola et al. | × | yes | × | × | yes | × |
Ramcharan et al. | × | yes | × | × | yes | × |
In this table, a dataset is considered large (yes) when it contains more than 1000 images per class.
Different DL models with respect to the number of layers, parameters and size.
Model | No. of Layers | Parameters (Million) | Size |
---|---|---|---|
LeNet | 5 | 0.06 | - |
AlexNet | 8 | 60 | 240 MB |
VGG16 | 23 | 138 | 528 MB |
VGG19 | 26 | 143 | 549 MB |
InceptionV1 | 27 | 7 | 51 MB |
InceptionV3 | 48 | 23.85 | 93 MB |
Xception | 126 | 22.91 | 88 MB |
ResNet50 | 50 | 23 | 98 MB |
ResNet101 | 101 | 50 | 171 MB |
ResNet152 | 152 | 44 | 232 MB |
InceptionResNetV2 | 572 | 55.87 | 215 MB |
DenseNet121 | 121 | 8.06 | 33 MB |
DenseNet201 | 201 | 20.24 | 80 MB |
NASNetMobile | - | 5.32 | 23 MB |
SqueezeNet | 69 | 1.23 | 5 MB |
ShuffleNet | - | 3.4 | - |
MobileNetV1 | 88 | 4.2 | 16 MB |
MobileNetV2 | 88 | 3.37 | 14 MB |
EfficientNet B0 | - | 5.33 | 29 MB |
EfficientNet B1 | - | 7.85 | 31 MB |
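The parameter counts above largely reflect each architecture's convolution style. MobileNet's small footprint, for instance, comes from replacing standard convolutions with depthwise-separable ones, and the saving is easy to verify by counting weights (bias terms omitted for simplicity; the layer dimensions below are a typical mid-network example, not taken from any specific model in the table):

```python
def conv_params(c_in, c_out, k):
    """Weights in a standard k x k convolution: every output channel
    filters all input channels."""
    return c_in * c_out * k * k

def separable_params(c_in, c_out, k):
    """Depthwise k x k convolution (one filter per input channel)
    followed by a 1 x 1 pointwise convolution, as in MobileNet."""
    return c_in * k * k + c_in * c_out

# Typical mid-network layer: 256 -> 256 channels, 3 x 3 kernel.
std = conv_params(256, 256, 3)       # 589,824 weights
sep = separable_params(256, 256, 3)  # 67,840 weights
print(std, sep, round(std / sep, 1))
```

Repeated across every layer, this roughly 8–9x per-layer reduction is what shrinks MobileNetV2 to 14 MB versus AlexNet's 240 MB, which is why the conclusion recommends MobileNetV2 and SqueezeNet for mobile deployment.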
References
1. Chouhan, S.S.; Kaul, A.; Singh, U.P.; Jain, S. Bacterial foraging optimization based Radial Basis Function Neural Network (BRBFNN) for identification and classification of plant leaf diseases: An automatic approach towards Plant Pathology. IEEE Access; 2018; 6, pp. 8852-8863. [DOI: https://dx.doi.org/10.1109/ACCESS.2018.2800685]
2. Ferentinos, K.P. Deep learning models for plant disease detection and diagnosis. Comput. Electron. Agric.; 2018; 145, pp. 311-318. [DOI: https://dx.doi.org/10.1016/j.compag.2018.01.009]
3. Bharate, A.A.; Shirdhonkar, M. A review on plant disease detection using image processing. Proceedings of the 2017 International Conference on Intelligent Sustainable Systems (ICISS); Palladam, India, 7–8 December 2017; pp. 103-109.
4. Das, R.; Pooja, V.; Kanchana, V. Detection of diseases on visible part of plant—A review. Proceedings of the 2017 IEEE Technological Innovations in ICT for Agriculture and Rural Development (TIAR); Chennai, India, 7–8 April 2017; pp. 42-45.
5. Bock, C.; Poole, G.; Parker, P.; Gottwald, T. Plant disease severity estimated visually, by digital photography and image analysis, and by hyperspectral imaging. Crit. Rev. Plant Sci.; 2010; 29, pp. 59-107. [DOI: https://dx.doi.org/10.1080/07352681003617285]
6. Tlhobogang, B.; Wannous, M. Design of plant disease detection system: A transfer learning approach work in progress. Proceedings of the 2018 IEEE International Conference on Applied System Invention (ICASI); Chiba, Japan, 13–17 April 2018; pp. 158-161.
7. Mutka, A.M.; Bart, R.S. Image-based phenotyping of plant disease symptoms. Front. Plant Sci.; 2015; 5, 734. [DOI: https://dx.doi.org/10.3389/fpls.2014.00734] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/25601871]
8. Nutter, F.W.; Esker, P.D.; Netto, R.A.C. Disease assessment concepts and the advancements made in improving the accuracy and precision of plant disease data. Eur. J. Plant Pathol.; 2006; 115, pp. 95-103. [DOI: https://dx.doi.org/10.1007/s10658-005-1230-z]
9. Munyaneza, J.E.; Crosslin, J.M.; Buchman, J.L.; Sengoda, V.G. Susceptibility of different potato plant growth stages to purple top disease. Am. J. Potato Res.; 2010; 87, pp. 60-66. [DOI: https://dx.doi.org/10.1007/s12230-009-9117-8]
10. Díaz-Pendón, J.A.; Cañizares, M.C.; Moriones, E.; Bejarano, E.R.; Czosnek, H.; Navas-Castillo, J. Tomato yellow leaf curl viruses: Ménage à trois between the virus complex, the plant and the whitefly vector. Mol. Plant Pathol.; 2010; 11, pp. 441-450. [DOI: https://dx.doi.org/10.1111/j.1364-3703.2010.00618.x]
11. Shah, J.P.; Prajapati, H.B.; Dabhi, V.K. A survey on detection and classification of rice plant diseases. Proceedings of the 2016 IEEE International Conference on Current Trends in Advanced Computing (ICCTAC); Bangalore, India, 10–11 March 2016; pp. 1-8.
12. Prajapati, B.S.; Dabhi, V.K.; Prajapati, H.B. A survey on detection and classification of cotton leaf diseases. Proceedings of the 2016 International Conference on Electrical, Electronics, and Optimization Techniques (ICEEOT); Chennai, India, 3–5 March 2016; pp. 2499-2506.
13. Iqbal, Z.; Khan, M.A.; Sharif, M.; Shah, J.H.; ur Rehman, M.H.; Javed, K. An automated detection and classification of citrus plant diseases using image processing techniques: A review. Comput. Electron. Agric.; 2018; 153, pp. 12-32. [DOI: https://dx.doi.org/10.1016/j.compag.2018.07.032]
14. Kaur, S.; Pandey, S.; Goel, S. Plants disease identification and classification through leaf images: A survey. Arch. Comput. Methods Eng.; 2019; 26, pp. 507-530. [DOI: https://dx.doi.org/10.1007/s11831-018-9255-6]
15. Lee, S.H.; Chan, C.S.; Wilkin, P.; Remagnino, P. Deep-plant: Plant identification with convolutional neural networks. Proceedings of the 2015 IEEE International Conference on Image Processing (ICIP); Quebec City, QC, Canada, 27–30 September 2015; pp. 452-456.
16. Cireşan, D.C.; Meier, U.; Masci, J.; Gambardella, L.M.; Schmidhuber, J. Flexible, High Performance Convolutional Neural Networks for Image Classification. Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence; Barcelona, Spain, 16–22 July 2011; pp. 1237-1242. [DOI: https://dx.doi.org/10.5591/978-1-57735-516-8/IJCAI11-210]
17. Hinton, G.; Deng, L.; Yu, D.; Dahl, G.E.; Mohamed, A.; Jaitly, N.; Senior, A.; Vanhoucke, V.; Nguyen, P.; Sainath, T.N. et al. Deep Neural Networks for Acoustic Modeling in Speech Recognition: The Shared Views of Four Research Groups. IEEE Signal Process. Mag.; 2012; 29, pp. 82-97. [DOI: https://dx.doi.org/10.1109/MSP.2012.2205597]
18. Wen, T.; Zhang, Z. Deep convolution neural network and autoencoders-based unsupervised feature learning of EEG signals. IEEE Access; 2018; 6, pp. 25399-25410. [DOI: https://dx.doi.org/10.1109/ACCESS.2018.2833746]
19. Carranza-Rojas, J.; Goeau, H.; Bonnet, P.; Mata-Montero, E.; Joly, A. Going deeper in the automated identification of Herbarium specimens. BMC Evol. Biol.; 2017; 17, 181. [DOI: https://dx.doi.org/10.1186/s12862-017-1014-z] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28797242]
20. Yang, X.; Guo, T. Machine learning in plant disease research. Eur. J. BioMed. Res.; 2017; 3, pp. 6-9. [DOI: https://dx.doi.org/10.18088/ejbmr.3.1.2017.pp6-9]
21. Nagaraju, M.; Chawla, P. Systematic review of deep learning techniques in plant disease detection. Int. J. Syst. Assur. Eng. Manag.; 2020; 11, pp. 547-560. [DOI: https://dx.doi.org/10.1007/s13198-020-00972-1]
22. Li, L.; Zhang, S.; Wang, B. Plant Disease Detection and Classification by Deep Learning—A Review. IEEE Access; 2021; 9, pp. 56683-56698. [DOI: https://dx.doi.org/10.1109/ACCESS.2021.3069646]
23. Hughes, D.; Salathé, M. An open access repository of images on plant health to enable the development of mobile disease diagnostics. arXiv; 2015; arXiv: 1511.08060
24. Ramcharan, A.; Baranowski, K.; McCloskey, P.; Ahmed, B.; Legg, J.; Hughes, D.P. Deep learning for image-based cassava disease detection. Front. Plant Sci.; 2017; 8, 1852. [DOI: https://dx.doi.org/10.3389/fpls.2017.01852]
25. Oyewola, D.O.; Dada, E.G.; Misra, S.; Damaševičius, R. Detecting cassava mosaic disease using a deep residual convolutional neural network with distinct block processing. PeerJ Comput. Sci.; 2021; 7, e352. [DOI: https://dx.doi.org/10.7717/peerj-cs.352]
26. Hops Disease Dataset. Available online: https://www.kaggle.com/scruggzilla/hops-classification (accessed on 17 January 2021).
27. Cotton Disease Dataset. Available online: https://www.kaggle.com/singhakash/cotton-disease-dataset (accessed on 22 January 2021).
28. Rice Disease Dataset. Available online: https://www.kaggle.com/shayanriyaz/riceleafs (accessed on 20 January 2021).
29. Sethy, P.K.; Barpanda, N.K.; Rath, A.K.; Behera, S.K. Deep feature based rice leaf disease identification using support vector machine. Comput. Electron. Agric.; 2020; 175, 105527. [DOI: https://dx.doi.org/10.1016/j.compag.2020.105527]
30. Hlaing, C.S.; Zaw, S.M.M. Model-based statistical features for mobile phone image of tomato plant disease classification. Proceedings of the 2017 18th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT); Taipei, Taiwan, 18–20 December 2017; pp. 223-229.
31. Camargo, A.; Smith, J. Image pattern classification for the identification of disease causing agents in plants. Comput. Electron. Agric.; 2009; 66, pp. 121-125. [DOI: https://dx.doi.org/10.1016/j.compag.2009.01.003]
32. Prasad, S.; Peddoju, S.K.; Ghosh, D. Multi-resolution mobile vision system for plant leaf disease diagnosis. Signal Image Video Process.; 2016; 10, pp. 379-388. [DOI: https://dx.doi.org/10.1007/s11760-015-0751-y]
33. Islam, M.; Dinh, A.; Wahid, K.; Bhowmik, P. Detection of potato diseases using image segmentation and multiclass support vector machine. Proceedings of the 2017 IEEE 30th Canadian Conference on Electrical and Computer Engineering (CCECE); Windsor, ON, Canada, 30 April–3 May 2017; pp. 1-4.
34. Chuanlei, Z.; Shanwen, Z.; Jucheng, Y.; Yancui, S.; Jia, C. Apple leaf disease identification using genetic algorithm and correlation based feature selection method. Int. J. Agric. Biol. Eng.; 2017; 10, pp. 74-83.
35. Anthonys, G.; Wickramarachchi, N. An image recognition system for crop disease identification of paddy fields in Sri Lanka. Proceedings of the 2009 International Conference on Industrial and Information Systems (ICIIS); Peradeniya, Sri Lanka, 28–31 December 2009; pp. 403-407.
36. Yao, Q.; Guan, Z.; Zhou, Y.; Tang, J.; Hu, Y.; Yang, B. Application of support vector machine for detecting rice diseases using shape and color texture features. Proceedings of the International Conference on Engineering Computation; Vancouver, BC, Canada, 29–31 August 2009; pp. 79-83.
37. Sabrol, H.; Satish, K. Tomato plant disease classification in digital images using classification tree. Proceedings of the 2016 International Conference on Communication and Signal Processing (ICCSP); Melmaruvathur, India, 6–8 April 2016; pp. 1242-1246.
38. Al Bashish, D.; Braik, M.; Bani-Ahmad, S. A framework for detection and classification of plant leaf and stem diseases. Proceedings of the 2010 International Conference on Signal and Image Processing (ICSIP); Chennai, India, 15–17 December 2010; pp. 113-118.
39. Singh, A.K.; Rubiya, A.; Raja, B. Classification of rice disease using digital image processing and svm classifier. Int. J. Electr. Electron. Eng.; 2015; 7, pp. 294-299.
40. Anand, R.; Veni, S.; Aravinth, J. An application of image processing techniques for detection of diseases on brinjal leaves using k-means clustering method. Proceedings of the 2016 International Conference on Recent Trends in Information Technology (ICRTIT); Chennai, India, 8–9 April 2016; pp. 1-6. [DOI: https://dx.doi.org/10.1109/ICRTIT.2016.7569531]
41. Padol, P.B.; Yadav, A.A. SVM classifier based grape leaf disease detection. Proceedings of the 2016 Conference on Advances in Signal Processing (CASP); Pune, India, 9–11 June 2016; pp. 175-179.
42. Padol, P.B.; Sawant, S. Fusion classification technique used to detect downy and Powdery Mildew grape leaf diseases. Proceedings of the 2016 International Conference on Global Trends in Signal Processing, Information Computing and Communication (ICGTSPICC); Jalgaon, India, 22–24 December 2016; pp. 298-301.
43. Pujari, J.D.; Yakkundimath, R.; Byadgi, A.S. Image processing based detection of fungal diseases in plants. Procedia Comput. Sci.; 2015; 46, pp. 1802-1808. [DOI: https://dx.doi.org/10.1016/j.procs.2015.02.137]
44. Kaur, S.; Pandey, S.; Goel, S. Semi-automatic leaf disease detection and classification system for soybean culture. IET Image Process.; 2018; 12, pp. 1038-1048. [DOI: https://dx.doi.org/10.1049/iet-ipr.2017.0822]
45. Chaudhary, P.; Chaudhari, A.K.; Cheeran, A.; Godara, S. Color transform based approach for disease spot detection on plant leaf. Int. J. Comput. Sci. Telecommun.; 2012; 3, pp. 65-70.
46. Singh, A.; Singh, M.L. Automated blast disease detection from paddy plant leaf—A color slicing approach. Proceedings of the 2018 7th International Conference on Industrial Technology and Management (ICITM); Oxford, UK, 7–9 March 2018; pp. 339-344.
47. El Sghair, M.; Jovanovic, R.; Tuba, M. An Algorithm for Plant Diseases Detection Based on Color Features. Int. J. Agric. Sci.; 2017; 2, pp. 1-6.
48. Husin, Z.B.; Shakaff, A.Y.B.M.; Aziz, A.H.B.A.; Farook, R.B.S.M. Feasibility study on plant chili disease detection using image processing techniques. Proceedings of the 2012 Third International Conference on Intelligent Systems Modelling and Simulation; Kota Kinabalu, Malaysia, 8–10 February 2012; pp. 291-296.
49. Pugoy, R.A.D.; Mariano, V.Y. Automated rice leaf disease detection using color image analysis. Proceedings of the Third International Conference on Digital Image Processing (ICDIP 2011), International Society for Optics and Photonics; Chengdu, China, 15–17 April 2011; Volume 8009, 80090F.
50. Majid, K.; Herdiyeni, Y.; Rauf, A. I-PEDIA: Mobile application for paddy disease identification using fuzzy entropy and probabilistic neural network. Proceedings of the 2013 International Conference on Advanced Computer Science and Information Systems (ICACSIS); Sanur Bali, Indonesia, 28–29 September 2013; pp. 403-406.
51. Shrivastava, V.K.; Pradhan, M.K. Rice plant disease classification using color features: A machine learning paradigm. J. Plant Pathol.; 2021; 103, pp. 17-26. [DOI: https://dx.doi.org/10.1007/s42161-020-00683-3]
52. Dey, A.K.; Sharma, M.; Meshram, M. Image processing based leaf rot disease, detection of betel vine (Piper betle L.). Procedia Comput. Sci.; 2016; 85, pp. 748-754. [DOI: https://dx.doi.org/10.1016/j.procs.2016.05.262]
53. Phadikar, S.; Sil, J.; Das, A.K. Rice diseases classification using feature selection and rule generation techniques. Comput. Electron. Agric.; 2013; 90, pp. 76-85. [DOI: https://dx.doi.org/10.1016/j.compag.2012.11.001]
54. Camargo, A.; Smith, J. An image-processing based algorithm to automatically identify plant disease visual symptoms. Biosyst. Eng.; 2009; 102, pp. 9-21. [DOI: https://dx.doi.org/10.1016/j.biosystemseng.2008.09.030]
55. Chaudhari, V.; Patil, C. Disease detection of cotton leaves using advanced image processing. Int. J. Adv. Comput. Res.; 2014; 4, 653.
56. Bhimte, N.R.; Thool, V. Diseases Detection of Cotton Leaf Spot using Image Processing and SVM Classifier. Proceedings of the 2018 Second International Conference on Intelligent Computing and Control Systems (ICICCS); Madurai, India, 14–15 June 2018; pp. 340-344.
57. Wang, H.; Li, G.; Ma, Z.; Li, X. Image recognition of plant diseases based on backpropagation networks. Proceedings of the 2012 5th International Congress on Image and Signal Processing; Agadir, Morocco, 28–30 June 2012; pp. 894-900.
58. Es-saady, Y.; El Massi, I.; El Yassa, M.; Mammass, D.; Benazoun, A. Automatic recognition of plant leaves diseases based on serial combination of two SVM classifiers. Proceedings of the 2016 International Conference on Electrical and Information Technologies (ICEIT); Tangiers, Morocco, 4–7 May 2016; pp. 561-566.
59. Singh, V.; Misra, A. Detection of unhealthy region of plant leaves using image processing and genetic algorithm. Proceedings of the 2015 International Conference on Advances in Computer Engineering and Applications; Ghaziabad, India, 19–20 March 2015; pp. 1028-1032.
60. Vijai, S.; Misra, A. Detection of plant leaf diseases using image segmentation and soft computing techniques. Inf. Process. Agric.; 2017; 4, pp. 41-49.
61. Mokhtar, U.; Ali, M.A.; Hassanien, A.E.; Hefny, H. Identifying two of tomatoes leaf viruses using support vector machine. Information Systems Design and Intelligent Applications; Springer: Berlin, Germany, 2015; pp. 771-782.
62. Hlaing, C.S.; Zaw, S.M.M. Tomato plant diseases classification using statistical texture feature and color feature. Proceedings of the 2018 IEEE/ACIS 17th International Conference on Computer and Information Science (ICIS); Singapore, 6–8 June 2018; pp. 439-444.
63. Tian, Y.; Zhao, C.; Lu, S.; Guo, X. Multiple classifier combination for recognition of wheat leaf diseases. Intell. Autom. Soft Comput.; 2011; 17, pp. 519-529. [DOI: https://dx.doi.org/10.1080/10798587.2011.10643166]
64. Masazhar, A.N.I.; Kamal, M.M. Digital image processing technique for palm oil leaf disease detection using multiclass SVM classifier. Proceedings of the 2017 IEEE 4th International Conference on Smart Instrumentation, Measurement and Application (ICSIMA); Putrajaya, Malaysia, 28–30 November 2017; pp. 1-6.
65. Zhang, S.; Zhu, Y.; You, Z.; Wu, X. Fusion of superpixel, expectation maximization and PHOG for recognizing cucumber diseases. Comput. Electron. Agric.; 2017; 140, pp. 338-347. [DOI: https://dx.doi.org/10.1016/j.compag.2017.06.016]
66. Zhang, S.; Wang, H.; Huang, W.; You, Z. Plant diseased leaf segmentation and recognition by fusion of superpixel, K-means and PHOG. Optik; 2018; 157, pp. 866-872. [DOI: https://dx.doi.org/10.1016/j.ijleo.2017.11.190]
67. Zhang, S.; You, Z.; Wu, X. Plant disease leaf image segmentation based on superpixel clustering and EM algorithm. Neural Comput. Appl.; 2019; 31, pp. 1225-1232. [DOI: https://dx.doi.org/10.1007/s00521-017-3067-8]
68. Dandawate, Y.; Kokare, R. An automated approach for classification of plant diseases towards development of futuristic Decision Support System in Indian perspective. Proceedings of the 2015 International Conference on ADVANCES in computing, Communications and Informatics (ICACCI); Kochi, India, 10–13 August 2015; pp. 794-799.
69. Prajapati, H.B.; Shah, J.P.; Dabhi, V.K. Detection and classification of rice plant diseases. Intell. Decis. Technol.; 2017; 11, pp. 357-373. [DOI: https://dx.doi.org/10.3233/IDT-170301]
70. Arivazhagan, S.; Shebiah, R.N.; Ananthi, S.; Varthini, S.V. Detection of unhealthy region of plant leaves and classification of plant leaf diseases using texture features. Agric. Eng. Int. CIGR J.; 2013; 15, pp. 211-217.
71. Lee, S.H.; Chan, C.S.; Mayo, S.J.; Remagnino, P. How deep learning extracts and learns leaf features for plant classification. Pattern Recognit.; 2017; 71, pp. 1-13. [DOI: https://dx.doi.org/10.1016/j.patcog.2017.05.015]
72. Mohanty, S.P.; Hughes, D.P.; Salathé, M. Using deep learning for image-based plant disease detection. Front. Plant Sci.; 2016; 7, 1419. [DOI: https://dx.doi.org/10.3389/fpls.2016.01419] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/27713752]
73. Fuentes, A.; Yoon, S.; Kim, S.; Park, D. A robust deep-learning-based detector for real-time tomato plant diseases and pests recognition. Sensors; 2017; 17, 2022. [DOI: https://dx.doi.org/10.3390/s17092022] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28869539]
74. Chen, J.; Chen, J.; Zhang, D.; Sun, Y.; Nanehkaran, Y.A. Using deep transfer learning for image-based plant disease identification. Comput. Electron. Agric.; 2020; 173, 105393. [DOI: https://dx.doi.org/10.1016/j.compag.2020.105393]
75. Li, Y.; Yang, J. Few-shot cotton pest recognition and terminal realization. Comput. Electron. Agric.; 2020; 169, 105240. [DOI: https://dx.doi.org/10.1016/j.compag.2020.105240]
76. Ren, F.; Liu, W.; Wu, G. Feature reuse residual networks for insect pest recognition. IEEE Access; 2019; 7, pp. 122758-122768. [DOI: https://dx.doi.org/10.1109/ACCESS.2019.2938194]
77. Farjon, G.; Krikeb, O.; Hillel, A.B.; Alchanatis, V. Detection and counting of flowers on apple trees for better chemical thinning decisions. Precis. Agric.; 2019; 21, pp. 503-521. [DOI: https://dx.doi.org/10.1007/s11119-019-09679-1]
78. Bresilla, K.; Perulli, G.D.; Boini, A.; Morandi, B.; Corelli Grappadelli, L.; Manfrini, L. Single-shot convolution neural networks for real-time fruit detection within the tree. Front. Plant Sci.; 2019; 10, 611. [DOI: https://dx.doi.org/10.3389/fpls.2019.00611]
79. Trong, V.H.; Gwang-hyun, Y.; Vu, D.T.; Jin-young, K. Late fusion of multimodal deep neural networks for weeds classification. Comput. Electron. Agric.; 2020; 175, 105506. [DOI: https://dx.doi.org/10.1016/j.compag.2020.105506]
80. Kawasaki, Y.; Uga, H.; Kagiwada, S.; Iyatomi, H. Basic study of automated diagnosis of viral plant diseases using convolutional neural networks. International Symposium on Visual Computing; Springer: Berlin, Germany, 2015; pp. 638-645.
81. Fujita, E.; Kawasaki, Y.; Uga, H.; Kagiwada, S.; Iyatomi, H. Basic investigation on a robust and practical plant diagnostic system. Proceedings of the 2016 15th IEEE International Conference on Machine Learning and Applications (ICMLA); Anaheim, CA, USA, 18–20 December 2016; pp. 989-992.
82. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv; 2014; arXiv: 1409.1556
83. Jia, Y.; Shelhamer, E.; Donahue, J.; Karayev, S.; Long, J.; Girshick, R.; Guadarrama, S.; Darrell, T. Caffe: Convolutional architecture for fast feature embedding. Proceedings of the 22nd ACM International Conference on Multimedia; Orlando, FL, USA, 3–7 November 2014; pp. 675-678.
84. Sladojevic, S.; Arsenovic, M.; Anderla, A.; Culibrk, D.; Stefanovic, D. Deep neural networks based recognition of plant diseases by leaf image classification. Comput. Intell. Neurosci.; 2016; 2016, 3289801. [DOI: https://dx.doi.org/10.1155/2016/3289801]
85. Rangarajan, A.K.; Purushothaman, R.; Ramesh, A. Tomato crop disease classification using pre-trained deep learning algorithm. Procedia Comput. Sci.; 2018; 133, pp. 1040-1047. [DOI: https://dx.doi.org/10.1016/j.procs.2018.07.070]
86. Nachtigall, L.G.; Araujo, R.M.; Nachtigall, G.R. Classification of apple tree disorders using convolutional neural networks. Proceedings of the 2016 IEEE 28th International Conference on Tools with Artificial Intelligence (ICTAI); San Jose, CA, USA, 6–8 November 2016; pp. 472-476.
87. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst.; 2012; 25, pp. 1097-1105. [DOI: https://dx.doi.org/10.1145/3065386]
88. Hara, Y.; Atkins, R.G.; Yueh, S.H.; Shin, R.T.; Kong, J.A. Application of neural networks to radar image classification. IEEE Trans. Geosci. Remote Sens.; 1994; 32, pp. 100-109. [DOI: https://dx.doi.org/10.1109/36.285193]
89. LeCun, Y.; Boser, B.; Denker, J.S.; Henderson, D.; Howard, R.E.; Hubbard, W.; Jackel, L.D. Backpropagation applied to handwritten zip code recognition. Neural Comput.; 1989; 1, pp. 541-551. [DOI: https://dx.doi.org/10.1162/neco.1989.1.4.541]
90. Amara, J.; Bouaziz, B.; Algergawy, A. A deep learning-based approach for banana leaf diseases classification. Proceedings of the Datenbanksysteme für Business, Technologie und Web (BTW 2017)-Workshopband; Stuttgart, Germany, 6–10 March 2017.
91. Hu, M.K. Visual pattern recognition by moment invariants. IRE Trans. Inf. Theory; 1962; 8, pp. 179-187.
92. Zhenjiang, M. Zernike moment-based image shape analysis and its application. Pattern Recognit. Lett.; 2000; 21, pp. 169-177. [DOI: https://dx.doi.org/10.1016/S0167-8655(99)00144-0]
93. Cruz, A.C.; Luvisi, A.; De Bellis, L.; Ampatzidis, Y. X-FIDO: An effective application for detecting olive quick decline syndrome with deep learning and data fusion. Front. Plant Sci.; 2017; 8, 1741. [DOI: https://dx.doi.org/10.3389/fpls.2017.01741]
94. Atole, R.R.; Park, D. A multiclass deep convolutional neural network classifier for detection of common rice plant anomalies. Int. J. Adv. Comput. Sci. Appl.; 2018; 9, pp. 67-70.
95. Rangarajan, A.K.; Purushothaman, R. Disease classification in eggplant using pre-trained vgg16 and msvm. Sci. Rep.; 2020; 10, pp. 1-11.
96. Rangarajan Aravind, K.; Raja, P. Automated disease classification in (Selected) agricultural crops using transfer learning. Automatika Časopis za Automatiku Mjerenje Elektroniku Računarstvo i Komunikacije; 2020; 61, pp. 260-272. [DOI: https://dx.doi.org/10.1080/00051144.2020.1728911]
97. Lee, S.H.; Goëau, H.; Bonnet, P.; Joly, A. New perspectives on plant disease characterization based on deep learning. Comput. Electron. Agric.; 2020; 170, 105220. [DOI: https://dx.doi.org/10.1016/j.compag.2020.105220]
98. Picon, A.; Alvarez-Gila, A.; Seitz, M.; Ortiz-Barredo, A.; Echazarra, J.; Johannes, A. Deep convolutional neural networks for mobile capture device-based crop disease classification in the wild. Comput. Electron. Agric.; 2019; 161, pp. 280-290. [DOI: https://dx.doi.org/10.1016/j.compag.2018.04.002]
99. Johannes, A.; Picon, A.; Alvarez-Gila, A.; Echazarra, J.; Rodriguez-Vaamonde, S.; Navajas, A.D.; Ortiz-Barredo, A. Automatic plant disease diagnosis using mobile capture devices, applied on a wheat use case. Comput. Electron. Agric.; 2017; 138, pp. 200-209. [DOI: https://dx.doi.org/10.1016/j.compag.2017.04.013]
100. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; Las Vegas, NV, USA, 27–30 June 2016; pp. 770-778.
101. Ahmad, I.; Hamid, M.; Yousaf, S.; Shah, S.T.; Ahmad, M.O. Optimizing Pretrained Convolutional Neural Networks for Tomato Leaf Disease Detection. Complexity; 2020; 2020, 8812019. [DOI: https://dx.doi.org/10.1155/2020/8812019]
102. Elhassouny, A.; Smarandache, F. Smart mobile application to recognize tomato leaf diseases using Convolutional Neural Networks. Proceedings of the 2019 International Conference of Computer Science and Renewable Energies (ICCSRE); Agadir, Morocco, 22–24 July 2019; pp. 1-4.
103. Li, K.; Lin, J.; Liu, J.; Zhao, Y. Using deep learning for Image-Based different degrees of ginkgo leaf disease classification. Information; 2020; 11, 95. [DOI: https://dx.doi.org/10.3390/info11020095]
104. Ai, Y.; Sun, C.; Tie, J.; Cai, X. Research on Recognition Model of Crop Diseases and Insect Pests Based on Deep Learning in Harsh Environments. IEEE Access; 2020; 8, pp. 171686-171693. [DOI: https://dx.doi.org/10.1109/ACCESS.2020.3025325]
105. DeChant, C.; Wiesner-Hanks, T.; Chen, S.; Stewart, E.L.; Yosinski, J.; Gore, M.A.; Nelson, R.J.; Lipson, H. Automated identification of northern leaf blight-infected maize plants from field imagery using deep learning. Phytopathology; 2017; 107, pp. 1426-1432. [DOI: https://dx.doi.org/10.1094/PHYTO-11-16-0417-R]
106. Ozguven, M.M.; Adem, K. Automatic detection and classification of leaf spot disease in sugar beet using deep learning algorithms. Phys. A Stat. Mech. Its Appl.; 2019; 535, 122537. [DOI: https://dx.doi.org/10.1016/j.physa.2019.122537]
107. Oppenheim, D.; Shani, G. Potato disease classification using convolution neural networks. Adv. Anim. Biosci.; 2017; 8, 244. [DOI: https://dx.doi.org/10.1017/S2040470017001376]
108. Geetharamani, G.; Pandian, A. Identification of plant leaf diseases using a nine-layer deep convolutional neural network. Comput. Electr. Eng.; 2019; 76, pp. 323-338.
109. Wang, G.; Sun, Y.; Wang, J. Automatic image-based plant disease severity estimation using deep learning. Comput. Intell. Neurosci.; 2017; 2017, 2917536. [DOI: https://dx.doi.org/10.1155/2017/2917536] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28757863]
110. Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Fei-Fei, L. ImageNet: A large-scale hierarchical image database. Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition; Miami, FL, USA, 20–25 June 2009; pp. 248-255.
111. Zhang, X.; Qiao, Y.; Meng, F.; Fan, C.; Zhang, M. Identification of maize leaf diseases using improved deep convolutional neural networks. IEEE Access; 2018; 6, pp. 30370-30377. [DOI: https://dx.doi.org/10.1109/ACCESS.2018.2844405]
112. Salimans, T.; Goodfellow, I.; Zaremba, W.; Cheung, V.; Radford, A.; Chen, X. Improved techniques for training gans. arXiv; 2016; arXiv: 1606.03498
113. Liu, B.; Zhang, Y.; He, D.; Li, Y. Identification of apple leaf diseases based on deep convolutional neural networks. Symmetry; 2018; 10, 11. [DOI: https://dx.doi.org/10.3390/sym10010011]
114. Ramcharan, A.; McCloskey, P.; Baranowski, K.; Mbilinyi, N.; Mrisho, L.; Ndalahwa, M.; Legg, J.; Hughes, D.P. A mobile-based deep learning model for cassava disease diagnosis. Front. Plant Sci.; 2019; 10, 272. [DOI: https://dx.doi.org/10.3389/fpls.2019.00272] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/30949185]
115. Lin, T.Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; Zitnick, C.L. Microsoft coco: Common objects in context. European Conference on Computer Vision; Springer: Berlin, Germany, 2014; pp. 740-755.
116. Toda, Y.; Okura, F. How convolutional neural networks diagnose plant disease. Plant Phenomics; 2019; 2019, 9237136. [DOI: https://dx.doi.org/10.34133/2019/9237136] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/33313540]
117. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the inception architecture for computer vision. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; Las Vegas, NV, USA, 27–30 June 2016; pp. 2818-2826.
118. Lu, Y.; Yi, S.; Zeng, N.; Liu, Y.; Zhang, Y. Identification of rice diseases using deep convolutional neural networks. Neurocomputing; 2017; 267, pp. 378-384. [DOI: https://dx.doi.org/10.1016/j.neucom.2017.06.023]
119. Iandola, F.N.; Han, S.; Moskewicz, M.W.; Ashraf, K.; Dally, W.J.; Keutzer, K. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size. arXiv; 2016; arXiv: 1602.07360
120. Durmuş, H.; Güneş, E.O.; Kırcı, M. Disease detection on the leaves of the tomato plants by using deep learning. Proceedings of the 2017 6th International Conference on Agro-Geoinformatics; Fairfax, VA, USA, 7–10 August 2017; pp. 1-5.
121. Hu, G.; Yang, X.; Zhang, Y.; Wan, M. Identification of tea leaf diseases by using an improved deep convolutional neural network. Sustain. Comput. Inform. Syst.; 2019; 24, 100353. [DOI: https://dx.doi.org/10.1016/j.suscom.2019.100353]
122. LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE; 1998; 86, pp. 2278-2324. [DOI: https://dx.doi.org/10.1109/5.726791]
123. Bi, C.; Wang, J.; Duan, Y.; Fu, B.; Kang, J.R.; Shi, Y. Mobilenet based apple leaf diseases identification. Mob. Netw. Appl.; 2020; 27, pp. 172-180. [DOI: https://dx.doi.org/10.1007/s11036-020-01640-1]
124. Atila, Ü.; Uçar, M.; Akyol, K.; Uçar, E. Plant leaf disease classification using EfficientNet deep learning model. Ecol. Inform.; 2021; 61, 101182. [DOI: https://dx.doi.org/10.1016/j.ecoinf.2020.101182]
125. Tuncer, A. Cost-optimized hybrid convolutional neural networks for detection of plant leaf diseases. J. Ambient. Intell. Humaniz. Comput.; 2021; 12, pp. 8625-8636. [DOI: https://dx.doi.org/10.1007/s12652-021-03289-4]
126. Chohan, M.; Khan, A.; Chohan, R.; Katpar, S.H.; Mahar, M.S. Plant Disease Detection using Deep Learning. Int. J. Recent Technol. Eng.; 2020; 9, pp. 909-914. [DOI: https://dx.doi.org/10.35940/ijrte.A2139.059120]
127. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; Boston, MA, USA, 7–12 June 2015; pp. 1-9.
128. Jadhav, S.B.; Udupi, V.R.; Patil, S.B. Identification of plant diseases using convolutional neural networks. Int. J. Inf. Technol.; 2020; 13, pp. 2461-2470. [DOI: https://dx.doi.org/10.1007/s41870-020-00437-5]
129. Arora, J.; Agrawal, U.; Sharma, P. Classification of Maize leaf diseases from healthy leaves using Deep Forest. J. Artif. Intell. Syst.; 2020; 2, pp. 14-26. [DOI: https://dx.doi.org/10.33969/AIS.2020.21002]
130. Sibiya, M.; Sumbwanyambe, M. A computational procedure for the recognition and classification of maize leaf diseases out of healthy leaves using convolutional neural networks. AgriEngineering; 2019; 1, pp. 119-131. [DOI: https://dx.doi.org/10.3390/agriengineering1010009]
131. Singh, U.P.; Chouhan, S.S.; Jain, S.; Jain, S. Multilayer convolution neural network for the classification of mango leaves infected by anthracnose disease. IEEE Access; 2019; 7, pp. 43721-43729. [DOI: https://dx.doi.org/10.1109/ACCESS.2019.2907383]
132. Türkoğlu, M.; Hanbay, D. Plant disease and pest detection using deep learning-based features. Turk. J. Electr. Eng. Comput. Sci.; 2019; 27, pp. 1636-1651. [DOI: https://dx.doi.org/10.3906/elk-1809-181]
133. Adedoja, A.; Owolawi, P.A.; Mapayi, T. Deep learning based on nasnet for plant disease recognition using leave images. Proceedings of the 2019 International Conference on Advances in Big Data, Computing and Data Communication Systems (icABCD); Winterton, South Africa, 5–6 August 2019; pp. 1-5.
134. Li, Y.; Nie, J.; Chao, X. Do we really need deep CNN for plant diseases identification?. Comput. Electron. Agric.; 2020; 178, 105803. [DOI: https://dx.doi.org/10.1016/j.compag.2020.105803]
135. Zeng, W.; Li, M. Crop leaf disease recognition based on Self-Attention convolutional neural network. Comput. Electron. Agric.; 2020; 172, 105341. [DOI: https://dx.doi.org/10.1016/j.compag.2020.105341]
136. Liu, T.; Chen, W.; Wu, W.; Sun, C.; Guo, W.; Zhu, X. Detection of aphids in wheat fields using a computer vision technique. Biosyst. Eng.; 2016; 141, pp. 82-93. [DOI: https://dx.doi.org/10.1016/j.biosystemseng.2015.11.005]
137. Barbedo, J.G.A.; Koenigkan, L.V.; Halfeld-Vieira, B.A.; Costa, R.V.; Nechet, K.L.; Godoy, C.V.; Junior, M.L.; Patricio, F.R.A.; Talamini, V.; Chitarra, L.G. et al. Annotated plant pathology databases for image-based detection and recognition of diseases. IEEE Lat. Am. Trans.; 2018; 16, pp. 1749-1757. [DOI: https://dx.doi.org/10.1109/TLA.2018.8444395]
138. Brahimi, M.; Arsenovic, M.; Laraba, S.; Sladojevic, S.; Boukhalfa, K.; Moussaoui, A. Deep learning for plant diseases: Detection and saliency map visualisation. Human and Machine Learning; Springer: Berlin, Germany, 2018; pp. 93-117.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
Early detection and identification of plant diseases from leaf images using machine learning is an important and challenging research area in the field of agriculture. Such research is particularly needed in India, where agriculture is one of the main sources of income and contributes seventeen percent of the total gross domestic product (GDP). Effective and improved crop products can increase the farmer's profit as well as the economy of the country. In this paper, a comprehensive review of the different research works carried out in the field of plant disease detection, using both state-of-the-art handcrafted-features-based and deep-learning-based techniques, is presented. We address the challenges faced in the identification of plant diseases using handcrafted-features-based approaches and show how deep-learning-based approaches overcome these challenges. This survey traces the research progression in the identification of plant diseases from handcrafted-features-based to deep-learning-based models. We report that deep-learning-based approaches achieve significant accuracy rates on a particular dataset, but the performance of a model may decrease significantly when the system is tested on field images or on different datasets. Among the deep learning models, architectures with inception layers, such as GoogLeNet and InceptionV3, have a better ability to extract features and produce higher performance results. We also address some of the challenges that still need to be solved to identify plant diseases effectively.
Details
1 Department of Computer Science and Engineering, Gandhi Institute of Technology and Management, Bengaluru 561203, Karnataka, India
2 Department of Information Technology, North Eastern Hill University, Shillong 793022, Meghalaya, India
3 Department of Electrical Engineering Fundamentals, Wrocław University of Science and Technology, 50-370 Wrocław, Poland
4 Department of Operations Research and Business Intelligence, Wrocław University of Science and Technology, 50-370 Wrocław, Poland
5 Department of General Electrical Engineering, Faculty of Electrical Engineering and Computer Science, VSB—Technical University of Ostrava, 708 33 Ostrava, Czech Republic