1. Introduction
Plant diseases and pests represent longstanding challenges in agricultural production, causing significant impacts on crop yields, product quality, and ecological environments [1]. Global climate change and globalization have altered the patterns of plant disease and pest occurrences, intensifying their harmful effects and underscoring the increasing importance of effective management strategies [2,3]. Traditionally, people have primarily relied on manual inspections and experiential judgment to monitor and control plant diseases and pests [4,5]. While manual pest scouting remains widely used due to its accessibility, it is often hindered by subjectivity, variability in accuracy, and inefficiency, particularly in large-scale agricultural contexts. In contrast, deep learning models have demonstrated superior performance, offering higher accuracy, consistency, and scalability in pest and disease detection. These advantages highlight the necessity of exploring and synthesizing research on deep learning applications in this field, addressing critical gaps in traditional methods and providing a foundation for future advancements.
In recent years, artificial intelligence technology has developed rapidly, achieving significant advancements in fields such as Natural Language Processing (NLP), image processing, and decision analysis [6,7,8,9,10,11]. Certain deep learning models have demonstrated accuracy in semantic segmentation and object detection tasks that rival human-level performance [12,13,14]. Some of these models have been successfully applied in plant pest and disease recognition. The Convolutional Neural Network (CNN), leveraging mechanisms such as hierarchical feature learning, local perception, and weight sharing, effectively extracts complex and discriminative features from raw images, enabling the identification and segmentation of plant pests and diseases [15]. Building upon the CNN, models such as VGGNet and InceptionNet have been developed by enhancing network depth and convolutional kernel size. VGGNet, with its deep network structure, achieves high accuracy in classifying pest and disease images [16], while InceptionNet introduces convolutional kernels of different sizes to simultaneously capture local details and global features at the same level, making it particularly effective for handling multi-scale pest and disease morphologies, such as leaf spots and lesion boundaries [17]. Deep Belief Networks (DBNs) and Multilayer Perceptron (MLP) further improve detection accuracy through efficient feature learning and classification. DBNs, trained layer by layer, learn the latent structural features of input data and are particularly effective in processing high-dimensional remote sensing images by reducing dimensionality and extracting the most distinctive features [18]. MLP classifies these high-level features to achieve accurate recognition of different types of plant pests and diseases [19]. In addition to traditional models such as CNNs, VGGNet, InceptionNet, and DBNs, various improved models have also demonstrated strong performance in plant pest and disease detection. 
Convolutional Correlation Neural Networks (CCNNs) combine convolutional operations with correlation learning mechanisms, enabling the extraction of dynamic features associated with pests and diseases in more complex environments [20]. Adaptive Discriminative Deep Belief Networks (ADDBNs) dynamically adjust network structures, allowing them to flexibly adapt to different datasets and task requirements, thereby enhancing detection accuracy and robustness [21]. The integration and innovation of deep learning models have significantly advanced the automation and intelligence of plant pest and disease detection. However, the integration of deep learning technologies in this field still faces challenges. Training deep learning models requires large-scale annotated datasets and substantial computational resources. Additionally, the interpretability and generalization capabilities of these models need further investigation and improvement to enhance their adaptability and reliability in complex natural environments. Therefore, deep learning-based methods for plant pest and disease detection hold substantial academic research value.
Given the lack of a comprehensive and detailed discussion on deep learning-based methods for plant pest and disease detection, this study reviews and synthesizes relevant literature, with a particular emphasis on research advancements over recent years. The primary objective is to provide a thorough overview of the applications of deep learning technologies in the field of plant pest and disease detection.
The main contents of this study are as follows: firstly, an introduction to the datasets used in this field, along with an overview of traditional methods for plant pest and disease detection and their associated limitations. Subsequently, the study delves into the applications of remote sensing data in plant pest and disease research, the advantages of deep learning in image recognition and classification, and the synergistic effects arising from the integration of these two approaches. Finally, the study discusses the potential applications and challenges of deep learning in the domain of plant pest and disease detection, providing insights and guidance to drive innovation in monitoring and control technologies for plant pests and diseases. By integrating remote sensing technology and deep learning methods, we aim to achieve more accurate and efficient outcomes in the field of plant disease and pest management, further promoting sustainable agricultural development and environmental protection.
2. Method
To ensure the systematic and transparent nature of this review, a rigorous literature selection and data extraction process was followed. Clear inclusion and exclusion criteria were established. The review focused on studies related to the identification, classification, detection, or monitoring of plant diseases and pests, employing deep learning techniques. Eligible studies included experimental research, case analyses, and review articles. Articles were excluded if they did not relate to plant disease or pest management, did not employ deep learning methods, or were opinion pieces or unsupported reviews without empirical data.
The literature search was conducted using a systematic and reproducible strategy. We performed searches in the China National Knowledge Infrastructure (CNKI) and Google Scholar databases, utilizing the following keywords: “plant disease and pest identification”, “plant disease and pest classification”, “plant disease and pest object detection”, “remote sensing large models”, and “transfer learning”. The search covered all relevant articles without time restrictions. Results were initially screened based on titles, keywords, abstracts, and methodologies, with irrelevant articles being excluded. After the screening process, a total of 182 eligible studies were included.
The literature screening and data extraction process was independently carried out by four authors. During the initial screening, we focused on evaluating the titles and abstracts to determine whether they met the inclusion criteria. For studies with unclear relevance, full-text articles were further reviewed. In the data extraction phase, the authors extracted relevant information from the selected studies based on predefined criteria, including research design, deep learning models used, datasets, experimental methods, results, and applications.
This review primarily focuses on the performance of deep learning models in plant disease and pest identification, classification, and detection, with an emphasis on metrics such as accuracy, recall, F1-score, computational efficiency (e.g., training and inference time), and overall model robustness.
As a literature review, this study did not conduct a formal risk of bias assessment for individual studies. However, throughout the literature selection and data extraction processes, we adhered to strict standards to ensure the quality and reliability of the included studies. To ensure the accuracy and scientific rigor of the results, multiple performance metrics were used for data synthesis and presentation, including identification accuracy (accuracy, recall, F1-score), computational efficiency (time consumption, resource usage), and comparisons across different deep learning models.
To further analyze the research trends and hotspots, we utilized VOSviewer for keyword visualization (Figure 1). VOSviewer is a freely available software tool that effectively maps relationships between documents and helps identify research trends. In this study, the keyword “deep learning” appeared 56 times with a link strength of 242, followed by “convolutional neural network,” which appeared 38 times with a link strength of 160. These visual analyses provided insights into the development of deep learning in the field of plant disease and pest detection.
Although formal meta-analysis was not conducted in this review, we examined the heterogeneity among the studies through VOSviewer and performed a keyword clustering analysis to identify different research directions within the field. Additionally, to ensure the robustness of the results, sensitivity analyses were performed by excluding low-quality studies and assessing the consistency of findings across the remaining research. We used the PRISMA checklist (Supplementary Materials) when writing our report [22].
3. Plant Pest and Disease Datasets and Traditional Detection Methods
To establish the foundation for the research, this paper first explores traditional plant disease and pest detection methods and commonly used datasets. By summarizing the applications and limitations of traditional methods, the review highlights the advantages of deep learning techniques in plant disease and pest detection. Datasets form the fundamental basis for training deep learning models and are therefore essential to applying deep learning technology to plant diseases and pests. Through these two parts, we aim to construct a clearer logical framework and foundation for the review. Figure 2 shows examples of common plant diseases and pest scenarios.
3.1. Introduction to Datasets
Deep learning-based detection of plant pests and diseases is typically trained on large numbers of images of healthy and infested plants, so detection is accomplished using image data. It is therefore vital to construct datasets suited to the field of plant diseases and pests. Datasets used in previous studies can be divided into three types [15,24,25,26,27,28,29,30]. The first category is open-source datasets, which usually contain large amounts of data spanning many image categories; ImageNet, for example, collects over 20,000 different categories of images. Other open-source datasets focus on a single category, such as the WDD2017 dataset, which contains only wheat images. The second type is self-built datasets. For instance, Lu et al. conducted classification using images of rice diseases captured in Heilongjiang [26]. The last type comprises non-open-source, academia-specific, and privacy-sensitive datasets. Additionally, hyperspectral plant data are not extensively employed for classification purposes. Table 1 presents information on the most commonly used datasets and their respective development teams.
Currently, the majority of datasets used for researching plant diseases and pests involve multiple steps, including data collection, image processing, annotation, and organization. Creators ensure the diversity of the dataset by collecting images of different types of plant diseases and pests, covering various plant species, different disease and pest types, different growth stages, and various environmental conditions. By adjusting image size and resolution, removing unnecessary backgrounds or interference, and standardizing image color and lighting conditions, each image is normalized. Annotations are applied to each image, accurately specifying information such as plant species, disease or pest type, and the severity of the condition. Finally, the images and annotation information are integrated into a structured dataset, commonly stored and exchanged in formats such as CSV or JSON. As the dataset size increases, it may be necessary to partition it into training, validation, and test sets for training and evaluating deep learning models. It is crucial to provide clear and comprehensive documentation of the dataset, including information on data sources, collection dates, image descriptions, and annotation guidelines. This facilitates understanding, usage, sharing, and communication among researchers. Additionally, data augmentation techniques, such as rotation, translation, flipping, and cropping, can be applied to enhance the dataset and improve the model’s generalization capabilities [39,40,41].
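The augmentation step above can be sketched in plain NumPy. This is a minimal illustration only; real pipelines typically use libraries such as torchvision or albumentations, and the 80% crop ratio here is an arbitrary choice for demonstration:

```python
import numpy as np

def augment(image, rng):
    """Return a list of augmented copies of an H x W x C image array.

    Sketches the augmentations mentioned above: 90-degree rotations,
    horizontal/vertical flips, and a random crop.
    """
    augmented = []
    # 90-degree rotations in the H/W plane
    for k in (1, 2, 3):
        augmented.append(np.rot90(image, k, axes=(0, 1)))
    augmented.append(image[:, ::-1])   # horizontal flip
    augmented.append(image[::-1, :])   # vertical flip
    # random crop to 80% of each side (resizing back is left to the caller)
    h, w = image.shape[:2]
    ch, cw = int(h * 0.8), int(w * 0.8)
    top = rng.integers(0, h - ch + 1)
    left = rng.integers(0, w - cw + 1)
    augmented.append(image[top:top + ch, left:left + cw])
    return augmented

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(100, 120, 3), dtype=np.uint8)
out = augment(img, rng)
```

Each source image here yields six variants; combined with translations and photometric jitter, this can expand a small field-collected dataset severalfold.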
3.2. Traditional Methods for Plant Disease and Pest Image Recognition
In the initial stages of plant disease and pest image recognition, researchers primarily employed traditional machine learning methods. The recognition process typically involves two main stages: training and testing. Traditional approaches rely on manually extracted image features and classical machine learning algorithms, necessitating the design of imaging schemes based on the distinct characteristics of plant diseases and pests. Conventional image processing algorithms or manually selected features combined with classifiers are used for recognition [42].
During the training phase, images undergo preprocessing, which includes noise reduction, image enhancement, and other steps. Some researchers who focus on the characteristics of plant disease and pest images start by performing color conversion on RGB images. Then, they adjust the color segmentation threshold to discard some green pixels, reducing background interference and enhancing disease and pest recognition capabilities. Through image preprocessing operations, the effectiveness and accuracy of plant disease and pest recognition and classification can be effectively improved [43]. Once preprocessing is complete, segmentation operations are applied to separate the parts of the image containing diseases and pests. These operations may be carried out using threshold segmentation or clustering segmentation. Segmentation also facilitates the extraction of features required by subsequent classifiers [44]. Common features used in the field of plant diseases and pests include shape, color, and texture [45], among others. These features can be calculated and extracted using various image processing algorithms and statistical methods. For example, Padol and Yadav initially used K-means clustering segmentation to obtain the diseased area and extracted the color and texture features of the lesions. They then used a support vector machine classifier to detect leaf disease types, achieving an accuracy of 88.89% on 137 images [46]. Some researchers have also classified infected and healthy citrus leaves based on the HIS (Hue, Intensity, and Saturation) color model combined with the Color Co-occurrence Matrix (CCM) method and statistical classification algorithms [47].
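The green-pixel-discarding step described above can be illustrated with a simple channel-comparison rule. This is a hypothetical sketch: the cited studies work in converted color spaces with tuned thresholds, whereas the `margin` parameter here is an invented value for demonstration:

```python
import numpy as np

def suppress_green(image, margin=20):
    """Zero out pixels whose green channel clearly dominates red and blue.

    `margin` is an illustrative tuning parameter, not a value from the
    cited studies; healthy leaf tissue tends to be green-dominant, so
    removing it leaves candidate lesion regions.
    """
    r = image[..., 0].astype(int)
    g = image[..., 1].astype(int)
    b = image[..., 2].astype(int)
    green_dominant = (g > r + margin) & (g > b + margin)
    out = image.copy()
    out[green_dominant] = 0
    return out
```

In practice this kind of mask is usually computed after converting to a hue-based color space (HSV or HIS), which is less sensitive to illumination than raw RGB comparisons.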
Also during the training phase, researchers need to build classifiers based on the extracted feature vectors. Common classifiers include Support Vector Machines (SVMs) and Artificial Neural Networks. These classifiers are trained and adjusted by learning feature patterns and class labels in the training dataset to distinguish disease and pest images from normal ones. Research on traditional methods has demonstrated that the Otsu thresholding algorithm has excellent segmentation performance, a convenient calculation process, and wide applicability, making it widely used in image processing [48]. For example, Chakraborty et al. used the Otsu thresholding segmentation algorithm and histogram equalization techniques to segment images, combined with Support Vector Machines for classification, achieving a 96% accuracy rate in identifying apple leaf diseases across 500 images [49]. Similarly, Shen et al. employed the Otsu method for leaf segmentation, utilizing the H component of HIS to segment lesions and reduce the interference of illumination changes, delineating lesion areas with the Sobel operator, and finally calculating the ratio of lesion area to leaf area for disease and pest grading. This method can accurately identify and grade plant diseases and pests [50].
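Otsu's method, which several of the cited studies rely on, selects the gray-level threshold that maximizes the between-class variance of the resulting foreground/background split. A minimal NumPy implementation for 8-bit images (libraries such as OpenCV and scikit-image provide optimized versions):

```python
import numpy as np

def otsu_threshold(gray):
    """Return the Otsu threshold for a uint8 grayscale image array."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    probs = hist / hist.sum()
    cum_w = np.cumsum(probs)                      # class-0 weight up to t
    cum_mean = np.cumsum(probs * np.arange(256))  # cumulative intensity mean
    global_mean = cum_mean[-1]
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0 = cum_w[t]
        w1 = 1.0 - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0 = cum_mean[t] / w0
        mu1 = (global_mean - cum_mean[t]) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2  # between-class variance
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

For a leaf image with dark lesions on a bright background, pixels at or below the returned threshold form the lesion mask.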
During the testing phase, widely used machine learning algorithms include Support Vector Machines (SVMs), Random Forest, and Principal Component Analysis (PCA), among others. These algorithms classify the extracted feature vectors to achieve accurate identification of plant diseases and pests. For instance, Li et al. compressed the visible and near-infrared spectral bands of rice leaves using Principal Component Analysis, followed by disease and pest identification using a probabilistic neural network. The study demonstrated that the combination of these two methods can rapidly and accurately identify rice diseases and pests, with an accuracy rate of up to 95.65% [51]. Huang et al. constructed seven vegetation indices and coupled K-means and Relief algorithms to select remote sensing features for wheat powdery mildew. They combined a Support Vector Machine with wavelet features (GaborSVM) to monitor wheat powdery mildew occurrence. The overall accuracy of GaborSVM reached 86.7%, exceeding the performance of traditional SVM methods and rendering it suitable for large-scale disease monitoring based on satellite remote sensing images [52]. Additionally, Lu et al. elucidated the technique of geographically weighted regression [53]. Chaudhary et al. proposed an improved Random Forest classifier, combining the Random Forest machine learning algorithm with an attribute evaluator method and an instance filter method, and applied it to the classification of multiple peanut diseases. The results showed a classification accuracy rate of up to 97.80% [54].
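The PCA compression step used by Li et al. projects high-dimensional spectral bands onto a few principal components before classification. A minimal sketch via SVD, not the authors' exact pipeline (production code would normally use `sklearn.decomposition.PCA`):

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project rows of X (samples x spectral bands) onto the top
    principal components of the centered data.
    """
    Xc = X - X.mean(axis=0)
    # SVD of the centered data: rows of Vt are the principal directions,
    # ordered by decreasing explained variance
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 10))   # stand-in for 20 leaf spectra, 10 bands
Z = pca_reduce(X, 3)
```

The reduced vectors `Z` would then feed a downstream classifier such as the probabilistic neural network in the cited study.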
In summary, traditional methods for plant disease and pest image recognition, comprising preprocessing, segmentation, and feature extraction combined with classical machine learning algorithms, have achieved good recognition and classification accuracy under certain conditions. However, their performance depends heavily on manual feature extraction, segmentation, and algorithm selection. The research process is intricate and subjective, and satisfactory results are difficult to obtain in more complex natural environments, which limits the generalization of these methods to a wider range of contexts [55,56,57,58]. In contrast, deep learning methods, with their end-to-end learning approach, can automatically extract features from images, offering better generalization and performance.
4. Comprehensive Review of the Application of Image Classification and Target Recognition in Plant Disease and Pest Management
Image classification and target recognition of plant diseases and pests are crucial applications in the fields of agricultural production and plant protection. Through image classification technology, farmers and experts can accurately identify the types of plant diseases and pests, enabling timely implementation of control measures. Target recognition technology, on the other hand, allows for the precise determination of the location of plant diseases and pests in images, facilitating the extraction of targets related to plant diseases and pests. This further improves the precision and effectiveness of prevention and control measures. In this section, we delve into the methods and applications of image classification and target recognition for plant diseases and pests, as well as the development and application prospects of related algorithms and technologies.
4.1. Plant Disease and Pest Image Classification
Plant disease and pest image classification refers to classifying the parts of plant images that have been affected by diseases or pests, in order to determine whether plants are impacted. This process assists agricultural experts and researchers in identifying and distinguishing different types of plant diseases and pests, thereby enabling appropriate control measures to be taken. The most critical step in classifying disease and pest images is feature extraction. By analyzing features such as color, texture, and shape of plant images, classification models are constructed to differentiate between different diseases and pests. In recent years, deep learning technology has been widely researched and applied in plant disease and pest classification. Deep learning techniques automatically learn image features and perform classification by training models, while also leveraging auxiliary methods such as hyperspectral technology [59] and image processing to improve the sensitivity and accuracy of disease detection.
Researchers have introduced traditional image classification models into the field of plant disease and pest recognition and have made innovative improvements to these models. Among them, methods based on CNNs have shown outstanding performance in accurately classifying different types of plant disease and pest images (Figure 3). Wang et al. improved upon the VGG16 neural network and combined it with hyperspectral image processing techniques for the classification and diagnosis of citrus diseases and pests, achieving high accuracy and providing effective support for citrus disease and pest monitoring and control [60]. Similarly, CCNN, an improved version based on VGG16, is a cascaded convolutional neural network that uses VGG16 to construct a region-of-interest detection network for spot lesion detection and integrates a region segmentation network to achieve precise segmentation of lesion areas. Such model designs enable the segmentation of plant disease leaf images in complex environments [61]. VGG-INCEP is a neural network structure used for multi-scale feature extraction, which adds two Inception modules to VGG16. This improvement addresses the problem of detecting disease spots of different sizes on the same leaf. Combined with the Rainbow concatenation method, VGG-INCEP has been used for real-time detection of apple leaf diseases [62]. In Yadav et al.’s study, SVM and SoftMax were used as classification outputs in the VGG16 network to classify ulcer disease leaflets and citrus fruit diseases [63]. In these studies, the classic image classification network VGG16 was used, and researchers made various improvements to this convolutional neural network, but all utilized CNNs for feature extraction and combined them with other modules to classify disease areas.
Similarly, CNNs such as AlexNet, GoogLeNet, LeNet, and ResNet have also been improved and applied to the classification of plant diseases and pests [64]. PD-Net, based on the ResNet50 convolutional neural network, introduces attention modules and cross-layer non-local modules to extract key features and fuse multi-scale features. It demonstrates superior accuracy and recall rates when dealing with large-scale, multi-class images of plant diseases and pests [65]. Shrivastava et al. replaced the last layer of the AlexNet network with an SVM for rice disease recognition and achieved high accuracy [66]. Mohanty et al. fine-tuned the AlexNet and GoogLeNet network structures for the recognition of 26 diseases across 14 crops, mainly for smartphone-assisted crop disease detection [33]. A deep convolutional neural network (DCNN), with a structure similar to LeNet, utilizes SoftMax to compute the probability of disease classification; the trained model can directly recognize and classify cucumber diseases and pests in complex field environments [67]. Sladojevic et al. focused on segmenting plant leaves from the background environment. They designed a deep convolutional neural network, CaffeNet, which enables automatic classification and detection of 13 different classes of plant pests and diseases [68].
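Several of these studies share a common recipe: keep a pretrained CNN as a frozen feature extractor and replace only the final layer with a new classifier (an SVM or a SoftMax head). A toy NumPy sketch of training such a softmax head on pre-extracted feature vectors; the features, learning rate, and epoch count here are illustrative, not taken from the cited works:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_head(features, labels, n_classes, lr=0.5, epochs=200):
    """Train a softmax classification head on frozen CNN features.

    `features` stands in for activations from a pretrained backbone
    (e.g., the penultimate layer of VGG16 or AlexNet).
    """
    n, d = features.shape
    W = np.zeros((d, n_classes))
    Y = np.eye(n_classes)[labels]            # one-hot targets
    for _ in range(epochs):
        P = softmax(features @ W)
        W -= lr * features.T @ (P - Y) / n   # cross-entropy gradient step
    return W
```

Swapping the head for an SVM, as in Shrivastava et al. and Yadav et al., changes only this final stage while reusing the same frozen features.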
Other improved architectures have also been explored. Adaptive Discriminative Deep Belief Networks (ADDBNs), built on Deep Belief Networks (DBNs), can predict the probability of cotton diseases and pests, using an adaptive learning rate to improve training speed [21]. Multilayer Perceptron (MLP) and Partial Least Squares Discriminant Analysis (PLS-DA) were utilized to distinguish between biotic and abiotic stresses that cause plant diseases [69,70]. Gabor filtering methods combined with visual features extracted from neural networks were used for corn leaf disease diagnosis [71]. The integration of radar and multispectral data to classify grain crops, combining visible and radar observations, can improve the accuracy of crop classification [72]. In a study by Mirwaes et al., hyperspectral images were converted into text sets, and probabilistic topic models were then used to automatically track the development of three foliar diseases of barley [73]. Further applications include: (1) detecting the environmental quality of agricultural soils using an error index approach [74]; (2) detecting Vigna radiata pests and diseases using deep and transfer learning, with a model that can be deployed directly on smartphones [75]; and (3) detecting rice and corn crop diseases based on convolutional neural networks [76,77].
The application of deep learning technology has improved the accuracy of plant disease detection and accelerated real-time and automation processes, providing effective technical support for the intelligence and precision of agricultural production [78]. Deep learning holds vast potential for practical agricultural applications. From rice and citrus to fruits and vegetables [79,80,81], diseases and pests on different crops can be effectively identified and classified using current deep learning models. However, there are still challenges in the field of plant disease and pest image classification using deep learning, such as the generalization ability of models, processing speed, and adaptability to extreme climatic conditions, which require further exploration and breakthroughs in future research.
4.2. Plant Disease and Pest Object Detection
The advancement of deep learning technology has led to significant progress in the field of computer vision, with object detection becoming a widely utilized tool in various domains. In agriculture, the precise detection of plant diseases and pests is crucial for ensuring crop health, increasing yield, and reducing costs. Object detection methods based on deep learning, which mine image features, can automatically identify and locate plant diseases and pests, thereby improving detection efficiency and accuracy. In recent years, scholars have achieved promising results in pest detection using object detection algorithms. The most commonly utilized pest detection algorithms can be broadly categorized into two types: two-stage algorithms based on region proposals and single-stage algorithms based on regression.
The two-stage algorithm (Figure 4) based on region proposals represents a significant advancement in the field of object detection. It divides the object detection process into two stages, generating candidate target regions and precisely locating and classifying them. This method effectively incorporates the advantages of deep learning feature extraction and traditional machine learning methods, striking a balance between the accuracy and efficiency of target recognition. For instance, in the context of rice disease detection, Lu et al. proposed a rice disease detection method based on deep convolutional neural network (CNN) technology. By training deep convolutional neural networks on rice stems and leaf images, the method achieved a high accuracy of 95.48%, which surpassed traditional machine learning models [26]. Similarly, research by Liang demonstrated that CNN-based methods were more effective in rice blast disease recognition than traditional handcrafted features [82]. Additionally, for the detection of bitter gourd leaf diseases and crop diseases, Li et al. utilized an improved Faster R-CNN method and deep convolutional neural network. Through the selection and enhancement of the feature extraction network, combined with a feature pyramid network, the method effectively improved the precision of object detection [83]. In the domain of crop disease detection using drones, Zhang et al. employed a deep convolutional neural network combined with hyperspectral images. The autonomous operation of the method, conducted using drones, yielded a performance that outperformed traditional approaches, with an overall accuracy (0.85) that was higher than that of a random forest classifier (0.77) [84]. In the detection of large-scale pine wood nematode disease and agricultural pests, Deng et al. utilized the Faster R-CNN deep learning framework with a Region Proposal Network (RPN) and ResNet residual neural network. 
This approach, which integrated unmanned aerial vehicle remote sensing and artificial intelligence technology, proposed a large-scale pine wood nematode disease detection and localization method, enhancing the accuracy of disease detection and localization [85]. Simultaneously, some improved object detection algorithms, such as an Improved Feature Pyramid Network (FPN), ROI Align algorithm, and attention modules, have been introduced to further enhance the performance of small object detection [86]. Albert et al. achieved a precision of 90.54% in detecting agricultural pests in farmland using the Faster R-CNN algorithm, with the model tested using an improved Inception network [87]. Li et al. combined the Transformer model and the Cascade R-CNN model with attention modules (Swin model) and used image processing-based data augmentation techniques, such as Mixup and Cutmix algorithms. This allowed them to establish linear relationships between samples for detecting Huanglongbing in citrus trees, achieving higher accuracy [88].
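Two-stage detectors like Faster R-CNN match region proposals to ground-truth pest or lesion boxes using Intersection-over-Union (IoU), the standard overlap criterion behind both RPN training and the precision figures quoted above. A minimal reference implementation for axis-aligned boxes:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # clamp to zero when the boxes do not overlap
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

A proposal is typically counted as a correct detection when its IoU with a ground-truth box exceeds a threshold such as 0.5, which is how metrics like the 90.54% precision above are computed.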
The single-stage algorithm (Figure 5) based on regression is an important method in the field of object detection, performing both target localization and classification simultaneously in a single stage. This method achieves target localization by predicting the coordinates of bounding boxes while also making category predictions. This approach simplifies the object detection task into a regression problem, as seen in algorithms like You Only Look Once (YOLO) and Single Shot Multibox Detector (SSD). While these algorithms have faster detection speeds, their localization accuracy in scenarios with small or densely packed targets may be slightly lower compared to two-stage algorithms based on region proposals. To enhance the localization accuracy of single-stage regression-based algorithms in scenarios with small or densely packed targets, researchers have proposed a series of improvement methods. Hong et al. proposed a universal multimodal deep learning framework, MDL-RS, which effectively integrates remote sensing images from different sources, including HS, LiDAR, MS, and SAR for image classification tasks. This framework enhances the representation of multi-source information in images, thereby improving the performance of target detection [89]. Lv et al. enhanced the information about the locations of small targets at the lower levels by adding residual units to the YOLOv3 network and incorporating ResNet units as part of the feature pyramid network [90]. Tian et al. employed CycleGAN for image data augmentation and optimized the YOLOv3 model’s feature layers by integrating a DenseNet. This method outperformed several other advanced networks in terms of detection performance [91]. Li et al. proposed a deep learning method based on an improved CNN. By introducing new network structures and feature fusion modules, they achieved more accurate and faster object detection [92]. 
To address the challenges of detecting small objects against complex backgrounds, Zhang et al. proposed a method based on the improved baseline model (YOLOv4). They introduced a context-guided module to fuse effective features with contextual information and used a multiscale mixed attention mechanism to focus on pest regions, achieving an average precision improvement of 7.2% over the YOLOv4 model [93]. Hu et al. introduced a deep neural network called YOLO-GBS. By adding a detection head, incorporating a global context attention mechanism, and utilizing methods such as BiFPN and the Swin Transformer on top of YOLOv5, they achieved the detection and classification of various rice pests in digital images [94]. Some methods also introduce spatial pyramid pooling modules to improve detection accuracy. For example, Li et al. used hourglass and deep hourglass feature extraction modules together with spatial pyramid pooling modules to effectively capture image features, demonstrating outstanding accuracy in jute disease image detection [95]. Tang et al. combined improved CNNs with YOLOv4, utilizing a squeeze-and-excitation attention mechanism and a cross-stage multi-feature fusion method to enhance the feature pyramid and path-aggregation network structure, thereby improving agricultural pest detection accuracy [96]. To clearly present the improvements made to YOLO models and their application outcomes, these studies have been summarized and compiled into Table 2.
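The spatial pyramid pooling idea mentioned above can be sketched in a few lines: the same feature map is pooled over grids of several sizes and the results are concatenated, so any input size yields a fixed-length descriptor. A minimal NumPy illustration, with arbitrary grid levels and feature values:

```python
import numpy as np

def spatial_pyramid_pool(fmap, levels=(1, 2, 4)):
    """SPP: max-pool a feature map over coarser-to-finer grids and
    concatenate, yielding a fixed-length vector regardless of the
    input's spatial size."""
    h, w = fmap.shape
    out = []
    for n in levels:
        # split the map into an n x n grid and take the max of each bin
        rows = np.array_split(np.arange(h), n)
        cols = np.array_split(np.arange(w), n)
        for r in rows:
            for c in cols:
                out.append(fmap[np.ix_(r, c)].max())
    return np.array(out)

fmap = np.arange(36, dtype=float).reshape(6, 6)
vec = spatial_pyramid_pool(fmap)
assert vec.size == 1 + 4 + 16   # fixed length: 21 values
assert vec[0] == 35.0           # global max at the 1x1 level
```

In a real detector the pooling is applied per channel, and the fixed-length output is what allows arbitrary image sizes to feed fully connected heads.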
Moreover, in recent years, lightweight models have made significant strides in the field of object detection. Several studies have proposed a variety of improved lightweight models. Examples include the improved lightweight model ds-YOLOv3-tiny [97], an enhanced version of SSD [98], the lightweight MobileNetv2-YOLOv4 [99,100], an adaptive spatial feature fusion and lightweight detection model named ASFL-YOLOX [101], and an optimized lightweight YOLOv5 model [102]. These models, by introducing various optimization measures such as altering feature extraction networks, employing more efficient activation functions, concatenating feature scales, clustering anchor points, and utilizing pyramid pooling modules, have achieved improvements in feature extraction and detection capabilities while maintaining real-time performance. These optimization measures have demonstrated significant improvements in accuracy and performance in areas such as forestry pest detection, detection of damaged trees in drone images, pest detection in rice crops, and protection equipment for orchard plants. In the future, these lightweight models with fewer parameters can be deployed on edge devices to enable real-time detection of plant diseases and pests. Their application in complex natural environments will provide robust technical support for precision agriculture [103].
In summary, both the two-stage algorithm based on region proposals and the single-stage algorithm based on regression can be widely applied in different scenarios of crop disease detection. The former, employing a two-stage strategy, achieves high-accuracy target recognition by generating candidate target regions. The latter, operating in a single stage, performs target localization and classification simultaneously, exhibiting faster detection speeds. Recent studies have introduced attention mechanisms and Transformer-based model architectures into plant pest and disease detection systems to enhance detection accuracy, particularly for small targets and in complex scenarios [104,105,106,107]. These advancements have significantly improved the practical applicability of deep learning models in real-world plant pest and disease detection. Future research is expected to focus on integrating lightweight models to enable real-time detection of plant pests and diseases [108,109], deploying deep learning models onto edge devices with limited computational and storage resources, and facilitating plant pest and disease detection in real-world environments. These efforts are poised to contribute to the intelligent and precise development of modern agriculture.
5. Review of Applications in Semantic Segmentation and Change Detection of Plant Diseases and Pests
5.1. Semantic Segmentation of Vegetation Diseases and Pests
The occurrence of plant diseases and pests poses a significant challenge to agriculture. Traditional manual sampling methods for disease and pest detection have various limitations, including high time costs, low efficiency, and susceptibility to subjective factors. In recent years, the rapid development of technologies such as unmanned aerial vehicles (UAVs), sensors, and deep learning has opened new research directions for automating vegetation disease and pest detection [110]. Remote sensing technology, with its advantages of high temporal and spatial resolution [111], has been widely applied in vegetation monitoring. However, the relatively low image resolution of payloads on satellite platforms limits their applicability in some situations [112]. In comparison, low-altitude visible light remote sensing with UAVs offers low cost, high efficiency, and flexible deployment, and has been applied with widespread success to the measurement of multiple crop phenotypic traits [113,114]. The rapid development of deep learning provides new opportunities for remote sensing image processing, replacing manual feature engineering and continuously improving the learned feature representations [115]. Semantic segmentation classifies each pixel in an image based on learned features, enabling accurate identification of plant diseases and pests within the image. The deep learning-based semantic segmentation process primarily consists of three stages: feature extraction, semantic segmentation, and post-segmentation processing, as illustrated in Figure 6.
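The pixel-wise classification at the heart of semantic segmentation, together with a simple post-segmentation severity measure, can be sketched as follows. This is a generic NumPy illustration; the class layout and score values are invented for the example:

```python
import numpy as np

def segment(score_maps):
    """Pixel-wise classification: assign each pixel the class whose
    score map is highest (the final step of semantic segmentation)."""
    return np.argmax(score_maps, axis=0)

def severity(mask, lesion_class=2, plant_classes=(1, 2)):
    """Post-segmentation processing: lesion area as a fraction of
    total plant area, a simple disease-severity measure."""
    plant = np.isin(mask, plant_classes).sum()
    lesion = (mask == lesion_class).sum()
    return lesion / plant if plant else 0.0

# Toy 4x4 score maps for classes 0=background, 1=healthy leaf, 2=lesion
scores = np.zeros((3, 4, 4))
scores[0] = 0.2               # weak background score everywhere
scores[1, :, :2] = 0.9        # left half: healthy leaf wins
scores[2, :, 2:] = 0.9        # right half: lesion wins
mask = segment(scores)
assert severity(mask) == 0.5  # half of the plant pixels are diseased
```

Real models produce the score maps with an encoder-decoder network; the argmax and area statistics above correspond to the segmentation and post-processing stages of Figure 6.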
In this field, both domestic and international research has yielded a plethora of studies. Stewart et al. utilized the Mask R-CNN model to validate the effectiveness of semantic segmentation in lesion recognition. Through practical examples, they achieved the automated identification of lesions, highlighting the value of semantic segmentation in quantitative disease analysis [116]. Fuentes et al. provided valuable insights and support for the application of semantic segmentation technology in disease and pest identification [117]. They emphasized that combining semantic segmentation algorithms could further improve the precise localization and identification capabilities of disease areas. Mohanty et al. demonstrated the potential applications of deep learning in image-based crop disease detection, inspiring the use of semantic segmentation techniques in monitoring crop diseases and pests [33]. Ke et al. demonstrated that deep learning-based semantic image segmentation techniques can provide comprehensive information for complex agricultural scenarios, highlighting their potential to analyze intricate visual data in agricultural applications [118]. Ahmad et al. identified semantic segmentation as a commonly used deep learning technique in plant disease recognition, particularly effective for segmenting diseased areas and quantitatively assessing disease severity [119]. Liu et al. further explored the transformation of disease and pest detection tasks into semantic and instance-level segmentation of lesion and normal areas, providing case studies that illustrate the practical application and effectiveness of semantic segmentation techniques in this field [42]. These studies collectively emphasize the role of semantic segmentation in enhancing the accuracy and precision of plant disease and pest detection, offering valuable insights for future research. Luo et al. 
demonstrated that the integration of deep learning technology into the agricultural domain significantly enhances the accuracy and robustness of semantic segmentation, addressing the limitations of traditional methods in distinguishing complex backgrounds from pest and disease features [120]. Tassis et al. conducted a study on coffee leaf diseases, proposing a comprehensive approach that utilized Mask R-CNN for instance segmentation, UNet and PSPNet for semantic segmentation, and ResNet for classification. This integrated model effectively automated the detection and segmentation of diseased coffee leaves, achieving an accuracy rate exceeding 90% [121]. Similarly, Rezk et al. developed an IoT-based plant pest and disease recognition system, introducing a hybrid approach combining CNNs with fully connected Conditional Random Fields (CRFs). The system directly processes images captured by IoT terminal cameras, enabling automated and intelligent recognition, and the integrated framework improved detail recognition and boundary restoration in the semantic segmentation of plant pests and diseases [122]. Ji et al. proposed a deep learning semantic segmentation-based method for monitoring and analyzing the severity of grape black rot [123]. Zhu et al. introduced a semantic segmentation model, LD-DeepLabv3+, for segmenting apple leaves and lesions in complex scenes, thoroughly exploring how semantic segmentation can be applied to disease and pest recognition [124]. These studies collectively emphasize the crucial role of semantic segmentation technology in the precise localization and real-time monitoring of diseased areas.
Researchers have effectively applied deep learning technology to tackle key challenges in plant disease and pest detection. By leveraging low-altitude visible light remote sensing using unmanned aerial vehicles (UAVs), researchers have demonstrated significant improvements in disease and pest detection, including enhanced spatial resolution, flexibility, and cost-effectiveness [125]. Numerous studies have showcased the outstanding performance of semantic segmentation technology in addressing these challenges, particularly in tasks such as precise localization of lesions, quantitative assessment of disease severity, and real-time monitoring of affected areas [126]. From lesion recognition in complex agricultural images to applications in intelligent agriculture for disease and pest identification, semantic segmentation has proven to be a pivotal tool. These advancements collectively provide robust methodological and technological support for improving the accuracy, precision, and real-time capabilities of plant disease and pest detection, paving the way for the development of smart and sustainable agricultural practices.
5.2. Detection of Changes in Plant Diseases and Insect Pests
The application of change detection in remote sensing imagery has been widely adopted across various fields. The powerful learning capabilities and unsupervised characteristics of deep learning have contributed to the growing trend of integrating it with change detection based on remote sensing imagery. In recent years, the rapid development of remote sensing and deep learning has led to the combination of these technologies with the detection of changes related to plant diseases and pests, offering new perspectives for change detection in plant diseases and pests.
Gong et al. introduced the BP neural network algorithm from artificial neural networks and the intrusion detection technique CDAN from the field of information security into the detection of plant diseases and pests on cotton leaves. They developed a cotton disease and pest intrusion detection and forecasting system based on CDAN technology and the BP neural network, enhancing the efficiency and intelligence of cotton disease and pest forecasting [127]; once leaf surface information beyond the normal range is detected, the system reports a pest invasion. Xu et al. proposed the CTCD-Net change detection model, which combines convolutional neural networks and Transformers to strengthen the network's ability to learn image details [128]. Liu et al. introduced a CNN-Transformer network with multiscale context aggregation, effectively identifying changes in agricultural fields [129]. In related work, Liu et al. proposed MSCANet, a CNN-Transformer network with multi-scale contextual aggregation that employs three CNN classifiers to extract change information from images, enhancing the capture of deep image features; this design significantly improves detection accuracy, particularly for edge and morphological features [130]. Similarly, Wang et al. introduced a supervised change detection method based on a deep Siamese convolutional network. Their model comprises a feature extraction network and a change decision network, where the feature extraction network integrates two heterogeneous convolutional modules and two standard convolutional modules to capture multi-level features. This structural design yields a significant improvement in detection accuracy over other deep learning models [131]. In the field of plant disease and pest detection, Das et al. conducted a comprehensive study of 52 rice fields, using Sentinel-2 images and the VGG16 model to develop a model for assessing rice blast disease conditions in the field.
This model effectively evaluated the actual occurrence of leaf blasts [132].
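The Siamese feature-difference idea underlying several of the change detection models above can be sketched minimally: the same feature extractor, with shared weights, is applied to images from two dates, and change is declared wherever the feature difference is large. This NumPy sketch is purely illustrative; the averaging filter, threshold, and "lesion patch" are arbitrary choices, and real models learn both the extractor and the decision stage:

```python
import numpy as np

def extract(img, kernel):
    """Shared 'feature extractor': a single valid-mode convolution.
    Both dates pass through identical weights (the Siamese constraint)."""
    h, w = img.shape
    k = kernel.shape[0]
    out = np.empty((h - k + 1, w - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+k, j:j+k] * kernel)
    return out

def change_map(img_t1, img_t2, kernel, thresh=1.0):
    """Binary change mask from the feature-difference magnitude."""
    d = np.abs(extract(img_t1, kernel) - extract(img_t2, kernel))
    return (d > thresh).astype(int)

kernel = np.full((3, 3), 1 / 9.0)   # simple averaging filter
t1 = np.zeros((6, 6))               # date 1: healthy field
t2 = np.zeros((6, 6))
t2[2:5, 2:5] = 3.0                  # date 2: a new 'lesion patch'
mask = change_map(t1, t2, kernel)
assert mask[2, 2] == 1              # centre of the patch is flagged
```

The weight sharing is what makes the difference map meaningful: both dates are embedded in the same feature space before comparison.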
Researchers have successfully applied deep learning models to the monitoring and assessment of vegetation diseases and pests through various innovative approaches. These studies provide robust methods and technological support for achieving real-time, accurate, and efficient plant disease monitoring. Researchers have addressed specific issues related to different vegetation diseases and pests by proposing various deep learning models, such as VGG16, Siamese convolutional networks, and CTCD-Net, among others. These models, through learning from remote sensing images, effectively detect changes in vegetation diseases and pests. Moreover, the application of deep learning extends beyond optical images to encompass multiple sources of data, including Synthetic Aperture Radar (SAR) images. The integration of such multisource data provides more comprehensive information for plant disease and pest change detection, enhancing the accuracy and comprehensiveness of monitoring. In addition, researchers have focused on details in the design of deep learning models, introducing techniques such as multiscale context aggregation and object analysis. The application of these techniques enhances the model’s capacity to learn about image details and contextual information, leading to more accurate detection of changes in plant diseases and pests.
These studies collectively demonstrate the potential and superiority of deep learning in plant disease and pest change detection. The application of deep learning technology offers new methods and technological support for achieving real-time, accurate, and efficient plant disease and pest monitoring. This technology provides valuable references for future research and practical applications in related fields.
6. Remote Sensing Large Model and Transfer Learning
6.1. Plant Pest and Disease Prediction
In the early stages of research on the prediction of plant diseases and pests, researchers primarily employed statistical and regression methods; machine learning techniques such as back-propagation (BP), learning vector quantization (LVQ), and radial basis function (RBF) neural networks [133]; and ecological niche models coupling various algorithms such as ENFA, BIOCLIM, Maxent, and GARP [134]. However, with the emergence of deep learning technology, especially the widespread application of Long Short-Term Memory (LSTM) networks, deep learning has become a new trend in disease and pest prediction research. In this field, Mahenge comprehensively reviewed the potential of artificial intelligence and deep learning technologies for detecting common bean crop diseases and predicting harmful pests in 2023, providing clear insights for subsequent researchers [135].
LSTM, one of the most widely applied techniques in deep learning, has found considerable use in the prediction of plant diseases and pests (Figure 7). In 2018, Xiao et al. utilized LSTM to forecast the occurrence of cotton diseases and pests. They formalized the task as a time series prediction problem, establishing an LSTM-based predictive model driven by meteorological factors that outperformed traditional machine learning methods [136]. In 2020, Wahyono proposed an improved LSTM method for predicting crop pest occurrences, enhancing its efficacy through a sliding window approach and parameter tuning [137]. In the same year, Chen combined climate and atmospheric circulation data to explore the application of LSTM to predicting cotton pests and diseases [138], and Zhang fused knowledge graphs into a Bi-LSTM, improving the accuracy of wheat stripe rust prediction [139]. In 2021, Jain and Ramesh introduced a rice disease prediction model based on CNN-LSTM, which combined the strengths of CNNs and LSTM to provide an effective approach for disease prediction [140].
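Before an LSTM can forecast pest occurrences, the time series must be reshaped into supervised samples; this is what the sliding-window approach cited above refers to. A minimal NumPy sketch of that preprocessing step, with an invented window length and pest-count series:

```python
import numpy as np

def sliding_windows(series, window=4, horizon=1):
    """Turn a pest-count time series into supervised (X, y) pairs:
    each sample is `window` past observations, and the target is the
    value `horizon` steps ahead -- the shape an LSTM expects."""
    X, y = [], []
    for i in range(len(series) - window - horizon + 1):
        X.append(series[i:i + window])
        y.append(series[i + window + horizon - 1])
    return np.array(X), np.array(y)

# Hypothetical weekly pest counts
counts = np.array([5, 7, 9, 12, 15, 19, 24, 30], dtype=float)
X, y = sliding_windows(counts, window=4)
assert X.shape == (4, 4) and y.shape == (4,)
assert np.array_equal(X[0], [5, 7, 9, 12]) and y[0] == 15
```

In practice, meteorological covariates are stacked alongside the counts as additional input channels, and the (samples, window, features) tensor is fed to the recurrent layers.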
In addition to LSTM, researchers have also explored the use of deep learning technologies to establish systems for predicting plant diseases and pests. In 2013, Patil and Mytri employed a feedforward multilayer perceptron neural network to develop a dynamic prediction system, which outperformed other systems in predicting cotton pests [141]. In 2021, Saleem’s team proposed a cotton whitefly prediction system based on the Internet of Things (IoT) and deep learning. Using wireless networks and cloud servers for prediction, they achieved an accuracy of 82.88% [142]. In 2023, they extended this prediction system to forecast pests for entire crops, achieving an overall accuracy of 94% [143]. In 2018, Wang et al. introduced a cotton disease and pest prediction model based on an adaptive discriminative deep belief network, which demonstrated a prediction improvement of over 19% compared to traditional methods [65]. In 2021, Grünig et al. employed deep learning to classify apple leaf images, establishing a framework for predicting crop diseases and pests. They successfully predicted the occurrence cycle of the leaf miner moth [144].
These studies collectively demonstrate the extensive application of deep learning technology in predicting plant diseases and pests, providing robust support for accurate prediction and scientific pest control. Initially, research on the prediction of plant diseases and pests primarily utilized statistical and regression methods, machine learning techniques, and various ecological niche models. However, with the rise of deep learning technology, especially the widespread adoption of LSTM, a novel trend has emerged in the field of disease and pest prediction. Researchers have comprehensively reviewed the potential of artificial intelligence and deep learning technologies in detecting crop diseases and predicting pests, thereby providing clear directions for the study of plant diseases and pests. In addition to LSTM, other deep learning technologies, such as CNN-LSTM and multilayer perceptron neural networks, have also demonstrated significant potential in plant disease and pest research. The combination of the Internet of Things and deep learning in the cotton whitefly prediction system, as well as the cotton disease and pest prediction model based on an adaptive discriminative deep belief network, showcases the outstanding performance of deep learning in improving predictive accuracy. These studies not only provide a more accurate means of predicting pests and diseases for agricultural production but also guide the future application of deep learning in the agricultural sector. Deep learning injects new vitality into research on predicting plant diseases and pests, offering powerful support for achieving precision agriculture and effective pest management.
6.2. Pre-Training Large Remote Sensing Models
Artificial intelligence (AI) technology is gradually replacing traditional methods in the field of vegetation disease diagnosis to address issues such as time-consuming processes, high costs, low efficiency, and subjectivity. Deep learning, as a mainstream AI approach, has significantly elevated the level of detection and diagnosis of vegetation diseases in precision agriculture. However, the majority of contemporary plant disease diagnostic methods rely on pre-trained deep learning models that are typically derived from general computer vision datasets rather than specialized plant pathology datasets. This results in a lack of domain knowledge, an issue that Dong et al. addressed by developing a series of pre-trained models specifically designed for plant disease diagnosis. Experimental analysis demonstrated that these models achieved higher diagnostic accuracy while requiring shorter training times [145].
Currently, deep learning models are still primarily trained based on pre-training parameters from the ImageNet dataset. Coulibaly et al. introduced a pre-training feature extraction method based on ImageNet, utilizing the VGG16 model to construct a pearl millet downy mildew recognition system, achieving an accuracy of 95.00% [146]. Chen et al. employed VGGNet pre-trained on ImageNet and Inception modules, proposing the INC-VGGN framework, which successfully increased the accuracy of rice disease image classification to 92.00% [147]. Chen and Liao designed the ConvNeXt-ECA network model by improving the ConvNeXt network, using pre-trained model weight parameters from ImageNet and achieving an accuracy of 96.73% [148]. Zhang et al., based on self-supervised learning, pre-trained the ViT model on the ImageNet dataset and successfully applied it to tea tree disease recognition, significantly improving recognition efficiency and accuracy [149].
With the rise of large-scale language models such as ChatGPT, researchers have proposed various frameworks and theories for vegetation disease and pest identification. Song et al. constructed a pest recognition model using convolutional neural networks, in which a classification method combining Inception-v4 features with a support vector machine achieved the highest accuracy at 97.3% [150]. In a study by Shafik et al., two plant disease and pest detection models, Early Fusion and Lead Voting Ensembles, were introduced. These models integrated nine pre-trained convolutional neural networks, which were fine-tuned and transferred to the field of plant disease and pest detection. Their accuracy was validated on the PlantVillage dataset across 15 categories of diseases and pests; the models outperformed the original convolutional neural networks, achieving accuracy rates of 96.74% and 97.79%, respectively, and demonstrated stability and ease of generalization, offering practical solutions for more accurate and efficient plant disease detection and classification [151]. Hu et al. successfully classified corn diseases through transfer learning and optimizer parameter tuning, achieving an average recognition accuracy of 97.6% [152]. Wu et al. proposed a Crop Disease Segmentation Detection Transformer, achieving a segmentation accuracy of 96.4% for crop disease grading [153]. Lee et al. addressed citrus pest detection and classification by utilizing the EfficientNet-b0 pre-trained model to build an effective web application server for citrus disease classification, with an average model accuracy of 97.0% in their experiments [154]. Xing and Lee developed a Decoupling-and-Attention network (DANet) based on the image dataset of crop pests and diseases (CPD).
Compared with ImageNet pre-trained transfer learning models, both networks performed similarly, while DANet's computational cost was one-quarter that of ResNet-50 [155]. Rajeswarappa et al. established an advanced transfer learning framework for the Inception-ResNet-v2 and VGG16 networks to examine the accuracy of crop disease and pest identification and analysis models [156]. Li et al. proposed an integrated object detection model that combines a single-stage network based on YOLO with a two-stage network based on Faster R-CNN, achieving a mean Average Precision (mAP) of 85.2% in detecting 37 pests and 8 diseases [157]. Zhou et al. addressed rice leaf disease recognition with a residual-distilled Transformer framework; built on a pre-trained Transformer model, this method achieved an accuracy of 92.0% [158]. Furthermore, self-supervised learning methods have also been introduced into vegetation disease and pest identification. Liu et al. proposed a self-supervised training method based on feature relation conditional filtering, achieving high accuracy on multiple datasets [159]. Kar et al. employed methods such as Nearest Neighbor Contrastive Learning of Visual Representations (NNCLR); through ImageNet initialization for transfer learning and fine-tuning, they achieved a classification accuracy of 79.0% [160].
Artificial intelligence technology has demonstrated powerful potential in the field of vegetation disease diagnosis, achieving significant breakthroughs. Deep learning, especially methods based on pre-trained models, has become mainstream, greatly improving the accuracy and efficiency of vegetation disease detection and diagnosis. However, existing challenges mainly stem from deep learning models being trained on general computer vision datasets rather than specialized vegetation datasets, resulting in a lack of domain knowledge. Based on the pre-trained models, researchers have proposed various effective vegetation disease recognition frameworks. From constructing a pearl millet downy mildew recognition system using the VGG16 model to applying the ViT model based on self-supervised learning methods for tea tree disease recognition, these approaches have achieved remarkable results in enhancing diagnostic efficiency and accuracy. In the future, the field of plant disease recognition will continue to present a series of opportunities and challenges. The continued optimization and specialization of pre-trained models, as well as more attention to vegetation datasets, will be crucial directions. Additionally, the application of large language models in vegetation disease prediction and agricultural management deserves in-depth exploration. The development of this field will provide robust support for achieving more precise and efficient agricultural production and vegetation disease prevention and control.
6.3. Transfer Learning
Transfer learning involves applying knowledge or patterns learned in one domain or task to a different but related domain, and it has found wide application in machine learning and deep learning. In deep learning, because different tasks share inherent similarities, transferring model parameters trained on one task to another can significantly accelerate model convergence and alleviate overfitting. This approach has been extensively researched and applied in the domain of pest and disease management.
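The mechanics can be sketched in a few lines: a "pre-trained" feature extractor is frozen, and only a new classification head is trained on the target task. This toy NumPy example uses random weights and synthetic data purely to illustrate the frozen/trainable split; real transfer learning loads weights from a model such as VGG16 or ResNet pre-trained on ImageNet:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical "pre-trained" feature extractor: in transfer learning
# these weights were learned on another task and are kept frozen.
W_frozen = rng.normal(size=(8, 3))

def features(x):
    return np.tanh(x @ W_frozen)      # frozen layer: never updated

# Tiny toy task standing in for a pest/no-pest dataset
X = rng.normal(size=(40, 8))
yt = (X[:, 0] > 0).astype(float)

def head_loss(w, b):
    p = 1 / (1 + np.exp(-(features(X) @ w + b)))
    return -(yt * np.log(p + 1e-12)
             + (1 - yt) * np.log(1 - p + 1e-12)).mean()

w, b = np.zeros(3), 0.0               # new task-specific head
loss_before = head_loss(w, b)         # untrained head: loss = ln(2)
for _ in range(300):                  # gradient descent on the head only
    f = features(X)
    p = 1 / (1 + np.exp(-(f @ w + b)))
    w -= 0.5 * f.T @ (p - yt) / len(X)
    b -= 0.5 * (p - yt).mean()

assert head_loss(w, b) < loss_before  # head adapts; W_frozen is untouched
```

Fine-tuning, by contrast, also unfreezes some or all of the pre-trained layers and updates them at a small learning rate; the frozen-extractor variant above is the cheaper option when target data are scarce.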
Pattinaik et al. studied a tomato pest dataset using 15 pre-trained CNN models to classify 10 classes of tomato pests; among them, the DenseNet model achieved the best performance, with an accuracy of 83.2% [161]. In another study, Abbas et al. employed Conditional Generative Adversarial Networks (C-GAN) to generate synthetic images of tomato plant leaves; through transfer learning, they trained a DenseNet121 model on both synthetic and real images, classifying tomato leaf images into different disease categories with accuracies of 99.5%, 98.7%, and 97.1% [162]. Ai et al. used the Inception-ResNet-v2 model for transfer learning, achieving an overall recognition accuracy of 86.1% in crop disease identification [40]. Geetharamani worked on plant leaf disease recognition using deep convolutional neural networks [163], and Coulibaly utilized the VGG-16 model, achieving 95.0% transfer learning accuracy on millet image datasets [146]. Zhang et al. applied a VGG-16 model pre-trained on ImageNet as the base network for transfer learning in the identification of rose pests and diseases, achieving an accuracy of 93.0% [164]. Paymode et al. employed transfer learning with VGG models on tomato and grape leaf image datasets, achieving accuracies of 95.7% and 98.4%, respectively [16]. Chen et al. used a network pre-trained on ImageNet for transfer learning in rice disease identification, reaching accuracies of no less than 91.8% [147]. Oppenheim et al. improved model accuracy from 83.0% to 96.0% by fine-tuning the VGG network on infected potato images of different sizes, tones, and shapes collected under natural light [165]. Similar research includes the retraining of the Inception v3 model by Ramcharan et al. [39], as well as the retraining of the AlexNet model by Mohapatra and Wang et al.'s transfer learning-based pest detection and recognition system [166,167]. Mohanty et al.
used a pre-trained AlexNet model to classify 26 different diseases across 14 crops, achieving an accuracy of 99.4% [33]. Rao et al. transferred the AlexNet model to a dataset containing 8438 images of healthy and diseased leaves, with detection accuracies of 99.0% for grape leaves and 89.0% for mango leaves [168]. Rangarajan et al. applied transfer learning with AlexNet and VGG16, achieving a high classification accuracy of 97.5% [169]. Selvaraj et al. retrained three different CNN architectures using transfer learning; compared to MobileNetV1, models based on ResNet50 and InceptionV2 performed better [170]. Hassan et al. used transfer learning to train four different deep learning models for plant disease detection, experimenting on healthy and diseased leaves of 14 plant species across 38 categories, achieving a maximum accuracy of 99.6% [171].
In a comparative study, Thenmozhi et al. transferred AlexNet, ResNet, GoogleNet, and VGGNet to three publicly available insect datasets (Xie1, Xie2, and NBAIR) for pest and disease detection. Experimental results indicated that the transfer-trained models performed very similarly across the three datasets and consistently outperformed AlexNet [172]. Hong et al.'s research emphasized the importance of transfer learning in reducing training data size, time, and computational costs when constructing deep learning models. They utilized deep network architectures such as ResNet50, Xception, MobileNet, ShuffleNet, and DenseNet121_Xception for feature extraction, classifying nine types of diseased leaves; among them, DenseNet121_Xception achieved the highest recognition accuracy at 97.1% [173].
Transfer learning has demonstrated significant potential for its application in the field of pest and disease management. Researchers have achieved remarkable results by successfully transferring deep learning models trained in other domains to plant pest and disease detection tasks. From plant disease classification based on various CNN models to disease identification on plant leaves, and further to extensive classification of various crops and diseases, these studies not only confirm the effectiveness of transfer learning in improving model accuracy and generalization but also provide innovative technical support for pest and disease control in agriculture. However, researchers have also engaged in in-depth discussions on different model choices, transfer learning methods, and datasets, aiming to further advance deep learning research in the field of pest and disease management. This series of research outcomes provides powerful digital tools for agricultural production, laying a solid foundation for achieving more intelligent and efficient pest and disease management.
7. Conclusions and Future Work
The potential of deep learning in the field of plant pest and disease management is evident, as its powerful data processing and feature extraction capabilities provide significant advantages in tasks such as image classification and object detection. However, major limitations and challenges persist and must be addressed to ensure the practical applicability and scalability of these technologies [174].
Data serves as the foundation for training deep learning models. Currently, most datasets are limited to single plant types [175,176], failing to capture the diversity and complexity of real-world agricultural environments. To improve model accuracy and robustness, future research should prioritize the development of multi-class, large-scale datasets that integrate diverse data types [177], such as meteorological, soil, and sensor information, to create multimodal datasets. Training models on such datasets not only enhances the accuracy and generalizability of pest and disease detection but also establishes a foundation for applying these models to real-world scenarios.
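A multimodal dataset of the kind proposed above could feed a two-branch network that fuses image features with tabular covariates before classification. The sketch below is a hypothetical minimal architecture; the layer sizes, class count, and the three weather variables are illustrative assumptions, not drawn from any cited study.

```python
import torch
import torch.nn as nn

# Hypothetical late-fusion model: CNN features from a leaf image are
# concatenated with tabular covariates (e.g., temperature, humidity,
# soil moisture) before a shared classification head.
class MultimodalPestNet(nn.Module):
    def __init__(self, num_tabular=3, num_classes=5):
        super().__init__()
        self.image_branch = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> 8 features
        )
        self.tabular_branch = nn.Sequential(
            nn.Linear(num_tabular, 8), nn.ReLU(),    # -> 8 features
        )
        self.head = nn.Linear(8 + 8, num_classes)

    def forward(self, image, tabular):
        fused = torch.cat([self.image_branch(image),
                           self.tabular_branch(tabular)], dim=1)
        return self.head(fused)

model = MultimodalPestNet()
images = torch.randn(4, 3, 64, 64)   # RGB leaf images
weather = torch.randn(4, 3)          # temperature, humidity, soil moisture
out = model(images, weather)
print(out.shape)  # torch.Size([4, 5])
```

Late fusion of this kind lets each modality keep its own encoder, so meteorological or sensor streams can be added without retraining the image backbone from scratch.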
Another significant limitation lies in model training: deep learning models require substantial computational resources and long training times before they can be deployed. To address this challenge, many studies have explored model lightweighting techniques, which improve efficiency by reducing the number of model parameters [178,179,180,181,182]. Future research should focus on deploying deep learning models on terminal devices and developing end-to-end plant pest and disease detection systems.
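The parameter savings behind many lightweighting techniques can be made concrete. Replacing a standard 3x3 convolution with a depthwise-separable one, the substitution that MobileNet-style backbones rely on, cuts the per-layer parameter count sharply; the figures below are a back-of-the-envelope count for a single layer with 256 input and 256 output channels (bias terms omitted).

```python
# Parameter counts for one 3x3 convolution layer, illustrating why
# depthwise-separable convolutions shrink models.
k, c_in, c_out = 3, 256, 256

standard = k * k * c_in * c_out                    # ordinary convolution
depthwise_separable = k * k * c_in + c_in * c_out  # depthwise + 1x1 pointwise

print(standard)              # 589824
print(depthwise_separable)   # 67840
print(round(standard / depthwise_separable, 1))  # 8.7x fewer parameters
```

Repeated across dozens of layers, this roughly order-of-magnitude reduction is what makes deployment on terminal devices feasible.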
The lack of interpretability in deep learning models further limits their practical application, as agricultural practitioners often require transparent decision-making processes to trust model predictions and take appropriate actions. Future efforts should emphasize the development of interpretable frameworks that provide actionable insights, thereby bridging the gap between technological advancements and practical applications in agriculture.
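One simple, model-agnostic route toward the transparency called for above is occlusion sensitivity: slide a masking patch over the input and record how much the prediction drops, yielding a heatmap of the regions the model relies on. The sketch below uses a stand-in scoring function on a synthetic lesion (an illustrative assumption); in practice the score would be a trained classifier's disease probability.

```python
import numpy as np

# Occlusion sensitivity: mask each patch of the image in turn and
# record how much the disease score drops. The "model" here is a
# stand-in that scores the mean intensity of a fixed lesion region.
LESION = (slice(8, 16), slice(8, 16))

def disease_score(img):
    return float(img[LESION].mean())

img = np.zeros((32, 32))
img[LESION] = 1.0                      # bright synthetic lesion

base = disease_score(img)
patch, heat = 8, np.zeros((4, 4))
for i in range(4):
    for j in range(4):
        occluded = img.copy()
        occluded[i*patch:(i+1)*patch, j*patch:(j+1)*patch] = 0.0
        heat[i, j] = base - disease_score(occluded)  # score drop

# The grid cell covering the lesion shows the largest drop.
i, j = map(int, np.unravel_index(heat.argmax(), heat.shape))
print((i, j))  # (1, 1)
```

Overlaying such a heatmap on the leaf image gives practitioners a visual check that the model is responding to the lesion rather than to background artifacts.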
Addressing these current limitations is critical for unlocking the full potential of deep learning in plant pest and disease management [183]. By overcoming challenges in data, computation, and interpretability, while leveraging emerging opportunities such as multimodal data integration and model lightweighting, deep learning has the potential to drive sustainable development in agriculture.
In summary, deep learning holds immense potential and research value in plant pest and disease detection due to its exceptional performance in image processing. This study provides a comprehensive review of research on the application of deep learning in this field, covering plant pest and disease image classification, semantic segmentation, object detection, change detection, pest and disease prediction, and the application of large models. Overall, while CNNs and their variants have achieved significant success in plant pest and disease detection, their widespread adoption in practical agricultural production remains limited. Effectively integrating agricultural knowledge with deep learning models, and translating research findings into real-world applications, remains a substantial challenge. However, with deeper interdisciplinary collaboration and continuous technological iteration, the application potential of deep learning in plant pest and disease detection will be further realized, providing robust support for intelligent and precise agricultural management.
Author Contributions: Conceptualization, S.W., D.X. and Y.B.; methodology, S.W., D.X., H.L. and X.L.; investigation, C.S., J.Z. and W.W.; writing—original draft preparation, D.X. and W.W.; writing—review and editing, S.W., H.L. and Y.B.; visualization, C.S., J.Z. and X.L.; supervision, S.W. All authors have read and agreed to the published version of the manuscript.
Data Availability Statement: The present study did not produce any new data, code, or materials. All analyses and findings were derived from existing publicly available datasets and established methodologies. Consequently, no additional resources were developed or generated in the course of this research.
Conflicts of Interest: The authors declare no conflicts of interest.
Footnotes
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Figure 3. The VGG16 neural network model, widely employed in the classification of plant disease and pest images.
Figure 4. Two-stage object detection algorithm based on generating region proposals.
Figure 6. Workflow of semantic segmentation for plant disease and pest images based on deep learning.
Figure 7. The structure of LSTM cells. LSTM is commonly employed to model temporal sequences of plant disease and pest data, capturing dynamic variations for accurate detection and risk prediction.
Basic information of commonly used datasets.
Dataset Name | Contributors | Data Content |
---|---|---|
ACFR Orchard Fruit Dataset | The University of Sydney, Australia and the Australian Centre for Field Robotics | Apple, Mango, and Apricot |
Date Fruit Dataset | King Saud University | Jujube |
Fruit Recognition Dataset | Crop Disease Dataset | Fruit Photos |
Multi-species fruit flower detection | United States Department of Agriculture (USDA) | Fruit Flowering Stage |
PlantVillage Dataset | Cornell University, Marcel Salathé’s Team | Recognition and Image Classification |
PlantDoc | RML Information Services Pvt. Ltd. in India | Diseases and Providing Relevant Information |
WHU-HI (Unmanned Aerial Vehicle Hyperspectral Dataset) | Wuhan University | Accurate Crop Classification and Hyperspectral Image Classification |
RiceSeedling Dataset | Remote Sensing | Rice Object Detection and Rice Seedling Classification |
Purple rapeseed leaves Dataset | | Segmentation of Purple Rapeseed Leaves |
Stewart NLBimages 2019 | Gore Lab | Detection of Northern Corn Leaf Blight |
Field Images of Maize Annotated with Disease Symptoms | Tyr Wiesner-Hanks, Mohammed Brahimi | Corn Disease Detection |
RSC | Jintao Wu | Rice Counting |
Mendeley Data | Yousuf Rayhan Emon | Healthy and Diseased Leaf Dataset of Sweet Oranges |
Improvements to YOLO series models and their applications in the field of pest and disease detection.
Application Scenario | Model | Methodology | Result |
---|---|---|---|
Maize pest detection | Improved YOLOv3 | Two residual units are added to the second residual block of the original YOLOv3 network. | 77.2% (mAP) |
Apple diseases | YOLOV3-Dense | Densely connected neural network is utilized to optimize feature layers of the YOLO-V3 model. | 91.7% (IoU) |
Maize pest area detection | MAF–YOLOv4 | Propose a multi-scale mixed attention mechanism to selectively integrate effective information from auxiliary features across different scales. | 80.1% (AP) |
Rice pest detection | YOLO-GBS | The YOLO-GBS is enhanced with an added detection head, GC attention, BiFPN, and Swin Transformer for improved detection and feature fusion. | 79.8% (mAP) |
Pest detection | Pest-YOLO | A squeeze-and-excitation module is added to the CNN for key feature extraction, and a cross-stage fusion method improves the feature pyramid, enhancing detection of small targets such as pests. | 71.6% (mAP)
Forest pest detection | ds-YOLOv3-tiny | DenseNet replaces the feature extraction network for better semantic features, and Swish activation replaces Leaky ReLU to reduce information loss. | 81.2% (mAP) |
Rice pests and diseases detection | MobileNetv2-YOLOv4 | MobileNetv2 replaces CSPDarknet53 in YOLOv4 to reduce parameters, and focal loss balances sample recognition during training. | 90.5% (mAP) |
Detection method for insect pests of the Papilionidae family | ASFL-YOLOX | Using the Tanh-Softplus activation function, integrating an efficient channel attention mechanism, adopting an adaptive spatial feature fusion module. | 95.7% (mAP) |
Detection and classification of peanut diseases | Optimized YOLOv5 | The IASM mechanism enhances accuracy and efficiency, reduces model weights with GhostNet and WBF, and accelerates feature learning using BiFPN and fast normalization fusion. | 92.9% (F1) |
Supplementary Materials
The following are available online at
References
1. IPPC Secretariat. Scientific Review of the Impact of Climate Change on Plant Pests; FAO on behalf of the IPPC Secretariat: Rome, Italy, 2021; ISBN 978-92-5-134435-4.
2. Phophi, M.M.; Mafongoya, P.L. Constraints to Vegetable Production Resulting from Pest and Diseases Induced by Climate Change and Globalization: A Review. J. Agric. Sci.; 2017; 9, 11. [DOI: https://dx.doi.org/10.5539/jas.v9n10p11]
3. Wang, X.; Gan, P.; Tang, C.; Kang, Z. Plant Disease Resistance and Disease Green Prevention and Control: Major Scientific Issues and Future Research Directions. Bull. Natl. Nat. Sci. Found. China; 2020; 34, pp. 381-392. [DOI: https://dx.doi.org/10.16262/j.cnki.1000-8217.2020.04.003]
4. Li, M.; Li, H.; Ding, X.; Wang, L.; Wang, X.; Chen, F. The Detection of Pine Wilt Disease: A Literature Review. Int. J. Mol. Sci.; 2022; 23, 10797. [DOI: https://dx.doi.org/10.3390/ijms231810797] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/36142710]
5. Yang, Z.; Ma, W.; Lu, J.; Tian, Z.; Peng, K. The Application Status and Trends of Machine Vision in Tea Production. Appl. Sci.; 2023; 13, 10744. [DOI: https://dx.doi.org/10.3390/app131910744]
6. Zhong, E. Deep Mapping—A Critical Engagement of Cartography with Neuroscience. Geomat. Inf. Sci. Wuhan Univ.; 2022; 47, pp. 1988-2002. [DOI: https://dx.doi.org/10.13203/j.whugis20220382]
7. Tian, X.; Bai, Y.; Li, G.; Yang, X.; Huang, J.; Chen, Z. An Adaptive Feature Fusion Network with Superpixel Optimization for Crop Classification Using Sentinel-2 Imagery. Remote Sens.; 2023; 15, 1990. [DOI: https://dx.doi.org/10.3390/rs15081990]
8. Yang, X.; Zhang, B.; Chen, Z.; Bai, Y.; Chen, P. A Multi-Temporal Network for Improving Semantic Segmentation of Large-Scale Landsat Imagery. Remote Sens.; 2022; 14, 5062. [DOI: https://dx.doi.org/10.3390/rs14195062]
9. Zhong, Y.; Wang, S.; Liang, H.; Wang, Z.; Zhang, X.; Chen, X.; Su, C. ReCovNet: Reinforcement Learning with Covering Information for Solving Maximal Coverage Billboards Location Problem. Int. J. Appl. Earth Obs. Geoinf.; 2024; 128, 103710. [DOI: https://dx.doi.org/10.1016/j.jag.2024.103710]
10. Liang, H.; Wang, S.; Li, H.; Zhou, L.; Chen, H.; Zhang, X.; Chen, X. Sponet: Solve Spatial Optimization Problem Using Deep Reinforcement Learning for Urban Spatial Decision Analysis. Int. J. Digit. Earth; 2023; 17, 2299211. [DOI: https://dx.doi.org/10.1080/17538947.2023.2299211]
11. Wang, M.; Zhang, X.; Niu, X.; Wang, F.; Zhang, X. Scene Classification of High-Resolution Remotely Sensed Image Based on ResNet. J. Geovisualization Spat. Anal.; 2019; 3, 16. [DOI: https://dx.doi.org/10.1007/s41651-019-0039-9]
12. Liu, L.; Ouyang, W.; Wang, X.; Fieguth, P.; Chen, J.; Liu, X.; Pietikäinen, M. Deep Learning for Generic Object Detection: A Survey. Int. J. Comput. Vis.; 2020; 128, pp. 261-318. [DOI: https://dx.doi.org/10.1007/s11263-019-01247-4]
13. Gong, X.; Liu, L.; Huang, Y.; Zou, B.; Sun, Y.; Luo, L.; Lin, Y. A Pruned Feed-Forward Neural Network (Pruned-FNN) Approach to Measure Air Pollution Exposure. Environ. Monit. Assess.; 2023; 195, 1183. [DOI: https://dx.doi.org/10.1007/s10661-023-11814-5] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/37695355]
14. Yang, X.; Li, S.; Chen, Z.; Chanussot, J.; Jia, X.; Zhang, B.; Li, B.; Chen, P. An Attention-Fused Network for Semantic Segmentation of Very-High-Resolution Remote Sensing Imagery. ISPRS J. Photogramm. Remote Sens.; 2021; 177, pp. 238-262. [DOI: https://dx.doi.org/10.1016/j.isprsjprs.2021.05.004]
15. Krizhevsky, A. One Weird Trick for Parallelizing Convolutional Neural Networks. arXiv; 2014; [DOI: https://dx.doi.org/10.48550/ARXIV.1404.5997]
16. Paymode, A.S.; Malode, V.B. Transfer Learning for Multi-Crop Leaf Disease Image Classification Using Convolutional Neural Network VGG. Artif. Intell. Agric.; 2022; 6, pp. 23-33. [DOI: https://dx.doi.org/10.1016/j.aiia.2021.12.002]
17. Kolhar, S.; Jagtap, J. Plant Trait Estimation and Classification Studies in Plant Phenotyping Using Machine Vision—A Review. Inf. Process. Agric.; 2023; 10, pp. 114-135. [DOI: https://dx.doi.org/10.1016/j.inpa.2021.02.006]
18. Rani, A.P.A.S.; Singh, N.S. Improved Detection and Classification of Multiple Tasks in Paddy Crops Using Optimized Deep Belief Networks with Bidirectional Long-Short Term Memory. SN Comput. Sci.; 2024; 5, 831. [DOI: https://dx.doi.org/10.1007/s42979-024-03136-1]
19. Ngugi, L.C.; Abelwahab, M.; Abo-Zahhad, M. Recent Advances in Image Processing Techniques for Automated Leaf Pest and Disease Recognition—A Review. Inf. Process. Agric.; 2021; 8, pp. 27-51. [DOI: https://dx.doi.org/10.1016/j.inpa.2020.04.004]
20. Bezabh, Y.A.; Salau, A.O.; Abuhayi, B.M.; Mussa, A.A.; Ayalew, A.M. CPD-CCNN: Classification of Pepper Disease Using a Concatenation of Convolutional Neural Network Models. Sci. Rep.; 2023; 13, 15581. [DOI: https://dx.doi.org/10.1038/s41598-023-42843-2]
21. Wang, X.; Zhang, C.; Zhang, S.; Zhu, Y. Forecasting of cotton diseases and pests based on adaptive discriminant deep belief network. Trans. Chin. Soc. Agric. Eng.; 2018; 34, pp. 157-164.
22. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E. et al. The PRISMA 2020 Statement: An Updated Guideline for Reporting Systematic Reviews. BMJ; 2021; 372, n71. [DOI: https://dx.doi.org/10.1136/bmj.n71]
23. Huang, Y.; He, J.; Liu, G.; Li, D.; Hu, R.; Hu, X.; Bian, D. YOLO-EP: A Detection Algorithm to Detect Eggs of Pomacea Canaliculata in Rice Fields. Ecol. Inform.; 2023; 77, 102211. [DOI: https://dx.doi.org/10.1016/j.ecoinf.2023.102211]
24. Johannes, A.; Picon, A.; Alvarez-Gila, A.; Echazarra, J.; Rodriguez-Vaamonde, S.; Navajas, A.D.; Ortiz-Barredo, A. Automatic Plant Disease Diagnosis Using Mobile Capture Devices, Applied on a Wheat Use Case. Comput. Electron. Agric.; 2017; 138, pp. 200-209. [DOI: https://dx.doi.org/10.1016/j.compag.2017.04.013]
25. Singh, V.; Misra, A.K. Detection of Plant Leaf Diseases Using Image Segmentation and Soft Computing Techniques. Inf. Process. Agric.; 2017; 4, pp. 41-49. [DOI: https://dx.doi.org/10.1016/j.inpa.2016.10.005]
26. Lu, Y.; Yi, S.; Zeng, N.; Liu, Y.; Zhang, Y. Identification of Rice Diseases Using Deep Convolutional Neural Networks. Neurocomputing; 2017; 267, pp. 378-384. [DOI: https://dx.doi.org/10.1016/j.neucom.2017.06.023]
27. Ferentinos, K.P. Deep Learning Models for Plant Disease Detection and Diagnosis. Comput. Electron. Agric.; 2018; 145, pp. 311-318. [DOI: https://dx.doi.org/10.1016/j.compag.2018.01.009]
28. Lu, J.; Hu, J.; Zhao, G.; Mei, F.; Zhang, C. An In-Field Automatic Wheat Disease Diagnosis System. Comput. Electron. Agric.; 2017; 142, pp. 369-379. [DOI: https://dx.doi.org/10.1016/j.compag.2017.09.012]
29. Mehdipour Ghazi, M.; Yanikoglu, B.; Aptoula, E. Plant Identification Using Deep Neural Networks via Optimization of Transfer Learning Parameters. Neurocomputing; 2017; 235, pp. 228-235. [DOI: https://dx.doi.org/10.1016/j.neucom.2017.01.018]
30. Too, E.C.; Yujian, L.; Njuki, S.; Yingchun, L. A Comparative Study of Fine-Tuning Deep Learning Models for Plant Disease Identification. Comput. Electron. Agric.; 2019; 161, pp. 272-279. [DOI: https://dx.doi.org/10.1016/j.compag.2018.03.032]
31. Altaheri, H.; Alsulaiman, M.; Muhammad, G.; Amin, S.U.; Bencherif, M.; Mekhtiche, M. Date Fruit Dataset for Intelligent Harvesting. Data Brief; 2019; 26, 104514. [DOI: https://dx.doi.org/10.1016/j.dib.2019.104514]
32. Singh, D.; Jain, N.; Jain, P.; Kayal, P.; Kumawat, S.; Batra, N. PlantDoc: A Dataset for Visual Plant Disease Detection. Proceedings of the 7th ACM IKDD CoDS and 25th COMAD ACM; Hyderabad, India, 5 January 2020; pp. 249-253.
33. Mohanty, S.P.; Hughes, D.P.; Salathé, M. Using Deep Learning for Image-Based Plant Disease Detection. Front. Plant Sci.; 2016; 7, 1419. [DOI: https://dx.doi.org/10.3389/fpls.2016.01419]
34. Zhong, Y.; Hu, X.; Luo, C.; Wang, X.; Zhao, J.; Zhang, L. WHU-Hi: UAV-Borne Hyperspectral with High Spatial Resolution (H2) Benchmark Datasets and Classifier for Precise Crop Identification Based on Deep Convolutional Neural Network with CRF. Remote Sens. Environ.; 2020; 250, 112012. [DOI: https://dx.doi.org/10.1016/j.rse.2020.112012]
35. Cui, J.; Zheng, H.; Zeng, Z.; Yang, Y.; Ma, R.; Tian, Y.; Tan, J.; Feng, X.; Qi, L. Real-Time Missing Seedling Counting in Paddy Fields Based on Lightweight Network and Tracking-by-Detection Algorithm. Comput. Electron. Agric.; 2023; 212, 108045. [DOI: https://dx.doi.org/10.1016/j.compag.2023.108045]
36. Zhang, J.; Xie, T.; Yang, C.; Song, H.; Jiang, Z.; Zhou, G.; Zhang, D.; Feng, H.; Xie, J. Segmenting Purple Rapeseed Leaves in the Field from UAV RGB Imagery Using Deep Learning as an Auxiliary Means for Nitrogen Stress Detection. Remote Sens.; 2020; 12, 1403. [DOI: https://dx.doi.org/10.3390/rs12091403]
37. Wiesner-Hanks, T.; Stewart, E.L.; Kaczmar, N.; DeChant, C.; Wu, H.; Nelson, R.J.; Lipson, H.; Gore, M.A. Image Set for Deep Learning: Field Images of Maize Annotated with Disease Symptoms. BMC Res. Notes; 2018; 11, 440. [DOI: https://dx.doi.org/10.1186/s13104-018-3548-6]
38. Emon, Y.R.; Ahad, M.T.; Rabbany, G. Multi-Format Open-Source Sweet Orange Leaf Dataset for Disease Detection, Classification, and Analysis. Data Brief; 2024; 55, 110713. [DOI: https://dx.doi.org/10.1016/j.dib.2024.110713] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/39100782]
39. Ramcharan, A.; Baranowski, K.; McCloskey, P.; Ahmed, B.; Legg, J.; Hughes, D.P. Deep Learning for Image-Based Cassava Disease Detection. Front. Plant Sci.; 2017; 8, 1852. [DOI: https://dx.doi.org/10.3389/fpls.2017.01852]
40. Ai, Y.; Sun, C.; Tie, J.; Cai, X. Research on Recognition Model of Crop Diseases and Insect Pests Based on Deep Learning in Harsh Environments. IEEE Access; 2020; 8, pp. 171686-171693. [DOI: https://dx.doi.org/10.1109/ACCESS.2020.3025325]
41. Alfarisy, A.A.; Chen, Q.; Guo, M. Deep Learning Based Classification for Paddy Pests & Diseases Recognition. Proceedings of the 2018 International Conference on Mathematics and Artificial Intelligence, ACM; Chengdu, China, 20 April 2018; pp. 21-25.
42. Liu, J.; Wang, X. Plant Diseases and Pests Detection Based on Deep Learning: A Review. Plant Methods; 2021; 17, 22. [DOI: https://dx.doi.org/10.1186/s13007-021-00722-9]
43. Arivazhagan, S.; Shebiah, R.N.; Ananthi, S.; Varthini, S.V. Detection of Unhealthy Region of Plant Leaves and Classification of Plant Leaf Diseases Using Texture Features. Agric. Eng. Int. CIGR J.; 2013; 15, pp. 211-217.
44. Nisar, N.; Awasthi, A.; Chhabra, M.; Abidi, A.I. Image Based Recognition of Plant Leaf Diseases: A Review. Proceedings of the 2020 Fourth International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC), IEEE; Palladam, India, 7 October 2020; pp. 373-378.
45. Ahmad Loti, N.N.; Mohd Noor, M.R.; Chang, S. Integrated Analysis of Machine Learning and Deep Learning in Chili Pest and Disease Identification. J. Sci. Food Agric.; 2021; 101, pp. 3582-3594. [DOI: https://dx.doi.org/10.1002/jsfa.10987] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/33275806]
46. Padol, P.B.; Yadav, A.A. SVM Classifier Based Grape Leaf Disease Detection. Proceedings of the 2016 Conference on Advances in Signal Processing (CASP); Pune, India, 9–11 June 2016; pp. 175-179.
47. Pydipati, R.; Burks, T.F.; Lee, W.S. Identification of Citrus Disease Using Color Texture Features and Discriminant Analysis. Comput. Electron. Agric.; 2006; 52, pp. 49-59. [DOI: https://dx.doi.org/10.1016/j.compag.2006.01.004]
48. Otsu, N. A Threshold Selection Method from Gray-Level Histograms. IEEE Trans. Syst. Man Cybern.; 1979; 9, pp. 62-66. [DOI: https://dx.doi.org/10.1109/TSMC.1979.4310076]
49. Chakraborty, S.; Paul, S.; Rahat-uz-Zaman, M. Prediction of Apple Leaf Diseases Using Multiclass Support Vector Machine. Proceedings of the 2021 2nd International Conference on Robotics, Electrical and Signal Processing Techniques (ICREST); Dhaka, Bangladesh, 5 January 2021; pp. 147-151.
50. Weizheng, S.; Yachun, W.; Zhanliang, C.; Hongda, W. Grading Method of Leaf Spot Disease Based on Image Processing. Proceedings of the 2008 International Conference on Computer Science and Software Engineering; Wuhan, China, 12–14 December 2008; pp. 491-494.
51. Li, B.; Liu, Z.; Huang, J.; Zhang, L.; Zhou, W.; Shi, J. Hyperspectral Identification of Rice Diseases and Pests Based on Principal Component Analysis and Probabilistic Neural Network. Trans. Chin. Soc. Agric. Eng.; 2009; 25, pp. 143-147. [DOI: https://dx.doi.org/10.3969/j.issn.1002-6819.2009.09.026]
52. Huang, L.; Liu, W.; Huang, W.; Zhao, J.; Song, F. Remote Sensing Monitoring of Winter Wheat Powdery Mildew Based on Wavelet Analysis and Support Vector Machine. Trans. Chin. Soc. Agric. Eng.; 2017; 33, pp. 188-195. [DOI: https://dx.doi.org/10.11975/j.issn.1002-6819.2017.14.026]
53. Lu, B.; Ge, Y.; Qin, K.; Zheng, J. A Review on Geographically Weighted Regression. Geomat. Inf. Sci. Wuhan Univ.; 2020; 45, pp. 1356-1366. [DOI: https://dx.doi.org/10.13203/j.whugis20190346]
54. Chaudhary, A.; Kolhe, S.; Kamal, R. An Improved Random Forest Classifier for Multi-Class Classification. Inf. Process. Agric.; 2016; 3, pp. 215-222. [DOI: https://dx.doi.org/10.1016/j.inpa.2016.08.002]
55. Arnal Barbedo, J.G. Digital Image Processing Techniques for Detecting, Quantifying and Classifying Plant Diseases. SpringerPlus; 2013; 2, 660. [DOI: https://dx.doi.org/10.1186/2193-1801-2-660] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/24349961]
56. Domingues, T.; Brandão, T.; Ferreira, J.C. Machine Learning for Detection and Prediction of Crop Diseases and Pests: A Comprehensive Survey. Agriculture; 2022; 12, 1350. [DOI: https://dx.doi.org/10.3390/agriculture12091350]
57. Karthik, R.; Hariharan, M.; Anand, S.; Mathikshara, P.; Johnson, A.; Menaka, R. Attention Embedded Residual CNN for Disease Detection in Tomato Leaves. Appl. Soft Comput.; 2020; 86, 105933. [DOI: https://dx.doi.org/10.1016/j.asoc.2019.105933]
58. Shoaib, M.; Shah, B.; EI-Sappagh, S.; Ali, A.; Ullah, A.; Alenezi, F.; Gechev, T.; Hussain, T.; Ali, F. An Advanced Deep Learning Models-Based Plant Disease Detection: A Review of Recent Research. Front. Plant Sci.; 2023; 14, 1158933. [DOI: https://dx.doi.org/10.3389/fpls.2023.1158933]
59. Mahlein, A.-K.; Steiner, U.; Hillnhütter, C.; Dehne, H.-W.; Oerke, E.-C. Hyperspectral Imaging for Small-Scale Analysis of Symptoms Caused by Different Sugar Beet Diseases. Plant Methods; 2012; 8, 3. [DOI: https://dx.doi.org/10.1186/1746-4811-8-3] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/22273513]
60. Wang, J.; Wu, Y.; Liao, Y.; Chen, Y. Hyperspectral classification of citrus diseased leaves based on convolutional neural network. Inf. Technol. Informatiz.; 2020; 3, pp. 84-87.
61. Wang, Z.; Zhang, S.; Zhao, B. Crop Disease Leaf Segmentation Based on Cascaded Convolutional Neural Network. J. Comput. Eng. Appl.; 2020; 56, pp. 242-250.
62. Jiang, P.; Chen, Y.; Liu, B.; He, D.; Liang, C. Real-Time Detection of Apple Leaf Diseases Using Deep Learning Approach Based on Improved Convolutional Neural Networks. IEEE Access; 2019; 7, pp. 59069-59080. [DOI: https://dx.doi.org/10.1109/ACCESS.2019.2914929]
63. Yadav, P.K.; Burks, T.; Qin, J.; Kim, M.; Frederick, Q.K.; Dewdney, M.M.; Ritenour, M.A. Citrus Disease Classification with Convolution Neural Network Generated Features and Machine Learning Classifiers on Hyperspectral Image Data. Autonomous Air and Ground Sensing Systems for Agricultural Optimization and Phenotyping VIII; Bauer, C.; Thomasson, J.A. SPIE: Orlando, FL, USA, 13 June 2023; 5.
64. Bezabh, Y.A.; Ayalew, A.M.; Abuhayi, B.M.; Demlie, T.N.; Awoke, E.A.; Mengistu, T.E. Classification of Mango Disease Using Ensemble Convolutional Neural Network. Smart Agric. Technol.; 2024; 8, 100476. [DOI: https://dx.doi.org/10.1016/j.atech.2024.100476]
65. Wen, C.; Wang, Q.; Chen, H.; Wu, J.; Ni, J.; Yang, C.; Su, H. Model for the Recognition of Large-Scale Multi-Class Diseases and Pests. Trans. Chin. Soc. Agric. Eng.; 2022; 38, pp. 169-177. [DOI: https://dx.doi.org/10.11975/j.issn.1002-6819.2022.08.020]
66. Shrivastava, V.K.; Pradhan, M.K.; Minz, S.; Thakur, M.P. Rice Plant Disease Classification Using Transfer Learning of Deep Convolution Neural Network. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci.; 2019; XLII-3/W6, pp. 631-635. [DOI: https://dx.doi.org/10.5194/isprs-archives-XLII-3-W6-631-2019]
67. Ma, J.; Du, K.; Zheng, F.; Zhang, L.; Gong, Z.; Sun, Z. A Recognition Method for Cucumber Diseases Using Leaf Symptom Images Based on Deep Convolutional Neural Network. Comput. Electron. Agric.; 2018; 154, pp. 18-24. [DOI: https://dx.doi.org/10.1016/j.compag.2018.08.048]
68. Sladojevic, S.; Arsenovic, M.; Anderla, A.; Culibrk, D.; Stefanovic, D. Deep Neural Networks Based Recognition of Plant Diseases by Leaf Image Classification. Comput. Intell. Neurosci.; 2016; 3289801. [DOI: https://dx.doi.org/10.1155/2016/3289801] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/27418923]
69. Abdulridha, J.; Ampatzidis, Y.; Ehsani, R.; De Castro, A.I. Evaluating the Performance of Spectral Features and Multivariate Analysis Tools to Detect Laurel Wilt Disease and Nutritional Deficiency in Avocado. Comput. Electron. Agric.; 2018; 155, pp. 203-211. [DOI: https://dx.doi.org/10.1016/j.compag.2018.10.016]
70. Susič, N.; Žibrat, U.; Širca, S.; Strajnar, P.; Razinger, J.; Knapič, M.; Vončina, A.; Urek, G.; Gerič Stare, B. Discrimination between Abiotic and Biotic Drought Stress in Tomatoes Using Hyperspectral Imaging. Sens. Actuators B Chem.; 2018; 273, pp. 842-852. [DOI: https://dx.doi.org/10.1016/j.snb.2018.06.121]
71. Mousavi, S.A.; Hanifeloo, Z.; Sumari, P.; Arshad, M.R.M. Enhancing the Diagnosis of Corn Pests Using Gabor Wavelet Features and SVM Classification. J. Sci. Ind. Res.; 2016; 75, pp. 349-354.
72. Faqe Ibrahim, G.R.; Rasul, A.; Abdullah, H. Improving Crop Classification Accuracy with Integrated Sentinel-1 and Sentinel-2 Data: A Case Study of Barley and Wheat. J. Geovisualization Spat. Anal.; 2023; 7, 22. [DOI: https://dx.doi.org/10.1007/s41651-023-00152-2]
73. Wahabzada, M.; Mahlein, A.-K.; Bauckhage, C.; Steiner, U.; Oerke, E.-C.; Kersting, K. Plant Phenotyping Using Probabilistic Topic Models: Uncovering the Hyperspectral Language of Plants. Sci. Rep.; 2016; 6, 22482. [DOI: https://dx.doi.org/10.1038/srep22482] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/26957018]
74. Gao, B.; Lu, A.; Pan, Y.; Huo, L.; Gao, Y.; Li, X.; Li, S.; Chen, Z. Additional Sampling Layout Optimization Method for Environmental Quality Grade Classifications of Farmland Soil. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens.; 2017; 10, pp. 5350-5358. [DOI: https://dx.doi.org/10.1109/JSTARS.2017.2753467]
75. Mallick, M.T.; Biswas, S.; Das, A.K.; Saha, H.N.; Chakrabarti, A.; Deb, N. Deep Learning Based Automated Disease Detection and Pest Classification in Indian Mung Bean. Multimed. Tools Appl.; 2023; 82, pp. 12017-12041. [DOI: https://dx.doi.org/10.1007/s11042-022-13673-7]
76. Rahman, C.R.; Arko, P.S.; Ali, M.E.; Iqbal Khan, M.A.; Apon, S.H.; Nowrin, F.; Wasif, A. Identification and Recognition of Rice Diseases and Pests Using Convolutional Neural Networks. Biosyst. Eng.; 2020; 194, pp. 112-120. [DOI: https://dx.doi.org/10.1016/j.biosystemseng.2020.03.020]
77. Ahila Priyadharshini, R.; Arivazhagan, S.; Arun, M.; Mirnalini, A. Maize Leaf Disease Classification Using Deep Convolutional Neural Networks. Neural Comput. Appl.; 2019; 31, pp. 8887-8895. [DOI: https://dx.doi.org/10.1007/s00521-019-04228-3]
78. Upreti, K.; Singh, P.; Jain, D.; Pandey, A.K.; Gupta, A.; Singh, H.R.; Srivastava, S.K.; Prasad, J.S. Progressive Loss-Aware Fine-Tuning Stepwise Learning with GAN Augmentation for Rice Plant Disease Detection. Multimed. Tools Appl.; 2024; 83, pp. 84565-84588. [DOI: https://dx.doi.org/10.1007/s11042-024-19255-z]
79. Jesie, R.S.; Godwin Premi, M.S.; Jarin, T. Comparative Analysis of Paddy Leaf Diseases Sensing with a Hybrid Convolutional Neural Network Model. Meas. Sens.; 2024; 31, 100966. [DOI: https://dx.doi.org/10.1016/j.measen.2023.100966]
80. Mahadevan, K.; Punitha, A.; Suresh, J. Automatic Recognition of Rice Plant Leaf Diseases Detection Using Deep Neural Network with Improved Threshold Neural Network. E-Prime-Adv. Electr. Eng. Electron. Energy; 2024; 8, 100534. [DOI: https://dx.doi.org/10.1016/j.prime.2024.100534]
81. Dutta, M.; Gupta, D.; Gulzar, Y.; Mir, M.S.; Onn, C.W.; Soomro, A.B. Leveraging Inception V3 for Precise Early and Late Blight Disease Classification in Potato Crops. Trait. Signal; 2024; 41, pp. 705-715. [DOI: https://dx.doi.org/10.18280/ts.410213]
82. Liang, W.; Zhang, H.; Zhang, G.; Cao, H. Rice Blast Disease Recognition Using a Deep Convolutional Neural Network. Sci. Rep.; 2019; 9, 2869. [DOI: https://dx.doi.org/10.1038/s41598-019-38966-0] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/30814523]
83. Li, J.; Lin, L.; Tian, K.; Alaa, A.A. Detection of Leaf Diseases of Balsam Pear in the Field Based on Improved Faster R-CNN. Trans. Chin. Soc. Agric. Eng.; 2020; 36, pp. 179-185.
84. Zhang, X.; Han, L.; Dong, Y.; Shi, Y.; Huang, W.; Han, L.; González-Moreno, P.; Ma, H.; Ye, H.; Sobeih, T. A Deep Learning-Based Approach for Automated Yellow Rust Disease Detection from High-Resolution Hyperspectral UAV Images. Remote Sens.; 2019; 11, 1554. [DOI: https://dx.doi.org/10.3390/rs11131554]
85. Deng, X.; Tong, Z.; Lan, Y.; Huang, Z. Detection and Location of Dead Trees with Pine Wilt Disease Based on Deep Learning and UAV Remote Sensing. AgriEngineering; 2020; 2, pp. 294-307. [DOI: https://dx.doi.org/10.3390/agriengineering2020019]
86. Wang, Z.; Qiao, L.; Wang, M.; Hu, R.; Yue, Y.; Chen, S. Agricultural Pest Detection Algorithm Based on Improved Faster RCNN. Proceedings of the International Conference on Computer Vision and Pattern Analysis (ICCPA 2021); Hangzhou, China, 19–21 November 2021; SPIE: Guangzhou, China, 2022; 8.
87. Jacob, A.; Weixiu, S. Application of Target Detection Algorithm Based on Deep Learning in Farmland Pest Recognition. Int. J. Artif. Intell. Appl.; 2020; 11, pp. 1-10. [DOI: https://dx.doi.org/10.5121/ijaia.2020.11301]
88. Li, M.-T.; Lee, S.-H. A Study on Small Pest Detection Based on a CascadeR-CNN-Swin Model. Comput. Mater. Contin.; 2022; 72, pp. 6155-6165. [DOI: https://dx.doi.org/10.32604/cmc.2022.025714]
89. Hong, D.; Gao, L.; Yokoya, N.; Yao, J.; Chanussot, J.; Du, Q.; Zhang, B. More Diverse Means Better: Multimodal Deep Learning Meets Remote-Sensing Imagery Classification. IEEE Trans. Geosci. Remote Sens.; 2021; 59, pp. 4340-4354. [DOI: https://dx.doi.org/10.1109/TGRS.2020.3016820]
90. Lv, J.; Li, W.; Fan, M.; Zheng, T.; Yang, Z.; Chen, Y.; He, G.; Yang, X.; Liu, S.; Sun, C. Detecting Pests From Light-Trapping Images Based on Improved YOLOv3 Model and Instance Augmentation. Front. Plant Sci.; 2022; 13, 939498. [DOI: https://dx.doi.org/10.3389/fpls.2022.939498] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/35873992]
91. Tian, Y.; Yang, G.; Wang, Z.; Li, E.; Liang, Z. Detection of Apple Lesions in Orchards Based on Deep Learning Methods of CycleGAN and YOLOV3-Dense. J. Sens.; 2019; 2019, 7630926. [DOI: https://dx.doi.org/10.1155/2019/7630926]
92. Li, X.; Li, S.; Liu, B. Apple Leaf Disease Detection Method Based on Improved Faster R-CNN. Comput. Eng.; 2021; 47, pp. 298-304.
93. Zhang, W.; Sun, Y.; Huang, H.; Pei, H.; Sheng, J.; Yang, P. Pest Region Detection in Complex Backgrounds via Contextual Information and Multi-Scale Mixed Attention Mechanism. Agriculture; 2022; 12, 1104. [DOI: https://dx.doi.org/10.3390/agriculture12081104]
94. Hu, Y.; Deng, X.; Lan, Y.; Chen, X.; Long, Y.; Liu, C. Detection of Rice Pests Based on Self-Attention Mechanism and Multi-Scale Feature Fusion. Insects; 2023; 14, 280. [DOI: https://dx.doi.org/10.3390/insects14030280]
95. Li, D.; Ahmed, F.; Wu, N.; Sethi, A.I. YOLO-JD: A Deep Learning Network for Jute Diseases and Pests Detection from Images. Plants; 2022; 11, 937. [DOI: https://dx.doi.org/10.3390/plants11070937] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/35406915]
96. Tang, Z.; Chen, Z.; Qi, F.; Zhang, L.; Chen, S. Pest-YOLO: Deep Image Mining and Multi-Feature Fusion for Real-Time Agriculture Pest Detection. Proceedings of the 2021 IEEE International Conference on Data Mining (ICDM); Auckland, New Zealand, 7–10 December 2021; pp. 1348-1353.
97. Yin, X.; Song, Y.; He, T.; Song, B.; Li, G. Forest Pest Detection Method Based on DenseNet and Pyramid Pooling. Proceedings of the 2021 7th International Conference on Computer and Communications (ICCC); Chengdu, China, 10 December 2021; pp. 641-645.
98. Zhou, Y.; Liu, W.; Luo, Y.; Zong, S. Small Object Detection for Infected Trees Based on the Deep Learning Method. Sci. Silvae Sin.; 2021; 57, pp. 98-107.
99. Wang, Z.; Ma, F.; Zhang, Y.; Ji, P.; Cao, M. Rice Pest Detection Based on MobileNetv2-YOLOv4. Proceedings of the 2022 China Automation Congress (CAC); Xiamen, China, 25 November 2022; pp. 2499-2503.
100. Gao, W.; Feng, S.; Feng, Q.; Li, X.; Gao, X. Cotton Leaf Disease Detection Method Based on Improved SSD. Int. J. Agric. Biol. Eng.; 2024; 17, pp. 211-220. [DOI: https://dx.doi.org/10.25165/j.ijabe.20241702.8574]
101. Xu, L.; Shi, X.; Tang, Z.; He, Y.; Yang, N.; Ma, W.; Zheng, C.; Chen, H.; Zhou, T.; Huang, P. et al. ASFL-YOLOX: An Adaptive Spatial Feature Fusion and Lightweight Detection Method for Insect Pests of the Papilionidae Family. Front. Plant Sci.; 2023; 14, 1176300. [DOI: https://dx.doi.org/10.3389/fpls.2023.1176300] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/37546271]
102. Wang, H.; Shang, S.; Wang, D.; He, X.; Feng, K.; Zhu, H. Plant Disease Detection and Classification Method Based on the Optimized Lightweight YOLOv5 Model. Agriculture; 2022; 12, 931. [DOI: https://dx.doi.org/10.3390/agriculture12070931]
103. Uddin, M.S.; Mazumder, M.K.A.; Prity, A.J.; Mridha, M.F.; Alfarhood, S.; Safran, M.; Che, D. Cauli-Det: Enhancing Cauliflower Disease Detection with Modified YOLOv8. Front. Plant Sci.; 2024; 15, 1373590. [DOI: https://dx.doi.org/10.3389/fpls.2024.1373590] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/38699536]
104. Gao, W.; Zong, C.; Wang, M.; Zhang, H.; Fang, Y. Intelligent Identification of Rice Leaf Disease Based on YOLO V5-EFFICIENT. Crop Prot.; 2024; 183, 106758. [DOI: https://dx.doi.org/10.1016/j.cropro.2024.106758]
105. Rezaei, M.; Diepeveen, D.; Laga, H.; Jones, M.G.K.; Sohel, F. Plant Disease Recognition in a Low Data Scenario Using Few-Shot Learning. Comput. Electron. Agric.; 2024; 219, 108812. [DOI: https://dx.doi.org/10.1016/j.compag.2024.108812]
106. Hassan, S.M.; Maji, A.K. Pest Identification Based on Fusion of Self-Attention With ResNet. IEEE Access; 2024; 12, pp. 6036-6050. [DOI: https://dx.doi.org/10.1109/ACCESS.2024.3351003]
107. Fu, X.; Ma, Q.; Yang, F.; Zhang, C.; Zhao, X.; Chang, F.; Han, L. Crop Pest Image Recognition Based on the Improved ViT Method. Inf. Process. Agric.; 2024; 11, pp. 249-259. [DOI: https://dx.doi.org/10.1016/j.inpa.2023.02.007]
108. Bhagat, S.; Kokare, M.; Haswani, V.; Hambarde, P.; Taori, T.; Ghante, P.H.; Patil, D.K. Advancing Real-Time Plant Disease Detection: A Lightweight Deep Learning Approach and Novel Dataset for Pigeon Pea Crop. Smart Agric. Technol.; 2024; 7, 100408. [DOI: https://dx.doi.org/10.1016/j.atech.2024.100408]
109. Mazumder, M.K.A.; Mridha, M.F.; Alfarhood, S.; Safran, M.; Abdullah-Al-Jubair, M.; Che, D. A Robust and Light-Weight Transfer Learning-Based Architecture for Accurate Detection of Leaf Diseases across Multiple Plants Using Less Amount of Images. Front. Plant Sci.; 2024; 14, 1321877. [DOI: https://dx.doi.org/10.3389/fpls.2023.1321877] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/38273954]
110. Feng, J. Research on UAV Image Classification and Semantic Segmentation of Maize Diseases and Pests Based on Deep Learning. Master’s Thesis; China University of Mining and Technology: Xuzhou, China, 2022.
111. Persello, C.; Wegner, J.D.; Hansch, R.; Tuia, D.; Ghamisi, P.; Koeva, M.; Camps-Valls, G. Deep Learning and Earth Observation to Support the Sustainable Development Goals: Current Approaches, Open Challenges, and Future Opportunities. IEEE Geosci. Remote Sens. Mag.; 2022; 10, pp. 172-200. [DOI: https://dx.doi.org/10.1109/MGRS.2021.3136100]
112. Matese, A.; Toscano, P.; Di Gennaro, S.; Genesio, L.; Vaccari, F.; Primicerio, J.; Belli, C.; Zaldei, A.; Bianconi, R.; Gioli, B. Intercomparison of UAV, Aircraft and Satellite Remote Sensing Platforms for Precision Viticulture. Remote Sens.; 2015; 7, pp. 2971-2990. [DOI: https://dx.doi.org/10.3390/rs70302971]
113. Mu, Y.; Fujii, Y.; Takata, D.; Zheng, B.; Noshita, K.; Honda, K.; Ninomiya, S.; Guo, W. Characterization of Peach Tree Crown by Using High-Resolution Images from an Unmanned Aerial Vehicle. Hortic. Res.; 2018; 5, 74. [DOI: https://dx.doi.org/10.1038/s41438-018-0097-z] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/30564372]
114. Zhu, X.; Goldberg, A.B. Introduction to Semi-Supervised Learning; Synthesis Lectures on Artificial Intelligence and Machine Learning; Springer International Publishing: Berlin/Heidelberg, Germany, 2022; ISBN 978-3-031-01548-9
115. Kattenborn, T.; Leitloff, J.; Schiefer, F.; Hinz, S. Review on Convolutional Neural Networks (CNN) in Vegetation Remote Sensing. ISPRS J. Photogramm. Remote Sens.; 2021; 173, pp. 24-49. [DOI: https://dx.doi.org/10.1016/j.isprsjprs.2020.12.010]
116. Stewart, E.L.; Wiesner-Hanks, T.; Kaczmar, N.; DeChant, C.; Wu, H.; Lipson, H.; Nelson, R.J.; Gore, M.A. Quantitative Phenotyping of Northern Leaf Blight in UAV Images Using Deep Learning. Remote Sens.; 2019; 11, 2209. [DOI: https://dx.doi.org/10.3390/rs11192209]
117. Fuentes, A.; Yoon, S.; Kim, S.; Park, D. A Robust Deep-Learning-Based Detector for Real-Time Tomato Plant Diseases and Pests Recognition. Sensors; 2017; 17, 2022. [DOI: https://dx.doi.org/10.3390/s17092022] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28869539]
118. Ke, Z.; Qiu, D.; Li, K.; Yan, Q.; Lau, R.W.H. Guided Collaborative Training for Pixel-Wise Semi-Supervised Learning. Computer Vision–ECCV 2020; Vedaldi, A.; Bischof, H.; Brox, T.; Frahm, J.-M. Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2020; Volume 12358, pp. 429-445. ISBN 978-3-030-58600-3
119. Ahmad, A.; Saraswat, D.; El Gamal, A. A Survey on Using Deep Learning Techniques for Plant Disease Diagnosis and Recommendations for Development of Appropriate Tools. Smart Agric. Technol.; 2023; 3, 100083. [DOI: https://dx.doi.org/10.1016/j.atech.2022.100083]
120. Luo, Z.; Yang, W.; Yuan, Y.; Gou, R.; Li, X. Semantic Segmentation of Agricultural Images: A Survey. Inf. Process. Agric.; 2023; 11, pp. 172-186. [DOI: https://dx.doi.org/10.1016/j.inpa.2023.02.001]
121. Tassis, L.M.; Tozzi De Souza, J.E.; Krohling, R.A. A Deep Learning Approach Combining Instance and Semantic Segmentation to Identify Diseases and Pests of Coffee Leaves from In-Field Images. Comput. Electron. Agric.; 2021; 186, 106191. [DOI: https://dx.doi.org/10.1016/j.compag.2021.106191]
122. Rezk, N.G.; Attia, A.-F.; El-Rashidy, M.A.; El-Sayed, A.; Hemdan, E.E.-D. An Efficient Plant Disease Recognition System Using Hybrid Convolutional Neural Networks (CNNs) and Conditional Random Fields (CRFs) for Smart IoT Applications in Agriculture. Int. J. Comput. Intell. Syst.; 2022; 15, 65. [DOI: https://dx.doi.org/10.1007/s44196-022-00129-x]
123. Ji, M.; Wu, Z. Automatic Detection and Severity Analysis of Grape Black Measles Disease Based on Deep Learning and Fuzzy Logic. Comput. Electron. Agric.; 2022; 193, 106718. [DOI: https://dx.doi.org/10.1016/j.compag.2022.106718]
124. Zhu, S.; Ma, W.; Lu, J.; Ren, B.; Wang, C.; Wang, J. A Novel Approach for Apple Leaf Disease Image Segmentation in Complex Scenes Based on Two-Stage DeepLabv3+ with Adaptive Loss. Comput. Electron. Agric.; 2023; 204, 107539. [DOI: https://dx.doi.org/10.1016/j.compag.2022.107539]
125. Kouadio, L.; El Jarroudi, M.; Belabess, Z.; Laasli, S.-E.; Roni, M.Z.K.; Amine, I.D.I.; Mokhtari, N.; Mokrini, F.; Junk, J.; Lahlali, R. A Review on UAV-Based Applications for Plant Disease Detection and Monitoring. Remote Sens.; 2023; 15, 4273. [DOI: https://dx.doi.org/10.3390/rs15174273]
126. Ding, W.; Abdel-Basset, M.; Alrashdi, I.; Hawash, H. Next Generation of Computer Vision for Plant Disease Monitoring in Precision Agriculture: A Contemporary Survey, Taxonomy, Experiments, and Future Direction. Inf. Sci.; 2024; 665, 120338. [DOI: https://dx.doi.org/10.1016/j.ins.2024.120338]
127. Gong, L.; Zhang, Z. The Model of Cotton Intrusion Detection System Based on the CDAN Technology and BP Algorithm. Guangdong Agric. Sci.; 2009; 7, pp. 228–230+236. [DOI: https://dx.doi.org/10.16768/j.issn.1004-874x.2009.07.016]
128. Xu, Y.; Geng, X.; Zhao, W.; Zhang, Y.; Ning, H.; Lei, T. A Remote Sensing Image Change Detection Model Based on CNN-Transformer Hybrid Structure. Comput. Mod.; 2023; pp. 79-85.
129. Liu, T.; Yang, L.; Lunga, D. Change Detection Using Deep Learning Approach with Object-Based Image Analysis. Remote Sens. Environ.; 2021; 256, 112308. [DOI: https://dx.doi.org/10.1016/j.rse.2021.112308]
130. Liu, M.; Chai, Z.; Deng, H.; Liu, R. A CNN-Transformer Network With Multiscale Context Aggregation for Fine-Grained Cropland Change Detection. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens.; 2022; 15, pp. 4297-4306. [DOI: https://dx.doi.org/10.1109/JSTARS.2022.3177235]
131. Bao, T.; Fu, C.; Fang, T.; Huo, H. PPCNET: A Combined Patch-Level and Pixel-Level End-to-End Deep Network for High-Resolution Remote Sensing Image Change Detection. IEEE Geosci. Remote Sens. Lett.; 2020; 17, pp. 1797-1801. [DOI: https://dx.doi.org/10.1109/LGRS.2019.2955309]
132. Das, S.; Biswas, A.; Vimalkumar, C.; Sinha, P. Deep Learning Analysis of Rice Blast Disease Using Remote Sensing Images. IEEE Geosci. Remote Sens. Lett.; 2023; 20, 2500905. [DOI: https://dx.doi.org/10.1109/LGRS.2023.3244324]
133. Wei, Y.; Lin, F. The Research of Prediction of Pests Based on Fuzzy RBF Neural Network. Proceedings of the 2009 International Conference on Computational Intelligence and Software Engineering; Wuhan, China, 11–13 December 2009; pp. 1-4.
134. Zhang, X.; Wang, B.; Tian, Y.; Yuan, L.; Jiang, Y.; Dong, Y.; Huang, W.; Zhang, J. Research Progress on Forecasting Mechanism and Methodology for Crop Disease and Insect Pest. J. Agric. Sci. Technol. Beijing; 2019; 21, pp. 110-120.
135. Michael, P.J.M.; Hussein, M.; Camilius, A.S.; Richard, R.M.; Beatrice, M.; Caroline, M. Artificial Intelligence and Deep Learning Based Technologies for Emerging Disease Recognition and Pest Prediction in Beans (Phaseolus vulgaris L.): A Systematic Review. Afr. J. Agric. Res.; 2023; 19, pp. 260-271. [DOI: https://dx.doi.org/10.5897/AJAR2022.16226]
136. Xiao, Q.; Li, W.; Chen, P.; Wang, B. Prediction of Crop Pests and Diseases in Cotton by Long Short Term Memory Network. Intelligent Computing Theories and Application; Huang, D.-S.; Jo, K.-H.; Zhang, X.-L. Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2018; Volume 10955, pp. 11-16. ISBN 978-3-319-95932-0
137. Wahyono, T.; Heryadi, Y.; Soeparno, H.; Abbas, B.S. Enhanced LSTM Multivariate Time Series Forecasting for Crop Pest Attack Prediction. ICIC Express Lett.; 2020; 14, pp. 943-949. [DOI: https://dx.doi.org/10.24507/icicel.14.10.943]
138. Chen, P.; Xiao, Q.; Zhang, J.; Xie, C.; Wang, B. Occurrence Prediction of Cotton Pests and Diseases by Bidirectional Long Short-Term Memory Networks with Climate and Atmosphere Circulation. Comput. Electron. Agric.; 2020; 176, 105612. [DOI: https://dx.doi.org/10.1016/j.compag.2020.105612]
139. Zhang, S.; Wang, Z.; Wang, Z. Prediction of Wheat Stripe Rust Disease by Combining Knowledge Graph and Bidirectional Long Short Term Memory Network. Trans. Chin. Soc. Agric. Eng.; 2020; 36, pp. 172-178.
140. Jain, S.; Ramesh, D. AI Based Hybrid CNN-LSTM Model for Crop Disease Prediction: An ML Advent for Rice Crop. Proceedings of the 2021 12th International Conference on Computing Communication and Networking Technologies (ICCCNT); Kharagpur, India, 6 July 2021; pp. 1-7.
141. Patil, J.; Mytri, V.D. A Prediction Model for Population Dynamics of Cotton Pest (Thrips Tabaci Linde) Using Multilayer-Perceptron Neural Network. Int. J. Comput. Appl.; 2013; 67, pp. 19-26. [DOI: https://dx.doi.org/10.5120/11384-6663]
142. Saleem, R.M.; Kazmi, R.; Bajwa, I.S.; Ashraf, A.; Ramzan, S.; Anwar, W. IOT-Based Cotton Whitefly Prediction Using Deep Learning. Sci. Program.; 2021; 2021, 8824601. [DOI: https://dx.doi.org/10.1155/2021/8824601]
143. Saleem, R.M.; Bashir, R.N.; Faheem, M.; Haq, M.A.; Alhussen, A.; Alzamil, Z.S.; Khan, S. Internet of Things Based Weekly Crop Pest Prediction by Using Deep Neural Network. IEEE Access; 2023; 11, pp. 85900-85913. [DOI: https://dx.doi.org/10.1109/ACCESS.2023.3301504]
144. Grünig, M.; Razavi, E.; Calanca, P.; Mazzi, D.; Wegner, J.D.; Pellissier, L. Applying Deep Neural Networks to Predict Incidence and Phenology of Plant Pests and Diseases. Ecosphere; 2021; 12, e03791. [DOI: https://dx.doi.org/10.1002/ecs2.3791]
145. Dong, X.; Wang, Q.; Huang, Q.; Ge, Q.; Zhao, K.; Wu, X.; Wu, X.; Lei, L.; Hao, G. PDDD-PreTrain: A Series of Commonly Used Pre-Trained Models Support Image-Based Plant Disease Diagnosis. Plant Phenomics; 2023; 5, 0054. [DOI: https://dx.doi.org/10.34133/plantphenomics.0054] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/37213546]
146. Coulibaly, S.; Kamsu-Foguem, B.; Kamissoko, D.; Traore, D. Deep Neural Networks with Transfer Learning in Millet Crop Images. Comput. Ind.; 2019; 108, pp. 115-120. [DOI: https://dx.doi.org/10.1016/j.compind.2019.02.003]
147. Chen, J.; Chen, J.; Zhang, D.; Sun, Y.; Nanehkaran, Y.A. Using Deep Transfer Learning for Image-Based Plant Disease Identification. Comput. Electron. Agric.; 2020; 173, 105393. [DOI: https://dx.doi.org/10.1016/j.compag.2020.105393]
148. Chen, H.; Liao, Y. The Function of Crop Pest and Disease Identification Based on the Improved ConvNeXt Network Model. Proceedings of the 2023 4th International Conference on Electronic Communication and Artificial Intelligence (ICECAI); Guangzhou, China, 12 May 2023; pp. 387-391.
149. Zhang, J.; Guo, H.; Guo, J.; Zhang, J. An Information Entropy Masked Vision Transformer (IEM-ViT) Model for Recognition of Tea Diseases. Agronomy; 2023; 13, 1156. [DOI: https://dx.doi.org/10.3390/agronomy13041156]
150. Song, Y.; Duan, X.; Ren, Y.; Xu, J.; Luo, L.; Li, D. Identification of the Agricultural Pests Based on Deep Learning Models. Proceedings of the 2019 International Conference on Machine Learning, Big Data and Business Intelligence (MLBDBI); Taiyuan, China, 8–10 November 2019; pp. 195-198.
151. Shafik, W.; Tufail, A.; De Silva Liyanage, C.; Apong, R.A.A.H.M. Using Transfer Learning-Based Plant Disease Classification and Detection for Sustainable Agriculture. BMC Plant Biol.; 2024; 24, 136. [DOI: https://dx.doi.org/10.1186/s12870-024-04825-y] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/38408925]
152. Hu, R.; Zhang, S.; Wang, P.; Xu, G.; Wang, D.; Qian, Y. The Identification of Corn Leaf Diseases Based on Transfer Learning and Data Augmentation. Proceedings of the 2020 3rd International Conference on Computer Science and Software Engineering; Beijing, China, 22 May 2020; pp. 58-65.
153. Wu, J.; Wen, C.; Chen, H.; Ma, Z.; Zhang, T.; Su, H.; Yang, C. DS-DETR: A Model for Tomato Leaf Disease Segmentation and Damage Evaluation. Agronomy; 2022; 12, 2023. [DOI: https://dx.doi.org/10.3390/agronomy12092023]
154. Lee, S.; Choi, G.; Park, H.-C.; Choi, C. Automatic Classification Service System for Citrus Pest Recognition Based on Deep Learning. Sensors; 2022; 22, 8911. [DOI: https://dx.doi.org/10.3390/s22228911] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/36433508]
155. Xing, S.; Lee, H.J. Crop Pests and Diseases Recognition Using DANet with TLDP. Comput. Electron. Agric.; 2022; 199, 107144. [DOI: https://dx.doi.org/10.1016/j.compag.2022.107144]
156. Rajeswarappa, G.; Depuru, S.; Sirisala, S. Crop Pests Identification Based on Fusion CNN Model: A Deep Learning. Proceedings of the 2023 8th International Conference on Communication and Electronics Systems (ICCES); Coimbatore, India, 1 June 2023; pp. 968-974.
157. Li, M.; Cheng, S.; Cui, J.; Li, C.; Li, Z.; Zhou, C.; Lv, C. High-Performance Plant Pest and Disease Detection Based on Model Ensemble with Inception Module and Cluster Algorithm. Plants; 2023; 12, 200. [DOI: https://dx.doi.org/10.3390/plants12010200]
158. Zhou, C.; Zhong, Y.; Zhou, S.; Song, J.; Xiang, W. Rice Leaf Disease Identification by Residual-Distilled Transformer. Eng. Appl. Artif. Intell.; 2023; 121, 106020. [DOI: https://dx.doi.org/10.1016/j.engappai.2023.106020]
159. Liu, H.; Zhan, Y.; Xia, H.; Mao, Q.; Tan, Y. Self-Supervised Transformer-Based Pre-Training Method Using Latent Semantic Masking Auto-Encoder for Pest and Disease Classification. Comput. Electron. Agric.; 2022; 203, 107448. [DOI: https://dx.doi.org/10.1016/j.compag.2022.107448]
160. Kar, S.; Nagasubramanian, K.; Elango, D.; Carroll, M.E.; Abel, C.A.; Nair, A.; Mueller, D.S.; O’Neal, M.E.; Singh, A.K.; Sarkar, S. et al. Self-supervised Learning Improves Classification of Agriculturally Important Insect Pests in Plants. Plant Phenome J.; 2023; 6, e20079. [DOI: https://dx.doi.org/10.1002/ppj2.20079]
161. Pattnaik, G.; Shrivastava, V.K.; Parvathi, K. Transfer Learning-Based Framework for Classification of Pest in Tomato Plants. Appl. Artif. Intell.; 2020; 34, pp. 981-993. [DOI: https://dx.doi.org/10.1080/08839514.2020.1792034]
162. Abbas, A.; Jain, S.; Gour, M.; Vankudothu, S. Tomato Plant Disease Detection Using Transfer Learning with C-GAN Synthetic Images. Comput. Electron. Agric.; 2021; 187, 106279. [DOI: https://dx.doi.org/10.1016/j.compag.2021.106279]
163. Geetharamani, G.; Arun Pandian, J. Identification of Plant Leaf Diseases Using a Nine-Layer Deep Convolutional Neural Network. Comput. Electr. Eng.; 2019; 76, pp. 323-338. [DOI: https://dx.doi.org/10.1016/j.compeleceng.2019.04.011]
164. Zhang, S. Research on Identification of Rose Diseases and Insect Pests Based on Deep Learning. Master’s Thesis; Yunnan University: Kunming, China, 2022.
165. Oppenheim, D.; Shani, G.; Erlich, O.; Tsror, L. Using Deep Learning for Image-Based Potato Tuber Disease Detection. Phytopathology; 2019; 109, pp. 1083-1087. [DOI: https://dx.doi.org/10.1094/PHYTO-08-18-0288-R] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/30543489]
166. Mohapatra, D.; Das, N. A Precise Model for Accurate Rice Disease Diagnosis: A Transfer Learning Approach. Proc. Indian Natl. Sci. Acad.; 2023; 89, pp. 162-171. [DOI: https://dx.doi.org/10.1007/s43538-022-00149-3]
167. Wang, D.; Deng, L.; Ni, J.; Gao, J.; Zhu, H.; Han, Z. Recognition Pest by Image-Based Transfer Learning. J. Sci. Food Agric.; 2019; 99, pp. 4524-4531. [DOI: https://dx.doi.org/10.1002/jsfa.9689]
168. Sanath Rao, U.; Swathi, R.; Sanjana, V.; Arpitha, L.; Chandrasekhar, K.; Chinmayi; Naik, P.K. Deep Learning Precision Farming: Grapes and Mango Leaf Disease Detection by Transfer Learning. Glob. Transit. Proc.; 2021; 2, pp. 535-544. [DOI: https://dx.doi.org/10.1016/j.gltp.2021.08.002]
169. Rangarajan, A.K.; Purushothaman, R.; Ramesh, A. Tomato Crop Disease Classification Using Pre-Trained Deep Learning Algorithm. Procedia Comput. Sci.; 2018; 133, pp. 1040-1047. [DOI: https://dx.doi.org/10.1016/j.procs.2018.07.070]
170. Selvaraj, M.G.; Vergara, A.; Ruiz, H.; Safari, N.; Elayabalan, S.; Ocimati, W.; Blomme, G. AI-Powered Banana Diseases and Pest Detection. Plant Methods; 2019; 15, 92. [DOI: https://dx.doi.org/10.1186/s13007-019-0475-z]
171. Hassan, S.M.; Maji, A.K.; Jasiński, M.; Leonowicz, Z.; Jasińska, E. Identification of Plant-Leaf Diseases Using CNN and Transfer-Learning Approach. Electronics; 2021; 10, 1388. [DOI: https://dx.doi.org/10.3390/electronics10121388]
172. Thenmozhi, K.; Srinivasulu Reddy, U. Crop Pest Classification Based on Deep Convolutional Neural Network and Transfer Learning. Comput. Electron. Agric.; 2019; 164, 104906. [DOI: https://dx.doi.org/10.1016/j.compag.2019.104906]
173. Hong, H.; Lin, J.; Huang, F. Tomato Disease Detection and Classification by Deep Learning. Proceedings of the 2020 International Conference on Big Data, Artificial Intelligence and Internet of Things Engineering (ICBAIE); Fuzhou, China, 12–14 June 2020; pp. 25-29.
174. Mahlein, A.-K.; Arnal Barbedo, J.G.; Chiang, K.-S.; Del Ponte, E.M.; Bock, C.H. From Detection to Protection: The Role of Optical Sensors, Robots, and Artificial Intelligence in Modern Plant Disease Management. Phytopathology; 2024; 114, pp. 1733-1741. [DOI: https://dx.doi.org/10.1094/PHYTO-01-24-0009-PER] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/38810274]
175. Kunduracioglu, I.; Pacal, I. Advancements in Deep Learning for Accurate Classification of Grape Leaves and Diagnosis of Grape Diseases. J. Plant Dis. Prot.; 2024; 131, pp. 1061-1080. [DOI: https://dx.doi.org/10.1007/s41348-024-00896-z]
176. Prasad, K.V.; Vaidya, H.; Rajashekhar, C.; Karekal, K.S.; Sali, R.; Nisar, K.S. Multiclass Classification of Diseased Grape Leaf Identification Using Deep Convolutional Neural Network (DCNN) Classifier. Sci. Rep.; 2024; 14, 9002. [DOI: https://dx.doi.org/10.1038/s41598-024-59562-x] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/38637587]
177. Deng, J.; Hong, D.; Li, C.; Yao, J.; Yang, Z.; Zhang, Z.; Chanussot, J. RustQNet: Multimodal Deep Learning for Quantitative Inversion of Wheat Stripe Rust Disease Index. Comput. Electron. Agric.; 2024; 225, 109245. [DOI: https://dx.doi.org/10.1016/j.compag.2024.109245]
178. Khanna, M.; Singh, L.K.; Thawkar, S.; Goyal, M. PlaNet: A Robust Deep Convolutional Neural Network Model for Plant Leaves Disease Recognition. Multimed. Tools Appl.; 2024; 83, pp. 4465-4517. [DOI: https://dx.doi.org/10.1007/s11042-023-15809-9]
179. Yun, Y.; Yu, Q.; Yang, Z.; An, X.; Li, D.; Huang, J.; Zheng, D.; Feng, Q.; Ma, D. Research on a Method for Identification of Peanut Pests and Diseases Based on a Lightweight LSCDNet Model. Phytopathology; 2024; 114, pp. 2162-2175. [DOI: https://dx.doi.org/10.1094/PHYTO-01-24-0013-R] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/38810273]
180. Lachure, J.; Doriya, R. Designing of Lightweight Deep Learning Framework for Plant Disease Detection. SN Comput. Sci.; 2024; 5, 761. [DOI: https://dx.doi.org/10.1007/s42979-024-03100-z]
181. Zhang, Y.; Lv, C. TinySegformer: A Lightweight Visual Segmentation Model for Real-Time Agricultural Pest Detection. Comput. Electron. Agric.; 2024; 218, 108740. [DOI: https://dx.doi.org/10.1016/j.compag.2024.108740]
182. Karim, M.J.; Goni, M.O.F.; Nahiduzzaman, M.; Ahsan, M.; Haider, J.; Kowalski, M. Enhancing Agriculture through Real-Time Grape Leaf Disease Classification via an Edge Device with a Lightweight CNN Architecture and Grad-CAM. Sci. Rep.; 2024; 14, 16022. [DOI: https://dx.doi.org/10.1038/s41598-024-66989-9]
183. Dai, G.; Tian, Z.; Fan, J.; Sunil, C.K.; Dewi, C. DFN-PSAN: Multi-Level Deep Information Feature Fusion Extraction Network for Interpretable Plant Disease Classification. Comput. Electron. Agric.; 2024; 216, 108481. [DOI: https://dx.doi.org/10.1016/j.compag.2023.108481]
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
Traditional methods for detecting plant diseases and pests are time-consuming, labor-intensive, and require specialized skills and resources, making them insufficient to meet the demands of modern agricultural development. To address these challenges, deep learning technologies have emerged as a promising solution for the accurate and timely identification of plant diseases and pests, thereby reducing crop losses and optimizing agricultural resource allocation. By leveraging its advantages in image processing, deep learning technology has significantly enhanced the accuracy of plant disease and pest detection and identification. This review provides a comprehensive overview of recent advancements in applying deep learning algorithms to plant disease and pest detection. It begins by outlining the limitations of traditional methods in this domain, followed by a systematic discussion of the latest developments in applying various deep learning techniques—including image classification, object detection, semantic segmentation, and change detection—to plant disease and pest identification. Additionally, this study highlights the role of large-scale pre-trained models and transfer learning in improving detection accuracy and scalability across diverse crop types and environmental conditions. Key challenges, such as enhancing model generalization, addressing small lesion detection, and ensuring the availability of high-quality, diverse training datasets, are critically examined. Emerging opportunities for optimizing pest and disease monitoring through advanced algorithms are also emphasized. Deep learning technology, with its powerful capabilities in data processing and pattern recognition, has become a pivotal tool for promoting sustainable agricultural practices, enhancing productivity, and advancing precision agriculture.
1 Key Laboratory of Earth Observation of Hainan Province, Hainan Aerospace Information Research Institute, Sanya 572029, China; State Key Laboratory of Remote Sensing and Digital Earth, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100101, China;
2 State Key Laboratory of Remote Sensing and Digital Earth, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100101, China;