1. Introduction
Electric vehicles (EVs) offer improved acceleration and responsiveness compared to traditional internal combustion engine vehicles. However, these performance enhancements also increase the risk of traffic accidents, a significant global challenge responsible for 1.35 million fatalities annually [1]. Research indicates that human factors contribute to approximately 90% of road traffic accidents [2]. As EV adoption grows, along with advancements in advanced driving assistance systems (ADAS) and autonomous driving technologies, it is crucial to develop accurate real-time methods for identifying dangerous driving behaviors. These methods, powered by advanced data analytics and machine learning, can provide timely warnings to drivers and inform the development of future intelligent driving systems, improving road safety, minimizing human errors, and reducing traffic accidents.
1.1. Dangerous Driver Behavior
Driver behavior can be broadly classified into regular and dangerous driving behaviors. Regular driving behaviors encompass typical actions such as maintaining a safe following distance and executing routine lane changes. In contrast, dangerous driving behavior recognition [3] focuses on identifying unregulated actions (e.g., distraction), abnormal maneuvers (e.g., speeding), and significant physiological changes (e.g., fatigue). Fatigue [4] impairs the driver's attention and perception, making it difficult to react appropriately to emerging situations. Speeding [5] increases braking distances and often results in delayed responses to potential obstacles. Distracted driving [6] diverts the driver's focus, reducing situational awareness and heightening the risk of accidents. Consequently, developing accurate methods for detecting and addressing these dangerous driving behaviors is essential for enhancing road safety. However, dangerous driving behavior recognition faces significant challenges arising from the complexity and diversity of driving behaviors, which are shaped by individual differences, environmental conditions, and other external factors. Constructing a model for identifying dangerous driving behaviors has therefore remained a substantial technical challenge and an active area of research.
1.2. Dangerous Driving Behavior Recognition Technology
With the continuous advancement of driving behavior recognition technology, particularly in the domain of dangerous driving behavior recognition, methods have evolved from traditional fuzzy logic to machine learning, and more recently, to deep learning. Traditional methods for recognizing dangerous driving behavior, such as fuzzy logic, have been effective in simpler driving scenarios [7,8,9,10], but their reliance on manually defined rules limits their adaptability to complex driving conditions. To address these limitations, machine learning algorithms like support vector machines (SVM) [11,12,13], Hidden Markov Models (HMM) [14], and Random Forest (RF) [15,16,17] have been introduced. However, these methods still require manual feature selection and face challenges with overfitting and high computational complexity. Recently, deep learning has emerged as a promising solution. Unlike traditional machine learning methods, deep learning can automatically learn features from large datasets, offering robust modeling capabilities and superior performance in recognizing complex driving behaviors.
1.3. Deep Learning
In the field of deep learning, various techniques such as DBN, CNN, and RNN have shown great potential for identifying dangerous driving behaviors. As illustrated in Figure 1, multi-dimensional data from various sensors are collected to capture comprehensive information about the vehicle and the driver. These data are then processed and classified using deep learning models, ultimately yielding specific results for dangerous driving behavior recognition.
A DBN [18] is composed of multiple stacked Restricted Boltzmann Machines (RBMs) [19], with the input layer holding the observable variables and the top layer trained with supervised learning via a Back Propagation (BP) neural network [20] (see Figure 2). A DBN is a probabilistic generative model (see Equation (1)) whose training is divided into two stages: pre-training and fine-tuning. In the pre-training stage, the model establishes a joint distribution between the observation data and labels, training the weights between neurons to maximize the likelihood, so that the network can generate the training data with maximum probability. The flexibility of DBNs makes them easy to extend and amenable to parallel computation, which suits the identification of dangerous driving behavior.
$$P(v, h^{1}, h^{2}, \ldots, h^{n}) = P(v \mid h^{1})\,P(h^{1} \mid h^{2}) \cdots P(h^{n-2} \mid h^{n-1})\,P(h^{n-1}, h^{n}) \quad (1)$$

where $P(v, h^{1}, h^{2}, \ldots, h^{n})$ denotes the joint probability distribution of the observed data $v$ and the hidden layers $h^{1}, h^{2}, \ldots, h^{n}$, $P(v \mid h^{1})$ denotes the conditional probability of the observed data $v$ given the first hidden layer $h^{1}$, and $P(h^{n-1}, h^{n})$ is the joint distribution of the top two hidden layers, modeled by the topmost RBM.
The core concept of this formula is to decompose the complex joint probability distribution into a series of conditional probability factors. This decomposition allows for the sequential learning of conditional probabilities at each layer through RBM training, thereby constructing the probabilistic model of the entire DBN.
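As a concrete illustration, the following minimal Python sketch stacks two RBMs for greedy layer-wise pre-training and uses a logistic-regression head in place of the full BP fine-tuning stage; the feature dimensions, labels, and hyperparameters are illustrative assumptions, not a reproduction of any cited system.

```python
# A minimal DBN-style classifier sketch: two stacked RBMs pretrained
# greedily, layer by layer, followed by a supervised logistic-regression
# head standing in for the BP fine-tuning stage described above.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
X = rng.random((500, 64))      # 500 samples of 64 driving features scaled to [0, 1]
y = rng.integers(0, 2, 500)    # 0 = normal, 1 = dangerous (placeholder labels)

dbn = Pipeline([
    ("rbm1", BernoulliRBM(n_components=128, learning_rate=0.05, n_iter=20, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),  # supervised top layer
])
dbn.fit(X, y)                  # each RBM is trained on the layer below's output
print(dbn.predict(X[:5]))
```

Note that this pipeline only trains the top classifier in a supervised manner; a full DBN would also back-propagate through the pretrained RBM weights.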
A CNN [21] is a kind of feed-forward neural network (FNN) with a deep structure that includes convolutional computation. A CNN consists of an input layer, hidden layers, and an output layer (see Figure 3). The hidden layers include convolutional layers, pooling layers, and fully connected layers. The convolutional layers extract features from the input data using convolution kernels, convolution parameters, and activation functions. The extracted features are then passed to the pooling layers for feature selection, dimensionality reduction, and information filtering. The fully connected layer functions as a classifier, mapping the extracted features to the corresponding labels in the output space. CNNs are well suited to tasks such as image recognition, behavior analysis, and gesture recognition owing to their representation-learning capability, translation invariance, weight sharing, and local connectivity. Consequently, CNNs perform well in identifying dangerous driving behaviors.
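The structure described above can be summarized in a short PyTorch sketch; the input resolution, channel counts, and five behavior classes are illustrative assumptions.

```python
# A minimal sketch of the CNN structure described above: convolution and
# pooling for feature extraction, then a fully connected classifier.
import torch
import torch.nn as nn

class DrivingCNN(nn.Module):
    def __init__(self, num_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(               # hidden layers: conv + pooling
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                         # pooling: dimension reduction/filtering
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)  # fully connected classifier

    def forward(self, x):
        x = self.features(x)                         # convolutional feature extraction
        return self.classifier(x.flatten(1))         # map features to behavior labels

logits = DrivingCNN()(torch.randn(1, 3, 64, 64))     # one 64x64 RGB driver-camera frame
print(logits.shape)                                  # torch.Size([1, 5])
```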
An RNN [22] is a type of neural network designed to process sequential data. It recursively iterates in the direction of sequence evolution, with all nodes connected in a chain-like structure (see Figure 4). However, RNNs struggle to capture long-term dependencies because of the vanishing gradient problem. To address this, Long Short-Term Memory (LSTM) networks were developed. An LSTM is a specialized RNN variant suited to processing and predicting events separated by relatively long intervals and delays in a time series. Its key feature is a gate mechanism that regulates the flow of information, ensuring that important information is retained over time and effectively mitigating the long-term dependency problem. Figure 5 illustrates the architecture of the LSTM model, where each block "A" represents an LSTM unit. Each unit consists of the input vector X(t), the forget gate F(t), the input gate I(t), the cell state C(t), and the output gate O(t) [23]:
- The forget gate F(t): determines which information from the previous time step should be retained or discarded.
- The input gate I(t): controls how much of the current input information should be stored in the cell state.
- The cell state C(t): updates the memory state by integrating the outputs of the forget gate and input gate, thereby retaining important information and discarding irrelevant data.
- The output gate O(t): determines the output for the current time step by generating the hidden state based on the updated cell state.
The output function is shown in Equation (2). LSTM has strong advantages in processing time-series data, retaining a memory of historical data, which makes it suitable for dangerous driving recognition.
$$O(t) = \sigma\left(W_{O}\,[h(t-1), X(t)] + b_{O}\right), \qquad h(t) = O(t) \odot \tanh\big(C(t)\big) \quad (2)$$

where $\sigma$ is the logistic sigmoid function, $W_{O}$ and $b_{O}$ are the output gate's weight matrix and bias, $[h(t-1), X(t)]$ is the concatenation of the previous hidden state and the current input, and $\odot$ denotes element-wise multiplication.
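The gate computations above, including Equation (2), can be traced in a minimal NumPy sketch of a single LSTM time step; the weight shapes and random inputs are illustrative.

```python
# A minimal NumPy sketch of one LSTM step, matching the gates above and
# Equation (2); weight shapes and inputs are illustrative.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM time step. Each W[k] maps [h_prev; x_t] to a gate pre-activation."""
    z = np.concatenate([h_prev, x_t])
    f_t = sigmoid(W["F"] @ z + b["F"])        # forget gate F(t)
    i_t = sigmoid(W["I"] @ z + b["I"])        # input gate I(t)
    c_tilde = np.tanh(W["C"] @ z + b["C"])    # candidate cell state
    c_t = f_t * c_prev + i_t * c_tilde        # cell state C(t)
    o_t = sigmoid(W["O"] @ z + b["O"])        # output gate O(t)
    h_t = o_t * np.tanh(c_t)                  # Equation (2): hidden output
    return h_t, c_t

n_in, n_hid = 4, 8
rng = np.random.default_rng(0)
W = {k: rng.standard_normal((n_hid, n_hid + n_in)) * 0.1 for k in "FICO"}
b = {k: np.zeros(n_hid) for k in "FICO"}
h, c = np.zeros(n_hid), np.zeros(n_hid)
for t in range(10):                           # run over a short sensor sequence
    h, c = lstm_step(rng.standard_normal(n_in), h, c, W, b)
print(h.round(3))
```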
Zhao et al. [24] introduced a driver drowsiness detection method that integrates a DBN with dynamic facial information fusion. This approach achieves high accuracy by extracting facial features and fusing dynamic textures from the eye and mouth regions, thereby enhancing the reliability of drowsiness detection in real-world driving scenarios. Shahverdy et al. [25] successfully identified five driving behaviors—normal, aggressive, distracted, drowsy, and drunk driving—by converting driving signals into images and employing a CNN for classification. Zhang et al. [26] developed a deep learning framework that integrates a CNN and RNN, enabling the successful classification of multiple types of driving events. The framework achieved F1 scores of 0.9508 and 0.9579 on smartphone sensor data, demonstrating its high accuracy and reliability. These studies highlight the critical role of deep learning in identifying dangerous driving behaviors and emphasize the potential of deep learning techniques like DBNs, CNNs, and RNNs in enhancing road safety.
1.4. Related Work
Recent reviews have predominantly focused on specific aspects of dangerous driving behaviors. Elassad et al. [27] compared the estimation accuracy of machine learning (ML) models and non-ML models, specifically examining the performance of various ML models in identifying dangerous driving behaviors, and concluded by summarizing the advantages and disadvantages of different ML techniques for this purpose. Ferreira Jr. et al. [28] introduced a machine learning approach for analyzing driving behavior using sensor data from Android smartphones. Abd El-Nabi et al. [29] drew on existing ML and deep learning (DL) technologies to assess various methods for collecting fatigue data, including image-based, video-based, and biological-signal-based approaches. David et al. [30] outlined the strengths and weaknesses of three lane change behaviors using different ML techniques, namely artificial neural networks (ANNs), hidden Markov models (HMMs), and support vector machines (SVMs). However, these findings are largely confined to specific domains; none of these studies provides a comprehensive review of deep learning methods or a thorough analysis of the multifaceted nature of dangerous driving behaviors.
We therefore conclude that existing studies have not comprehensively addressed the detection of dangerous driving behaviors using deep learning in recent years, particularly the application of prominent methods such as DBNs, CNNs, and RNNs. To address this gap, we conduct a new systematic review that summarizes the deep learning techniques employed to identify dangerous driving behaviors. Additionally, we categorize the data sources into four main types: questionnaires, vehicle state parameter monitoring, driver body behavior monitoring, and driver physiological signal monitoring.
The main contributions of this study are as follows:
Provision of a Comprehensive Framework: This study provides a comprehensive framework that integrates a variety of data sources and advanced preprocessing techniques for dangerous driving behavior identification. This framework serves as a foundation for understanding how various components contribute to the recognition process, enhancing the accuracy and reliability of dangerous driving behavior models.
Review of Latest Advances in Driver Behavior Recognition: The study offers an in-depth review of recent advancements in driver behavior recognition models and algorithms, providing researchers with valuable insights to optimize methodologies and improve the performance of recognition systems.
Identification of Research Gaps and Future Directions: The study identifies existing gaps between current state-of-the-art recognition models and their optimal performance. It suggests directions for future research to address these gaps and advance the accuracy and effectiveness of dangerous driving behavior recognition systems.
The outline of the paper is as follows: Section 2 introduces the acquisition and analysis of dangerous driving behavior characteristic data and the pre-processing methods. Section 3 details deep learning-based methods for dangerous driving behavior recognition, including DBNs, CNNs, and RNNs, and further compares their application scenarios and the behavioral features they recognize. Finally, conclusions and prospects are given. As illustrated in Figure 6, the process for identifying dangerous driving behaviors spans the steps from data collection through model testing and validation.
2. The Data Analysis of Dangerous Driving Behavior
With the ongoing advancements in automotive intelligence and electrification technologies, modern vehicles are now equipped with an extensive array of onboard sensor systems, including GPS, gyroscopes, cameras, and CAN buses. These sensors not only provide reliable input to vehicle safety control systems but also accurately capture driver behavior and vehicle operational status. The diverse data types, as detailed in Table 1, reflect the comprehensive information acquired by the vehicle’s sensors.
In a system for identifying dangerous driving behaviors, the required data must first be collected, after which feature extraction and classification are carried out. Data collection falls into four main categories: questionnaire data [31], vehicle running state data [32], machine vision data [33], and driver physiological characteristics data [34]. Only when the collected data are sufficiently accurate and comprehensive can they support the dangerous driving recognition model. Figure 7 illustrates an integrated framework for the acquisition of dangerous driving behavior data. This framework employs a diverse set of methods, including questionnaires, video capture, physiological monitoring (e.g., EEG, ECG, EOG), and vehicle operation status detection. These methods are designed to gather comprehensive information on drivers' facial expressions, body postures, hand movements, driving experiences, physiological signals, vehicle operation data, and the surrounding environment. The collected data are then utilized to analyze driver behavior patterns, assess driving risks, understand driving habits, and ultimately enhance driving safety and support the development of advanced assistance systems.
2.1. Questionnaire Data
Collecting driving data through questionnaire surveys is a foundational method in which drivers complete structured questionnaires covering their demographic characteristics, driving habits, and self-reported risk perception. Statistical learning techniques are then applied for quantitative analysis and descriptive modeling of the collected data. Questionnaire results are easily quantified for statistical analysis and can be gathered at large scale. For example, Seo et al. [31] analyzed the impact of different dangerous driving behaviors in South Korea using a DBQ questionnaire and structural equation modeling; the results showed that violations had the highest impact (0.464), followed by lapses (0.383) and mistakes arising from inexperience or other unintentional causes (0.158), indicating that intentional violations and negligent errors should be the primary targets of prevention. Pawan et al. [35], using a modified version of the Manchester Driving Behavior Questionnaire in India, found that driver attitudes had an impact of 0.77 on violations and 0.83 on errors, while passive passengers had an impact of 0.88 and active passengers 0.061; such findings can inform safe driving training and policy development. In addition, references [36,37] used questionnaires to analyze dangerous driving behaviors in China and Spain, respectively. However, the information obtained from questionnaires is relatively limited: detailed and deep information may be missed, complex and diverse driving behavior cannot be captured, and real-time monitoring of vehicle status is impossible. Consequently, such data collection methods are not widely used.
2.2. Vehicle Running Status Data
Vehicle running status parameters provide the most immediate and reliable dataset and are readily obtained through a variety of sensors. The Controller Area Network (CAN) bus [32] is instrumental in acquiring precise, real-time vehicle operation data. An inertial measurement unit (IMU) [38], which includes an accelerometer, gyroscope, and magnetometer, measures three-dimensional acceleration, angular velocity, and magnetic field alignment, and can assess the impact of longitudinal velocity on vehicle yaw rate. Furthermore, research [39] indicates that smartphone sensor data are highly comparable to data gathered by specialized equipment, efficiently capturing and analyzing driver behaviors such as speeding, acceleration variations, harsh braking, and aggressive maneuvering; however, this method's reliance on a real-time network connection is a notable limitation. Simulation testing with a driving simulator [40] allows flexible and convenient scene changes, enabling data collection in intricate and challenging settings. Table 2 summarizes the aforementioned acquisition methods and analyzes their respective advantages and limitations. This comparison not only elucidates the strengths of each technique across diverse research contexts but also aids in identifying the most suitable methods for accurately monitoring and analyzing driving patterns.
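As a simple illustration of how such vehicle-status data can be mined, the following sketch flags harsh-braking events in a longitudinal-acceleration trace, such as one from a smartphone IMU; the threshold and minimum duration are illustrative assumptions rather than values from the cited studies.

```python
# An illustrative sketch of detecting harsh braking from longitudinal
# acceleration samples (e.g., a smartphone IMU at 10 Hz). The -3.0 m/s^2
# threshold and 3-sample minimum duration are assumptions, not standards.
import numpy as np

def detect_harsh_braking(accel_long, threshold=-3.0, min_samples=3):
    """Return (start, end) index ranges where deceleration stays below the threshold."""
    below = accel_long < threshold
    events, start = [], None
    for i, flag in enumerate(below):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_samples:          # ignore single-sample noise spikes
                events.append((start, i))
            start = None
    if start is not None and len(below) - start >= min_samples:
        events.append((start, len(below)))
    return events

accel = np.concatenate([np.zeros(20), np.full(6, -4.2), np.zeros(20)])  # synthetic trace
print(detect_harsh_braking(accel))                # [(20, 26)]
```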
2.3. The Vision Data
Machine vision is one of the main information sources in current dangerous driving behavior recognition, as it can directly observe the driving environment and the driver's behavior. A camera captures the driver's facial features and transmits the data to an image processing system, which extracts the morphological information of the target. The facial features comprise three parts: eye state, mouth state, and facial posture. The eye state is divided into external and internal aspects: external aspects include eye-closure duration, upper and lower eyelid movement, and blinking frequency, while internal aspects mainly involve pupil dilation and directional movement, used to verify fatigue [41,42]. The mouth state draws on changes in the driver's expressions and muscle tension under various driving conditions, including vertical and horizontal movements and mouth shape, to determine fatigue [33]. Facial posture is computed by extracting the driver's facial feature points to determine whether the driver is distracted [43]. Images captured by the camera are analyzed with deep learning techniques to classify dangerous driving behaviors, effectively detecting distraction, phone use, and inattention to potential road hazards (see Figure 7). However, variations in lighting conditions significantly influence model accuracy. Das et al. [44] introduced a method that integrates a CNN and a bidirectional long short-term memory network (BiLSTM) to analyze drivers' facial expressions and movements in dynamic environments, especially under fluctuating lighting, enhancing the robustness of visual recognition of dangerous driving behaviors and improving overall system performance. Such scenario tests allow the model's performance in real-world applications to be thoroughly evaluated, providing a solid foundation for future optimization and refinement.
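For illustration, two of the eye-state cues above, namely the fraction of closed-eye frames (the PERCLOS measure) and blinking frequency, can be computed from a per-frame eye-openness signal as in the following sketch; the upstream landmark model, frame rate, and closure threshold are assumptions.

```python
# A simplified sketch of two eye-state fatigue features: PERCLOS (fraction
# of frames with the eye closed) and blink frequency. The per-frame
# eye-openness values are assumed to come from an upstream facial-landmark
# model (e.g., an eye aspect ratio); the 0.2 threshold is illustrative.
import numpy as np

def eye_state_features(openness, fps=30.0, closed_thresh=0.2):
    closed = openness < closed_thresh
    perclos = closed.mean()                                  # fraction of closed-eye frames
    blinks = np.count_nonzero(np.diff(closed.astype(int)) == 1)  # open -> closed transitions
    blink_rate = blinks / (len(openness) / fps)              # blinks per second
    return perclos, blink_rate

openness = np.clip(np.sin(np.linspace(0, 20, 600)) * 0.5 + 0.5, 0, 1)  # synthetic 20 s signal
perclos, rate = eye_state_features(openness)
print(f"PERCLOS={perclos:.2f}, blink rate={rate:.2f}/s")
```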
2.4. Driver Physiological Characteristics Data
When external environmental stimuli and emotional changes affect the driver, the data collected by machine vision may be incomplete. It is therefore necessary to use instrumentation to collect the driver's physiological characteristics (see Figure 7). Depending on the physiological signals extracted, these data fall into three types: EEG signals, ECG signals, and EMG (myoelectric) signals. EEG signal processing methods rely primarily on filtering, ensemble learning, and related techniques to extract features for driver state recognition. Liu et al. [34] preprocessed collected EEG and EOG data, extracted multiple features, and employed functional support vector machines (FSVM) to classify and recognize fatigue states. Other researchers have collected electroencephalogram (EEG) data to analyze drivers' emotional and fatigue states [45], demonstrating that physiological characteristics serve as reference data for evaluating dangerous driving conditions. Based on the concept of ensemble learning, Rao et al. [46] established ensemble learning classification models, including Bagged Tree, RSM Discrimination, and RUSBoosted Tree, and utilized the Pearson correlation coefficient to construct a functional network for distinguishing fatigued driving from normal driving. ECG signal recognition primarily employs neural networks, differential thresholds, and other methods to extract features and identify the driver's state. EMG signal recognition primarily uses various wearable sensors to acquire EMG signals from key body parts to assess the driver's state. Zheng et al. [47] and Zontone et al. [48] obtained eye muscle signals through wearable devices to detect driver fatigue.
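As an illustration of the filtering-based EEG feature extraction mentioned above, the following SciPy sketch band-pass filters a synthetic EEG trace and computes theta- and alpha-band powers, which are commonly used fatigue cues; the sampling rate, band edges, and ratio feature are illustrative assumptions.

```python
# A minimal sketch of an EEG preprocessing step: band-pass filtering and
# band-power feature extraction with SciPy. Sampling rate and band edges
# are illustrative.
import numpy as np
from scipy.signal import butter, filtfilt, welch

fs = 256                                          # Hz, assumed EEG sampling rate
b, a = butter(4, [1, 40], btype="bandpass", fs=fs)

rng = np.random.default_rng(0)
eeg = rng.standard_normal(fs * 10)                # 10 s of synthetic single-channel EEG
filtered = filtfilt(b, a, eeg)                    # zero-phase band-pass filter

freqs, psd = welch(filtered, fs=fs, nperseg=fs * 2)
alpha = psd[(freqs >= 8) & (freqs <= 13)].sum()   # alpha-band power
theta = psd[(freqs >= 4) & (freqs <= 8)].sum()    # theta-band power
print(f"theta/alpha ratio: {theta / alpha:.2f}")  # a ratio often associated with fatigue
```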
To summarize, the data collection methods analyzed—questionnaire data, vehicle running status data, vision data, and driver physiological characteristics data—each possess distinct advantages and limitations in identifying dangerous driving behaviors. Questionnaire data provide a convenient and scalable means of understanding driving habits and self-reported risk perceptions but lack real-time applicability. Vehicle running status data offer precise, real-time monitoring; however, they are contingent on reliable network connections and sensor integration. Vision-based approaches excel in detecting driver distractions and fatigue through facial feature analysis but are sensitive to environmental conditions such as lighting. Physiological characteristics data delve deeper into drivers’ mental and physical states, offering robust support for fatigue and emotion recognition, albeit requiring specialized equipment.
The complementary strengths of these methods underscore the necessity of a multimodal approach that integrates various data sources to enhance the accuracy and robustness of dangerous driving behavior analysis. A detailed comparison is presented in Table 3 below.
While the four data collection methods each possess distinct characteristics in research and practical applications, their effectiveness in identifying dangerous driving behaviors is contingent on the timeliness of the data and the robustness of processing capabilities. Real-time monitoring is not merely an extension of the data collection process; it is a critical component of identifying and preventing dangerous driving behaviors. Kashevnik et al. [49] proposed an integrated system leveraging edge computing that combines machine vision with time series analysis to perform real-time multimodal data processing, thereby facilitating the accurate identification of hazardous driving behaviors. Lashkov et al. [50] employed real-time facial recognition and feature point extraction on smartphones, utilizing the Dlib and OpenCV libraries to monitor drivers' facial features and head posture. By processing data directly on mobile devices, they leveraged key advantages of edge computing, such as reduced latency, improved response speed, and enhanced data privacy protection. Current driving data collection and real-time monitoring thus predominantly leverage edge computing, significantly reducing data processing and feedback latency. By employing efficient real-time data processing algorithms, low-latency communication protocols, and system optimization techniques, these systems ensure high timeliness in real-time monitoring and early warning of driving behavior, enhancing driver safety and effectively mitigating the risk of potential traffic accidents.
2.5. Data Privacy
With the rapid advancement of intelligent driving technology, real-time collection and analysis of driving data have become increasingly essential for enhancing road safety and improving the driving experience. However, this process generates vast amounts of driving data that contain significant amounts of personal sensitive information, including driver behavior, location, driving trajectories, speed, and other related data. If not adequately protected, this information is at risk of privacy breaches and misuse.
To address these risks, privacy regulations such as the General Data Protection Regulation (GDPR) in the European Union mandate strict rules for data collection, storage, and processing. These laws emphasize informed consent, data minimization, and the right to data deletion.
Recent research has explored advanced techniques to mitigate privacy risks while maintaining data utility. Martin et al. [51] explored the implementation of a de-identification filter on driver-facing video sequences from naturalistic driving data, with the goal of protecting driver privacy while preserving behavior-related information such as eye gaze, head pose, and hand activity. Li et al. [52] investigated the privacy challenges of autonomous driving services in vehicular edge computing (VEC) and introduced a federated learning (FL)-based framework to safeguard vehicle privacy; to address potential threats from semi-honest multi-access edge computing (MEC) servers and malicious vehicles, they proposed an anonymous identity-based privacy protection mechanism that leverages zero-knowledge proofs (ZKP) to ensure robust identity privacy. Xiong et al. [53] introduced Auto-Driving GAN (ADGAN), which integrates Generative Adversarial Networks (GANs) with image-to-image translation techniques to generate privacy-preserving camera images, enhancing location privacy in autonomous driving systems by obscuring sensitive information while maintaining the utility of the visual data. Liu et al. [54] introduced a federated deep attention fusion model (FedDAF), designed to address the critical challenges of data security and traffic safety in detecting dangerous driving behaviors; by leveraging federated learning and deep attention mechanisms, FedDAF protects sensitive data while enhancing the accuracy and reliability of detection.
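To make the federated idea concrete, the following toy sketch implements federated averaging (FedAvg), the aggregation scheme underlying many FL frameworks such as those above; the logistic-regression task, data, and hyperparameters are illustrative assumptions, not details of [52] or [54].

```python
# A toy NumPy sketch of federated averaging (FedAvg): each vehicle trains
# locally on its private data and shares only model weights, never raw
# driving data. Task and hyperparameters are illustrative.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Logistic-regression gradient steps on one vehicle's private data."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)          # gradient of the log-loss
    return w

rng = np.random.default_rng(0)
clients = [(rng.standard_normal((50, 8)), rng.integers(0, 2, 50)) for _ in range(4)]
global_w = np.zeros(8)
for round_ in range(10):                          # communication rounds
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    global_w = np.mean(local_ws, axis=0)          # server averages the weight updates
print(global_w.round(3))
```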
Despite these advancements, challenges persist. For example, vision-based systems require real-time data processing, which often involves transmitting unencrypted video streams. Similarly, physiological data such as EEG and ECG are highly personal and may lead to stigmatization if exposed. Moreover, balancing the trade-offs between privacy preservation and model accuracy remains a key research question.
After the above data acquisition, the data need to be pre-processed. As shown in Figure 8, data pre-processing methods include data cleaning (filling in missing data and data filtering), data integration, data transformation, and data reduction. Methods for handling missing data include elimination, mean-value replacement, and hot-deck imputation. Data filtering methods include the Kalman filter [55], MentorMix [56], and the Deep Residual Shrinkage Network [57]. Data integration places the data collected by multiple sensors into a single database, using schema integration, data replication, or ontology-based data integration. Data transformation uses function transformation, normalization, discretization, and similar operations. Data reduction methods are divided into attribute reduction and numerical reduction. After the collected data are pre-processed as needed by these methods, they can be input into the deep learning model for dangerous driving behavior recognition.
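The cleaning, filtering, and transformation steps above can be chained in a few lines of pandas; the column names, the rolling-mean filter (a lightweight stand-in for the Kalman filter), and the min-max normalization below are illustrative choices.

```python
# An illustrative pandas sketch of the pre-processing steps in Figure 8:
# missing-value filling (mean replacement), simple filtering (smoothing),
# and min-max normalization. Column names are assumptions.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "speed_kmh": [62.0, np.nan, 65.5, 80.2, np.nan, 95.0],
    "accel_ms2": [0.3, 0.5, np.nan, 2.8, 3.1, 0.2],
})

# Data cleaning: mean-value replacement for missing samples
df = df.fillna(df.mean(numeric_only=True))

# Data filtering: rolling-mean smoothing as a lightweight stand-in for a Kalman filter
df_smooth = df.rolling(window=3, min_periods=1).mean()

# Data transformation: min-max normalization to [0, 1] before model input
df_norm = (df_smooth - df_smooth.min()) / (df_smooth.max() - df_smooth.min())
print(df_norm)
```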
3. The Recognition Model of Dangerous Driving Behavior
To accurately identify dangerous driving behavior, highly correlated features must be extracted from the collected big data, relevant information mined, and a mapping established between these features and dangerous driving behavior. Deep learning can automatically extract features and offers excellent portability, strong learning ability, and a high data-driven performance ceiling. It performs excellently in character recognition [58], voice recognition [59], and image recognition [60], making it well suited to recognizing dangerous driving behavior. Deep learning-based recognition of dangerous driving behavior involves inputting the pre-processed data into the deep learning model for recognition and classification.
3.1. Deep Belief Network
DBNs have been used by researchers to recognize dangerous driving, with good recognition results. In fatigue driving recognition, Kır et al. [61] introduced a novel driver fatigue detection method grounded in behavioral measurement information, employing a four-layer DBN architecture. The method was rigorously evaluated on three established datasets: YawDD, Nthu-DDD, and KOU-DFD. Experimental results demonstrate accuracy rates of 87% for eye-based and 88% for mouth-based detection, highlighting the robust modeling and classification performance of the approach. Zhao et al. [24] proposed a driver fatigue detection system combining facial dynamic information fusion with a DBN, using a camera to collect data; the average accuracy of this method in testing is 96.7%. In recognizing abnormal driving behavior, Hema and Jaison [62] used a DBN to simulate lane change decisions and optimized the parameters of a Long Short-Term Memory (LSTM) model with an enhanced Gray Wolf Optimization algorithm to predict vehicle positions; employing the Next Generation Simulation (NGSIM) dataset, their model achieved an accuracy of 98.4%. Abbas [63] developed a video-based intelligent transportation system (V-ITS) that integrates a CNN and a DBN to identify speeding behavior in automobiles; tested on the GRAM-RTM dataset, the system achieved an average recognition accuracy of 90.01%. In the prediction of vehicle driving trajectory, Xie et al. [64] developed a lane change behavior model utilizing DBN and LSTM networks, which can predict both lane change behavior and trajectory; validated on the NGSIM dataset, it achieved a Mean Squared Error (MSE) below 0.002 for lane change behavior prediction. Other researchers have studied lane detection [65,66], gear-shifting decision-making [67], and related aspects with good recognition results.
To sum up, the advantages of using DBNs to identify dangerous driving behavior include strong flexibility, easy expansion, support for parallel computation, and short convergence time. Recognition models based on DBNs therefore achieve relatively good recognition rates for dangerous behavior, but several disadvantages limit their scope. DBNs can only process one-dimensional data and are thus ill-suited to identifying the driver's varied dangerous actions. Training requires a labeled sample set, which increases the manual workload. Inappropriate parameter selection can cause convergence to a local optimum, and DBNs cannot process time-varying data. Table 4 categorizes recent literature on DBN-based dangerous driving behavior recognition, analyzes the proposed solutions, and provides a comprehensive summary of their respective advantages and limitations.
3.2. Convolutional Neural Network
CNNs are the most common models in dangerous driving behavior recognition, and their advantages suit the task well. In fatigue driving recognition, Savaş and Becerikli [74] proposed a Multi-task Convolutional Neural Network (MT-CNN) model to detect drivers' drowsiness and fatigue; after testing on the YawDD and NthuDDD datasets, the model achieved a recognition rate of 98.81%, and it also proved effective in recognizing distracted driving behaviors. Leekha et al. [75] developed a CNN-based distracted driving detection model that achieved an accuracy of 95.64% on the AUC Distracted Driver dataset. Huang et al. [76] introduced a Hybrid CNN Framework (HCF) for identifying distracted driving behavior using camera-collected data; the method achieves an accuracy of 96.74% with the lowest average running and processing time (approximately 0.041 s) across all tests, a significant performance advantage over other deep learning models. Kose et al. [77] used a spatial-temporal CNN-based method to identify ten distracted driving behaviors, such as making phone calls, drinking water, and adjusting the radio while driving, with a recognition accuracy of 99.10%. In the recognition of abnormal driving behavior, Kumaran et al. [78] proposed a hybrid CNN and Variational Autoencoder (VAE) detection method that can recognize behaviors such as departing from the lane, sudden speed changes, abrupt stops, and wrong-way movement. In the recognition of vehicle running state, Xie et al. [79] presented an autonomous driving operation classification system based on smartphone sensor data, leveraging CNNs and multi-sliding-window fusion to capture both short-term and long-term dynamic information; integrating short and long windows raises the F1 score from 58.22% to 77.45%, and a CNN with intermediate fusion further elevates the macro F1 score to 80.25%. Zhang et al. [26] used smartphone-collected data to build a CNN-based DeepConvGRU-Attention model for identifying vehicle running state, with a response time of about 300 ms and recognition accuracy above 91%. In the identification of dangerous environments around vehicles, Yin et al. [80] proposed a multi-CNN model to detect lanes, vehicles, and pedestrians on the road and judge driving behavior, achieving a mean squared error of 1.9. Other scholars have made good progress on vehicle trajectory prediction and driving decisions [81,82,83,84], driver posture [85,86,87,88], and driving style [89,90,91].
To sum up, CNNs can process high-dimensional data with ease, share convolution kernels, extract features automatically, and resist noise interference, making them suitable for recognizing dangerous driving behavior; CNN-based recognition generally outperforms both machine learning methods and DBNs. However, CNNs are not a perfect deep learning algorithm and have the following shortcomings: when the network is very deep, back-propagation updates cause the parameters near the input layer to change slowly (the vanishing gradient problem), and the pooling layer discards much valuable information, ignoring the relationship between the parts and the whole. Table 5 categorizes recent literature on CNN-based dangerous driving behavior recognition, analyzes the proposed solutions, and provides a comprehensive summary of their respective advantages and limitations.
3.3. Recurrent Neural Network
RNNs are common in dangerous driving behavior recognition models. In fatigue driving recognition, Utomo et al. [117] proposed a system integrating an RNN with LSTM units to predict driver fatigue from HRV and PERCLOS parameters. Experiments showed that the LSTM-based module performed best on PERCLOS data, with a true positive rate of 75% and an accuracy of 88%, whereas the BPNN-based module performed better on HRV data, achieving a true positive rate of 80% and an accuracy of 88%. Du et al. [118] used an RGB-D camera to extract fatigue features and built a fatigue driving recognition model based on a new Multimodal Fusion Recurrent Neural Network (MFRNN), achieving a recognition accuracy of 91.6% in testing. Ed-Doughmi et al. [119] detected driver drowsiness with an LSTM model on camera-collected data, reaching a recognition accuracy of 92.71%. In the recognition of distracted driving behavior, Saleh et al. [120] built a driving behavior classification model based on a stacked-LSTM neural network using smartphone sensor data; the model recognizes normal, aggressive, and drowsy driving with an average recognition rate of 91%. Mafeni et al. [121] benchmarked a range of deep learning methods for detecting driver distraction, identifying an InceptionV3 model augmented with stacked BiLSTM as the top-performing approach, with an average loss of 0.292 and an F1-score of 93.1% on the AUC Distracted Driver dataset. Matousek et al. [122] built an RNN-based anomaly detection model that distinguishes aggressive, anxious, nervous, and unstable driving by identifying dangerous lane changes and emergency acceleration, with a tested recognition accuracy of 93%. Jia et al. [123] recognized abnormal driving behaviors such as rapid acceleration, emergency braking, and rapid lane changing with an LSTM-CNN model; the average recognition rate on data collected by the DAS system reaches 95.684%. In vehicle running status recognition, Khairboost et al. [124] proposed an LSTM-based driving behavior recognition model that collects dashboard data through the CAN bus, identifying left/right lane changes and left/right turns with an accuracy of 84.6%. In the identification of dangerous conditions around the vehicle, Zhang et al. [125] used lidar, GPS, and other sensors to collect data and identified the status of nearby vehicles with an LSTM neural network, judging whether surrounding vehicles pose a danger to the ego vehicle. Other scholars have achieved good results on vehicle trajectory prediction [126,127,128], vehicle following [129,130], driving style [131,132], and other behaviors.
To sum up, RNNs share weights across time steps, greatly reducing model parameters; they deeply mine the characteristics of temporal data, remembering or discarding information as needed, and the gating mechanism (LSTM) mitigates the vanishing gradient problem. RNNs offer better recognition performance than DBNs. However, RNNs also have disadvantages: they handle very long sequences poorly, cannot be parallelized, require long and difficult training, and need large-scale labeled training data. Table 6 categorizes recent literature on RNN-based dangerous driving behavior recognition, analyzes the proposed solutions, and provides a comprehensive summary of their respective advantages and limitations.
Adaptive neural network models [147,148,149], deep neural network (DNN) models [150,151], and generative adversarial network (GAN) models [152] have also been used to identify dangerous driving behavior. For example, Tang et al. [149] proposed lane-changing recognition based on an adaptive fuzzy neural network model; experiments at different speed levels in a driving simulator showed that the method can accurately identify wheel rotation angles. Ou et al. [153] designed a GAN-based driver distraction recognition system that improved image classification performance by 11.45% in a simulated driving environment. Choi et al. [154] identified different driving styles and then predicted a vehicle's trajectory based on a GAN and those styles. At present, the recognition accuracy of these methods exceeds 80%. As computer technology and deep learning develop, methods for recognizing the various forms of dangerous driving will become more comprehensive, and their recognition performance will improve.
To fully leverage the powerful capabilities of deep learning in identifying dangerous driving behaviors, researchers have increasingly adopted ensemble methods that integrate multiple deep learning models. A single deep learning model typically extracts features from a specific perspective or type of data, while an ensemble approach enables the complementary strengths of different models, thereby enhancing overall accuracy and robustness in recognition. For instance, Li et al. [155] utilized smartphones to collect data and constructed a CNN-LSTM model to identify six dangerous driving behaviors, including sharp turns and rapid lane changes. The recognition accuracy achieved 95.22% in the test. Patil [156] proposed Hypo-Driver, an advanced system for detecting driver inattention and fatigue. This system employs a multi-fusion architecture integrating CNN, RNN-LSTM, and DRNN (deep residual neural network) models to effectively process data from visual and biological signals, thereby enhancing the accuracy of driver alertness detection. Qu [157] integrated a pre-trained CNN with a bidirectional long short-term memory network (BiLSTM), thereby significantly enhancing the detection accuracy of distracted driving. Experimental results demonstrate that the CNN-BiLSTM framework achieves an accuracy of 98.97% on the “joint dataset” which combines the Kaggle State Farm dataset and the AUC Distracted Driving dataset. Zhang et al. [158] integrated a multi-scale convolutional neural network (MSCNN) with dual LSTM to detect distracted driving behaviors on the public BLVD dataset, achieving a peak accuracy of 97.75%. This performance significantly outperformed that of the LSTM and HMM models. This multi-model fusion approach not only enhances accuracy but also significantly improves the robustness and real-time performance of the system, particularly in complex environments such as nighttime driving conditions or diverse driver behaviors. Consequently, integrating multiple deep learning models for identifying dangerous driving behaviors better addresses practical challenges, thereby promoting the development of intelligent transportation systems while enhancing safety. As illustrated in Figure 9, the diagram provides an overview of the CNN-LSTM hybrid model architecture, highlighting its key components and data flow.
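The following PyTorch sketch outlines a CNN-LSTM hybrid in the spirit of Figure 9, with a 1-D CNN extracting local patterns from multi-channel sensor windows and an LSTM modeling their temporal order; all shapes, layer sizes, and the six-class output are illustrative assumptions rather than the architecture of any cited model.

```python
# A minimal sketch of a CNN-LSTM hybrid: a 1-D CNN extracts local patterns
# from each sensor window, and an LSTM models their temporal order.
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, n_channels=6, n_classes=6):
        super().__init__()
        self.cnn = nn.Sequential(                    # local feature extraction
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_classes)         # e.g., six dangerous-behavior classes

    def forward(self, x):                            # x: (batch, channels, time)
        feats = self.cnn(x).transpose(1, 2)          # -> (batch, time, features) for the LSTM
        _, (h_n, _) = self.lstm(feats)
        return self.head(h_n[-1])                    # classify from the final hidden state

x = torch.randn(8, 6, 128)                           # 8 windows of 6-axis IMU data, 128 samples
print(CNNLSTM()(x).shape)                            # torch.Size([8, 6])
```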
4. Summaries and Prospects
4.1. Summaries
This paper provides a comprehensive review of deep learning-based methods for recognizing dangerous driving behaviors, with particular emphasis on two key aspects: data sources and the application of deep learning technology. The primary data sources examined include survey questionnaires and on-board sensors. On-board sensors gather a wide array of data, encompassing vehicle operation status (e.g., speed, acceleration, steering angle), visual data (e.g., driver facial expressions, gaze points), and physiological data (e.g., heart rate, respiratory rate). These multi-dimensional datasets offer valuable insights into identifying dangerous driving behaviors. By integrating multiple data acquisition methods, such as onboard sensors, cameras, and physiological sensors, more comprehensive and precise data can be obtained. This integration ensures that the recognition system more closely reflects real-world driving conditions, thereby enhancing its practicality and reliability.
The second part of this paper focuses on the application of deep learning technology in recognizing dangerous driving behaviors. Deep learning models exhibit superior capabilities in processing complex big data, particularly through their ability to automatically learn effective feature representations from raw data without the need for manual feature extraction. Key strengths of deep learning include its robust learning capacity, excellent portability, and efficient performance, which enable it to adapt seamlessly to diverse data sources and application scenarios. As illustrated in Figure 10, the image depicts the recognition accuracy of fatigue driving detection achieved through deep learning methodologies [24,61,68,69,70,71,74,93,94,95,96,97,98,117,118,119,133,134,159].
According to the summary and analysis presented in this paper, deep learning-based dangerous driving behavior recognition systems have achieved significant advancements, with an overall recognition accuracy surpassing 80%. Notably, when utilizing a deep learning hybrid model for fatigue driving recognition, the accuracy rate can exceed 95%, thereby substantially enhancing both the accuracy and practicality of the recognition system. Table 7 provides a detailed overview of the key characteristics of the three driving behavior recognition models discussed in this paper, including their application scenarios, advantages, and limitations.
4.2. Practical Application
In recent years, driver assistance systems have become an integral component of modern automobiles. These systems provide critical functions such as automatic emergency braking (AEB), lane-keeping assistance (LKA), and adaptive cruise control (ACC) through advanced sensors and sophisticated algorithms. With the continuous advancement of data analysis technologies, particularly the integration of deep learning and edge computing, the capabilities of driver assistance systems can be significantly enhanced. The incorporation of real-time data analysis into these systems enables vehicles to more intelligently detect driver behavior, monitor road conditions, and identify potential safety risks. Over the past decade, the automotive industry has invested substantial resources in developing innovative technologies and functionalities aimed at detecting driver inattention.
Ford [160] has integrated a driver fatigue monitoring system into select vehicle models. This system employs advanced algorithms to analyze drivers’ facial expressions, eye movements, and operational behaviors, thereby enabling the prompt detection of fatigue indicators and timely issuance of alerts. The successful deployment of this system has significantly enhanced driver alertness and markedly reduced the incidence of accidents attributable to fatigued driving.
Tesla’s Autopilot system [161] integrates extensive real-time data analysis by leveraging onboard sensors such as radar and cameras, in conjunction with advanced deep learning models, to continuously monitor driver behavior and road conditions. Through systematic analysis of driving data, the system can predict potential driver actions and provide timely intervention measures, when necessary, thereby enhancing vehicle safety and automation capabilities. This integration of real-time analytics and machine learning algorithms significantly improves the system’s ability to anticipate and respond to dynamic driving scenarios, further reinforcing its role in advancing automotive safety and autonomy.
While these applications demonstrate the potential of deep learning to enhance vehicle safety and driver assistance systems, their real-world deployment faces several challenges. Limitations in data collection, model complexity, and variability in driving conditions present significant obstacles to achieving widespread and effective implementation. Overcoming these challenges is essential for advancing the practical application and reliability of these technologies.
4.3. Challenge
Building upon the advancements discussed in the previous section, this section delves into the key challenges that currently hinder the broader adoption of deep learning-based dangerous driving behavior recognition systems. These challenges range from limitations in data acquisition methods to the computational demands of deep learning models and the adaptability of these systems to diverse and dynamic driving environments.
Limitations of Data Collection Methods: Current studies rely on in-vehicle sensors, video surveillance, and driver physiological data to gather dangerous driving behavior data. While these methods provide valuable information for recognition systems, they also present challenges. Limited sensor coverage may miss key aspects of dangerous driving behavior, especially psychological or emotional states. Video surveillance accuracy can be compromised by low light or extreme weather. Physiological data collection requires specialized equipment, limiting its real-world application. Future research should prioritize the development of more advanced sensors, particularly through the design of cameras and sensors that can withstand low-light conditions and adverse weather, thereby enhancing the accuracy of video surveillance. Moreover, leveraging external data sources, such as urban traffic monitoring data and information from connected vehicles, can significantly broaden the scope and depth of data collection.
The Complexity of Deep Learning Models and Training Challenges: Deep learning models, such as DBNs, CNNs, and RNNs, require extensive labeled data for training due to their large parameter sets and complex architectures. Collecting and processing these data is costly and time-consuming. Existing datasets can lead to issues like overfitting, gradient explosion, or vanishing gradients, especially in deep networks, complicating and prolonging training. Techniques like layer-wise learning and selective initialization help mitigate these problems, but long training times persist. Future research should prioritize enhancing training efficiency, optimizing model architectures, and developing more sophisticated gradient optimization algorithms. By incorporating weakly supervised learning, semi-supervised learning, and active learning methods, the reliance on costly manual data annotation can be significantly reduced, thereby improving data utilization efficiency. Moreover, by applying transfer learning techniques and leveraging pre-trained models, the training process can be substantially accelerated while markedly reducing the need for large-scale datasets.
Robustness and Adaptability of the Model: While deep learning methods provide accurate results in most cases, their robustness and adaptability remain challenging. Existing models may perform inconsistently across different driving environments, driver behaviors, and external factors. For instance, driver behavior can vary by region and climate, which existing models may not fully accommodate. Future research should prioritize enhancing cross-domain adaptability to ensure model accuracy in complex and dynamic environments. Specifically, this entails developing adaptive models that are capable of dynamically adjusting to real-time changes in driving conditions. By leveraging advanced adaptive algorithms, these models can improve both robustness and reliability, thereby facilitating broader applicability across diverse scenarios.
User Adoption Challenges: The adoption of deep learning-based dangerous driving behavior recognition systems encounters several critical challenges. Chief among these are concerns over data privacy and security. Users may be reluctant to share sensitive information, such as driving habits or physiological data, without stringent safeguards. Additionally, the lack of system transparency can undermine trust, as users often find it difficult to comprehend the decision-making processes of the models. Future research should prioritize the development of privacy-preserving techniques, enhancing model interoperability, and refining user interface design to effectively address key challenges and foster greater user acceptance. Specifically, this entails implementing privacy-preserving techniques such as federated learning and differential privacy to safeguard user data without compromising model performance. Moreover, improving user interface design through simplified, intuitive interfaces with features like dashboards, voice assistants, and straightforward interaction methods can significantly enhance usability and user acceptance, thereby facilitating broader adoption of these technologies across diverse applications.
Despite these significant challenges, ongoing advancements in technology and research offer promising opportunities to surmount these obstacles. Addressing issues such as data acquisition limitations, model complexity, and adaptability will be critical for fully realizing the potential of deep learning in recognizing dangerous driving behaviors. These efforts not only promise to resolve existing bottlenecks but also lay the foundation for developing more robust and intelligent systems in the future. Continuous innovation and rigorous testing are essential to ensure that these technologies can effectively enhance road safety and driver assistance capabilities.
4.4. Prospects
The rapid advancement of intelligent and connected vehicle technologies has significantly enhanced the potential for deep learning-based dangerous driving behavior recognition. By leveraging high-performance onboard sensor technology, advanced big data processing capabilities, and sophisticated artificial intelligence algorithms, future research will increasingly emphasize more intelligent, efficient, and engineering-oriented approaches.
In the field of driving safety, the further optimization and application of deep learning models represent a critical research direction. A key challenge is to fully extract driving behavior features by integrating multi-source vehicle-mounted sensor data and develop a driving behavior recognition model with high accuracy and real-time performance. In natural driving conditions, driving behavior is influenced by multiple factors, including driver status, road environment, and vehicle dynamics. Consequently, the model must possess enhanced robustness and generalization capabilities to effectively address complex driving scenarios and uncertainties.
Meanwhile, achieving lightweight and real-time performance will remain a central objective in future model development. Currently, dangerous driving behavior recognition models frequently exhibit high computational complexity and significant power consumption, which are at odds with the demands for real-time recognition and practical engineering applications. Future research should prioritize simplifying model architectures and optimizing algorithms to develop lightweight models capable of efficient operation on embedded devices and edge computing platforms. Such models will not only meet the stringent requirements for low computational power consumption but also ensure robust data processing and rapid decision-making capabilities, thereby providing substantial support for advanced driver assistance systems (ADAS) and autonomous driving technologies.
Furthermore, as autonomous driving technology continues to mature, the role of deep learning algorithms in identifying dangerous driving behaviors will be significantly extended. Future research will concentrate on integrating advanced techniques such as reinforcement learning and transfer learning with driving behavior recognition to enhance the safety and reliability of autonomous driving systems. Specifically, in high-dynamic and high-risk scenarios, accurately detecting abnormal driving behaviors through deep learning models and dynamically optimizing autonomous driving decision-making logic in real time will become a critical component of the comprehensive unmanned driving safety framework [161].
In conclusion, future research on dangerous driving behavior recognition based on deep learning will exhibit three key trends: first, the further advancement of multi-source data fusion and multi-modal feature extraction; second, the concurrent development of lightweight models and real-time performance optimization; third, the deep integration with autonomous driving technologies. These trends will not only drive innovation in driving safety but also establish a robust technical foundation for the realization of safer and smarter transportation systems.
4.5. Policy Linkages
As these advancements in dangerous driving behavior recognition continue to evolve, their impact will not only be confined to technological innovation but will also extend to shaping traffic safety policies and regulatory frameworks. Bridging the gap between cutting-edge research and practical implementation requires robust policy support to ensure that these technologies are effectively deployed for improving road safety and fostering a more intelligent transportation ecosystem.
First and foremost, deep learning facilitates the real-time identification of dangerous driving behaviors within traffic monitoring systems, thereby equipping traffic management authorities with timely and accurate accident warning information. This capability not only significantly reduces the incidence of traffic accidents but also provides law enforcement agencies with a robust scientific basis for enhancing road safety.
Furthermore, with the ongoing advancement of intelligent transportation systems and autonomous driving technologies, deep learning is poised to significantly enhance traffic safety regulation. For instance, by integrating real-time sensor data with deep learning models, it becomes possible to more accurately detect hazardous behaviors such as fatigue driving, speeding, and distracted driving. This enhanced detection capability facilitates the continuous refinement and updating of traffic safety regulations. Through policy support and technological integration, governments can establish scientifically robust guidelines for driving behavior, thereby promoting safer driving practices and ultimately contributing to a substantial reduction in traffic accident rates.
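To make this concrete, the following minimal sketch shows how a monitoring pipeline might screen fused sensor outputs for the three behaviors named above before alerting a traffic-management backend. Every threshold, field name, and the stub classifier probability are assumptions for illustration, not values from the literature.

```python
# An illustrative screening step over fused sensor outputs. PERCLOS-style
# eye-closure ratios are a real fatigue indicator; the specific thresholds
# and field names here are assumed, not taken from any surveyed system.
from dataclasses import dataclass

@dataclass
class SensorFrame:
    speed_kmh: float            # from CAN bus / GPS
    eye_closure_ratio: float    # PERCLOS-style value from the driver camera
    phone_in_hand_prob: float   # output of a (stub) distraction classifier

def screen(frame: SensorFrame, speed_limit_kmh: float = 100.0) -> list[str]:
    """Return the list of hazardous behaviors flagged in this frame."""
    alerts = []
    if frame.speed_kmh > speed_limit_kmh:
        alerts.append("speeding")
    if frame.eye_closure_ratio > 0.4:       # assumed fatigue threshold
        alerts.append("fatigue")
    if frame.phone_in_hand_prob > 0.8:      # assumed distraction threshold
        alerts.append("distraction")
    return alerts

print(screen(SensorFrame(speed_kmh=118.0, eye_closure_ratio=0.1,
                         phone_in_hand_prob=0.9)))
# -> ['speeding', 'distraction']
```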
However, the complexity and substantial computational demands of deep learning models present significant challenges for real-world deployment. Many existing traffic safety systems, particularly those limited to onboard CPUs, cannot meet the real-time processing requirements of these models. Future traffic safety policies should therefore prioritize technological innovation and hardware upgrades that overcome these limitations, facilitating the widespread adoption of intelligent transportation systems. Concurrently, governments should strengthen research support for deep learning, foster cross-disciplinary collaboration, and ensure its effective use in improving traffic safety and reducing accidents.
Author Contributions: Conceptualization, J.H. and W.H.; literature review and data collection, B.Z. and Y.Z.; methodology, B.Z. and J.H.; writing—original draft, B.Z. All authors have read and agreed to the published version of the manuscript.
Data Availability Statement: No new data were created or analyzed in this study. Data sharing is not applicable to this article.
Conflicts of Interest: The authors declare no conflicts of interest.
Footnotes
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Figure 10. Identification accuracy of fatigued driving by deep learning and traditional machine learning.
Overview of vehicle sensor information.
| Vehicle Sensor Information | Collected Data | Acquisition Device |
|---|---|---|
| Drivers | Eye movement, blink rate, head posture, facial expression, body movement | Facial recognition camera |
| Drivers | Eye drift, hand movements | Gesture recognition sensor |
| Drivers | Acceleration, braking, steering, lane change, gas pedal control | Gyroscope |
| Vehicles | Engine speed, engine temperature, oil pressure, coolant temperature, etc. | CAN bus |
| Environment | Traffic signs, lights | Laser radar |
| Environment | Pedestrians, obstacles | Front- and rear-view cameras |
| Environment | Rain, snow, light intensity | Light sensor, temperature sensor |
Data acquisition equipment.
| Data Acquisition Equipment | CAN Bus | IMU | Smartphone | Driving Simulator |
|---|---|---|---|---|
| Data type | Intake valve opening, braking distance, steering wheel angle, acceleration, etc. | Accelerometer, gyroscope, magnetometer | Accelerometer, gyroscope, magnetometer, GPS, barometer, microphone, etc. | Camera, radar, fingerprint sensor, infrared sensor |
| Advantages | High performance, anti-interference, high integration, controllability, intelligence | High precision, real-time data, versatility, low power consumption | High portability, low cost, wide range of data collection, real-time acquisition | Safe and reliable, controllable and repeatable, flexible test range |
| Disadvantages | Channel congestion, channel errors, data inconsistency | Depends on initial alignment, susceptible to environmental influences, noise and bias error | Measurement error, environmental effects, difficult data processing, inflexible mounting location | Higher cost, limited data accuracy |
Comparison of data types for dangerous driving behavior analysis.
| Data Type | Source | Advantages | Limitations | Applicable Scenarios |
|---|---|---|---|---|
| Questionnaire data | Self-reported surveys, DBQ (Driver Behavior Questionnaire), statistical modeling | Easy to quantify and scale for large populations | Limited to subjective responses, prone to biases | Behavior assessment, policy development, safe-driving training |
| Vehicle running status data | CAN bus, IMU (accelerometer, gyroscope), smartphone sensors, driving simulators | Provides real-time, precise, and objective data | Relies on network connectivity for real-time data transmission | Urban roads, highways, controlled simulations |
| Vision data | Machine vision: driver facial features (eyes, mouth, posture), environment monitoring | Intuitive observation of the driver's environment | Sensitive to lighting conditions | Highways, complex road conditions, distracted or fatigued driving |
| Physiological data | EEG, ECG, EMG signals from wearable sensors or specialized devices | Provides direct insight into the driver's physical and emotional states | Equipment can be intrusive, affecting driver comfort | Fatigue detection, long-distance driving, stress monitoring |
Application and characteristics of the DBN model.
| Recognition Type | Open-Source Dataset | Problem Addressed | Advantages and Disadvantages |
|---|---|---|---|
| Fatigue driving […] | YawDD, NTHU-DDD, and KOU-DFD datasets; 87% […] | Identify driver fatigue. | Advantages: good flexibility, easy expansion, parallel computing, relatively short convergence time, etc. |
| Abnormal driving | NGSIM […] | Dangerous lane changes […] | |
| Vehicle trajectory prediction […] | NGSIM […] | Lane-change trajectory prediction, steering-angle prediction, etc. | |
| Other | Lane detection […] | | |
Application and characteristics of the CNN model.
| Recognition Type | Open-Source Dataset | Problem Addressed | Advantages and Disadvantages |
|---|---|---|---|
| Fatigue driving […] | YawDD and NTHU-DDD datasets […] | Identify whether the driver shows fatigued driving behavior. | Advantages: data dimension reduction, shared convolution kernels, automatic feature extraction, high accuracy, robustness against noise interference. |
| Distracted driving […] | SFD3 and AUCD2; 98.48% and 95.64% […] | Recognize phone use, talking, eating, etc. while driving. | |
| Abnormal driving […] | VDB, MDBD, DD, and DD; 97.9%, 92.92%, 91.81%, and 100% […] | Identify sudden stops, sharp turns, abrupt acceleration, etc. | |
| Vehicle running state […] | UAH-DriveSet; 80.25% […] | Identify left turns, right turns, stops, etc. | |
| Surrounding environment | HCRL and HCI-Lab datasets; 95.03% and 94.27% […] | Judge potential hazards during driving. | |
| Other | Vehicle trajectory prediction and driving decisions […] | | |
Application and characteristics of the RNN model.
| Recognition Type | Open-Source Dataset | Problem Addressed | Advantages and Disadvantages |
|---|---|---|---|
| Fatigue driving […] | NTHU-DDD; 97% […] | Identify fatigued driving behavior. | Advantages: processes sequential data, mines temporal and semantic information, and mitigates the vanishing-gradient problem through gated mechanisms (LSTM). |
| Distracted driving […] | UAH-DriveSet; 91% […] | Recognize phone use, talking, eating, etc. while driving. | |
| Abnormal driving […] | Safety Pilot Model Deployment (SPMD) data; 95.684% […] | Identify sudden stops, sharp turns, abrupt acceleration, etc. | |
| Vehicle running state […] | | Identify left turns, right turns, stops, etc. | |
| Surrounding environment […] | NGSIM […] | Judge potential hazards during driving. | |
| Other | NGSIM […] | Vehicle trajectory prediction […] | |
Comparison of three deep learning methods.
| Deep Learning Model | DBN | CNN | RNN |
|---|---|---|---|
| Characteristics | Good flexibility, easy expansion, capable of parallel computing. | Handles high-dimensional data, automatic feature extraction, strong noise immunity. | Processes sequential data; mines temporal and semantic information. |
| Recognition performance | >80% | >85% | >90% |
| Computational cost | Moderate | High | High |
| Adaptability | Static model | Static model | Dynamic model with adaptive learning capabilities |
| Disadvantages | Accepts only one-dimensional input; temporal relationships between observed variables are not explicitly modeled. | The pooling layers discard much valuable information. | Poor performance on long sequences, cannot handle multiple tasks simultaneously, long training time, high training difficulty. |
| Application | Simple sensor data; model extensions. | High-dimensional data with many parameters. | High-dimensional time-series data. |
References
1. World Health Organization. Global Status Report on Road Safety 2018; World Health Organization: Geneva, Switzerland, 2018; ISBN 978-92-4-156568-4
2. Singh, H.; Kushwaha, V.; Agarwal, A.D.; Sandhu, S.S. Fatal Road Traffic Accidents: Causes and Factors Responsible. J. Indian Acad. Foren. Med.; 2016; 38, 52. [DOI: https://dx.doi.org/10.5958/0974-0848.2016.00014.2]
3. United States Department of Transportation. National Highway Traffic Safety Administration. Countermeasures That Work: A Highway Safety Countermeasure Guide for State Highway Safety Offices; 10th ed. United States Department of Transportation, National Highway Traffic Safety Administration: Washington, DC, USA, 2020; [DOI: https://dx.doi.org/10.21949/1526021]
4. Lin, N.; Zuo, Y. Advancing Driver Fatigue Detection in Diverse Lighting Conditions for Assisted Driving Vehicles with Enhanced Facial Recognition Technologies. PLoS ONE; 2024; 19, e0304669. [DOI: https://dx.doi.org/10.1371/journal.pone.0304669] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/38985745]
5. Wilson, C.; Willis, C.; Hendrikz, J.K.; Bellamy, N. Speed Enforcement Detection Devices for Preventing Road Traffic Injuries. Cochrane Database of Systematic Reviews; The Cochrane Collaboration. John Wiley & Sons, Ltd: Chichester, UK, 2006; CD004607.pub2.
6. Rao, X.; Lin, F.; Chen, Z.; Zhao, J. Distracted Driving Recognition Method Based on Deep Convolutional Neural Network. J. Ambient. Intell. Human. Comput.; 2021; 12, pp. 193-200. [DOI: https://dx.doi.org/10.1007/s12652-019-01597-4]
7. Almadi, A.I.M.; Al Mamlook, R.E.; Almarhabi, Y.; Ullah, I.; Jamal, A.; Bandara, N. A Fuzzy-Logic Approach Based on Driver Decision-Making Behavior Modeling and Simulation. Sustainability; 2022; 14, 8874. [DOI: https://dx.doi.org/10.3390/su14148874]
8. Fasanmade, A.; He, Y.; Al-Bayatti, A.H.; Morden, J.N.; Aliyu, S.O.; Alfakeeh, A.S.; Alsayed, A.O. A Fuzzy-Logic Approach to Dynamic Bayesian Severity Level Classification of Driver Distraction Using Image Recognition. IEEE Access; 2020; 8, pp. 95197-95207. [DOI: https://dx.doi.org/10.1109/ACCESS.2020.2994811]
9. Yuksel, A.S.; Atmaca, S. Driver’s Black Box: A System for Driver Risk Assessment Using Machine Learning and Fuzzy Logic. J. Intell. Transp. Syst.; 2021; 25, pp. 482-500. [DOI: https://dx.doi.org/10.1080/15472450.2020.1852083]
10. Aksjonov, A.; Nedoma, P.; Vodovozov, V.; Petlenkov, E.; Herrmann, M. A Method of Driver Distraction Evaluation Using Fuzzy Logic: Phone Usage as a Driver’s Secondary Activity: Case Study. Proceedings of the 2017 XXVI International Conference on Information, Communication and Automation Technologies (ICAT); Sarajevo, Bosnia and Herzegovina, 26–28 October 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1-6.
11. Pan, J.-S.; Lu, K.; Chen, S.-H.; Yan, L. Driving Behavior Analysis of Multiple Information Fusion Based on SVM. Modern Advances in Applied Intelligence; Ali, M.; Pan, J.-S.; Chen, S.-M.; Horng, M.-F. Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2014; Volume 8481, pp. 60-69. ISBN 978-3-319-07454-2
12. Savas, B.K.; Becerikli, Y. Real Time Driver Fatigue Detection Based on SVM Algorithm. Proceedings of the 2018 6th International Conference on Control Engineering & Information Technology (CEIT); Istanbul, Turkey, 25–27 October 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1-4.
13. Wang, H.; Gu, M.; Wu, S.; Wang, C. A Driver’s Car-Following Behavior Prediction Model Based on Multi-Sensors Data. J. Wirel. Com. Netw.; 2020; 2020, 10. [DOI: https://dx.doi.org/10.1186/s13638-020-1639-2]
14. Tran, C.; Doshi, A.; Trivedi, M.M. Modeling and Prediction of Driver Behavior by Foot Gesture Analysis. Comput. Vis. Image Underst.; 2012; 116, pp. 435-445. [DOI: https://dx.doi.org/10.1016/j.cviu.2011.09.008]
15. Cao, W.; Lin, X.; Zhang, K.; Dong, Y.; Huang, S.; Zhang, L. Analysis and Evaluation of Driving Behavior Recognition Based on a 3-Axis Accelerometer Using a Random Forest Approach: Poster Abstract. Proceedings of the 16th ACM/IEEE International Conference on Information Processing in Sensor Networks; Pittsburgh, PA, USA, 18–21 April 2017; ACM: New York, NY, USA, 2017; pp. 303-304.
16. Shi, H.; Wang, T.; Zhong, F.; Wang, H.; Han, J.; Wang, X. A Data-Driven Car-Following Model Based on the Random Forest. World J. Eng. Technol.; 2021; 9, pp. 503-515. [DOI: https://dx.doi.org/10.4236/wjet.2021.93033]
17. Dong, B.-T.; Lin, H.-Y. An On-Board Monitoring System for Driving Fatigue and Distraction Detection. Proceedings of the 2021 22nd IEEE International Conference on Industrial Technology (ICIT); Valencia, Spain, 10–12 March 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 850-855.
18. Hinton, G.E.; Osindero, S.; Teh, Y.-W. A Fast Learning Algorithm for Deep Belief Nets. Neural Comput.; 2006; 18, pp. 1527-1554. [DOI: https://dx.doi.org/10.1162/neco.2006.18.7.1527]
19. Zhang, N.; Ding, S.; Zhang, J.; Xue, Y. An Overview on Restricted Boltzmann Machines. Neurocomputing; 2018; 275, pp. 1186-1199. [DOI: https://dx.doi.org/10.1016/j.neucom.2017.09.065]
20. Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning Representations by Back-Propagating Errors. Nature; 1986; 323, pp. 533-536. [DOI: https://dx.doi.org/10.1038/323533a0]
21. Hassan, D.A.; Egi, Y.; Redif, S. Neural Networks for Computing Eigenvalues of Parahermitian Matrices. Proceedings of the 2024 32nd European Signal Processing Conference (EUSIPCO); Lyon, France, 26–30 August 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 1307-1311.
22. Sherstinsky, A. Fundamentals of Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM) Network. Phys. D Nonlinear Phenom.; 2020; 404, 132306. [DOI: https://dx.doi.org/10.1016/j.physd.2019.132306]
23. Zhao, L.; Wang, Z.; Wang, X.; Liu, Q. Driver Drowsiness Detection Using Facial Dynamic Fusion Information and a DBN. IET Intell. Transp. Syst.; 2018; 12, pp. 127-133. [DOI: https://dx.doi.org/10.1049/iet-its.2017.0183]
24. Shahverdy, M.; Fathy, M.; Berangi, R.; Sabokrou, M. Driver Behavior Detection and Classification Using Deep Convolutional Neural Networks. Expert Syst. Appl.; 2020; 149, 113240. [DOI: https://dx.doi.org/10.1016/j.eswa.2020.113240]
25. Zhang, J.; Wu, Z.; Li, F.; Luo, J.; Ren, T.; Hu, S.; Li, W.; Li, W. Attention-Based Convolutional and Recurrent Neural Networks for Driving Behavior Recognition Using Smartphone Sensor Data. IEEE Access; 2019; 7, pp. 148031-148046. [DOI: https://dx.doi.org/10.1109/ACCESS.2019.2932434]
26. Elamrani Abou Elassad, Z.; Mousannif, H.; Al Moatassime, H.; Karkouch, A. The Application of Machine Learning Techniques for Driving Behavior Analysis: A Conceptual Framework and a Systematic Literature Review. Eng. Appl. Artif. Intell.; 2020; 87, 103312. [DOI: https://dx.doi.org/10.1016/j.engappai.2019.103312]
27. Ferreira, J.; Carvalho, E.; Ferreira, B.V.; De Souza, C.; Suhara, Y.; Pentland, A.; Pessin, G. Driver Behavior Profiling: An Investigation with Different Smartphone Sensors and Machine Learning. PLoS ONE; 2017; 12, e0174959. [DOI: https://dx.doi.org/10.1371/journal.pone.0174959] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28394925]
28. El-Nabi, S.A.; El-Shafai, W.; El-Rabaie, E.-S.M.; Ramadan, K.F.; Abd El-Samie, F.E.; Mohsen, S. Machine Learning and Deep Learning Techniques for Driver Fatigue and Drowsiness Detection: A Review. Multimed. Tools Appl.; 2024; 83, pp. 9441-9477. [DOI: https://dx.doi.org/10.1007/s11042-023-15054-0]
29. David, R.; Söffker, D. A Review on Machine Learning-Based Models for Lane-Changing Behavior Prediction and Recognition. Front. Future Transp.; 2023; 4, 950429. [DOI: https://dx.doi.org/10.3389/ffutr.2023.950429]
30. Seo, S.; Kim, M.; Lee, C. A Study on the Dangerous Driving Behaviors by Driver Behavior Analysis. J. Korea Inst. Intelligent. Transp. Syst.; 2015; 14, pp. 13-22. [DOI: https://dx.doi.org/10.12815/kits.2015.14.5.013]
31. Lokman, S.-F.; Othman, A.T.; Abu-Bakar, M.-H. Intrusion Detection System for Automotive Controller Area Network (CAN) Bus System: A Review. J. Wirel. Com. Netw.; 2019; 2019, 184. [DOI: https://dx.doi.org/10.1186/s13638-019-1484-3]
32. Jie, Z.; Mahmoud, M.; Stafford-Fraser, Q.; Robinson, P.; Dias, E.; Skrypchuk, L. Analysis of Yawning Behaviour in Spontaneous Expressions of Drowsy Drivers. Proceedings of the 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018); Xi’an, China, 15–19 May 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 571-576.
33. Liu, L.; Ji, Y.; Gao, Y.; Ping, Z.; Kuang, L.; Li, T.; Xu, W. A Novel Fatigue Driving State Recognition and Warning Method Based on EEG and EOG Signals. J. Healthc. Eng.; 2021; 2021, pp. 1-10. [DOI: https://dx.doi.org/10.1155/2021/7799793] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/34853672]
34. Pawan, W.; Yogesh, D.; Kumar, M. Analysis of Factors Influencing Safe Driving Behavior in Indian Context Using Manchester Driver Behavior Questionnaire. Int. J. Perform. Eng.; 2023; 19, 76. [DOI: https://dx.doi.org/10.23940/ijpe.23.01.p8.7684]
35. Liu, J.; Wang, C.; Liu, Z.; Feng, Z.; Sze, N.N. Drivers’ Risk Perception and Risky Driving Behavior under Low Illumination Conditions: Modified Driver Behavior Questionnaire (DBQ) and Driver Skill Inventory (DSI). J. Adv. Transp.; 2021; 2021, 5568240. [DOI: https://dx.doi.org/10.1155/2021/5568240]
36. Sánchez-López, M.T.; Fernández-Berrocal, P.; Tagliabue, M.; Megías-Robles, A. Spanish Adaptation and Validation of the Dula Dangerous Driving Index (DDDI). Aggress. Behav.; 2024; 50, e22129. [DOI: https://dx.doi.org/10.1002/ab.22129] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/38268389]
37. Khandakar, A.; Michelson, D.G.; Naznine, M.; Salam, A.; Nahiduzzaman, M.; Khan, K.M.; Suganthan, P.N.; Ayari, M.A.; Menouar, H.; Haider, J. Harnessing Smartphone Sensors for Enhanced Road Safety: A Comprehensive Dataset and Review. arXiv; 2024; arXiv: 2411.07315
38. Vlahogianni, E.I.; Barmpounakis, E.N. Driving Analytics Using Smartphones: Algorithms, Comparisons and Challenges. Transp. Res. Part C Emerg. Technol.; 2017; 79, pp. 196-206. [DOI: https://dx.doi.org/10.1016/j.trc.2017.03.014]
39. Ezzati Amini, R.; Al Haddad, C.; Batabyal, D.; Gkena, I.; De Vos, B.; Cuenen, A.; Brijs, T.; Antoniou, C. Driver Distraction and In-Vehicle Interventions: A Driving Simulator Study on Visual Attention and Driving Performance. Accid. Anal. Prev.; 2023; 191, 107195. [DOI: https://dx.doi.org/10.1016/j.aap.2023.107195]
40. Shulei, W.; Zihang, S.; Huandong, C.; Yuchen, Z.; Yang, Z.; Jinbiao, C.; Qiaona, M. Road Rage Detection Algorithm Based on Fatigue Driving and Facial Feature Point Location. Neural Comput. Appl.; 2022; 34, pp. 12361-12371. [DOI: https://dx.doi.org/10.1007/s00521-021-06856-0]
41. El Zein, H.; Harb, H.; Delmotte, F.; Zahwe, O.; Haddad, S. VAS-3D: A Visual-Based Alerting System for Detecting Drowsy Drivers in Intelligent Transportation Systems. World Electr. Veh. J.; 2024; 15, 540. [DOI: https://dx.doi.org/10.3390/wevj15120540]
42. Sim, S.; Kim, C. Proposal of a Cost-Effective and Adaptive Customized Driver Inattention Detection Model Using Time Series Analysis and Computer Vision. World Electr. Veh. J.; 2024; 15, 400. [DOI: https://dx.doi.org/10.3390/wevj15090400]
43. Das, S.; Pratihar, S.; Pradhan, B. Advanced Deep Learning Models for Automatic Detection of Driver’s Facial Expressions, Movements, and Alertness in Varied Lighting Conditions: A Comparative Analysis. Multimedia Tools and Applications; Springer: Berlin/Heidelberg, Germany, 2024; pp. 1-37. [DOI: https://dx.doi.org/10.1007/s11042-024-20428-z]
44. Gao, Z.; Wang, X.; Yang, Y.; Mu, C.; Cai, Q.; Dang, W.; Zuo, S. EEG-Based Spatio–Temporal Convolutional Neural Network for Driver Fatigue Evaluation. IEEE Trans. Neural Netw. Learn. Syst.; 2019; 30, pp. 2755-2763. [DOI: https://dx.doi.org/10.1109/TNNLS.2018.2886414] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/30640634]
45. Rao, S.; Li, K.; Wu, J.; Mu, Z. Application of Ensemble Learning in EEG Signal Analysis of Fatigue Driving. J. Phys. Conf. Ser.; 2021; 1744, 042193. [DOI: https://dx.doi.org/10.1088/1742-6596/1744/4/042193]
46. Zheng, W.-L.; Gao, K.; Li, G.; Liu, W.; Liu, C.; Liu, J.-Q.; Wang, G.; Lu, B.-L. Vigilance Estimation Using a Wearable EOG Device in Real Driving Environment. IEEE Trans. Intell. Transport. Syst.; 2020; 21, pp. 170-184. [DOI: https://dx.doi.org/10.1109/TITS.2018.2889962]
47. Zontone, P.; Affanni, A.; Bernardini, R.; Del Linz, L.; Piras, A.; Rinaldo, R. Stress Evaluation in Simulated Autonomous and Manual Driving through the Analysis of Skin Potential Response and Electrocardiogram Signals. Sensors; 2020; 20, 2494. [DOI: https://dx.doi.org/10.3390/s20092494] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/32354062]
48. Kashevnik, A.; Lashkov, I.; Gurtov, A. Methodology and Mobile Application for Driver Behavior Analysis and Accident Prevention. IEEE Trans. Intell. Transport. Syst.; 2020; 21, pp. 2427-2436. [DOI: https://dx.doi.org/10.1109/TITS.2019.2918328]
49. Lashkov, I.; Kashevnik, A.; Shilov, N.; Parfenov, V.; Shabaev, A. Driver Dangerous State Detection Based on OpenCV & Dlib Libraries Using Mobile Video Processing. Proceedings of the 2019 IEEE International Conference on Computational Science and Engineering (CSE) and IEEE International Conference on Embedded and Ubiquitous Computing (EUC); New York, NY, USA, 1–3 August 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 74-79.
50. Martin, S.; Tawari, A.; Trivedi, M.M. Toward Privacy-Protecting Safety Systems for Naturalistic Driving Videos. IEEE Trans. Intell. Transport. Syst.; 2014; 15, pp. 1811-1822. [DOI: https://dx.doi.org/10.1109/TITS.2014.2308543]
51. Li, Y.; Tao, X.; Zhang, X.; Liu, J.; Xu, J. Privacy-Preserved Federated Learning for Autonomous Driving. IEEE Trans. Intell. Transport. Syst.; 2022; 23, pp. 8423-8434. [DOI: https://dx.doi.org/10.1109/TITS.2021.3081560]
52. Xiong, Z.; Li, W.; Han, Q.; Cai, Z. Privacy-Preserving Auto-Driving: A GAN-Based Approach to Protect Vehicular Camera Data. Proceedings of the 2019 IEEE International Conference on Data Mining (ICDM); Beijing, China, 8–11 November 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 668-677.
53. Liu, J.; Yang, N.; Lee, Y.; Huang, W.; Du, Y.; Li, T.; Zhang, P. FedDAF: Federated Deep Attention Fusion for Dangerous Driving Behavior Detection. Inf. Fusion; 2024; 112, 102584. [DOI: https://dx.doi.org/10.1016/j.inffus.2024.102584]
54. Khodarahmi, M.; Maihami, V. A Review on Kalman Filter Models. Arch. Comput. Methods Eng.; 2023; 30, pp. 727-747. [DOI: https://dx.doi.org/10.1007/s11831-022-09815-7]
55. Jiang, L.; Huang, D.; Liu, M.; Yang, W. Beyond Synthetic Noise: Deep Learning on Controlled Noisy Labels. Proceedings of the 37th International Conference on Machine Learning (PMLR); Virtual, 13–18 July 2020; Daumé, H., III; Singh, A., Eds.; 2020; Volume 119, pp. 4804-4815.
56. Zhao, M.; Zhong, S.; Fu, X.; Tang, B.; Pecht, M. Deep Residual Shrinkage Networks for Fault Diagnosis. IEEE Trans. Ind. Inf.; 2020; 16, pp. 4681-4690. [DOI: https://dx.doi.org/10.1109/TII.2019.2943898]
57. Di Bono, M.G.; Zorzi, M. Deep Generative Learning of Location-Invariant Visual Word Recognition. Front. Psychol.; 2013; 4, 635. [DOI: https://dx.doi.org/10.3389/fpsyg.2013.00635] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/24065939]
58. Graves, A.; Mohamed, A.; Hinton, G. Speech Recognition with Deep Recurrent Neural Networks. Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing; Vancouver, BC, Canada, 26–31 May 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 6645-6649.
59. Mohanty, S.P.; Hughes, D.P.; Salathé, M. Using Deep Learning for Image-Based Plant Disease Detection. Front. Plant Sci.; 2016; 7, 1419. [DOI: https://dx.doi.org/10.3389/fpls.2016.01419]
60. Kır Savaş, B.; Becerikli, Y. Behavior-Based Driver Fatigue Detection System with Deep Belief Network. Neural Comput. Appl.; 2022; 34, pp. 14053-14065. [DOI: https://dx.doi.org/10.1007/s00521-022-07141-4]
61. Hema, D.D.; Jaison, T.R. A Novel Deep Learning-Driven Smart System for Lane Change Decision-Making. Int. J. ITS Res.; 2024; 22, pp. 648-659. [DOI: https://dx.doi.org/10.1007/s13177-024-00421-4]
62. Abbas, Q. V-ITS: Video-Based Intelligent Transportation System for Monitoring Vehicle Illegal Activities. Int. J. Adv. Comput. Sci. Appl.; 2019; 10, pp. 202-208. [DOI: https://dx.doi.org/10.14569/IJACSA.2019.0100326]
63. Xie, D.-F.; Fang, Z.-Z.; Jia, B.; He, Z. A Data-Driven Lane-Changing Model Based on Deep Learning. Transp. Res. Part C Emerg. Technol.; 2019; 106, pp. 41-60. [DOI: https://dx.doi.org/10.1016/j.trc.2019.07.002]
64. Shirke, S.; Udayakumar, R. Hybrid Optimisation Dependent Deep Belief Network for Lane Detection. J. Exp. Theor. Artif. Intell.; 2022; 34, pp. 175-187. [DOI: https://dx.doi.org/10.1080/0952813X.2020.1853249]
65. Hadsell, R.; Erkan, A.; Sermanet, P.; Scoffier, M.; Muller, U.; LeCun, Y. Deep Belief Net Learning in a Long-Range Vision System for Autonomous off-Road Driving. Proceedings of the 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems; Nice, France, 22–26 September 2008; IEEE: Piscataway, NJ, USA, 2008; pp. 628-633.
66. Feng, J.; Qin, D.; Wang, K.; Liu, Y. Online Intelligent Gear-Shift Decision of Vehicle Considering Driving Intention Using Moving Horizon Strategy. Advances in Asian Mechanism and Machine Science; Khang, N.V.; Hoang, N.Q.; Ceccarelli, M. Mechanisms and Machine Science; Springer International Publishing: Cham, Switzerland, 2022; Volume 113, pp. 168-178. ISBN 978-3-030-91891-0
67. Chai, R.; Ling, S.H.; San, P.P.; Naik, G.R.; Nguyen, T.N.; Tran, Y.; Craig, A.; Nguyen, H.T. Improving EEG-Based Driver Fatigue Classification Using Sparse-Deep Belief Networks. Front. Neurosci.; 2017; 11, 103. [DOI: https://dx.doi.org/10.3389/fnins.2017.00103]
68. Weng, C.-H.; Lai, Y.-H.; Lai, S.-H. Driver Drowsiness Detection via a Hierarchical Temporal Deep Belief Network. Computer Vision—ACCV 2016 Workshops; Chen, C.-S.; Lu, J.; Ma, K.-K. Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2017; Volume 10118, pp. 117-133. ISBN 978-3-319-54525-7
69. Yin, Z.; Zhang, J. Cross-Subject Recognition of Operator Functional States via EEG and Switching Deep Belief Networks with Adaptive Weights. Neurocomputing; 2017; 260, pp. 349-366. [DOI: https://dx.doi.org/10.1016/j.neucom.2017.05.002]
70. Zheng, Z.; Dai, S.; Liang, Y.; Xie, X. Driver Fatigue Analysis Based on Upper Body Posture and DBN-BPNN Model. Proceedings of the 2019 IEEE 4th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC); Chengdu, China, 20–22 December 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 574-581.
71. Zhao, C.; Gong, J.; Lu, C.; Xiong, G.; Mei, W. Speed and Steering Angle Prediction for Intelligent Vehicles Based on Deep Belief Network. Proceedings of the 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC); Yokohama, Japan, 16–19 October 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 301-306.
72. Yang, L.; Zhao, C.; Lu, C.; Wei, L.; Gong, J. Lateral and Longitudinal Driving Behavior Prediction Based on Improved Deep Belief Network. Sensors; 2021; 21, 8498. [DOI: https://dx.doi.org/10.3390/s21248498] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/34960592]
73. Savas, B.K.; Becerikli, Y. Real Time Driver Fatigue Detection System Based on Multi-Task ConNN. IEEE Access; 2020; 8, pp. 12491-12498. [DOI: https://dx.doi.org/10.1109/ACCESS.2020.2963960]
74. Leekha, M.; Goswami, M.; Shah, R.R.; Yin, Y.; Zimmermann, R. Are You Paying Attention? Detecting Distracted Driving in Real-Time. Proceedings of the 2019 IEEE Fifth International Conference on Multimedia Big Data (BigMM); Singapore, 11–13 September 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 171-180.
75. Huang, C.; Wang, X.; Cao, J.; Wang, S.; Zhang, Y. HCF: A Hybrid CNN Framework for Behavior Detection of Distracted Drivers. IEEE Access; 2020; 8, pp. 109335-109349. [DOI: https://dx.doi.org/10.1109/ACCESS.2020.3001159]
76. Kose, N.; Kopuklu, O.; Unnervik, A.; Rigoll, G. Real-Time Driver State Monitoring Using a CNN Based Spatio-Temporal Approach. Proceedings of the 2019 IEEE Intelligent Transportation Systems Conference (ITSC); Auckland, New Zealand, 27–30 October 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 3236-3242.
77. Kumaran, S.K.; Dogra, D.P.; Roy, P.P.; Mitra, A. Video Trajectory Classification and Anomaly Detection Using Hybrid CNN-VAE. arXiv; 2018; arXiv: 1812.07203
78. Xie, J.; Hu, K.; Li, G.; Guo, Y. CNN-Based Driving Maneuver Classification Using Multi-Sliding Window Fusion. Expert Syst. Appl.; 2021; 169, 114442. [DOI: https://dx.doi.org/10.1016/j.eswa.2020.114442]
79. Yin, S.; Duan, J.; Ouyang, P.; Liu, L.; Wei, S. Multi-CNN and Decision Tree Based Driving Behavior Evaluation. Proceedings of the Symposium on Applied Computing; Marrakech, Morocco, 3–7 April 2017; ACM: New York, NY, USA, 2017; pp. 1424-1429.
80. Lee, J.; Choi, J.W. May I Cut Into Your Lane?: A Policy Network to Learn Interactive Lane Change Behavior for Autonomous Driving. Proceedings of the 2019 IEEE Intelligent Transportation Systems Conference (ITSC); Auckland, New Zealand, 27–30 October 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 4342-4347.
81. Sun, J.; Qi, X.; Xu, Y.; Tian, Y. Vehicle Turning Behavior Modeling at Conflicting Areas of Mixed-Flow Intersections Based on Deep Learning. IEEE Trans. Intell. Transport. Syst.; 2020; 21, pp. 3674-3685. [DOI: https://dx.doi.org/10.1109/TITS.2019.2931701]
82. Chandra, R.; Bhattacharya, U.; Bera, A.; Manocha, D. TraPHic: Trajectory Prediction in Dense and Heterogeneous Traffic Using Weighted Interactions. Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); Long Beach, CA, USA, 15–20 June 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 8475-8484.
83. Zhang, Y.; Zou, Y.; Tang, J.; Liang, J. A Lane-Changing Prediction Method Based on Temporal Convolution Network. arXiv; 2020; [DOI: https://dx.doi.org/10.48550/ARXIV.2011.01224] arXiv: 2011.01224
84. Zhao, Y.; Mammeri, A.; Boukerche, A. A Novel Real-Time Driver Monitoring System Based on Deep Convolutional Neural Network. Proceedings of the 2019 IEEE International Symposium on Robotic and Sensors Environments (ROSE); Ottawa, ON, Canada, 17–18 June 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1-7.
85. González-Lozoya, S.M.; De La Calleja, J.; Pellegrin, L.; Escalante, H.J.; Medina, M.A.; Benitez-Ruiz, A. Recognition of Facial Expressions Based on CNN Features. Multimed. Tools Appl.; 2020; 79, pp. 13987-14007. [DOI: https://dx.doi.org/10.1007/s11042-020-08681-4]
86. Lu, M.; Hu, Y.; Lu, X. Driver Detection Based on Deep Learning. J. Phys. Conf. Ser.; 2018; 1069, 012118. [DOI: https://dx.doi.org/10.1088/1742-6596/1069/1/012118]
87. Jeong, D.; Kim, M.; Kim, K.; Kim, T.; Jin, J.; Lee, C.; Lim, S. Real-Time Driver Identification Using Vehicular Big Data and Deep Learning. Proceedings of the 2018 21st International Conference on Intelligent Transportation Systems (ITSC); Maui, HI, USA, 4–7 November 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 123-130.
88. Cura, A.; Kucuk, H.; Ergen, E.; Oksuzoglu, I.B. Driver Profiling Using Long Short Term Memory (LSTM) and Convolutional Neural Network (CNN) Methods. IEEE Trans. Intell. Transport. Syst.; 2021; 22, pp. 6572-6582. [DOI: https://dx.doi.org/10.1109/TITS.2020.2995722]
89. Bejani, M.M.; Ghatee, M. Convolutional Neural Network With Adaptive Regularization to Classify Driving Styles on Smartphones. IEEE Trans. Intell. Transport. Syst.; 2020; 21, pp. 543-552. [DOI: https://dx.doi.org/10.1109/TITS.2019.2896672]
90. Karaduman, M.; Eren, H. Deep Learning Based Traffic Direction Sign Detection and Determining Driving Style. Proceedings of the 2017 International Conference on Computer Science and Engineering (UBMK); Antalya, Turkey, 5–8 October 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1046-1050.
91. Fernando, P.M.; Sugathadasa, R.; De Silva, M.M.; Thibbotuwawa, A.; Sivakumar, T. Real-Time Driver Drowsiness Detection Using Transfer Learning. Advances in Design, Simulation and Manufacturing VII; Ivanov, V.; Trojanowska, J.; Pavlenko, I.; Rauch, E.; Piteľ, J. Lecture Notes in Mechanical Engineering; Springer Nature Switzerland: Cham, Switzerland, 2024; pp. 425-436. ISBN 978-3-031-61796-6
92. Liu, Y.; Zhang, T.; Li, Z. 3DCNN-Based Real-Time Driver Fatigue Behavior Detection in Urban Rail Transit. IEEE Access; 2019; 7, pp. 144648-144662. [DOI: https://dx.doi.org/10.1109/ACCESS.2019.2945136]
93. Wang, H.; Xu, L.; Bezerianos, A.; Chen, C.; Zhang, Z. Linking Attention-Based Multiscale CNN With Dynamical GCN for Driving Fatigue Detection. IEEE Trans. Instrum. Meas.; 2021; 70, 2504811. [DOI: https://dx.doi.org/10.1109/TIM.2020.3047502]
94. Zhao, Z.; Zhou, N.; Zhang, L.; Yan, H.; Xu, Y.; Zhang, Z. Driver Fatigue Detection Based on Convolutional Neural Networks Using EM-CNN. Comput. Intell. Neurosci.; 2020; 2020, 7251280. [DOI: https://dx.doi.org/10.1155/2020/7251280] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/33293943]
95. Xing, J.; Fang, G.; Zhong, J.; Li, J. Application of Face Recognition Based on CNN in Fatigue Driving Detection. Proceedings of the 2019 International Conference on Artificial Intelligence and Advanced Manufacturing; Dublin, Ireland, 17–19 October 2019; ACM: New York, NY, USA, 2019; pp. 1-5.
96. Gu, W.H.; Zhu, Y.; Chen, X.D.; He, L.F.; Zheng, B.B. Hierarchical CNN-based Real-time Fatigue Detection System by Visual-based Technologies Using MSP Model. IET Image Process.; 2018; 12, pp. 2319-2329. [DOI: https://dx.doi.org/10.1049/iet-ipr.2018.5245]
97. Xiang, W.; Wu, X.; Li, C.; Zhang, W.; Li, F. Driving Fatigue Detection Based on the Combination of Multi-Branch 3D-CNN and Attention Mechanism. Appl. Sci.; 2022; 12, 4689. [DOI: https://dx.doi.org/10.3390/app12094689]
98. Ye, L.; Chen, C.; Wu, M.; Nwobodo, S.; Antwi, A.A.; Muponda, C.N.; Ernest, K.D.; Vedaste, R.S. Using CNN and Channel Attention Mechanism to Identify Driver’s Distracted Behavior. Transactions on Edutainment XVI; Pan, Z.; Cheok, A.D.; Müller, W.; Zhang, M. Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2020; Volume 11782, pp. 175-183. ISBN 978-3-662-61509-6
99. Xie, Z.; Li, L.; Xu, X. Real-Time Driving Distraction Recognition Through a Wrist-Mounted Accelerometer. Hum. Factors; 2022; 64, pp. 1412-1428. [DOI: https://dx.doi.org/10.1177/0018720821995000]
100. Li, Y.; Xu, P.; Zhu, Z.; Huang, X.; Qi, G. Real-Time Driver Distraction Detection Using Lightweight Convolution Neural Network with Cheap Multi-Scale Features Fusion Block. Proceedings of 2021 Chinese Intelligent Systems Conference; Jia, Y.; Zhang, W.; Fu, Y.; Yu, Z.; Zheng, S. Lecture Notes in Electrical Engineering Springer: Singapore, 2022; Volume 804, pp. 232-240. ISBN 978-981-16-6323-9
101. Lu, M.; Hu, Y.; Lu, X. Dilated Light-Head R-CNN Using Tri-Center Loss for Driving Behavior Recognition. Image Vis. Comput.; 2019; 90, 103800. [DOI: https://dx.doi.org/10.1016/j.imavis.2019.08.004]
102. Xu, Y.; Peng, W.; Wang, L. Research on Driver Status Recognition System of Intelligent Vehicle Terminal Based on Deep Learning. World Electr. Veh. J.; 2021; 12, 137. [DOI: https://dx.doi.org/10.3390/wevj12030137]
103. He, X.; Xu, L.; Zhang, Z. Driving Behaviour Characterisation by Using Phase-space Reconstruction and Pre-trained Convolutional Neural Network. IET Intell. Transp. Syst.; 2019; 13, pp. 1173-1180. [DOI: https://dx.doi.org/10.1049/iet-its.2018.5499]
104. Zhang, C.; Lu, Y.; Feng, M.; Wu, M. Trucker Behavior Security Surveillance Based on Human Parsing. IEEE Access; 2019; 7, pp. 97526-97535. [DOI: https://dx.doi.org/10.1109/ACCESS.2019.2930403]
105. Hu, Y.; Lu, M.; Lu, X. Spatial-Temporal Fusion Convolutional Neural Network for Simulated Driving Behavior Recognition. Proceedings of the 2018 15th International Conference on Control, Automation, Robotics and Vision (ICARCV); Singapore, 18–21 November 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1271-1277.
106. Wang, K.; Chen, X.; Gao, R. Dangerous Driving Behavior Detection with Attention Mechanism. Proceedings of the 3rd International Conference on Video and Image Processing; Shanghai, China, 20–23 December 2019; ACM: New York, NY, USA, 2019; pp. 57-62.
107. Borghi, G.; Venturelli, M.; Vezzani, R.; Cucchiara, R. POSEidon: Face-from-Depth for Driver Pose Estimation. Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); Honolulu, HI, USA, 21–26 July 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 5494-5503.
108. Abouelnaga, Y.; Eraqi, H.M.; Moustafa, M.N. Real-Time Distracted Driver Posture Classification. arXiv; 2017; [DOI: https://dx.doi.org/10.48550/ARXIV.1706.09498] arXiv: 1706.09498
109. Wang, R.; Xie, F.; Zhao, J.; Zhang, B.; Sun, R.; Yang, J. Smartphone Sensors-Based Abnormal Driving Behaviors Detection: Serial-Feature Network. IEEE Sens. J.; 2021; 21, pp. 15719-15728. [DOI: https://dx.doi.org/10.1109/JSEN.2020.3036862]
110. Chung, S.H.; Kim, D.J.; Kim, J.S.; Chung, C.C. Collision Detection System for Lane Change on Multi-Lanes Using Convolution Neural Network. Proceedings of the 2021 IEEE Intelligent Vehicles Symposium (IV); Nagoya, Japan, 11–17 July 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 690-696.
111. Zhao, Y.; Jia, H.; Luo, H.; Zhao, F.; Qin, Y.; Wang, Y. An Abnormal Driving Behavior Recognition Algorithm Based on the Temporal Convolutional Network and Soft Thresholding. Int. J. Intell. Sys; 2022; 37, pp. 6244-6261. [DOI: https://dx.doi.org/10.1002/int.22842]
112. Zhang, J.; Wu, Z.; Li, F.; Xie, C.; Ren, T.; Chen, J.; Liu, L. A Deep Learning Framework for Driving Behavior Identification on In-Vehicle CAN-BUS Sensor Data. Sensors; 2019; 19, 1356. [DOI: https://dx.doi.org/10.3390/s19061356]
113. Doniec, R.J.; Sieciński, S.; Duraj, K.M.; Piaseczna, N.J.; Mocny-Pachońska, K.; Tkacz, E.J. Recognition of Drivers’ Activity Based on 1D Convolutional Neural Network. Electronics; 2020; 9, 2002. [DOI: https://dx.doi.org/10.3390/electronics9122002]
114. Zhang, Y.; Li, J.; Guo, Y.; Xu, C.; Bao, J.; Song, Y. Vehicle Driving Behavior Recognition Based on Multi-View Convolutional Neural Network With Joint Data Augmentation. IEEE Trans. Veh. Technol.; 2019; 68, pp. 4223-4234. [DOI: https://dx.doi.org/10.1109/TVT.2019.2903110]
115. Azadani, M.N.; Boukerche, A. Driver Identification Using Vehicular Sensing Data: A Deep Learning Approach. Proceedings of the 2021 IEEE Wireless Communications and Networking Conference (WCNC); Nanjing, China, 29 March–1 April 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1-6.
116. Utomo, D.; Yang, T.-H.; Thanh, D.T.; Hsiung, P.-A. Driver Fatigue Prediction Using Different Sensor Data with Deep Learning. Proceedings of the 2019 IEEE International Conference on Industrial Cyber Physical Systems (ICPS); Taipei, Taiwan, 6–9 May 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 242-247.
117. Du, G.; Li, T.; Li, C.; Liu, P.X.; Li, D. Vision-Based Fatigue Driving Recognition Method Integrating Heart Rate and Facial Features. IEEE Trans. Intell. Transport. Syst.; 2021; 22, pp. 3089-3100. [DOI: https://dx.doi.org/10.1109/TITS.2020.2979527]
118. Ed-doughmi, Y.; Idrissi, N. Driver Fatigue Detection Using Recurrent Neural Networks. Proceedings of the 2nd International Conference on Networking, Information Systems & Security; Rabat, Morocco, 27–28 March 2019; ACM: New York, NY, USA, 2019; pp. 1-6.
119. Saleh, K.; Hossny, M.; Nahavandi, S. Driving Behavior Classification Based on Sensor Data Fusion Using LSTM Recurrent Neural Networks. Proceedings of the 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC); Yokohama, Japan, 16–19 October 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1-6.
120. Mafeni Mase, J.; Chapman, P.; Figueredo, G.P.; Torres Torres, M. Benchmarking Deep Learning Models for Driver Distraction Detection. Machine Learning, Optimization, and Data Science; Nicosia, G.; Ojha, V.; La Malfa, E.; Jansen, G.; Sciacca, V.; Pardalos, P.; Giuffrida, G.; Umeton, R. Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2020; Volume 12566, pp. 103-117. ISBN 978-3-030-64579-3
121. Matousek, M.; EL-Zohairy, M.; Al-Momani, A.; Kargl, F.; Bosch, C. Detecting Anomalous Driving Behavior Using Neural Networks. Proceedings of the 2019 IEEE Intelligent Vehicles Symposium (IV); Paris, France, 9–12 June 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 2229-2235.
122. Jia, S.; Hui, F.; Li, S.; Zhao, X.; Khattak, A.J. Long Short-term Memory and Convolutional Neural Network for Abnormal Driving Behaviour Recognition. IET Intell. Trans. Sys; 2020; 14, pp. 306-312. [DOI: https://dx.doi.org/10.1049/iet-its.2019.0200]
123. Khairdoost, N.; Shirpour, M.; Bauer, M.A.; Beauchemin, S.S. Real-Time Driver Maneuver Prediction Using LSTM. IEEE Trans. Intell. Veh.; 2020; 5, pp. 714-724. [DOI: https://dx.doi.org/10.1109/TIV.2020.3003889]
124. Zhang, C.; Che, G.; Gao, B. Vehicle Driving Behavior Predicting and Judging Using LSTM and Statistics Methods. Proceedings of the 2020 4th CAA International Conference on Vehicular Control and Intelligence (CVCI); Hangzhou, China, 18–20 December 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 199-203.
125. Zyner, A.; Worrall, S.; Nebot, E. Naturalistic Driver Intention and Path Prediction Using Recurrent Neural Networks. IEEE Trans. Intell. Transport. Syst.; 2020; 21, pp. 1584-1594. [DOI: https://dx.doi.org/10.1109/TITS.2019.2913166]
126. Ou, C.; Karray, F. Deep Learning-Based Driving Maneuver Prediction System. IEEE Trans. Veh. Technol.; 2020; 69, pp. 1328-1340. [DOI: https://dx.doi.org/10.1109/TVT.2019.2958622]
127. Zhou, D.; Liu, H.; Ma, H.; Wang, X.; Zhang, X.; Dong, Y. Driving Behavior Prediction Considering Cognitive Prior and Driving Context. IEEE Trans. Intell. Transport. Syst.; 2021; 22, pp. 2669-2678. [DOI: https://dx.doi.org/10.1109/TITS.2020.2973751]
128. Huang, X.; Sun, J.; Sun, J. A Car-Following Model Considering Asymmetric Driving Behavior Based on Long Short-Term Memory Neural Networks. Transp. Res. Part C Emerg. Technol.; 2018; 95, pp. 346-362. [DOI: https://dx.doi.org/10.1016/j.trc.2018.07.022]
129. Zhou, M.; Qu, X.; Li, X. A Recurrent Neural Network Based Microscopic Car Following Model to Predict Traffic Oscillation. Transp. Res. Part C Emerg. Technol.; 2017; 84, pp. 245-264. [DOI: https://dx.doi.org/10.1016/j.trc.2017.08.027]
130. Würtz, S.; Göhner, U. Driving Style Analysis Using Recurrent Neural Networks with LSTM Cells. J. Adv. Inf. Technol.; 2020; 11, pp. 1-9. [DOI: https://dx.doi.org/10.12720/jait.11.1.1-9]
131. Xu, W.; Wang, J.; Fu, T.; Gong, H.; Sobhani, A. Aggressive Driving Behavior Prediction Considering Driver’s Intention Based on Multivariate-Temporal Feature Data. Accid. Anal. Prev.; 2022; 164, 106477. [DOI: https://dx.doi.org/10.1016/j.aap.2021.106477]
132. Li, Z.; Chen, L.; Nie, L.; Yang, S.X. A Novel Learning Model of Driver Fatigue Features Representation for Steering Wheel Angle. IEEE Trans. Veh. Technol.; 2022; 71, pp. 269-281. [DOI: https://dx.doi.org/10.1109/TVT.2021.3130152]
133. Wang, Y.; He, Z.; Wang, L. Truck Driver Fatigue Detection Based on Video Sequences in Open-Pit Mines. Mathematics; 2021; 9, 2908. [DOI: https://dx.doi.org/10.3390/math9222908]
134. Ed-Doughmi, Y.; Idrissi, N.; Hbali, Y. Real-Time System for Driver Fatigue Detection Based on a Recurrent Neuronal Network. J. Imaging; 2020; 6, 8. [DOI: https://dx.doi.org/10.3390/jimaging6030008] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/34460605]
135. Shi, D.; Tang, H. Research on Safe Driving Evaluation Method Based on Machine Vision and Long Short-Term Memory Network. J. Electr. Comput. Eng.; 2021; 2021, 9955079. [DOI: https://dx.doi.org/10.1155/2021/9955079]
136. Khodairy, M.A.; Abosamra, G. Driving Behavior Classification Based on Oversampled Signals of Smartphone Embedded Sensors Using an Optimized Stacked-LSTM Neural Networks. IEEE Access; 2021; 9, pp. 4957-4972. [DOI: https://dx.doi.org/10.1109/ACCESS.2020.3048915]
137. Omerustaoglu, F.; Sakar, C.O.; Kar, G. Distracted Driver Detection by Combining In-Vehicle and Image Data Using Deep Learning. Appl. Soft Comput.; 2020; 96, 106657. [DOI: https://dx.doi.org/10.1016/j.asoc.2020.106657]
138. Fu, X.; Meng, H.; Wang, X.; Yang, H.; Wang, J. A Hybrid Neural Network for Driving Behavior Risk Prediction Based on Distracted Driving Behavior Data. PLoS ONE; 2022; 17, e0263030. [DOI: https://dx.doi.org/10.1371/journal.pone.0263030]
139. Bruxella, J.M.D.; Kanimozhi, J.K. An Efficient FWA–RNN Algorithm for the Driver Distraction Classification. Malaya J. Mat.; 2021; S, pp. 576-580.
140. Monjezi Kouchak, S.; Gaffar, A. Using Bidirectional Long Short Term Memory with Attention Layer to Estimate Driver Behavior. Proceedings of the 2019 18th IEEE International Conference on Machine Learning and Applications (ICMLA); Boca Raton, FL, USA, 16–19 December 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 315-320.
141. Streiffer, C.; Raghavendra, R.; Benson, T.; Srivatsa, M. Darnet: A Deep Learning Solution for Distracted Driving Detection. Proceedings of the 18th ACM/IFIP/USENIX Middleware Conference: Industrial Track; Las Vegas, NV, USA, 11–15 December 2017; ACM: New York, NY, USA, 2017; pp. 22-28.
142. Liu, M.; Yang, K.; Fu, Y.; Wu, D.; Du, W. Driving Maneuver Anomaly Detection Based on Deep Auto-Encoder and Geographical Partitioning. ACM Trans. Sen. Netw.; 2023; 19, pp. 1-22. [DOI: https://dx.doi.org/10.1145/3563217]
143. Li, P.; Abdel-Aty, M.; Islam, Z. Driving Maneuvers Detection Using Semi-Supervised Long Short-Term Memory and Smartphone Sensors. Transp. Res. Rec. J. Transp. Res. Board; 2021; 2675, pp. 1386-1397. [DOI: https://dx.doi.org/10.1177/03611981211007483]
144. Huang, H.; Wang, J.; Fei, C.; Zheng, X.; Yang, Y.; Liu, J.; Wu, X.; Xu, Q. A Probabilistic Risk Assessment Framework Considering Lane-Changing Behavior Interaction. Sci. China Inf. Sci.; 2020; 63, 190203. [DOI: https://dx.doi.org/10.1007/s11432-019-2983-0]
145. Xing, Y.; Lv, C.; Mo, X.; Hu, Z.; Huang, C.; Hang, P. Toward Safe and Smart Mobility: Energy-Aware Deep Learning for Driving Behavior Analysis and Prediction of Connected Vehicles. IEEE Trans. Intell. Transport. Syst.; 2021; 22, pp. 4267-4280. [DOI: https://dx.doi.org/10.1109/TITS.2021.3052786]
146. Flammini, F.; Marrone, S.; Nardone, R.; Caporuscio, M.; D’Angelo, M. Safety Integrity through Self-Adaptation for Multi-Sensor Event Detection: Methodology and Case-Study. Future Gener. Comput. Syst.; 2020; 112, pp. 965-981. [DOI: https://dx.doi.org/10.1016/j.future.2020.06.036]
147. Eftekhari, H.R.; Ghatee, M. Hybrid of Discrete Wavelet Transform and Adaptive Neuro Fuzzy Inference System for Overall Driving Behavior Recognition. Transp. Res. Part F Traffic Psychol. Behav.; 2018; 58, pp. 782-796. [DOI: https://dx.doi.org/10.1016/j.trf.2018.06.044]
148. Tang, J.; Liu, F.; Zhang, W.; Ke, R.; Zou, Y. Lane-Changes Prediction Based on Adaptive Fuzzy Neural Network. Expert Syst. Appl.; 2018; 91, pp. 452-463. [DOI: https://dx.doi.org/10.1016/j.eswa.2017.09.025]
149. Xiao, W.; Liu, H.; Ma, Z.; Chen, W. Attention-Based Deep Neural Network for Driver Behavior Recognition. Future Gener. Comput. Syst.; 2022; 132, pp. 152-161. [DOI: https://dx.doi.org/10.1016/j.future.2022.02.007]
150. Teja, K.B.R.; Kumar, T.K. Real-Time Smart Drivers Drowsiness Detection Using DNN. Proceedings of the 2021 5th International Conference on Trends in Electronics and Informatics (ICOEI); Tirunelveli, India, 3–5 June 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1026-1030.
151. Kim, H.; Park, J.; Min, K.; Huh, K. Anomaly Monitoring Framework in Lane Detection With a Generative Adversarial Network. IEEE Trans. Intell. Transport. Syst.; 2021; 22, pp. 1603-1615. [DOI: https://dx.doi.org/10.1109/TITS.2020.2973398]
152. Ou, C.; Karray, F. Enhancing Driver Distraction Recognition Using Generative Adversarial Networks. IEEE Trans. Intell. Veh.; 2020; 5, pp. 385-396. [DOI: https://dx.doi.org/10.1109/TIV.2019.2960930]
153. Choi, S.; Kweon, N.; Yang, C.; Kim, D.; Shon, H.; Choi, J.; Huh, K. DSA-GAN: Driving Style Attention Generative Adversarial Network for Vehicle Trajectory Prediction. Proceedings of the 2021 IEEE International Intelligent Transportation Systems Conference (ITSC); Indianapolis, IN, USA, 19–22 September 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1515-1520.
154. Li, H.; Han, J.; Li, S.; Wang, H.; Xiang, H.; Wang, X. Abnormal Driving Behavior Recognition Method Based on Smart Phone Sensor and CNN-LSTM. Int. J. Sci. Eng. Appl.; 2022; 11, pp. 1-8. [DOI: https://dx.doi.org/10.7753/IJSEA1101.1001]
155. Patil, A.D.; Lokhande, V.; Patil, P.; Gaikwad, S. Real-Time Driver Behaviour Monitoring System Invehicles Using Image Processing. Int. J. Adv. Eng. Manag.; 2022; 4, pp. 1890-1894.
156. Qu, F. Study and Analysis of Machine Learning Techniques for Detection of Distracted Drivers. Master’s Thesis; Florida Atlantic University: Boca Raton, FL, USA, 2024.
157. Zhang, H.; Nan, Z.; Yang, T.; Liu, Y.; Zheng, N. A Driving Behavior Recognition Model with Bi-LSTM and Multi-Scale CNN. Proceedings of the 2020 IEEE Intelligent Vehicles Symposium (IV); Las Vegas, NV, USA, 19 October–13 November 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 284-289.
158. Abbas, Q. HybridFatigue: A Real-Time Driver Drowsiness Detection Using Hybrid Features and Transfer Learning. Int. J. Adv. Comput. Sci. Appl.; 2020; 11, pp. 585-593. [DOI: https://dx.doi.org/10.14569/IJACSA.2020.0110173]
159. How Do I Use the Driver Alert System?. Available online: https://www.ford.com/support/how-tos/ford-technology/driver-assist-features/how-do-i-use-the-driver-alert-system/ (accessed on 15 January 2025).
160. Ali, I. Tesla FSD V12.4: Autopilot Strikeouts, Vision-Based Monitoring, Conditional Removal of Nags (Release Notes). Tesla Oracle, 24 May 2024. Available online: https://www.teslaoracle.com/2024/05/24/tesla-fsd-v12-4-autopilot-strikeouts-vision-based-monitoring-conditional-removal-of-nags-release-notes/ (accessed on 24 May 2024).
161. Wu, S.; Tian, D.; Duan, X.; Zhou, J.; Zhao, D.; Cao, D. Continuous Decision-Making in Lane Changing and Overtaking Maneuvers for Unmanned Vehicles: A Risk-Aware Reinforcement Learning Approach With Task Decomposition. IEEE Trans. Intell. Veh.; 2024; 9, pp. 4657-4674. [DOI: https://dx.doi.org/10.1109/TIV.2024.3380074]
© 2025 by the authors. Published by MDPI on behalf of the World Electric Vehicle Association. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
In response to the rising frequency of traffic accidents and growing concerns regarding driving safety, the identification and analysis of dangerous driving behaviors have emerged as critical components in enhancing road safety. This paper analyzes research progress in deep learning-based methods for recognizing dangerous driving behavior. First, data collection methods are categorized into four types and evaluated for their respective advantages, disadvantages, and applicability. Questionnaire surveys provide limited information but are straightforward to conduct. Vehicle operation data acquisition is a non-contact method that does not interfere with the driver's activities, but it is susceptible to environmental factors and individual driving habits, potentially leading to inaccuracies. Vision-based recognition supports real-time monitoring, though its effectiveness is constrained by lighting conditions. The precision of physiological detection depends on the quality of the equipment. The collected big data are then used to extract features related to dangerous driving behavior. The paper classifies the deep learning models employed for dangerous driving behavior recognition into three categories: Deep Belief Networks (DBN), Convolutional Neural Networks (CNN), and Recurrent Neural Networks (RNN). DBN exhibits high flexibility but relatively slow processing; CNN demonstrates excellent performance in image recognition but may lose information; RNN processes sequential data effectively but is challenging to train. Finally, the paper concludes with a comprehensive analysis of the application of deep learning-based dangerous driving behavior recognition methods, along with an exploration of their future development trends. As computer technology continues to advance, deep learning is progressively replacing fuzzy logic and traditional machine learning approaches as the primary tool for identifying dangerous driving behaviors.