This work is licensed under http://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
1. Introduction
Today, all higher education institutions face difficulties in the admission process. Every college must base its admission decisions on legitimate and credible procedures that select the applicants most likely to succeed in its programs. Furthermore, every college should use the best available methods for predicting candidates’ future academic performance before admitting them. Such predictions would support college administrators as they set effective admissions criteria. Recently, educational data mining (EDM), a subfield of data mining that specializes in educational data, has emerged as the most common approach to assess and predict students’ performance [1, 2]. EDM is the process of extracting useful knowledge and patterns from a large educational database [3], which can then be used to predict students’ performance.
For example, the extracted knowledge can assist teachers in developing educational strategies, understanding students, and improving teaching methods. It can be applied by learners to improve their activities [4]. It also helps administrators make the right decisions to produce high-quality results [5]. EDM applies computational methods to analyze and visualize educational data. This analysis can be used to predict a student’s performance or his or her weak and strong abilities and knowledge. It can be used to detect unwanted student behaviors and give suggestions to students. These models can help teachers group students, gather feedback, and develop courses. Educational data is assembled from different sources, for example, the online web, heuristic stores, and surveys. EDM can use various DM techniques, and several techniques have been applied to educational problems. For instance, the best-known method for building an educational predictive model is classification. There are various algorithms in this class, such as artificial neural network, random forest, logistic regression, decision tree, naïve Bayes tree, and support vector machine.
In this study, we focus on supporting colleges in making admission decisions by applying data mining techniques to best predict applicants’ academic performance prior to admission. The student traits considered span academic background, family economic traits, social traits, institutional traits, and personal traits. Two educational datasets were collected from two sources: the first dataset (DS1) was obtained from Kaggle and provided by [7] at the university in 2015. The second dataset (DS2) was obtained from the UCI Machine Learning Repository and was gathered during the 2005-2006 school year from two secondary schools in Portugal by [8]. The suggested model then employed several techniques for evaluating the effect of the student’s behavior on his/her academic performance. This work applies three traditional data mining techniques in this field to produce a performance model: neural networks (NN) [9], decision trees [10], and naïve Bayes [11]. Two ensemble methods, bagging and boosting, are used to improve the results of the classifiers mentioned above and to support the success of the student prediction models. To predict the results more accurately, two classifiers were added to each ensemble method using voting.
2. Related Work and Research Gap
Table 1 lists the numerous models, techniques, and methods that have been utilized in educational data mining and learning analytics to predict student performance. These include decision trees (CART, C4.5, CHAID, J48, ID3, etc.), neural networks (multilayer perceptron, deep ANN, long short-term memory (LSTM), deep belief network, etc.), regression (linear regression, logistic regression, etc.), naïve Bayes (Gaussian NB), random forest, and support vector machine (RTV-SVM, RBF-SVM).
Table 1
Models, approaches, and methods that are used to predict a student’s performance.
No. | Technique/method/model | References | Advantages | Disadvantages |
1 | Support vector machine | [13–16] | With kernels, it is well suited for data that is not linearly separable in the base feature space. | Classification necessitates a large amount of memory as well as a high level of complexity. |
2 | Decision tree | [15, 17] | It is simple to implement, understand, and use. | A slight change in the data can result in a different decision tree, and the time required for searching is significant. |
3 | Regression | [15, 16, 18, 19] | Performs better when dealing with continuous attributes and linearly separable data. | Outliers affect the results; it is not suitable for nonlinearly separable data (overfitting). |
4 | K-nearest neighbor | [14, 19–22] | Works well with nonlinearly separable classes and performs well in multimodal classes. | Extra time is required for determining the nearest neighbor in a large training dataset. |
5 | Naïve Bayes | [18–20, 23] | Improves outcomes for categorical input variables and multiclass prediction performance. | When dealing with small amounts of data, the algorithm’s precision suffers. |
6 | Neural network | [15, 18, 19, 24] | No retraining is required because it learns events; it is applicable to real-world problems; and there are few parameters to adjust, making it simple to use. | Large networks require a long processing time, and determining how many neurons and layers are necessary is difficult. |
In summary, much research has investigated ways to address educational challenges using data mining techniques. Nevertheless, limited research highlights the behavior of the student throughout the learning process and its impact on the student’s academic achievement. The knowledge extracted here can support schools in promoting student academic accomplishment and help administrators enhance learning systems.
Finally, this study investigates two different student datasets from two different institutions.
3. Methodology
The motivation behind performing a systematic literature review is to discover suitable strategies for the current parameters, to fill the gaps in the existing research, and to place a new research activity in an appropriate context [25]. The aim of the systematic review of the present literature is to support the suggested research’s questions. The following subsections highlight the questions of this research to frame the outcomes. This is also useful for delimiting the field of study.
3.1. Environment
The tests were carried out on a PC with 4 GB of RAM and an Intel® Core™ i3-2379M CPU @ 2.40 GHz. For the experimental analysis, WEKA [26] was utilized to estimate the suggested classification models and their results. Furthermore, the datasets were trained using 10-fold cross-validation.
3.2. Datasets
The data used in this paper was obtained from two different sources. The first dataset (DS1) was taken from Kaggle [7, 27]. It includes 480 student instances as rows, with each row holding 16 attributes. The features include the student’s nationality, gender, and the parent responsible for the student. The academic characteristics comprise the educational grade, section ID, semester, topics, and days of student absence. Other features are opened resources, discussion participation, raised hands in class, parents’ responses to the survey, viewing announcements, and parent-school satisfaction. The second dataset (DS2) was obtained from the UCI Machine Learning Repository [28] and was gathered during the 2005-2006 school year from two secondary schools in Portugal [8]. It contains 395 student instances as rows, with the columns holding 33 attributes. Some of these features are gender, age, address, father’s and mother’s jobs, family size, travel and study time, and first, second, and final grades.
The metrics utilized by the predictive models to describe student success are shown in Table 2. These include course grade range (A, B, D, and E/pass-fail/division), course grade/marks/scores, assignments (performance/time to complete), GPA/CGPA, at risk of dropout/attrition, graduate/retention, and ambiguous performance. Some studies predicted multiple metrics. Twenty-six studies relied on the course grade range as the key indicator for predicting student achievement. The next most commonly used measure is course grade/marks/score.
Table 2
Predictive models’ metrics for describing a student’s performance.
Students’ performance is described using metrics | Count |
The range of course grades (A, B, D, and E/pass-fail/division) | 26 |
Grades/marks/scores for the course | 18 |
Assignments (time to complete/performance) | 2 |
GPA/CGPA | 8 |
Dropout/attrition at risk | 17 |
Graduate/retention | 2 |
Vague performance | 9 |
3.3. Proposed Method
In this paper, we use ensemble methods to introduce a student performance model. An ensemble method solves a problem by using several models rather than one. In contrast to classical learning approaches that train a single learning model on the data, ensemble methods train a set of models and then combine them by taking a vote on their outcomes. Ensemble predictions are typically more accurate than predictions produced by a single model. The goal of this approach is to provide an accurate evaluation of the characteristics that may influence a student’s academic success. The proposed methodology’s essential steps are depicted in Figure 1.
[figure omitted; refer to PDF]
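The voting idea behind this ensemble approach can be sketched in a few lines. The following is a minimal illustrative example, not the authors’ WEKA configuration; the function name and labels are hypothetical.

```python
from collections import Counter

def majority_vote(predictions):
    """Combine the class labels predicted by several models into a
    single ensemble decision by simple majority vote."""
    votes = Counter(predictions)
    # The ensemble choice is the class predicted by the most models.
    return votes.most_common(1)[0][0]

# Three hypothetical classifiers predict a label for one student:
print(majority_vote(["pass", "fail", "pass"]))  # prints pass
```

In case of a tie, `Counter.most_common` falls back on insertion order; a production implementation would define an explicit tie-breaking rule.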
The methodology begins by obtaining datasets from two disparate sources. The first dataset, acquired from Kaggle [7, 27], includes 480 rows and 16 columns. The second dataset was acquired from the UCI Machine Learning Repository [28] and was gathered during 2005-2006 from two secondary schools in Portugal [8]; it includes 395 rows and 33 columns, as mentioned in Section 3.2.
In this paper, ensemble methods are implemented to produce an objective evaluation of the features influencing the learners’ achievement and to improve the performance of the student prediction model.
Ensemble methods can be categorized as dependent or independent processes. Boosting is an example of a dependent method: the output of one learner is used to generate the next learner. In contrast, in an independent process, each learner works separately, and their results are combined via a voting process. Bagging is an example of an independent method.
We used some well-known data mining classification techniques to create the prediction models: artificial neural network (ANN), decision tree (DT), and naïve Bayes. Each model was created using 10-fold cross-validation, with 9 sets of data used for training and the remaining set used for testing. Individual classifier outputs are then pooled through a voting process, with the ensemble choice being the class chosen by the greatest number of classifiers. The method was carried out ten times, once for each of the various sets, which increased the overall number of observations used for testing. All models were run with the default parameter settings in the WEKA software.
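The fold construction described above can be sketched as follows. This is a simplified deterministic split for illustration (WEKA’s cross-validation shuffles and stratifies the data); the function name is hypothetical.

```python
def k_fold_indices(n_samples, k=10):
    """Split sample indices into k folds; each fold serves once as the
    test set while the remaining k-1 folds form the training set."""
    splits = []
    for fold in range(k):
        test = [i for i in range(n_samples) if i % k == fold]
        train = [i for i in range(n_samples) if i % k != fold]
        splits.append((train, test))
    return splits

# DS1 has 480 instances; 10-fold CV yields ten train/test splits.
splits = k_fold_indices(480, k=10)
print(len(splits))  # prints 10
```

Across the ten splits, every instance appears in a test set exactly once, which is why the overall number of observations used for testing grows to the full dataset.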
Boosting refers to a family of algorithms that can turn weak learners into strong ones. The idea behind the popular boosting technique is simple: train a group of classifiers sequentially, take their predictions, and then concentrate on reducing the mistakes of the preceding learner by modifying the weights of the weak one. Early boosting was limited to binary classification; this limitation is overcome by the AdaBoost (adaptive boosting) algorithm. The algorithm’s primary idea is to pay more attention to patterns that are hard to classify. The amount of attention is the weight allocated to each instance in the training data, initially equal for all instances. With each iteration, the weights of misclassified cases increase while the weights of precisely classified cases decrease. AdaBoost then combines the learners through a voting process to build a strong learner out of the weakest classifiers [29, 30].
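The reweighting step of one AdaBoost round can be sketched as below. This is a minimal sketch of the standard AdaBoost update, not code from the study; the function name and the toy data are hypothetical.

```python
import math

def adaboost_reweight(weights, correct, error):
    """One AdaBoost round: raise the weights of misclassified instances,
    lower the weights of correctly classified ones, then renormalize.
    `correct[i]` is True if instance i was classified correctly;
    `error` is the weak learner's weighted error rate (0 < error < 0.5)."""
    alpha = 0.5 * math.log((1 - error) / error)  # learner's vote weight
    new = [w * math.exp(-alpha if c else alpha)
           for w, c in zip(weights, correct)]
    total = sum(new)  # renormalize so the weights form a distribution
    return [w / total for w in new], alpha

# Four instances start with equal weight, as in AdaBoost's first round.
weights = [0.25] * 4
weights, alpha = adaboost_reweight(weights, [True, True, True, False],
                                   error=0.25)
# The misclassified instance now carries more weight than each correct one.
print(weights[3] > weights[0])  # prints True
```

The vote weights `alpha` are what AdaBoost later uses to combine the weak learners into the final strong classifier.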
The bagging method aims to raise the accuracy of unstable classifiers by producing a composite classifier, merging the results of the acquired classifiers into a single prediction. The bagging process begins by resampling the primary data into several bootstrap training sets.
In boosting, every classifier depends on the result of the prior one. In bagging, every instance is sampled with equal probability; in boosting, instances are sampled with probability proportional to their weight. Bagging works better with high-variance models, whose behavior changes considerably with small modifications of the training dataset; decision trees and neural networks are considered high-variance models. Both bagging and boosting are summarized in Figure 2. All of the classification techniques mentioned above were trained using 10-fold cross-validation. This technique divides the datasets into ten equally sized subgroups, nine of which are used for training while the remaining one is used for testing. The process is repeated ten times, and the estimated outcome is the mean error rate of the test models. After training the classification model, the evaluation process begins.
[figure omitted; refer to PDF]
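The bootstrap resampling that starts the bagging process can be sketched as follows. This is an illustrative sketch, not the WEKA implementation; the function name and toy data are hypothetical.

```python
import random

def bootstrap_sample(data, rng):
    """Draw a bagging training set: sample len(data) instances uniformly
    at random *with replacement*, so every instance has an equal chance
    of selection in each draw (unlike boosting, where the chance is
    proportional to the instance's weight)."""
    return [rng.choice(data) for _ in range(len(data))]

rng = random.Random(42)
data = list(range(100))
sample = bootstrap_sample(data, rng)
# With replacement, some instances repeat and others are left out;
# the omitted ones can serve as an out-of-bag test set.
print(len(sample) == len(data))  # prints True
```

Each base classifier is trained on its own bootstrap sample, and the resulting predictions are merged by voting, as described above.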
4. Results and Discussion
4.1. Classical and Ensemble Methods with the First Dataset
In the first dataset, the boosting method achieved comparable accuracy with the ANN model, where the ANN algorithm accuracy using boosting is 79.1%, as displayed in Table 4. NB model performance using the boosting method rose from 67.7% to 72.3%. The DT classifier accuracy also improved using boosting, changing from 75.8% to 77.7%, as shown in Figure 5.
Table 4
Results of the classification techniques using the proposed method on DS2.
Techniques | Classifier | Accuracy | Precision | Recall | F-measure |
Classical classification | DT | 90.38% | 0.905 | 0.904 | 0.903 |
 | ANN | 83.54% | 0.835 | 0.835 | 0.835 |
 | NB | 84.05% | 0.845 | 0.843 | 0.841 |
Boosting+2 algorithms | DT+MLP | 90.13% | 0.901 | 0.901 | 0.901 |
 | MLP+NB | 87.59% | 0.876 | 0.876 | 0.876 |
 | NB+DT | 91.14% | 0.913 | 0.911 | 0.911 |
Bagging+2 algorithms | DT+MLP | 90.38% | 0.905 | 0.904 | 0.904 |
 | MLP+NB | 88.10% | 0.881 | 0.882 | 0.882 |
 | NB+DT | 90.63% | 0.908 | 0.907 | 0.907 |
4.2. Classical and Ensemble Methods with the Second Dataset
In this section, ensemble methods are likewise applied to improve the evaluation results of the commonly used DM techniques. Figure 6 shows the outputs of the classic classifiers and the ensemble methods (boosting and bagging) on the second dataset. Improved results are obtained by applying ensemble methods combined with the classical classifiers (DT, NB, and ANN), yielding better prediction performance for the student model.
[figure omitted; refer to PDF]
In the second dataset, the bagging method achieved a clear improvement with the DT model, where the DT algorithm accuracy with bagging increased from 90.4% to 91.4%, as shown in Figure 6. Recall improved from 0.904 to 0.914, and precision increased from 0.905 to 0.915. NB model performance using ensemble methods also increased, while the ANN model results with and without bagging are essentially equal.
4.3. Applying Proposed Method in the First Dataset
In this section, the proposed method is also applied for further enhancement of the results of the classic DM methods and the ensemble methods. This proposed method combines two different algorithms and adds them to one of the ensemble methods (bagging or boosting) via the voting process. Table 4 presents the improvement of the suggested method over the classic techniques, with better results shown using the proposed method. Each pair of the ANN, NB, and DT techniques is combined with one ensemble method by a majority vote rule. The results achieve the best prediction performance of the student model using the proposed method.
In DS1, the bagging method with the combination of DT and MLP classifiers achieved accuracy comparable to that of the NB and MLP models with the boosting process. Accuracy increased from 79.2% with the classic classifiers to 80.8% with the suggested method, as shown in Figure 3.
4.4. Applying Proposed Method in the Second Dataset
The proposed method is also used in this section to improve the results of the classic DM methods and the ensemble methods. It combines two different algorithms and adds them to one of the ensemble methods (bagging or boosting) via the voting process. Figure 4 presents the improvement of the suggested method over the classic techniques in the second dataset, with better results shown using the proposed method. Each pair of the ANN, NB, and DT techniques is combined with one ensemble method by a majority vote rule. The results achieve the best prediction performance of the student model using the proposed method.
5. Conclusions
Academic performance is the primary concern for most schools in most countries. Learning systems generate extensive quantities of data, and this data holds hidden knowledge that could be used to heighten students’ academic success. In this research, a model of student achievement prediction was constructed based on ensemble methods. The predictive model was first built with classifiers (artificial neural network, decision tree, and naïve Bayes), and then the ensemble methods (bagging and boosting) were applied to raise these classifiers’ performance. The retrieved results show that these models improve over the conventional classifiers. The proposed method then combines two different classifiers with either the bagging or the boosting process. This method gave better results than the previous methods, contributing to the growth of student achievement and of educational systems. In future work, more datasets will be applied to these kinds of models, and data will be collected from students of different educational institutions using further data mining techniques to deliver more substantial results. This enables educational systems, institutions, students, and teachers to strengthen their performance. Considering the many good classifiers available, these results demonstrate the validity of the predictive models. Finally, these models can support teachers in understanding learners, recognizing their weaknesses, developing learning styles, and reducing academic drop rates. They can also help administrators advance teaching methods.
Consent
Informed consent was obtained from all subjects involved in the study.
Acknowledgments
This project was funded by the Deanship of Scientific Research (DSR) at King Abdulaziz University (KAU), Jeddah, Saudi Arabia, under grant no. G: 405-155-1442. The authors, therefore, thank DSR for the technical and financial support.
[1] X. Chen, M. Vorvoreanu, K. P. C. Madhavan, "Mining social media data for understanding students’ learning experiences," IEEE Transactions on Learning Technologies, vol. 7 no. 3, pp. 246-259, DOI: 10.1109/TLT.2013.2296520, 2014.
[2] L. H. Son, H. Fujita, "Neural-fuzzy with representative sets for prediction of student performance," Applied Intelligence, vol. 49 no. 1, pp. 172-187, DOI: 10.1007/s10489-018-1262-7, 2019.
[3] S. K. Mohamad, Z. Tasir, "Educational data mining: a review," Procedia-Social and Behavioral Sciences, vol. 97, pp. 320-324, DOI: 10.1016/j.sbspro.2013.10.240, 2013.
[4] C. Romero, S. Ventura, "Educational data mining: a review of the state of the art," IEEE Transactions on Systems, Man, and Cybernetics, Part C, vol. 40 no. 6, pp. 601-618, DOI: 10.1109/TSMCC.2010.2053532, 2010.
[5] S. Bharara, S. Sabitha, Bansal, "Application of learning analytics using clustering data mining for students’ disposition analysis," Education and Information Technologies, vol. 23 no. 2, pp. 957-984, DOI: 10.1007/s10639-017-9645-7, 2018.
[6] A. M. Shahiri, W. A. Husain, N.’. A. Rashid, "A review on predicting student's performance using data mining techniques," Procedia Computer Science, vol. 72, pp. 414-422, DOI: 10.1016/j.procs.2015.12.157, 2015.
[7] E. A. Amrieh, T. Hamtini, I. Aljarah, "Mining educational data to predict student’s academic performance using ensemble methods," International Journal of Database Theory and Application, vol. 9 no. 8, pp. 119-136, DOI: 10.14257/ijdta.2016.9.8.13, 2016.
[8] P. Cortez, A. M. G. Silva, "Using data mining to predict secondary school student performance," Proceedings of the 5th Annual Future Business Technology Conference.
[9] M. Moller, "A scaled conjugate gradient algorithm for fast supervised learning," Neural Networks, vol. 6 no. 4, pp. 525-533, DOI: 10.1016/S0893-6080(05)80056-5, 1993.
[10] M. Quadri, D. Kalyankar, "Drop out feature of student data for academic performance using decision tree techniques," Global Journal of Computer Science and Technology, vol. 10 no. 2, 2010.
[11] N. T. N. Hien, P. Haddawy, "A decision support system for evaluating international student applications," 2007 37th Annual Frontiers in Education Conference - Global Engineering: Knowledge Without Borders, Opportunities Without Passports, pp. F2A-1-F2A-6, DOI: 10.1109/FIE.2007.4417958.
[12] C. M. D. Bondoc, T. G. Malawit, "Classifying relevant video tutorials for the school’s learning management system using support vector machine algorithm," Global Journal of Engineering and Technology Advances, vol. 2 no. 3,DOI: 10.30574/gjeta.2020.2.3.0011, 2020.
[13] H. T. Hou, "Integrating cluster and sequential analysis to explore learners' flow and behavioral patterns in a simulation game with situated-learning context for science courses: a video-based process exploration," Computers in Human Behavior, vol. 48, pp. 424-435, DOI: 10.1016/j.chb.2015.02.010, 2015.
[14] S. M. Merchan Rubiano, J. A. Duarte Garcia, "Analysis of data mining techniques for constructing a predictive model for academic performance," IEEE Latin America Transactions, vol. 14 no. 6, pp. 2783-2788, DOI: 10.1109/TLA.2016.7555255, 2016.
[15] K. Coussement, M. Phan, A. De Caigny, D. F. Benoit, A. Raes, "Predicting student dropout in subscription-based online learning environments: the beneficial impact of the logit leaf model," Decision Support Systems, vol. 135,DOI: 10.1016/j.dss.2020.113325, 2020.
[16] P. M. Moreno-Marcos, T. C. Pong, P. J. Munoz-Merino, C. Delgado Kloos, "Analysis of the factors influencing learners’ performance prediction with learning analytics," IEEE Access, vol. 8, pp. 5264-5282, DOI: 10.1109/ACCESS.2019.2963503, 2020.
[17] A. Dhankhar, K. Solanki, A. Rathee, "‘Predicting student’s performance by using classification methods," Journal of advanced trends in computer science and engineering, vol. 8 no. 4, pp. 1532-1536, DOI: 10.30534/ijatcse/2019/75842019, 2019.
[18] A. Y. Huang, O. H. Lu, J. C. Huang, C. J. Yin, S. J. Yang, "Predicting students’ academic performance by using educational big data and learning analytics: evaluation of classification methods and learning logs," Interactive Learning Environments, vol. 28 no. 2, pp. 206-230, DOI: 10.1080/10494820.2019.1636086, 2020.
[19] Á. M. Guerrero-Higueras, C. Fernández Llamas, L. Sánchez González, A. Gutierrez Fernández, G. Esteban Costales, M. Á. Conde González, "Academic success assessment through version control systems," Applied Sciences, vol. 10 no. 4,DOI: 10.3390/app10041492, 2020.
[20] M. Ashraf, M. Zaman, M. Ahmed, "An intelligent prediction system for educational data mining based on ensemble and filtering approaches," Procedia Computer Science, vol. 167, pp. 1471-1483, DOI: 10.1016/j.procs.2020.03.358, 2020.
[21] E. Wakelam, A. Jefferies, N. Davey, Y. Sun, "The potential for student performance prediction in small cohorts with minimal available attributes," British Journal of Educational Technology, vol. 51 no. 2, pp. 347-370, DOI: 10.1111/bjet.12836, 2020.
[22] C. Romero, S. Ventura, E. García, "Data mining in course management systems: Moodle case study and tutorial," Computers & Education, vol. 51 no. 1, pp. 368-384, DOI: 10.1016/j.compedu.2007.05.016, 2008.
[23] D. Baneres, M. E. Rodriguez, M. Serra, "An early feedback prediction system for learners at-risk within a first-year higher education course," IEEE Transactions on Learning Technologies, vol. 12 no. 2, pp. 249-263, DOI: 10.1109/TLT.2019.2912167, 2019.
[24] N. Bedregal-Alpaca, V. Cornejo-Aparicio, J. Zárate-Valderrama, P. Yanque-Churo, "Classification models for determining types of academic risk and predicting dropout in university students," IJACSA, vol. 11 no. 1,DOI: 10.14569/IJACSA.2020.0110133, 2020.
[25] Y. Ma, B. Liu, C. K. Wong, P. S. Yu, S. M. Lee, "Targeting the right students using data mining," Proceedings of the Sixth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 457-464.
[26] P. Cortez, "Student performance data set," April 2020, https://archive.ics.uci.edu/ml/datasets/student+performance
[27] E. A. Amrieh, T. Hamtini, I. Aljarah, "Preprocessing and analyzing educational data set using X-API for improving student's performance," 2015 IEEE Jordan Conference on Applied Electrical Engineering and Computing Technologies (AEECT).
[28] R. Arora, S. Suman, "Comparative analysis of classification algorithms on different datasets using WEKA," International Journal of Computer Applications, vol. 54 no. 13, pp. 21-25, DOI: 10.5120/8626-2492, 2012.
[29] O. Sagi, L. Rokach, "Ensemble learning: a survey," Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, vol. 8 no. 4, article e1249,DOI: 10.1002/widm.1249, 2018.
[30] T. Wang, Z. Zhang, X. Jing, L. Zhang, "Multiple kernel ensemble learning for software defect prediction," Automated Software Engineering, vol. 23 no. 4, pp. 569-590, DOI: 10.1007/s10515-015-0179-1, 2016.
[31] W. Jia, R. M. Shukla, S. Sengupta, "Anomaly detection using supervised learning and multiple statistical methods," 2019 18th IEEE International Conference On Machine Learning and Applications (ICMLA), pp. 1291-1297, 2019.
[32] D. M. Powers, "Evaluation: from precision, recall, and F-measure to ROC, informedness, markedness, and correlation," 2011. https://arxiv.org/abs/2010.16061
[33] A. Fernández, S. García, M. Galar, R. C. Prati, B. Krawczyk, F. Herrera, Learning from Imbalanced Data Sets, 2018.
[34] Y. Ma, H. He, Imbalanced Learning: Foundations, Algorithms, and Applications, 2013.
[35] S. Fong, R. Biuk-Aghai, "An automated university admission recommender system for secondary school students," The 6th International Conference on Information Technology and Applications.
Copyright © 2021 Mahmoud Ragab et al. This work is licensed under http://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
Abstract
Student performance prediction is extremely important in today’s educational system. Predicting student achievement in advance can assist students and teachers in keeping track of the student’s progress. Today, several institutes have implemented a manual ongoing evaluation method, and students benefit from such methods since they help them improve their performance. In this study, we use educational data mining (EDM) and recommend an ensemble classifier to build the student achievement prediction model based on data mining classification techniques. This model uses distinct datasets that represent the students’ interaction with the educational model. The performance of the student predictive model is evaluated by several classifiers, for instance, logistic regression, naïve Bayes tree, artificial neural network, support vector machine, decision tree, and random forest.
Details
1 Information Technology Department, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia; Centre of Artificial Intelligence for Precision Medicines, King Abdulaziz University, Jeddah 21589, Saudi Arabia; Department of Mathematics, Faculty of Science, Al-Azhar University, Naser City, 11884 Cairo, Egypt
2 Arid Land Agriculture Department, Faculty of Meteorology, Environment and Arid Land Agriculture, King Abdulaziz University, Jeddah 21589, Saudi Arabia
3 Public Administration Department, Faculty of Economic and Administration, King Abdulaziz University, Jeddah 21589, Saudi Arabia
4 Computer Science Department, Faculty of Computers and Information, South Valley University, Qena, Egypt