A Joint Long Short-Term Memory and AdaBoost regression approach with application to remaining useful life estimation
Citations: 44 · References: 58 · Related Papers: 10
Keywords: AdaBoost, Data set
Homogeneous component classifiers in the AdaBoost algorithm decrease the diversity among component classifiers, which degrades ensemble classification performance. Aiming to increase diversity, we propose heterogeneous component classifiers for the AdaBoost algorithm (termed He-Adaboost). In each round, three diverse models based on entirely different principles learn from the current distribution, yielding three hypotheses. The diversity of each hypothesis is measured, and the hypothesis with the maximum diversity is selected as the component classifier for that round. To evaluate the ensemble performance of the proposed He-Adaboost method, experiments are carried out on standard datasets. The experimental results consistently confirm the superiority of the proposed method, indicating that this study provides an encouraging methodology for improving the ensemble performance of the AdaBoost algorithm.
Keywords: AdaBoost, Component (thermodynamics), Ensemble Learning, Boosting · Citations: 1
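As a rough illustration of the round-level selection described in the abstract above, the Python sketch below trains three heterogeneous candidates per round and keeps the most diverse one. The three model types (decision tree, naive Bayes, k-NN) and the mean-disagreement diversity measure are assumptions for illustration, not the paper's exact choices.

```python
import numpy as np
from sklearn.base import clone
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

def he_adaboost_round(X, y, dist, prev_hypotheses, rng=None):
    """One He-Adaboost-style round: train three heterogeneous candidate
    models on the current boosting distribution and keep the hypothesis
    that disagrees most with the hypotheses already in the ensemble
    (mean pairwise disagreement stands in for the paper's diversity
    measure)."""
    rng = rng or np.random.default_rng(0)
    candidates = [DecisionTreeClassifier(max_depth=3),
                  GaussianNB(),
                  KNeighborsClassifier(n_neighbors=5)]
    best_model, best_preds, best_div = None, None, -1.0
    for proto in candidates:
        model = clone(proto)
        # Resample according to the distribution, since k-NN's fit()
        # does not accept sample weights.
        idx = rng.choice(len(X), size=len(X), p=dist)
        model.fit(X[idx], y[idx])
        preds = model.predict(X)
        div = (np.mean([np.mean(preds != h) for h in prev_hypotheses])
               if prev_hypotheses else 1.0)
        if div > best_div:
            best_model, best_preds, best_div = model, preds, div
    return best_model, best_preds
```

The boosting loop around this round (weight update, classifier coefficients) would follow standard AdaBoost; only the per-round candidate pool and diversity-based selection change.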
Keywords: AdaBoost, Boosting, Benchmark (surveying), Word error rate, False positive rate · Citations: 137
The Adaptive Boosting (AdaBoost) algorithm is a widely used ensemble learning framework that achieves good classification results on general datasets. However, it is difficult to apply AdaBoost directly to imbalanced data, since it is designed mainly to process misclassified samples rather than samples of minority classes. To better handle imbalanced data, this paper introduces the Area Under Curve (AUC) indicator, which reflects the comprehensive performance of a model, and proposes an improved AdaBoost algorithm based on AUC (AdaBoost-A) that improves AdaBoost's error calculation by jointly considering the misclassification probability and the AUC. To prevent the redundant or useless weak classifiers generated by the traditional AdaBoost algorithm from consuming excessive system resources, the paper further proposes an ensemble algorithm, PSOPD-AdaBoost-A, which re-initializes parameters to avoid falling into local optima and optimizes the coefficients of the AdaBoost weak classifiers. Experimental results show that the proposed algorithm is effective for processing imbalanced data, especially data with a relatively high degree of imbalance.
Keywords: AdaBoost, Boosting, Ensemble Learning, Statistical classification · Citations: 53
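A minimal sketch of the AUC-aware error idea from the abstract above, assuming binary 0/1 labels, a decision stump as the weak learner, and an illustrative mixing weight `beta` between the weighted error and 1 − AUC; the paper's exact error formula and the PSOPD coefficient optimization are not reproduced here.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.tree import DecisionTreeClassifier

def adaboost_a_round(X, y, w, beta=0.5):
    """One boosting round with an AUC-adjusted error in the spirit of
    AdaBoost-A. Assumes binary 0/1 labels; `beta` (illustrative) blends
    the weighted misclassification rate with 1 - AUC so ranking quality
    on the minority class also shapes the round."""
    stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
    pred = stump.predict(X)
    score = stump.predict_proba(X)[:, 1]
    err = np.sum(w * (pred != y)) / np.sum(w)       # classical weighted error
    auc = roc_auc_score(y, score, sample_weight=w)  # comprehensive indicator
    err_a = beta * err + (1 - beta) * (1 - auc)     # AUC-aware error
    alpha = 0.5 * np.log((1 - err_a) / max(err_a, 1e-10))
    # Standard exponential reweighting: misclassified samples gain weight.
    w = w * np.exp(-alpha * np.where(pred == y, 1.0, -1.0))
    return stump, alpha, w / w.sum()
```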
To guarantee the security of complex industrial systems, it is important to detect faults accurately and efficiently. The AdaBoost algorithm is an effective fault detection method: across its iterations it generates a large number of weak classifiers and combines many of them into a strong classifier that solves the classification problem for fault detection. In traditional AdaBoost, the poorly performing weak classifiers are often ignored and not fully used, even though they may store significant information and pay more attention to the difficult samples. To address this, we propose a local selective ensemble-based AdaBoost (AdaBoost-LSE) in this article. First, error feedback ELM (EFELM) is introduced to establish the basic weak classifier, and the AdaBoost iterations generate weak classifiers based on EFELM. Second, these weak classifiers are divided into good and poor classifiers according to their classification accuracy, and the poor weak classifiers that nonetheless perform well are selected by calculating their classification accuracy on the targeted samples. Third, the strong classifier of AdaBoost-LSE is constructed by integrating the original good weak classifiers with the selected poor ones. The efficiency of AdaBoost-LSE is verified on the Tennessee Eastman (TE) simulation process, and the experimental results reveal that it greatly improves fault detection accuracy.
Keywords: AdaBoost, Boosting, Statistical classification, Ensemble Learning · Citations: 3
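The sketch below illustrates only the local selective step described above, under two assumptions: generic scikit-learn-style weak learners stand in for the paper's EFELM, and the "targeted" difficult samples are taken to be those the good classifiers' majority vote misclassifies (binary 0/1 labels).

```python
import numpy as np

def select_weak_classifiers(classifiers, X_val, y_val, acc_threshold=0.5):
    """Local selective step in the spirit of AdaBoost-LSE: split weak
    classifiers into good/poor by validation accuracy, then rescue the
    poor ones that perform well on the difficult samples (here taken to
    be the samples the good classifiers' majority vote gets wrong)."""
    preds = np.array([c.predict(X_val) for c in classifiers])
    accs = (preds == y_val).mean(axis=1)
    good = [i for i, a in enumerate(accs) if a >= acc_threshold]
    poor = [i for i, a in enumerate(accs) if a < acc_threshold]
    vote = (preds[good].mean(axis=0) >= 0.5).astype(int)  # majority vote
    hard = vote != y_val                                  # difficult samples
    rescued = [i for i in poor
               if hard.any() and (preds[i, hard] == y_val[hard]).mean() > 0.5]
    return [classifiers[i] for i in good + rescued]
```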
The presented research responds to increased rates of mental illness worldwide and the need for efficient mental health care (MHC) through machine learning (ML) implementations. The datasets employed in this investigation come from the Kaggle repository "Mental Health Tech Survey"; the surveys for 2014 and 2016 were downloaded and aggregated. The prediction accuracies for bagging, stacking, LR, KNN, decision tree, NN, RF, and AdaBoost were 75.93%, 75.93%, 79.89%, 90.42%, 80.69%, 89.95%, 81.22%, and 81.75%, respectively. After data cleaning and prediction, the AdaBoost model reached an accuracy of 81.75%, which is good enough for decision-making; the other models, such as Random Forest (RF), K-Nearest Neighbor (KNN), and bagging, reported accuracies ranging from 75.93% to 81.22%. Out of all the models used for predicting mental health treatment outcomes, AdaBoost is reported as having the highest accuracy.
Keywords: AdaBoost · Citations: 27
Keywords: Boosting, AdaBoost · Citations: 149
Keywords: AdaBoost, Benchmark (surveying) · Citations: 42
Human activity recognition research is increasingly implemented as computer vision technology advances. Many fields require activity recognition technology, such as theft detection or online exam cheating detection. One widely used method is AdaBoost. This study proposes the AdaBoost Support Vector Machine method, a combination of the AdaBoost method and the Support Vector Machine. The evaluation uses human activity recognition datasets and compares the method with other machine learning algorithms. The results indicate that the proposed method achieves the highest performance among the tested algorithms, with a peak accuracy of 96.06%. This shows that an SVM used as the AdaBoost component can improve AdaBoost's performance.
Keywords: AdaBoost, Activity Recognition · Citations: 2
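A minimal sketch of this combination using scikit-learn's AdaBoostClassifier with an SVC component (the `estimator` parameter name assumes scikit-learn ≥ 1.2); the kernel, hyperparameters, and synthetic data are illustrative, not the study's reported configuration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for a human activity recognition dataset.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# SVM as the AdaBoost component classifier. probability=True lets the
# SVC supply the probability estimates some AdaBoost variants need;
# SVC also accepts the per-sample weights boosting passes to fit().
model = AdaBoostClassifier(estimator=SVC(kernel="rbf", probability=True),
                           n_estimators=10, random_state=0)
model.fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))
```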
AdaBoost.M1 is a classic machine learning algorithm, but it fails if the weak learner cannot achieve at least 50% accuracy on the hard distributions generated during boosting. Random Forest is computationally efficient and offers good prediction performance. This paper proposes a new approach, the AdaBoost.M1-RF algorithm, which uses Random Forest as the weak learner. To evaluate its performance, AdaBoost.M1-RF is compared with other machine learning algorithms.
Keywords: AdaBoost · Citations: 18
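A minimal sketch of the pairing described above, again assuming scikit-learn ≥ 1.2: a small, shallow Random Forest serves as the weak learner inside AdaBoost. Forest size and depth are illustrative choices.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# A small, shallow Random Forest as the boosted weak learner: strong
# enough to clear the 50% accuracy bar on reweighted distributions,
# weak enough to leave room for boosting to help.
weak = RandomForestClassifier(n_estimators=10, max_depth=3, random_state=0)
model = AdaBoostClassifier(estimator=weak, n_estimators=25, random_state=0)
print("5-fold CV accuracy: %.4f" % cross_val_score(model, X, y, cv=5).mean())
```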
Considering the characteristics of soft sensors, the ensemble learning algorithm AdaBoost.RT is used to establish soft sensor models. To address a shortcoming of AdaBoost.RT and the difficulty of updating soft sensor models online, a self-adaptively modified threshold φ and an incremental learning method are proposed to improve the performance of the original AdaBoost.RT. The modified AdaBoost.RT overcomes the disadvantages of the original algorithm and can update the soft sensor model in real time. The new method is used to establish a soft sensor model of molten steel temperature in a 300 t LF, and the model is tested on practical production data. The results demonstrate that the new soft sensor model based on modified AdaBoost.RT improves prediction accuracy and has good online updating ability.
Keywords: AdaBoost, Soft sensor, Ensemble Learning · Citations: 3
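A compact sketch of AdaBoost.RT for regression with a self-adaptively modified threshold φ, in the spirit of the abstract above. The φ update rule (shrink when the round RMSE improves, grow otherwise) and the squared-error power are illustrative stand-ins, and the incremental-learning part is not reproduced.

```python
import numpy as np
from sklearn.base import clone
from sklearn.tree import DecisionTreeRegressor

def adaboost_rt(X, y, T=20, phi=0.1, lam=0.1, base=None):
    """AdaBoost.RT with a self-adaptively modified threshold phi:
    predictions whose relative error is within phi count as correct;
    phi shrinks when the round error improves and grows when it
    worsens (an illustrative stand-in for the paper's update rule)."""
    base = base or DecisionTreeRegressor(max_depth=4)
    n = len(X)
    w = np.full(n, 1.0 / n)
    models, coefs, prev_rmse = [], [], None
    for _ in range(T):
        m = clone(base).fit(X, y, sample_weight=w)
        pred = m.predict(X)
        rel_err = np.abs(pred - y) / np.maximum(np.abs(y), 1e-12)
        miss = rel_err > phi
        eps = w[miss].sum()
        if eps <= 0 or eps >= 1:
            break
        beta = eps ** 2                  # power n = 2, as in AdaBoost.RT
        w = np.where(miss, w, w * beta)  # shrink weights of correct samples
        w /= w.sum()
        rmse = np.sqrt(np.mean((pred - y) ** 2))
        if prev_rmse is not None:        # self-adaptive threshold update
            phi *= (1 - lam) if rmse < prev_rmse else (1 + lam)
        prev_rmse = rmse
        models.append(m)
        coefs.append(np.log(1.0 / beta))
    def predict(Xq):
        preds = np.array([m.predict(Xq) for m in models])
        return np.average(preds, axis=0, weights=coefs)
    return predict
```

The returned closure aggregates the weak regressors with log(1/β) weights, so rounds with lower error rates contribute more to the final prediction.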