Multi-view Transfer Learning with Adaboost
27 Citations · 7 References · 10 Related Papers
Abstract:
Transfer learning, one of the most important research directions in machine learning, has been studied in various fields in recent years. In this paper, we integrate the theory of multi-view learning into transfer learning and propose a new algorithm named Multi-View Transfer Learning with Adaboost (MV-TL Adaboost). Unlike many previous works on transfer learning, we not only focus on using the labeled data from one task to help learn another task, but also consider how to transfer it across different views synchronously. We regard both the source and target tasks as collections of several constituent views, and each of these two tasks can be learned from every view at the same time. Moreover, this kind of multi-view transfer learning is implemented with the AdaBoost algorithm. Furthermore, we analyze the effectiveness and feasibility of MV-TL Adaboost. Experimental results also validate the effectiveness of our proposed approach.
Keywords:
AdaBoost
Transfer of learning
Inductive transfer
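The abstract does not spell out the update rules of MV-TL Adaboost, so the following is only a minimal Python sketch of the two ingredients it combines: per-view weak learners and AdaBoost-style instance reweighting that lets labeled source data help (or gradually fade out of) the target task. The function name, the decision-stump base learner and the TrAdaBoost-style weight update are illustrative assumptions, not the authors' algorithm.
```python
# Hypothetical sketch only: NOT the authors' MV-TL Adaboost.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def mvtl_boost_sketch(Xs_views, ys, Xt_views, yt, n_rounds=10):
    """Xs_views / Xt_views: lists of per-view feature matrices, same view order.
    ys / yt: labels of the labeled source set and the (small) labeled target set."""
    ys, yt = np.asarray(ys), np.asarray(yt)
    n_s, n_t = len(ys), len(yt)
    w = np.ones(n_s + n_t) / (n_s + n_t)          # joint instance weights
    y_all = np.concatenate([ys, yt])
    ensemble = []                                  # (view index, stump, alpha)
    for _ in range(n_rounds):
        for v, (Xs, Xt) in enumerate(zip(Xs_views, Xt_views)):
            X_all = np.vstack([Xs, Xt])            # this round learns from view v
            stump = DecisionTreeClassifier(max_depth=1)
            stump.fit(X_all, y_all, sample_weight=w)
            pred = stump.predict(X_all)
            # weighted error measured on the target portion only
            err = np.sum(w[n_s:] * (pred[n_s:] != yt)) / np.sum(w[n_s:])
            err = np.clip(err, 1e-10, 0.499)
            alpha = 0.5 * np.log((1 - err) / err)
            # target instances: usual AdaBoost up-weighting of mistakes;
            # source instances: down-weight the ones this view mislabels,
            # so unhelpful source data fades out (TrAdaBoost-style heuristic)
            w[n_s:] *= np.exp(alpha * (pred[n_s:] != yt))
            w[:n_s] *= np.exp(-alpha * (pred[:n_s] != ys))
            w /= w.sum()
            ensemble.append((v, stump, alpha))
    return ensemble
```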
Artificial intelligence is a method that is becoming increasingly widespread in all areas of life and enables machines to imitate human behavior. Machine learning is a subset of artificial intelligence techniques that use statistical methods to enable machines to improve with experience. As a result of the advancement of technology and developments in the world of science, the interest in and need for machine learning are increasing day by day. Human beings use machine learning techniques in their daily lives without realizing it. This study deals with ensemble learning algorithms, one family of machine learning techniques. The methods used in this study are the Bagging and AdaBoost algorithms, both ensemble learning algorithms. The main purpose of this study is to find the best-performing classifier using the Classification and Regression Trees (CART) base classifier on three different data sets taken from the UCI machine learning repository, and then to obtain ensemble learning algorithms that make this performance better and more stable using the two ensemble learning algorithms. For this purpose, the performance measures of the single base classifier and the ensemble learning algorithms were compared.
Ensemble Learning
AdaBoost
Learning classifier system
Instance-based learning
Online machine learning
Citations (1)
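As a rough companion to the study above, here is a minimal scikit-learn sketch of its setup: a single CART tree compared against Bagging and AdaBoost ensembles built on CART-style base learners. The bundled breast-cancer data stands in for the three UCI data sets, and all parameters are illustrative.
```python
# Hedged sketch of "single CART vs. Bagging/AdaBoost ensembles of CART".
from sklearn.datasets import load_breast_cancer            # UCI-style stand-in data
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
cart = DecisionTreeClassifier(random_state=0)

models = {
    "CART alone": cart,
    # the `estimator=` keyword assumes scikit-learn >= 1.2
    "Bagging(CART)": BaggingClassifier(estimator=cart, n_estimators=100, random_state=0),
    "AdaBoost(stumps)": AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=1),
                                           n_estimators=100, random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```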
AdaBoost
Boosting
Margin classifier
Citations (368)
The pervasive issue of cheating in educational tests has emerged as a paramount concern within the realm of education, prompting scholars to explore diverse methodologies for identifying potential transgressors. While machine learning models have been extensively investigated for this purpose, the untapped potential of TabNet, an intricate deep neural network model, remains uncharted territory. Within this study, a comprehensive evaluation and comparison of 12 base models (naive Bayes, linear discriminant analysis, Gaussian process, support vector machine, decision tree, random forest, Extreme Gradient Boosting (XGBoost), AdaBoost, logistic regression,
Ensemble Learning
Citations (4)
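The abstract above is cut off mid-list, but its base-model comparison can still be sketched generically. The snippet below evaluates the scikit-learn classifiers it names on placeholder data; XGBoost and TabNet require third-party packages and are omitted, and nothing here reflects the study's actual cheating-detection features.
```python
# Generic base-model comparison loop (placeholder data, not the study's).
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
base_models = {
    "Naive Bayes": GaussianNB(),
    "LDA": LinearDiscriminantAnalysis(),
    "Gaussian process": GaussianProcessClassifier(random_state=0),
    "SVM": SVC(),
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "Random forest": RandomForestClassifier(random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
    "Logistic regression": LogisticRegression(max_iter=1000),
}
for name, model in base_models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```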
Of all the terminal cancers that afflict men, prostate cancer remains one of the most prevalent. Data show prostate cancer is the second leading cause of cancer death among men worldwide, and about 11% of men have prostate cancer at some time during their lives. We have therefore dedicated our research to developing an approach that can improve the existing precision of prostate cancer diagnosis. In our research, we apply a Transfer Learning approach to a Deep Learning model and compare the resulting accuracy using Machine Learning classifiers. In addition, we evaluated individual classification performance with different evaluation measures using a pre-trained Deep Learning network, VGG16. During our evaluation, we assessed several performance metrics such as Precision, Recall, F1 Score, and Loss vs. Accuracy. Upon implementing the Transfer Learning approach, we recorded the optimum performance using the VGG16 architecture compared to other popular Deep Learning models such as MobileNet and ResNet. It is important to note that we used the convolutional blocks and dense layers of the VGG16 architecture to extract features from our image dataset. Afterward, we forwarded those features to Machine Learning classifiers to produce the final classification result, securing significant accuracy in prognostication with this Deep/Machine Learning method.
Transfer of learning
Citations (1)
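A hedged sketch of the pipeline that abstract describes: pre-trained VGG16 used as a fixed feature extractor (here, the output of its `fc2` dense layer), with a classical classifier trained on the extracted features. The random placeholder images, the binary labels, and the linear SVM are assumptions standing in for the actual prostate-image dataset and classifiers.
```python
# VGG16 as a fixed feature extractor + classical classifier (illustrative only).
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.models import Model
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

base = VGG16(weights="imagenet", include_top=True)
extractor = Model(inputs=base.input, outputs=base.get_layer("fc2").output)  # 4096-d features

def extract_features(images):
    """images: array of shape (n, 224, 224, 3), RGB, values 0-255."""
    return extractor.predict(preprocess_input(images.astype("float32")), verbose=0)

# Placeholder data standing in for the prostate-image dataset (not provided here).
images = np.random.randint(0, 256, size=(40, 224, 224, 3))
labels = np.random.randint(0, 2, size=40)

feats = extract_features(images)
X_tr, X_te, y_tr, y_te = train_test_split(feats, labels, test_size=0.25, random_state=0)
clf = SVC(kernel="linear").fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```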
Ensembles of classifiers (multiple classifier systems) and Support Vector Machines (SVM) are now well-established research lines in machine learning. Recently, some works devoted to SVM-based ensembles have reported that the most popular ensemble creation methods, Bagging and AdaBoost, are not expected to improve the performance of SVMs and sometimes even worsen it, because SVM is a stable and strong classifier. In this paper, we focus on adapting Bagging and AdaBoost to SVM. The framework of Bagging is extended by introducing class-wise expert classifiers, yielding the improved algorithm CeBag. The weighting rule of AdaBoost is modified to deal with the overfitting problem, which may be even worse when boosting strong classifiers, and the strength of SVM is weakened by adaptively adjusting the kernel parameters, yielding the algorithm WwBoost. Experiments on the IDA benchmark data sets show that our algorithms are effective in building ensembles of SVMs.
Overfitting
AdaBoost
Boosting
Ranking SVM
Citations (5)
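CeBag and WwBoost themselves are not reproduced below; the snippet only illustrates, in plain scikit-learn, the two directions the paper adapts: bagging SVMs, and boosting deliberately weakened SVMs (here weakened by a smaller RBF gamma). The kernel settings and ensemble sizes are illustrative, and `estimator=` assumes scikit-learn >= 1.2.
```python
# Generic SVM-ensemble sketch (not the paper's CeBag/WwBoost algorithms).
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=20, random_state=0)

bagged_svm = BaggingClassifier(estimator=SVC(kernel="rbf", C=1.0, gamma="scale"),
                               n_estimators=25, random_state=0)
weak_svm = SVC(kernel="rbf", C=1.0, gamma=0.001)        # deliberately weakened kernel
boosted_svm = AdaBoostClassifier(estimator=weak_svm, n_estimators=25,
                                 algorithm="SAMME",     # SVC has no predict_proba
                                 random_state=0)

for name, model in [("Bagged SVMs", bagged_svm), ("Boosted weak SVMs", boosted_svm)]:
    print(name, cross_val_score(model, X, y, cv=5).mean().round(3))
```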
Transfer of learning
Normalization
Binary classification
Citations (0)
With the advent of the Internet of Things (IoT), there have been significant advancements in the area of human activity recognition (HAR) in recent years. HAR is applicable to a wide range of applications such as elderly care, anomalous behaviour detection and surveillance systems. Several machine learning algorithms have been employed to predict the activities performed by a human in an environment. However, traditional machine learning approaches have been outperformed by feature engineering methods that can select an optimal set of features. In contrast, deep learning models such as Convolutional Neural Networks (CNN) can extract features and reduce the computational cost automatically. In this paper, we use a CNN model to predict human activities from the Weizmann dataset. Specifically, we employ transfer learning to obtain deep image features and train machine learning classifiers. Our experimental results showed an accuracy of 96.95% using VGG-16 and also confirmed its high performance compared to the rest of the applied CNN models.
Transfer of learning
Feature Engineering
Feature (linguistics)
Activity Recognition
Citations (43)
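One hedged way to realize the VGG-16 transfer-learning route described above is to freeze the convolutional base and train only a small classification head, as sketched below with Keras. The input size, the head architecture, and the random tensors standing in for preprocessed Weizmann frames are assumptions, not the paper's exact configuration.
```python
# Transfer learning with a frozen VGG-16 base and a small trainable head.
import numpy as np
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

NUM_ACTIVITIES = 10                    # the Weizmann dataset has 10 action classes

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                 # keep ImageNet convolutional features fixed

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_ACTIVITIES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Placeholder tensors standing in for preprocessed video frames and labels.
frames = np.random.rand(32, 224, 224, 3).astype("float32")
labels = np.random.randint(0, NUM_ACTIVITIES, size=32)
model.fit(frames, labels, epochs=1, batch_size=8, verbose=0)
```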
Predictions based on features are made with machine learning (ML) algorithms. Machine learning, a sub-discipline of artificial intelligence, allows systems to learn and improve on their own by gaining experience. Supervised and unsupervised learning are the two prevalent categories of machine learning: supervised ML is used for classification, whereas unsupervised ML is used for clustering. Currently, machine learning is employed in a plethora of fields; biometric recognition, handwriting recognition, and medical diagnosis are some of its use cases. Machine learning plays a significant role in the medical field, identifying diseases based on a patient's characteristics, and software applications based on ML algorithms are helping doctors diagnose diseases such as cancer and cardiac arrest. In this paper, we employed an ensemble learning strategy to predict heart problems. Through the comparison of different evaluation parameters, namely ROC, F-measure, recall, precision and accuracy, our paper describes the performance of ML algorithms. The study used a mix of machine learning classifiers to predict heart problems, including Naive Bayes (NB), Decision Tree (DT), Random Forest (RF) and Support Vector Machine (SVM) algorithms. It was observed that implementing Pareto-distribution-enabled AdaBoost resulted in 98.61% accuracy. NB, DT, RF and SVM models were also trained and tested separately.
AdaBoost
Ensemble Learning
Citations (9)
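A minimal sketch of the kind of comparison that study runs: a soft-voting ensemble of the four listed classifiers next to a plain AdaBoost model, scored on the same metrics the abstract lists. The synthetic 13-feature data stands in for a heart-disease dataset, and the Pareto-distribution-enabled AdaBoost variant is not reproduced here.
```python
# Ensemble-vs-AdaBoost comparison on placeholder heart-like data (illustrative).
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier, VotingClassifier
from sklearn.model_selection import cross_validate
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=13, random_state=0)  # 13 features, like UCI Heart

voting = VotingClassifier(
    estimators=[("nb", GaussianNB()),
                ("dt", DecisionTreeClassifier(random_state=0)),
                ("rf", RandomForestClassifier(random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],
    voting="soft")

for name, model in [("NB+DT+RF+SVM voting", voting),
                    ("AdaBoost", AdaBoostClassifier(random_state=0))]:
    res = cross_validate(model, X, y, cv=5,
                         scoring=["accuracy", "precision", "recall", "f1", "roc_auc"])
    print(name, {k: v.mean().round(3) for k, v in res.items() if k.startswith("test_")})
```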
Despite the recent success of deep transfer learning approaches in NLP, there is a lack of quantitative studies demonstrating the gains these models offer in low-shot text classification tasks over existing paradigms. Deep transfer learning approaches such as BERT and ULMFiT demonstrate that they can beat state-of-the-art results on larger datasets; however, when one has only 100-1000 labelled examples per class, the choice of approach is less clear, with classical machine learning and deep transfer learning both representing valid options. This paper compares the current best transfer learning approach with top classical machine learning approaches on a trinary sentiment classification task to assess the best paradigm. We find that BERT, representing the best of deep transfer learning, is the best-performing approach, outperforming top classical machine learning algorithms by 9.7% on average when trained with 100 examples per class, narrowing to 1.8% at 1000 labels per class. We also show the robustness of deep transfer learning in moving across domains, where the maximum loss in accuracy is only 0.7% in similar-domain tasks and 3.2% cross-domain, compared to classical machine learning, which loses up to 20.6%.
Transfer of learning
Robustness
Citations (3)
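For the classical-ML side of that low-shot comparison, a minimal sketch is to subsample N labelled examples per class and fit a TF-IDF plus logistic-regression baseline, as below. The toy three-class sentiment texts are placeholders, and the BERT fine-tuning side (e.g. via HuggingFace Transformers) is deliberately left out.
```python
# Low-shot classical baseline: N examples per class -> TF-IDF + logistic regression.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.pipeline import make_pipeline

def low_shot_subset(texts, labels, n_per_class, seed=0):
    """Keep at most n_per_class examples of each label."""
    rng = np.random.default_rng(seed)
    texts, labels = np.asarray(texts), np.asarray(labels)
    keep = []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        keep.extend(rng.choice(idx, size=min(n_per_class, idx.size), replace=False))
    return texts[keep], labels[keep]

# Placeholder trinary sentiment data; a real study would use a labelled corpus.
train_texts = ["great phone", "terrible battery", "it is okay"] * 200
train_labels = ["pos", "neg", "neu"] * 200
test_texts, test_labels = ["really great", "quite bad", "just fine"], ["pos", "neg", "neu"]

for n in (100, 1000):
    X_few, y_few = low_shot_subset(train_texts, train_labels, n_per_class=n)
    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
    clf.fit(X_few, y_few)
    print(n, "per class:", accuracy_score(test_labels, clf.predict(test_texts)))
```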
When brain tumors are not treated in their early stages, they can cause uncontrolled proliferation of brain cells. Early detection of lesions in the brain is critical for treatment planning and patient survival. Over the last few years, detection algorithms based on deep learning (DL) and machine learning (ML) have exhibited cutting-edge performance and have been successfully utilized to classify, segment, and identify medical images. In this work, many DL and ML methodologies, as well as transfer learning approaches, were evaluated. The proposed Transfer Learning (TL) approach for detecting brain tumors outperformed the existing algorithms. Since deep learning approaches offer the most cutting-edge results and are better suited to this problem than other methods, automated detection based on these techniques has lately gained popularity. Compared with plain DL or ML techniques, the Transfer Learning approach provides the highest precision, which helps in diagnosing brain tumors at an earlier stage of development.
Transfer of learning
Popularity
Citations (1)