Hyperparameter optimization on supervised learning models

2021 
Machine learning has taken the technological world by storm in recent years. Every practitioner needs to develop a model that meets the needs of the problem at hand within the resources available. Many challenges arise along the way; one of them is the selection of the most appropriate hyperparameters for the model being developed. This phase, called hyperparameter optimization, is crucial: models that have proven effective in both performance and execution time can be rendered practically useless without an appropriate selection of hyperparameters, while careful tuning can help a model shine and exploit its capabilities to the fullest. Since every problem is unique and complex, domain knowledge is required to select the appropriate hyperparameters in each case, but such knowledge is not always available. There is therefore a growing need for tools that solve this issue automatically and guide users toward a solution to the problem at hand. This thesis follows an experimental procedure to extract information about the appropriate hyperparameters for various supervised learning models. We use datasets with diverse features and characteristics that could assist in automating machine learning processes. The problem is approached through existing optimization frameworks that have been shown to achieve strong results in hyperparameter tuning.
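To make the idea concrete: the abstract does not name the specific frameworks or models used, but a typical hyperparameter search of the kind described can be sketched with scikit-learn's GridSearchCV (an illustrative choice; the model, parameter ranges, and dataset below are assumptions, not the thesis's actual experimental setup).

```python
# Minimal sketch of hyperparameter optimization via cross-validated
# grid search. The SVM model, the parameter grid, and the iris dataset
# are illustrative assumptions only.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Candidate hyperparameter values to explore (illustrative ranges).
param_grid = {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]}

# Exhaustively evaluate every combination with 5-fold cross-validation
# and keep the configuration with the best mean validation score.
search = GridSearchCV(SVC(), param_grid, cv=5)
search.fit(X, y)

print(search.best_params_)  # hyperparameters with the best CV score
print(search.best_score_)   # the corresponding mean accuracy
```

Grid search is the simplest such framework; the same interface pattern (define a search space, fit, read off the best configuration) carries over to more sophisticated optimizers such as randomized or Bayesian search.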