Self-validated ensemble models for design of experiments

2021 
Abstract

One of the possible objectives when designing experiments is to build or formulate a model for predicting future observations. When the primary objective is prediction, typical approaches are to use well-established small-sample experimental designs (e.g., Definitive Screening Designs) in the design phase and to construct predictive models using widely used model selection algorithms such as the LASSO. These design and analytic strategies, however, do not guarantee high prediction performance, partly because the small sample sizes prevent partitioning the data into training and validation sets, a strategy commonly used in machine learning to improve out-of-sample prediction. In this work, we propose a novel framework for building high-performance predictive models from experimental data that capitalizes on the advantage of having both training and validation sets. However, instead of partitioning the data into two mutually exclusive subsets, we propose a weighting scheme based on the fractional random weight bootstrap that emulates data partitioning by assigning anti-correlated training and validation weights to each observation. The proposed methodology, called Self-Validated Ensemble Modeling (SVEM), proceeds in the spirit of bagging: it iterates over bootstrap draws of anti-correlated weights, fits a model for each draw, and takes the average of the bootstrapped models as the final SVEM model. We investigate the performance of the SVEM algorithm with several model-building approaches, including stepwise regression, the LASSO, and the Dantzig selector. Finally, through simulation and case studies, we show that SVEM generally produces models with better prediction performance than one-shot model selection approaches.
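The following is a minimal sketch of the SVEM idea as described in the abstract, using the LASSO as the base learner. The specific weight construction (w_train = -log(U), w_valid = -log(1 - U)), the alpha grid, and the number of bootstrap iterations are illustrative assumptions, not the authors' exact settings; the sketch only shows how anti-correlated fractional weights can emulate a training/validation split and how the bootstrapped fits are averaged.

```python
# Hedged sketch of a SVEM-style procedure with a LASSO base learner.
# Assumptions (not from the paper): exponential fractional weights derived
# from a shared uniform draw, a log-spaced alpha grid, 200 bootstrap draws.
import numpy as np
from sklearn.linear_model import Lasso

def svem_lasso(X, y, n_boot=200, alphas=np.logspace(-3, 1, 30), seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    models = []
    for _ in range(n_boot):
        u = rng.uniform(size=n)
        w_train = -np.log(u)        # fractional training weights
        w_valid = -np.log(1.0 - u)  # anti-correlated validation weights
        best_err, best_model = np.inf, None
        for alpha in alphas:
            m = Lasso(alpha=alpha, max_iter=10_000)
            m.fit(X, y, sample_weight=w_train)      # fit on training weights
            resid = y - m.predict(X)
            err = np.sum(w_valid * resid**2)        # validation-weighted error
            if err < best_err:
                best_err, best_model = err, m
        models.append(best_model)
    # Final SVEM prediction: average of the bootstrapped models' predictions.
    return lambda X_new: np.mean([m.predict(X_new) for m in models], axis=0)
```

Because every observation carries both a training and a validation weight, no data point is ever fully held out; the anti-correlation between the two weight vectors is what plays the role of the train/validation partition that small designed experiments cannot afford.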