Learning Low-Wastage Memory Allocations for Scientific Workflows at IceCube

2019 
In scientific computing, scheduling tasks with heterogeneous resource requirements still requires users to estimate the resource usage of tasks. These estimates tend to be inaccurate despite the laborious manual processes used to derive them. We show that machine learning outperforms user estimates and that models trained at runtime improve resource allocation for workflows. We focus on allocating main memory in batch systems, which enforce resource limits by terminating jobs. The key idea is to train prediction models that minimize the costs resulting from prediction errors, rather than minimizing the prediction errors themselves. In addition, we detect and exploit opportunities to predict the resource usage of individual tasks based on their input size. We evaluated our approach on a 10-month production log from the IceCube South Pole Neutrino Observatory experiment, comparing it to the current production system and a state-of-the-art method. We show that memory allocation quality can be increased from about 50% to 70%, while at the same time allowing users to provide only rough estimates of resource usage.
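The core idea of minimizing the cost of prediction errors, rather than the errors themselves, can be sketched with an asymmetric loss: under-predicting memory kills the job (an expensive retry), so the model should over-predict far more often than it under-predicts. The following is a minimal illustration, not the paper's actual model; the linear input-size feature, the quantile level `tau`, and all function names are assumptions for the sketch. Training a quantile (pinball-loss) regressor toward a high quantile is one standard way to encode this asymmetry.

```python
# Hypothetical sketch: predict a task's peak memory from its input size,
# penalizing under-prediction (job termination) more than over-prediction
# (wasted reservation) via the asymmetric pinball loss.

def pinball_grad(pred, actual, tau):
    # Gradient of the pinball (quantile) loss w.r.t. the prediction:
    # under-prediction is weighted tau, over-prediction (1 - tau).
    return -tau if pred < actual else (1.0 - tau)

def fit_quantile_line(sizes, mems, tau=0.9, lr=0.01, epochs=3000):
    """Fit mem ~= a * size + b by SGD on the tau-quantile loss.

    With tau close to 1, the fitted line sits near the top of the
    observed memory usage, so allocations rarely fall short.
    """
    a, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(sizes, mems):
            g = pinball_grad(a * x + b, y, tau)
            a -= lr * g * x
            b -= lr * g
    return a, b
```

With `tau=0.9`, roughly nine out of ten allocations cover the task's true usage, trading a modest amount of over-allocation for far fewer terminated jobs; a symmetric squared-error fit would instead center the line and kill about half the jobs.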