INFaaS: A Model-less Inference Serving System

2019 
Despite existing work in machine learning inference serving, ease of use and cost efficiency remain key challenges. Developers must manually match the performance, accuracy, and cost constraints of their applications to decisions about selecting the right model and model optimizations, suitable hardware architectures, and auto-scaling configurations. These interacting decisions are difficult for users to make, especially as application load varies, applications evolve, and available resources change over time. As a result, users often overprovision resources. This paper introduces INFaaS, a model-less inference-as-a-service system that relieves users of these decisions. INFaaS provides a simple interface that lets users specify their inference task along with its performance and accuracy requirements. To implement this interface, INFaaS generates and leverages model-variants: versions of a model that differ in resource footprint, latency, cost, and accuracy. Based on the characteristics of the model-variants, INFaaS automatically navigates the decision space on behalf of users to meet user-specified objectives: (a) it selects a model, hardware architecture, and any compiler optimizations, and (b) it makes scaling and resource allocation decisions. By sharing models across users and hardware resources across models, INFaaS achieves up to 150x cost savings and 1.5x higher throughput, and violates latency objectives 1.5x less frequently than Clipper and TensorFlow Serving.
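
To make the model-less interface concrete, here is a minimal sketch of what such a request could look like from a client's perspective: the user states the task and its latency/accuracy objectives, and the serving system picks a model-variant. The function name, endpoint URL, and field names below are hypothetical illustrations, not INFaaS's actual API.

```python
# Hypothetical sketch of a "model-less" inference request, in the spirit
# of the interface described in the abstract. All names (endpoint, fields,
# function) are illustrative assumptions, not INFaaS's real API.
import requests  # assumed HTTP transport; the real system may differ

INFAAS_ENDPOINT = "http://infaas.example.com/v1/query"  # hypothetical URL


def model_less_query(task, input_data, max_latency_ms, min_accuracy):
    """Submit an inference task with performance and accuracy requirements.

    The serving system, not the user, selects a concrete model-variant
    (model + compiler optimizations + hardware) that satisfies the
    stated objectives, and handles scaling behind the interface.
    """
    payload = {
        "task": task,                 # e.g., "image_classification"
        "input": input_data,          # raw input, e.g., a base64-encoded image
        "slo": {
            "max_latency_ms": max_latency_ms,  # performance requirement
            "min_accuracy": min_accuracy,      # accuracy requirement
        },
    }
    resp = requests.post(INFAAS_ENDPOINT, json=payload, timeout=5)
    resp.raise_for_status()
    return resp.json()  # e.g., {"prediction": ..., "variant_used": ...}


# Example: classify an image within 50 ms using any variant with at least
# 70% accuracy; which model, optimizations, and hardware serve the request
# is the system's decision, not the caller's.
# result = model_less_query("image_classification", encoded_image, 50, 0.70)
```

Note how the request names no model, hardware target, or replica count; those are exactly the decisions the abstract says INFaaS navigates on the user's behalf.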