Efficient Parallel Self-Adjusting Computation

2021 
Self-adjusting computation is an approach for automatically producing dynamic algorithms from static ones. It works by tracking control and data dependencies, and propagating changes through those dependencies when an update is made. Although extensively studied in the sequential setting, existing results on parallel self-adjusting computation either apply only to limited classes of computations or are ad-hoc systems with no theoretical analysis of their performance. In this paper, we present the first system for parallel self-adjusting computation that applies to a wide class of nested parallel algorithms and provides theoretical bounds on the work and span of the resulting dynamic algorithms. Our bounds relate a "distance" measure between computations on different inputs to the cost of propagating an update. The main innovation of the paper is the use of series-parallel trees (SP trees) to track sequential and parallel control dependencies, which allows change propagation to be applied safely in parallel. We demonstrate several example applications, including algorithms for dynamic sequences and dynamic trees. Lastly, we show experimentally that our system produces updated results over large datasets significantly faster than from-scratch execution, saving both work and parallel time.
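To make the dependency-tracking idea concrete, below is a minimal sequential sketch of change propagation; it is an illustration of the general technique, not the paper's implementation. Each "modifiable" cell records the computations that read it, and writing a new value re-executes exactly those dependents rather than rerunning the whole program. The Cell type and its members are assumptions introduced here for the example.

```cpp
#include <functional>
#include <iostream>
#include <vector>

// Illustrative "modifiable" cell: holds a value plus the dependent
// computations (readers) that must be re-run when the value changes.
struct Cell {
  int value = 0;
  std::vector<std::function<void()>> readers;

  // Change propagation: install the new value, then re-execute every
  // computation that depends on this cell.
  void write(int v) {
    value = v;
    for (auto& r : readers) r();
  }
};

int main() {
  Cell a, b, sum;
  a.value = 2;
  b.value = 3;

  // The "static" computation: sum = a + b. Registering it as a reader of
  // a and b records the data dependencies that make it self-adjusting.
  std::function<void()> compute = [&]() { sum.value = a.value + b.value; };
  a.readers.push_back(compute);
  b.readers.push_back(compute);
  compute();
  std::cout << "initial sum = " << sum.value << "\n";  // prints 5

  // Updating one input propagates the change through its dependents only.
  a.write(10);
  std::cout << "updated sum = " << sum.value << "\n";  // prints 13
  return 0;
}
```

In the parallel setting described in the abstract, an SP tree over the computation additionally records whether subcomputations were composed in series or in parallel, so that change propagation can re-execute independent dependents concurrently while preserving sequential ordering.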