Generating Robust Parallel Programs via Model Driven Prediction of Compiler Optimizations for Non-determinism

2020 
Execution orders in parallel programs are governed by non-determinism and can vary substantially across executions, even on the same input. A highly non-deterministic program can therefore exhibit rare execution orders never observed during testing. It is desirable to reduce non-determinism in the production cycle to suppress corner-case behavior (making execution robust or bug-free) and to increase non-determinism in the development cycle for reproducing bugs. Performance-wise, different optimization levels (e.g., O0 to O3) can be selected during development; non-determinism-wise, however, developers have no way to choose the right compiler optimization level to increase non-determinism for debugging or to decrease it for robustness. The major source of non-determinism is the underlying execution model, which is primarily determined by the processor architecture and the operating system (OS). Architectural artifacts such as cache misses and TLB misses characterize and shape this non-determinism. In this work, we seek to capture such sources of non-determinism through an architectural model based on hardware performance counters, and we use the model to predict the appropriate compiler optimization level for generating a robust parallel program, i.e., one with minimal non-determinism in production. As a side effect, the generated model also allows non-determinism to be maximized for debugging purposes. We demonstrate our technique on the PARSEC benchmark suite and, among other results, show that the generated robust program decreases non-deterministic behavior by up to 66.48%; as a practical measure, we also show that a known race condition, plus randomly injected ones, are rendered benign in the robust parallel program generated by our framework.
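As a rough illustration of the kind of non-determinism the abstract refers to (this is an illustrative sketch, not the paper's model or measurement methodology; all names here are invented for the example), the snippet below runs a tiny two-thread program many times and records the distinct execution orders that actually occur. A program that exhibits more distinct orders across runs is "more non-deterministic" in this sense:

```python
import threading

def observe_interleavings(num_runs=200, steps=3):
    """Repeatedly run two threads that each log a sequence of events,
    and collect the distinct interleavings (execution orders) observed.
    The size of the returned set is a crude proxy for non-determinism."""
    observed = set()
    for _ in range(num_runs):
        events = []
        lock = threading.Lock()  # protects the shared event log

        def worker(tag):
            for i in range(steps):
                with lock:
                    events.append(f"{tag}{i}")

        t1 = threading.Thread(target=worker, args=("A",))
        t2 = threading.Thread(target=worker, args=("B",))
        t1.start(); t2.start()
        t1.join(); t2.join()
        observed.add(tuple(events))
    return observed

if __name__ == "__main__":
    orders = observe_interleavings()
    print(f"distinct execution orders seen: {len(orders)}")
```

Each observed order preserves per-thread program order (A0 before A1 before A2, likewise for B), but the cross-thread interleaving varies from run to run depending on scheduling, caching, and other architectural effects, which is exactly what the proposed performance-counter model aims to characterize.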