MLTL Benchmark Generation via Formula Progression

2018 
Safe cyber-physical system operation requires runtime verification (RV), yet the burgeoning collection of RV technologies remains comparatively untested due to a dearth of benchmarks with oracles that enable objective evaluation of their performance. Mission-time LTL (MLTL) adds integer temporal bounds to LTL to intuitively describe missions of such systems. An MLTL benchmark for runtime verification is a 3-tuple consisting of (1) an MLTL specification \(\varphi\); (2) a set of finite input streams representing propositional system variables (call this computation \(\pi\)) over the alphabet of \(\varphi\); and (3) an oracle stream of \(\langle v, t \rangle\) pairs, where verdict \(v\) is the result (true or false) of evaluating at time \(t\) whether \(\pi_t \models \varphi\) (computation \(\pi\) at time \(t\) satisfies formula \(\varphi\)). We introduce an algorithm for reliably generating MLTL benchmarks via formula progression. We prove its correctness, demonstrate that it executes efficiently, and show how to use it to generate a variety of useful patterns for the evaluation and comparative analysis of RV tools.
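To make the benchmark 3-tuple and the role of formula progression concrete, the following Python sketch (ours, not the paper's implementation; all names such as `progress`, `oracle`, `Globally`, and `Until` are illustrative) progresses a core MLTL fragment one state at a time over a finite trace and records the \(\langle v, t \rangle\) verdict pairs that form the oracle stream. It assumes the standard MLTL semantics in which the left operand of `U[lb,ub]` is only constrained inside the interval, and it simply emits no verdict when the trace ends before the bounds resolve; a real generator would apply a chosen finite-trace semantics at that point.

```python
# Illustrative sketch only: a core MLTL fragment (propositions, negation,
# conjunction, bounded Globally and Until; F[a,b] p is definable as
# true U[a,b] p) with one-step formula progression and oracle generation.
from dataclasses import dataclass
from typing import FrozenSet, List, Tuple, Union

Formula = Union[bool, "Prop", "Not", "And", "Globally", "Until"]

@dataclass(frozen=True)
class Prop:
    name: str

@dataclass(frozen=True)
class Not:
    sub: Formula

@dataclass(frozen=True)
class And:
    left: Formula
    right: Formula

@dataclass(frozen=True)
class Globally:  # G[lb,ub] sub
    lb: int
    ub: int
    sub: Formula

@dataclass(frozen=True)
class Until:     # left U[lb,ub] right
    lb: int
    ub: int
    left: Formula
    right: Formula

# Smart constructors simplify to True/False whenever possible, so a
# progressed formula collapses to a verdict as soon as one is determined.
def mk_not(g: Formula) -> Formula:
    if isinstance(g, bool):
        return not g
    return g.sub if isinstance(g, Not) else Not(g)

def mk_and(l: Formula, r: Formula) -> Formula:
    if l is False or r is False:
        return False
    if l is True:
        return r
    return l if r is True else And(l, r)

def mk_or(l: Formula, r: Formula) -> Formula:
    return mk_not(mk_and(mk_not(l), mk_not(r)))

def progress(f: Formula, state: FrozenSet[str]) -> Formula:
    """Progress f by one state: pi satisfies f iff pi[1:] satisfies the result."""
    if isinstance(f, bool):
        return f
    if isinstance(f, Prop):
        return f.name in state
    if isinstance(f, Not):
        return mk_not(progress(f.sub, state))
    if isinstance(f, And):
        return mk_and(progress(f.left, state), progress(f.right, state))
    if isinstance(f, Globally):
        if f.lb > 0:  # interval not reached yet: just shift it left
            return Globally(f.lb - 1, f.ub - 1, f.sub)
        tail: Formula = True if f.ub == 0 else Globally(0, f.ub - 1, f.sub)
        return mk_and(progress(f.sub, state), tail)
    if isinstance(f, Until):
        if f.lb > 0:  # left operand is only constrained inside [lb, ub]
            return Until(f.lb - 1, f.ub - 1, f.left, f.right)
        hit = progress(f.right, state)
        if f.ub == 0:
            return hit
        return mk_or(hit, mk_and(progress(f.left, state),
                                 Until(0, f.ub - 1, f.left, f.right)))
    raise TypeError(f"unknown formula node: {f!r}")

def oracle(phi: Formula, trace: List[FrozenSet[str]]) -> List[Tuple[bool, int]]:
    """Emit <verdict, t> for each start time t at which phi resolves within
    the trace; start times whose bounds run past the end of the trace are
    left unresolved in this sketch."""
    verdicts = []
    for t in range(len(trace)):
        f: Formula = phi
        for state in trace[t:]:
            f = progress(f, state)
            if isinstance(f, bool):
                verdicts.append((f, t))
                break
    return verdicts

# Example: G[0,2] p over a 4-step trace; p fails at time 2, so every start
# time whose interval covers time 2 yields a False verdict, and t = 3 is
# left unresolved because the trace is too short.
phi = Globally(0, 2, Prop("p"))
trace = [frozenset({"p"}), frozenset({"p"}), frozenset(), frozenset({"p"})]
print(oracle(phi, trace))  # [(False, 0), (False, 1), (False, 2)]
```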