Knowing When to Stop: Evaluation and Verification of Conformity to Output-Size Specifications.

2019 
Neural architectures that generate variable-length outputs are highly effective for applications such as Machine Translation and Image Captioning. In this paper, we study the vulnerability of these models to attacks that aim to change the output size, which can have undesirable consequences, including increased computation and faults induced in downstream modules that expect outputs of a certain length. We show the existence and construction of such attacks through two key contributions. First, to overcome the difficulties of a discrete search space and a non-differentiable adversarial objective function, we develop an easy-to-compute differentiable proxy objective that can be used with gradient-based algorithms to find output-lengthening inputs. Second, we develop a verification approach to formally prove that the network cannot produce outputs longer than a given length. Experimental results on Machine Translation and Image Captioning models show that our adversarial output-lengthening approach can produce outputs that are 50 times longer than the input, while our verification approach can, given a model and an input domain, prove that the output length is bounded by a given size.
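The abstract does not spell out the proxy objective, but one common way to make "lengthen the output" differentiable is to suppress the end-of-sequence (EOS) token at every decoding step and run gradient ascent on a continuous version of the input. The sketch below illustrates that idea only; the `ToySeq2Seq` model, the `length_attack` helper, the EOS-suppression loss, and all dimensions are hypothetical stand-ins, not the paper's actual formulation (which also has to handle the discrete input space of Machine Translation).

```python
# Minimal sketch, assuming an EOS-suppression proxy objective optimized over
# continuous source embeddings (as would apply to, e.g., image features).
# Model, names, and hyperparameters are illustrative, not from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, EOS, MAX_LEN, EMB = 1000, 2, 60, 64

class ToySeq2Seq(nn.Module):
    """Stand-in encoder-decoder; a real attack would target an MT or captioning model."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.GRU(EMB, EMB, batch_first=True)
        self.decoder = nn.GRUCell(EMB, EMB)
        self.tok_emb = nn.Embedding(VOCAB, EMB)
        self.out = nn.Linear(EMB, VOCAB)

    def decode_step_logits(self, src_emb):
        """Greedy decoding from continuous source embeddings; returns per-step logits."""
        _, h = self.encoder(src_emb)                  # h: (1, B, EMB)
        h = h.squeeze(0)
        tok = torch.zeros(src_emb.size(0), dtype=torch.long)  # BOS assumed to be id 0
        step_logits = []
        for _ in range(MAX_LEN):
            h = self.decoder(self.tok_emb(tok), h)
            logits = self.out(h)
            step_logits.append(logits)
            tok = logits.argmax(-1)                   # greedy next token
        return torch.stack(step_logits, dim=1)        # (B, T, VOCAB)

def length_attack(model, src_emb, steps=100, lr=0.1):
    """Gradient ascent on the source embeddings that drives down the EOS
    log-probability at every decoding step -- a differentiable proxy for
    'make the output as long as possible'."""
    src_emb = src_emb.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([src_emb], lr=lr)
    for _ in range(steps):
        logits = model.decode_step_logits(src_emb)
        eos_logprob = F.log_softmax(logits, dim=-1)[..., EOS]
        loss = eos_logprob.mean()                     # minimizing this suppresses EOS
        opt.zero_grad()
        loss.backward()
        opt.step()
    return src_emb.detach()

model = ToySeq2Seq()
adv_src = length_attack(model, torch.randn(1, 5, EMB))  # adversarial "input" embeddings
```

Under this proxy, the attack never needs the true (non-differentiable) output length: pushing the EOS probability down at each step is a smooth surrogate that gradient-based optimizers can follow.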