Attack Is the Best Defense: A Multi-Mode Poisoning PUF Against Machine Learning Attacks

2021 
Resistance to modeling attacks is an important issue for Physical Unclonable Functions (PUFs). Deep learning, the state-of-the-art modeling attack, has recently been shown to break many newly developed PUFs. Since then, many more complex PUF structures and challenge obfuscations have been proposed to resist deep learning attacks. However, these methods typically focus on increasing the nonlinearity of the PUF structure and its challenge-response mapping. In this paper, we explore another direction: a multi-mode poisoning approach for a classic PUF (MMP PUF), in which each working mode is a simple add-on function to the classic PUF. By dividing the original challenge space among the working modes, the proposed MMP PUF generates a multi-modal challenge-response dataset that poisons machine learning algorithms. To validate the idea, we design two types of working modes, challenge shift and response flip, as examples built on the widely used delay-based Arbiter PUF. Experimental results show that deep learning models trained on over 3 million challenge-response pairs achieve only 74.37%, 68.08%, and 50.09% attack accuracy against dual-mode shift, quad-mode circular shift, and dual-mode flip, respectively.
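The core idea above can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: it uses the standard linear additive-delay approximation of an Arbiter PUF, and a hypothetical mode selector (`mode_of`, keyed on two challenge bits) stands in for the paper's actual partition of the challenge space. In the dual-mode response-flip configuration, one partition returns the PUF response inverted, so an attacker modeling the pooled dataset sees contradictory labels.

```python
import numpy as np

rng = np.random.default_rng(0)
N_STAGES = 64

# Linear additive delay model of an Arbiter PUF (standard approximation):
# response = sign(w . phi(challenge)), with random stage delay weights.
weights = rng.normal(size=N_STAGES + 1)

def arbiter_response(challenge):
    # Parity feature vector phi: suffix products of the +/-1-encoded challenge.
    phi = np.append(np.cumprod((1 - 2 * challenge)[::-1])[::-1], 1.0)
    return int(weights @ phi > 0)

def mode_of(challenge):
    # Hypothetical dual-mode selector: partition the challenge space by the
    # XOR of the first two challenge bits (the paper's partition may differ).
    return challenge[0] ^ challenge[1]

def mmp_response(challenge):
    # Dual-mode response flip: one partition inverts the PUF output.
    r = arbiter_response(challenge)
    if mode_of(challenge) == 1:
        r ^= 1
    return r

# Poisoned CRP dataset: structurally similar challenges in different modes
# carry inconsistent labels, degrading a single learned model.
challenges = rng.integers(0, 2, size=(10_000, N_STAGES))
responses = np.array([mmp_response(c) for c in challenges])
```

With a shift-type mode, `mmp_response` would instead permute (e.g., circularly shift) the challenge bits before querying the underlying PUF; the poisoning principle, one dataset drawn from several inconsistent mappings, is the same.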