Direct Adversarial Attack on Stego Sandwiched Between Black Boxes

2019 
Due to remarkable progress in deep learning, steganography must now withstand not only handcrafted-feature-based but also effective deep-learning-based steganalysis. Recent work defends against steganalysis networks with adversarial attacks that fine-tune embedding details using adversarial information; these, however, are mostly white-box attacks. This paper studies a novel method for conducting steganographic adversarial attacks in a practical scenario where stegos are sandwiched between black boxes: the toolboxes that generate stegos are steganographic black boxes in which embedding adjustments are prohibited, and the networks that detect stegos are semi-black boxes whose internal details are mostly unavailable. By reforming the few-pixel attack into extraction-conserving noises and adding them directly onto stegos, the method guarantees correct message extraction while launching the attack in this practical scenario. Experiments show that the proposed method significantly boosts the error rate of deep-learning-based steganalysis while keeping a comparable error rate against handcrafted-feature-based steganalysis.
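The paper's exact perturbation scheme is not given in the abstract, but the core idea of "extraction-conserving noise" can be illustrated with a hypothetical sketch: assuming the message is extracted from the least-significant-bit plane, any even-valued pixel change leaves extraction intact, so a few-pixel perturbation of ±2 can attack a detector without corrupting the payload. The function name and the LSB-extraction assumption are illustrative, not taken from the paper.

```python
import numpy as np

def add_extraction_conserving_noise(stego, coords, delta=2):
    """Perturb a few chosen pixels of a stego image by an even amount
    so the least-significant-bit plane (and thus an LSB-extracted
    message) is unchanged. `delta` must be even."""
    assert delta % 2 == 0, "odd deltas would flip LSBs and corrupt extraction"
    out = stego.astype(np.int16).copy()  # widen to avoid uint8 wrap-around
    for r, c in coords:
        v = out[r, c] + delta
        if v > 255:            # stay in [0, 255] without clipping,
            v = out[r, c] - delta  # which could change the LSB
        out[r, c] = v
    return out.astype(np.uint8)

# usage: perturb two pixels; the LSB plane is bit-identical afterwards
stego = np.array([[254, 3], [128, 255]], dtype=np.uint8)
attacked = add_extraction_conserving_noise(stego, [(0, 0), (1, 1)])
assert np.array_equal(stego & 1, attacked & 1)
```

In a real attack the perturbed coordinates and signs would be chosen from adversarial feedback (e.g. detector scores under a few-pixel search); here they are fixed only to show the extraction-preservation constraint.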