Anti-Forensics for Face Swapping Videos via Adversarial Training

2021 
Generating falsified faces with artificial intelligence, widely known as DeepFake, has attracted worldwide attention since 2017. Given the potential threat posed by this technique, forensics researchers have dedicated themselves to detecting such video forgeries. Beyond exposing falsified faces, DeepFake admits further research directions such as anti-forensics, which can disclose the vulnerabilities of current DeepFake forensics methods; it could also turn DeepFake videos into tactical weapons if the falsified faces become harder to detect. In this paper, we propose a GAN model that acts as an anti-forensics tool. It features a novel architecture with additional supervising modules that enhance image visual quality, together with a loss function designed to improve the efficiency of the proposed model. Experiments show that DeepFake forensics detectors are susceptible to attacks launched by the proposed method, and that it can efficiently produce anti-forensics videos of satisfying visual quality without noticeable artifacts. Compared with other anti-forensics approaches, this represents substantial progress for DeepFake anti-forensics: the attack launched by our method can truly be regarded as DeepFake anti-forensics, as it fools detection algorithms and human eyes simultaneously.
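The core idea of such an attack, perturbing a fake frame so that a forensics detector scores it as "real" while a similarity term preserves visual quality, can be illustrated with a minimal gradient-based sketch. This is not the paper's GAN: the linear detector, the loss weights, and every name below are illustrative assumptions.

```python
# Toy anti-forensics sketch (NOT the paper's model): gradient descent on a
# flattened image vector against a hypothetical linear DeepFake detector.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear detector: score > 0 means "classified as fake".
w = rng.normal(size=64)
b = 0.0

def detector_score(x):
    return w @ x + b

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attack(x_fake, lam=0.1, lr=0.05, steps=200):
    """Minimize  -log(1 - sigmoid(score))  +  lam * ||x - x_fake||^2.

    The first term pushes the detector's output toward "real"; the second
    keeps the attacked frame close to the original fake, standing in for
    the visual-quality supervision described in the abstract."""
    x = x_fake.copy()
    for _ in range(steps):
        p_fake = sigmoid(detector_score(x))
        # Gradient of -log(1 - p) w.r.t. x is p * w;
        # gradient of lam * ||x - x_fake||^2 is 2 * lam * (x - x_fake).
        grad = p_fake * w + 2.0 * lam * (x - x_fake)
        x -= lr * grad
    return x

x_fake = rng.normal(size=64)   # stand-in for a flattened fake frame
x_adv = attack(x_fake)
```

Each step strictly lowers the detector score while the proximity term bounds how far `x_adv` drifts from the original frame; the paper's GAN replaces this per-frame optimization with a trained generator, which is what makes producing whole anti-forensics videos efficient.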