Robust Texture-Aware Computer-Generated Image Forensic: Benchmark and Algorithm

2021 
With advances in rendering techniques and generative adversarial networks, computer-generated (CG) images are becoming indistinguishable from photographic (PG) images. Revisiting previous work on CG image forensics, we observe that existing datasets were constructed years ago and are limited in both quantity and diversity. Moreover, current algorithms consider only global visual features, ignoring the finer differences between CG and PG images. To mitigate these problems, we first contribute a Large-Scale CG images Benchmark (LSCGB), and then propose a simple yet strong baseline model for the forensic task. The benchmark has three desirable properties: 1) large scale: it contains 71,168 CG and 71,168 PG images with expert-annotated labels, orders of magnitude more than previous datasets; 2) high diversity: the CG images are collected from four different scenes and generated by various rendering techniques, while the PG images vary in content, camera model, and photographic style; 3) small bias: we carefully filter the collected images so that the distributions of color, brightness, tone, and saturation between CG and PG images are close. Furthermore, motivated by an empirical study of texture differences between CG and PG images, we propose an effective texture-aware network to improve forensic accuracy. Concretely, we first strengthen the texture information of multi-level features extracted from a backbone; we then capture the relations among feature channels by learning their Gram matrix. Since each feature channel represents a specific texture pattern, the Gram matrix embeds the finer texture differences between CG and PG images. Experimental results demonstrate that this baseline surpasses existing methods. The benchmark is publicly available at https://github.com/wmbai/LSCGB.
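As a rough illustration of the Gram-matrix step described above, here is a minimal PyTorch sketch (the function name, tensor shapes, and normalization are assumptions for illustration, not the authors' code): for a backbone feature map, each entry of the Gram matrix measures how strongly two channels, i.e., two texture patterns, co-occur across spatial positions.

```python
import torch

def gram_matrix(features: torch.Tensor) -> torch.Tensor:
    """Channel-wise Gram matrices for a batch of feature maps.

    features: (B, C, H, W) feature maps from one backbone level.
    Returns:  (B, C, C) matrices of channel correlations, normalized
              by the number of spatial positions.
    """
    b, c, h, w = features.shape
    flat = features.reshape(b, c, h * w)           # flatten spatial dims
    gram = torch.bmm(flat, flat.transpose(1, 2))   # (B, C, C) channel co-occurrences
    return gram / (h * w)                          # normalize by spatial size

# Hypothetical usage: a mid-level feature map with 256 channels.
feats = torch.randn(8, 256, 28, 28)
g = gram_matrix(feats)   # (8, 256, 256) texture descriptor per image
```

In the paper's pipeline this descriptor would be computed on the texture-strengthened multi-level features and passed to a classifier; the sketch shows only the channel-correlation step itself.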