Attention-Based Two-Stream Convolutional Networks for Face Spoofing Detection
2019
Since the human face preserves the richest information for recognizing individuals, face recognition has been widely investigated and has achieved great success in various applications over the past decades. However, face spoofing attacks (e.g., face video replay attacks) remain a threat to modern face recognition systems. Although many effective anti-spoofing methods have been proposed, we find that the performance of many existing methods degrades under varying illumination. This motivates us to develop illumination-invariant methods for anti-spoofing. In this paper, we propose a two-stream convolutional neural network (TSCNN) that works on two complementary spaces: RGB space (the original imaging space) and multi-scale retinex (MSR) space (an illumination-invariant space). Specifically, the RGB space contains detailed facial textures but is sensitive to illumination; the MSR space is invariant to illumination but contains less detailed facial information. In addition, MSR images effectively capture high-frequency information, which is discriminative for face spoofing detection. Images from the two spaces are fed to the TSCNN to learn discriminative features for anti-spoofing. To effectively fuse the features from the two sources (RGB and MSR), we propose an attention-based fusion method that captures the complementarity of the two features. We evaluate the proposed framework on several databases, i.e., CASIA-FASD, REPLAY-ATTACK, and OULU, and achieve very competitive performance. To further verify the generalization capacity of the proposed strategies, we conduct cross-database experiments, and the results show the great effectiveness of our method.
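The MSR transform referenced in the abstract subtracts log-domain Gaussian-smoothed estimates of illumination at several scales, which suppresses slowly varying lighting while preserving high-frequency detail. A minimal sketch of the transform follows; the scale values and weighting are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multi_scale_retinex(image, sigmas=(15, 80, 250)):
    """Illustrative multi-scale retinex (MSR) transform.

    Single-scale retinex is log(I) - log(G_sigma * I), where G_sigma * I is a
    Gaussian-blurred estimate of the illumination. MSR averages this over
    several sigmas, yielding an (approximately) illumination-invariant,
    high-frequency representation. The sigma values here are common defaults
    from the retinex literature, not the paper's configuration.
    """
    img = image.astype(np.float64) + 1.0  # offset to avoid log(0)
    msr = np.zeros_like(img)
    for sigma in sigmas:
        illumination = gaussian_filter(img, sigma=sigma)
        msr += np.log(img) - np.log(illumination)
    return msr / len(sigmas)
```

In a two-stream setup like the TSCNN described above, the original RGB face crop would feed one stream and its MSR counterpart the other, with the two feature sets combined by the attention-based fusion module.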