Face Anti-Spoofing with Deep Neural Network Distillation
2020
A challenging aspect of face anti-spoofing (or presentation attack detection, PAD) is the difficulty of collecting sufficient and representative attack samples for an application-specific environment. In view of this, we tackle the problem of training a robust PAD model with limited data in an application-specific domain. We propose to leverage data from a richer, related domain to learn meaningful features through neural network distillation. We first train a deep neural network on reasonably sufficient labeled data from the rich domain, which then "teaches" a network for the application-specific domain where training samples are scarce. Subsequently, we form training sample pairs from both domains and formulate a novel optimization objective that combines the cross-entropy loss with the maximum mean discrepancy (MMD) between features and a paired-sample similarity embedding for network distillation. In this way, we expect to capture spoofing-specific information and train a discriminative deep neural network for the application-specific domain. Extensive experiments validate the effectiveness of the proposed scheme in face anti-spoofing setups.
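The abstract names three loss terms: cross-entropy on the labels, maximum mean discrepancy (MMD) between source- and target-domain features, and a paired-sample similarity embedding. The following is a minimal NumPy sketch of how such a combined objective could be computed; the RBF kernel for MMD, the contrastive form of the pair term, and all weights (`lam_mmd`, `lam_pair`, `margin`, `gamma`) are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def cross_entropy(probs, labels):
    # Mean negative log-likelihood of the true class (probs: N x C, labels: N).
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def mmd(f_src, f_tgt, gamma=1.0):
    # Squared maximum mean discrepancy between two feature sets,
    # using an RBF kernel (kernel choice is an assumption).
    def k(a, b):
        d = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2.0 * a @ b.T
        return np.exp(-gamma * d)
    return k(f_src, f_src).mean() + k(f_tgt, f_tgt).mean() - 2.0 * k(f_src, f_tgt).mean()

def pair_similarity_embedding(f_src, f_tgt, same_label, margin=1.0):
    # Contrastive-style pair term (assumed form): pull paired features
    # together when their labels match, push them apart otherwise.
    d2 = np.sum((f_src - f_tgt) ** 2, axis=1)
    push = np.maximum(0.0, margin - np.sqrt(d2 + 1e-12)) ** 2
    return np.mean(np.where(same_label, d2, push))

def distillation_loss(probs, labels, f_src, f_tgt, same_label,
                      lam_mmd=0.5, lam_pair=0.5):
    # Weighted sum of the three terms described in the abstract;
    # the weights are hypothetical hyperparameters.
    return (cross_entropy(probs, labels)
            + lam_mmd * mmd(f_src, f_tgt)
            + lam_pair * pair_similarity_embedding(f_src, f_tgt, same_label))
```

In practice each term would be back-propagated through the student network; this sketch only shows the forward computation of the objective on paired batches.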