Robust adapted principal component analysis for face recognition
2009
Recognizing faces under uncontrolled pose, illumination, and expression is challenging because features insensitive to one variation may be highly sensitive to the others. Existing techniques that address only one of these variations are often unable to cope with the remaining ones. The problem is even harder in applications where only one gallery image per person is available. In this paper, we describe a recognition method, Adapted Principal Component Analysis (APCA), that can simultaneously handle large variations in both illumination and facial expression using only a single gallery image per person. We have now extended this method to handle head pose variations in two steps. The first step applies an Active Appearance Model (AAM) to the non-frontal face image to synthesize a frontal face image. The second uses APCA for classification robust to lighting and pose. The proposed technique is evaluated on three public face databases (Asian Face, Yale Face, and FERET), with images under different lighting conditions, facial expressions, and head poses. Experimental results show that our method performs much better than other recognition methods, including PCA, FLD, PRM, and LTP. More specifically, we show that using AAM to synthesize frontal faces from high-pose-angle images increases the recognition rate of our APCA method by up to a factor of 4.
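To make the two-step pipeline concrete, the following is a minimal Python sketch, not the authors' implementation: the AAM frontalization is left as a placeholder (`frontalize_with_aam` is a hypothetical function), and plain eigenvalue-whitened PCA with nearest-neighbour matching stands in for APCA's adaptation of the principal axes. All function names and parameters here are assumptions for illustration only.

```python
import numpy as np

def frontalize_with_aam(nonfrontal_image):
    """Step 1 (placeholder): fit an Active Appearance Model to the
    non-frontal face and render a synthesized frontal view.
    The actual AAM shape/texture fitting is outside this sketch."""
    raise NotImplementedError("AAM frontal synthesis not implemented here")

def train_pca(gallery, n_components=50):
    """Step 2 (training): build a PCA subspace from the gallery,
    one image per person, each flattened to a pixel vector.
    gallery: array of shape (n_persons, n_pixels)."""
    mean = gallery.mean(axis=0)
    centered = gallery - mean
    # Principal axes via SVD of the centered gallery matrix.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]
    eigvals = (s[:n_components] ** 2) / max(len(gallery) - 1, 1)
    return mean, components, eigvals

def project(image, mean, components):
    """Project a flattened face image into the PCA subspace."""
    return components @ (image - mean)

def classify(probe, gallery, mean, components, eigvals):
    """Nearest neighbour in a whitened PCA subspace.
    APCA additionally rotates and rescales the axes to suppress
    lighting/expression variation; simple eigenvalue whitening is
    used here only as a stand-in for that adaptation."""
    w = 1.0 / np.sqrt(eigvals + 1e-8)
    probe_feat = project(probe, mean, components) * w
    gallery_feats = np.array(
        [project(g, mean, components) * w for g in gallery]
    )
    dists = np.linalg.norm(gallery_feats - probe_feat, axis=1)
    return int(np.argmin(dists))
```

In use, a non-frontal probe would first pass through the AAM frontalization step and then be classified against the single-image-per-person gallery in the adapted subspace.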