Extraction of human face and transformable region by facial expression based on extended labeled graph matching

2004 
This paper considers the recognition of moving images and proposes a new framework in which local features, global structure, and motion information are handled comprehensively. The method is applied to the extraction of facial expressions and its effectiveness is demonstrated. Most conventional moving-image recognition methods perform recognition on the basis of the sequence of recognition results for individual frames. The authors argue, however, that motion information should be exploited more actively in the recognition of the individual frames themselves. Incorporating time-series data into still-image recognition, though, greatly increases the amount of information and complicates processing. The method proposed in this paper extends the concept of labeled graph matching, which has been used for still images, to moving images. The proposed method handles sparse graphs and thus avoids an increase in the amount of computation. By dynamically adjusting the features to be handled according to the stage of processing, the complex processing required for moving-image recognition can be integrated in a simple and straightforward way. As practical examples, the human head and parts of the head are extracted, demonstrating the effectiveness of the proposed method. © 2004 Wiley Periodicals, Inc. Electron Comm Jpn Pt 3, 87(10): 35–43, 2004; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ecjc.20106
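For orientation, the still-image technique the paper builds on, labeled graph matching, places a graph over the image whose nodes carry feature vectors ("jets") and whose edges carry expected displacements; a match is found by trading off feature similarity at the nodes against geometric distortion of the edges. The sketch below is a minimal, generic illustration of that static matching cost, not the authors' extended method for moving images; the function names, the `image_jets_at` feature extractor, and the weight `lam` are illustrative assumptions.

```python
import numpy as np

def jet_similarity(j1, j2):
    """Normalized dot product between two feature 'jets' (node labels)."""
    return float(np.dot(j1, j2) / (np.linalg.norm(j1) * np.linalg.norm(j2) + 1e-8))

def graph_match_cost(model_jets, model_edges, image_jets_at, positions, lam=0.5):
    """
    Cost of placing a labeled model graph at trial node positions in an image.

    model_jets:    list of model feature vectors, one per node
    model_edges:   list of (i, j, expected_vec) giving the expected
                   displacement from node i to node j in the model graph
    image_jets_at: callable mapping an (x, y) position to a jet extracted
                   from the image at that position
    positions:     array of shape (n_nodes, 2) with trial node positions
    lam:           weight of the geometric deformation penalty
    """
    # Node term: how well image features at each trial position match the labels.
    node_score = sum(
        jet_similarity(model_jets[i], image_jets_at(positions[i]))
        for i in range(len(model_jets))
    )
    # Edge term: how much the placed graph is distorted relative to the model.
    deform = sum(
        np.linalg.norm((positions[j] - positions[i]) - expected_vec) ** 2
        for i, j, expected_vec in model_edges
    )
    return -node_score + lam * deform  # lower cost = better match
```

The paper's contribution, per the abstract, is to carry this labeled-graph framework over to moving images while keeping the graphs sparse, so motion information can inform per-frame recognition without a blow-up in computation.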