Learning deep features for image emotion classification

2015 
Images can both express and affect people's emotions. It is intriguing and important to understand what emotions are conveyed and how they are implied by the visual content of images. Inspired by the recent success of deep convolutional neural networks (CNN) in visual recognition, we explore two simple yet effective deep learning-based methods for image emotion analysis. The first method uses off-the-shelf CNN features directly for classification. For the second method, we first fine-tune a CNN that is pre-trained on a large dataset, i.e. ImageNet, on our target dataset. We then extract features with the fine-tuned CNN at different locations at multiple levels to capture both global and local information. The features at different locations are aggregated using the Fisher Vector for each level and concatenated to form a compact representation. Our experimental results show that both deep learning-based methods outperform traditional methods based on generic image descriptors and hand-crafted features.
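To make the first, off-the-shelf approach concrete, the sketch below extracts fixed deep features from an ImageNet pre-trained CNN and trains a linear classifier on top of them. It is a minimal illustration, not the paper's implementation: torchvision's AlexNet stands in for the original network, scikit-learn's LinearSVC for the classifier, and the dataset paths and emotion labels are placeholders.

```python
# Hypothetical sketch: off-the-shelf CNN features + linear classifier for emotion recognition.
# Assumptions (not from the paper): torchvision's pre-trained AlexNet as the feature extractor,
# scikit-learn's LinearSVC as the classifier, and caller-supplied image paths and labels.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.svm import LinearSVC

# Load a CNN pre-trained on ImageNet and drop its final classification layer,
# so the penultimate 4096-d activations serve as generic "off-the-shelf" features.
cnn = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
cnn.classifier = torch.nn.Sequential(*list(cnn.classifier.children())[:-1])
cnn.eval()

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_feature(image_path: str) -> torch.Tensor:
    """Return a fixed deep feature vector for a single image."""
    img = Image.open(image_path).convert("RGB")
    with torch.no_grad():
        feat = cnn(preprocess(img).unsqueeze(0))
    return feat.squeeze(0)

def train_emotion_classifier(train_paths, train_labels):
    """Train a linear classifier on off-the-shelf CNN features.

    `train_paths` and `train_labels` are placeholders for an emotion dataset
    (e.g. labels such as "amusement" or "fear").
    """
    X = torch.stack([extract_feature(p) for p in train_paths]).numpy()
    clf = LinearSVC()  # linear classifier on top of fixed deep features
    clf.fit(X, train_labels)
    return clf
```

The second method described in the abstract would replace the fixed extractor with a fine-tuned CNN, pool features from multiple locations and levels with Fisher Vectors, and concatenate the per-level encodings before classification.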