Manga-MMTL: Multimodal Multitask Transfer Learning for Manga Character Analysis.

2021 
In this paper, we introduce a new pipeline to learn manga character features from both the visual and the verbal information contained in manga images. Combining these sources of information is crucial for deeper comic book image understanding. However, learning feature representations from multiple modalities is not straightforward. We propose a multitask multimodal approach for effectively learning features from joint multimodal signals. To better leverage the verbal information, our method learns to memorize the content of manga albums by additionally training on an album classification task. The experiments are carried out on the public Manga109 dataset, which contains annotations for characters, text blocks, frames, and album metadata. We show that the manga character features learnt by the proposed method outperform all existing single-modal methods on two manga character analysis tasks.
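As a rough illustration of the approach described above, the sketch below shows one plausible way to fuse a visual branch and a verbal branch and train them jointly on character identification plus an auxiliary album classification task. This is not the authors' released code: the module choices, dimensions, and the loss weight `alpha` are all assumptions made for illustration.

```python
# Hypothetical sketch of a multimodal multitask model in the spirit of
# Manga-MMTL: visual + verbal encoders, a fused representation, and two
# task heads (character ID and auxiliary album classification).
# All architectural details here are illustrative assumptions.
import torch
import torch.nn as nn


class MangaMMTL(nn.Module):
    def __init__(self, text_vocab=30000, embed_dim=256,
                 num_characters=100, num_albums=109, alpha=0.5):
        super().__init__()
        # Visual branch: a small placeholder CNN over character crops.
        self.visual_encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )
        # Verbal branch: averaged word embeddings of nearby text blocks.
        self.text_embedding = nn.EmbeddingBag(text_vocab, embed_dim, mode="mean")
        # Fusion of the two modalities into one joint feature vector.
        self.fusion = nn.Sequential(nn.Linear(2 * embed_dim, embed_dim), nn.ReLU())
        # Main task head (character identity) and auxiliary head (album).
        self.character_head = nn.Linear(embed_dim, num_characters)
        self.album_head = nn.Linear(embed_dim, num_albums)
        self.alpha = alpha  # weight of the auxiliary album loss (assumed)

    def forward(self, image, text_ids, text_offsets):
        v = self.visual_encoder(image)
        t = self.text_embedding(text_ids, text_offsets)
        z = self.fusion(torch.cat([v, t], dim=1))
        return self.character_head(z), self.album_head(z)


def mmtl_loss(model, image, text_ids, text_offsets, char_labels, album_labels):
    # Joint multitask objective: character loss + weighted album loss.
    char_logits, album_logits = model(image, text_ids, text_offsets)
    ce = nn.functional.cross_entropy
    return ce(char_logits, char_labels) + model.alpha * ce(album_logits, album_labels)
```

In this reading, the auxiliary album head forces the fused representation to retain album-level (verbal) context, which is then discarded at inference time while the learned character features are reused for downstream character analysis tasks.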