A Deep Collaborative Framework for Face Photo-Sketch Synthesis

2019 
Great breakthroughs have been made in the accuracy and speed of face photo–sketch synthesis in recent years. Regression-based methods, which benefit from deeper and faster end-to-end convolutional neural networks, have gained increasing attention. However, most of these models formulate the mapping from the photo domain $X$ to the sketch domain $Y$ as a unidirectional feedforward mapping, $G: X \to Y$, and vice versa, $F: Y \to X$; thus, they fail to exploit the mutual interaction between the two opposite mappings. Therefore, we propose a collaborative framework for face photo–sketch synthesis. The concept behind our model is that a middle latent domain $\widetilde{Z}$ between the photo domain $X$ and the sketch domain $Y$ can be learned jointly with $G: X \to Y$ and $F: Y \to X$ by introducing a collaborative loss that makes full use of the two opposite mappings. This strategy constrains the two mappings and makes them more symmetrical, rendering the network better suited to the photo–sketch synthesis task and yielding higher-quality generated images. Qualitative and quantitative experiments demonstrate the superior performance of our model in comparison with existing state-of-the-art solutions.
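
To make the collaborative idea concrete, the sketch below illustrates one plausible reading of it: two opposite encoder–decoder mappings whose intermediate latent codes are pulled toward a shared middle domain $\widetilde{Z}$ by a collaborative loss term. This is a minimal illustration, not the authors' implementation; the network sizes, the L1 form of the losses, and the weight `lambda_collab` are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as Fn


class EncoderDecoder(nn.Module):
    """Toy encoder-decoder; returns the generated image and its latent code."""

    def __init__(self, in_ch=3, out_ch=3, latent_ch=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, latent_ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(latent_ch, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, out_ch, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        z = self.encoder(x)            # latent code, interpreted as living in the middle domain
        return self.decoder(z), z


def collaborative_loss(z_g, z_f):
    """Pull the latents of the two opposite mappings toward each other (shared Z~)."""
    return Fn.l1_loss(z_g, z_f)


# Hypothetical training step on a paired (photo, sketch) batch.
G = EncoderDecoder()      # photo  -> sketch
F_net = EncoderDecoder()  # sketch -> photo
opt = torch.optim.Adam(list(G.parameters()) + list(F_net.parameters()), lr=2e-4)
lambda_collab = 1.0       # assumed weight of the collaborative term

photo = torch.randn(4, 3, 64, 64)     # placeholder batch
sketch = torch.randn(4, 3, 64, 64)

fake_sketch, z_g = G(photo)
fake_photo, z_f = F_net(sketch)

loss = (Fn.l1_loss(fake_sketch, sketch)                  # synthesis loss for G
        + Fn.l1_loss(fake_photo, photo)                  # synthesis loss for F
        + lambda_collab * collaborative_loss(z_g, z_f))  # collaborative term coupling G and F
opt.zero_grad()
loss.backward()
opt.step()
```

Coupling the two latents in this way is what makes the opposite mappings interact during training, in contrast to training $G$ and $F$ as independent feedforward models.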