Font2Fonts: A modified Image-to-Image translation framework for font generation

2020 
Generating a font from scratch requires font-domain expertise; in addition, it is a labor-intensive and time-consuming task. With the remarkable success of deep learning methods for image synthesis, many researchers are focusing on generating fonts with these methods. To apply deep learning to font generation, however, language-specific font image datasets must be prepared manually, which is cumbersome and time-consuming. Moreover, existing supervised image-to-image translation methods such as pix2pix perform only one-to-one domain translation, so they cannot be applied to font generation, which is a multi-domain task. In this paper, we propose Font2Fonts, a conditional generative adversarial network (GAN) for font synthesis in a supervised setting. Unlike pix2pix, which can translate only from one font domain to another, Font2Fonts is a multi-domain translation model. The proposed method synthesizes high-quality, diverse font images using a single end-to-end network. Through qualitative and quantitative experiments, we verify the effectiveness of the proposed model. We also propose a Unicode-based module for automatically generating font image datasets; this method can easily be applied to prepare font datasets for characters of various languages.
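The abstract does not spell out how a single end-to-end network covers multiple target fonts. One common way to make a pix2pix-style generator multi-domain is to condition it on the target font identity; the sketch below uses StarGAN-style label broadcasting in PyTorch as an illustrative assumption, not necessarily the mechanism Font2Fonts actually uses.

```python
# Sketch of multi-domain conditioning for a pix2pix-style generator.
# Assumptions (not from this abstract): PyTorch, StarGAN-style one-hot
# label broadcasting; the real Font2Fonts architecture may differ.
import torch
import torch.nn as nn

class MultiDomainGenerator(nn.Module):
    """Maps (source glyph image, target font id) -> glyph in the target font."""
    def __init__(self, num_fonts, base_ch=64):
        super().__init__()
        self.num_fonts = num_fonts
        # The one-hot font id is concatenated to the image as extra channels,
        # so a single set of weights serves every target font domain.
        self.net = nn.Sequential(
            nn.Conv2d(1 + num_fonts, base_ch, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(base_ch, base_ch * 2, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base_ch * 2, base_ch, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base_ch, 1, 4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, x, font_id):
        b, _, h, w = x.shape
        # One-hot encode the target font and broadcast over spatial dims.
        onehot = torch.zeros(b, self.num_fonts, device=x.device)
        onehot.scatter_(1, font_id.unsqueeze(1), 1.0)
        cond = onehot[:, :, None, None].expand(b, self.num_fonts, h, w)
        return self.net(torch.cat([x, cond], dim=1))

# Usage: the same network translates to any of the font domains,
# selected purely by the conditioning channels.
# g = MultiDomainGenerator(num_fonts=10)
# fake = g(torch.randn(2, 1, 64, 64), torch.tensor([3, 7]))
```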
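Likewise, the Unicode-based dataset module is named but not specified here. A minimal sketch of the idea, assuming Pillow for rasterization and .ttf font files (all file names, sizes, and helper names below are illustrative), could look like this:

```python
# Minimal sketch of a Unicode-based font image dataset generator.
# Assumptions (not from the paper): Pillow renders the glyphs, fonts are
# .ttf files, and each glyph becomes a 128x128 grayscale PNG.
from pathlib import Path
from PIL import Image, ImageDraw, ImageFont

def render_glyphs(font_path, codepoints, out_dir, size=128):
    """Render each Unicode code point in `codepoints` to a PNG image."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    font = ImageFont.truetype(str(font_path), size=int(size * 0.75))
    for cp in codepoints:
        char = chr(cp)
        img = Image.new("L", (size, size), color=255)  # white canvas
        draw = ImageDraw.Draw(img)
        # Center the glyph using its rendered bounding box.
        left, top, right, bottom = draw.textbbox((0, 0), char, font=font)
        x = (size - (right - left)) / 2 - left
        y = (size - (bottom - top)) / 2 - top
        draw.text((x, y), char, fill=0, font=font)
        img.save(out / f"U+{cp:04X}.png")

# Example: Hangul syllables occupy U+AC00..U+D7A3, Basic Latin capitals
# U+0041..U+005A, so the same loop covers different languages by changing
# the code-point range.
# render_glyphs("fonts/SomeFont.ttf", range(0xAC00, 0xAC10), "data/SomeFont")
```

Running such a loop over several font files yields images of the same code point rendered in different fonts, which is the kind of paired, multi-domain data a supervised translation model needs.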