Object-Centric Street Scene Synthesis with Generative Adversarial Networks

2020 
We present a new method to synthesise street scene images from instance information using a conditional Generative Adversarial Network (GAN). Conditional GANs have enabled a variety of applications, but they remain limited to lower resolutions. The aim of this paper is twofold: to synthesise realistic street scenes with a focus on objects, and to increase the output resolution using a divide-and-conquer principle. The intended application is to improve the performance of an object detection network by augmenting its training data. The idea is that our network generates objects from their instance information that are realistic enough to be used for training the object detection network. This makes data augmentation cheaper, since gathering real training pairs is fairly expensive. Higher-resolution images become possible with our proposed network design, which follows a divide-and-conquer principle: first, each object in the scene is generated separately by a class-specific network; then, the background is generated by a background-specific network. This divides the complex scene-synthesis task into simpler sub-tasks.
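To make the divide-and-conquer composition concrete, the sketch below shows one plausible way to combine class-specific object generators with a background generator and paste the generated objects onto the background using their instance masks. This is a minimal illustration in PyTorch under our own assumptions; the generator architectures, the class names ("car", "person"), and the compose_scene helper are illustrative and not the authors' exact implementation.

# Minimal sketch (assumptions: PyTorch; architectures and names are illustrative).
import torch
import torch.nn as nn

class ObjectGenerator(nn.Module):
    """Class-specific generator: instance mask (1 channel) + noise -> RGB object."""
    def __init__(self, noise_dim=64):
        super().__init__()
        self.noise_dim = noise_dim
        self.net = nn.Sequential(
            nn.Conv2d(1 + noise_dim, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, mask):
        b, _, h, w = mask.shape
        z = torch.randn(b, self.noise_dim, h, w, device=mask.device)
        return self.net(torch.cat([mask, z], dim=1))

class BackgroundGenerator(nn.Module):
    """Background-specific generator: semantic layout (C classes) -> RGB background."""
    def __init__(self, num_classes=19):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, layout):
        return self.net(layout)

def compose_scene(bg_gen, obj_gens, layout, instances):
    """Divide-and-conquer composition: generate background, then paste each
    separately generated object back into the scene using its instance mask.

    instances: list of (class_name, mask) pairs, mask shaped (B, 1, H, W)."""
    scene = bg_gen(layout)
    for cls, mask in instances:
        obj = obj_gens[cls](mask)
        scene = mask * obj + (1 - mask) * scene  # composite object over background
    return scene

# Usage with toy sizes: one car instance pasted onto a generated background.
layout = torch.zeros(1, 19, 128, 256)
car_mask = torch.zeros(1, 1, 128, 256)
car_mask[:, :, 60:100, 100:180] = 1.0
obj_gens = {"car": ObjectGenerator(), "person": ObjectGenerator()}
scene = compose_scene(BackgroundGenerator(), obj_gens, layout, [("car", car_mask)])
print(scene.shape)  # torch.Size([1, 3, 128, 256])

Because each object class is handled by its own generator and the background by another, each sub-network only has to solve a simpler synthesis task, which is what allows the overall scene resolution to be increased.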