Neural Networks for End-to-End Refinement of Simulated Sensor Data for Automotive Applications

2019 
The rising use of Artificial Intelligence (AI) for Advanced Driver Assistance Systems (ADAS) and Autonomous Vehicles (AVs) creates a need for comprehensive testing, verification, and validation. This is hardly achievable in real test drives alone, so validation with simulated sensors in Virtual Testbeds (VTBs) is becoming a popular supplement. To reduce the gap between simulation and reality, Digital Twins of real sensors need to generate data that is as realistic as possible. Instead of classical methods such as rasterization or ray tracing, a novel approach based on neural networks is developed and evaluated. Based on the concept of Generative Adversarial Networks (GANs), a classification network is trained to distinguish real from simulated images. At the same time, the classifier is used as a critic to improve a generation network that refines simulated sensor images to look more realistic. This contribution gives an overview of recent research in image-to-image translation with GANs and suggests a framework to generate more realistic sensor images for, but not limited to, automotive applications. State-of-the-art image-to-image translation architectures are evaluated, and several methods are suggested to deal with their drawbacks and shortcomings. An evaluation metric that reflects the subjective assessment of a more realistic color distribution in the refined sensor images is introduced. Finally, the potential of the novel approach to be used in VTBs is analyzed and discussed.
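The critic/refiner scheme the abstract describes can be illustrated with a deliberately tiny sketch. This is not the paper's architecture: here the "refiner" is reduced to a learnable per-channel color shift, the "critic" to a logistic classifier on mean RGB values, and all colors, learning rates, and step counts are invented for illustration. It only shows the two roles in the GAN-style setup: the critic learns to separate real from simulated features, then the refiner is updated to fool the frozen critic.

```python
# Toy sketch of the critic/refiner idea (NOT the paper's networks).
# Assumptions: images are summarized by their mean RGB color; the refiner
# is a learnable per-channel shift b; the critic is logistic regression.
import math

REAL = [0.6, 0.5, 0.4]  # assumed mean RGB of "real" camera images
SIM  = [0.3, 0.3, 0.3]  # assumed mean RGB of "simulated" images

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

w = [0.0, 0.0, 0.0]  # critic weights
c = 0.0              # critic bias
b = [0.0, 0.0, 0.0]  # refiner: learnable per-channel color shift

def D(f):
    """Critic: probability that feature vector f comes from a real image."""
    return sigmoid(sum(wi * fi for wi, fi in zip(w, f)) + c)

def refine(f):
    """Refiner G: shift each color channel of a simulated feature."""
    return [fi + bi for fi, bi in zip(f, b)]

# --- critic phase: learn to separate real from (refined) simulated ---
for _ in range(2000):
    for f, y in ((REAL, 1.0), (refine(SIM), 0.0)):
        err = D(f) - y                  # cross-entropy gradient
        for i in range(3):
            w[i] -= 0.5 * err * f[i]
        c -= 0.5 * err

# --- refiner phase: update G to fool the (here frozen) critic ---
for _ in range(200):
    g = 1.0 - D(refine(SIM))            # non-saturating GAN gradient
    for i in range(3):
        b[i] += 0.05 * g * w[i]

# The critic now scores the refined simulated color far higher than the
# raw simulated one, i.e. the refiner has pushed it toward "real".
print(round(D(REAL), 2), round(D(SIM), 2), round(D(refine(SIM)), 2))
```

In actual GAN training these two phases are alternated batch by batch rather than run once each, and both networks are deep convolutional models operating on full images rather than mean colors.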