Deep Fusion-based Visible and Thermal Camera Forecasting using Seq2Seq GAN

2021 
Forecasting, or estimating, future scenes is an important task in autonomous driving: the future scene is predicted from the current and past scenes, typically as either visible camera images or semantic maps, and the forecast can then be used for planning, navigation, and control. However, scene forecasts in the form of visible images alone are susceptible to varying illumination, changing environmental conditions, and adverse weather. In our work, we address this limitation with a novel deep learning-based visible and thermal camera forecasting algorithm, termed Seq2Seq. Seq2Seq is a conditional GAN framework that forecasts both the visible and the thermal image from current and past visible-thermal camera image pairs. The generator is a deep sensor-fusion model built on an encoder-decoder architecture with a convolutional LSTM branch; the discriminator is likewise a deep sensor-fusion model, based on the PatchGAN architecture. Seq2Seq is validated on the public KAIST dataset, and the results show that the proposed framework accurately forecasts future visible and thermal images. Moreover, we demonstrate an application of Seq2Seq by performing semantic segmentation on the forecast visible-thermal images using MFNet.
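The abstract does not give the exact layer configuration, so the following PyTorch sketch illustrates one plausible reading of the described generator: two modality-specific encoders, feature-level fusion, a convolutional LSTM rolled over the fused sequence, and a decoder with separate visible and thermal output heads. All module names, channel widths, and layer counts here are illustrative assumptions, not the paper's implementation.

# Minimal sketch of a fusion-based forecasting generator, assuming
# feature-level fusion of the two encoder streams; the real Seq2Seq
# generator's depths and widths are not specified in the abstract.
import torch
import torch.nn as nn


class ConvLSTMCell(nn.Module):
    """Single convolutional LSTM cell (Shi et al., 2015 style)."""

    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.hid_ch = hid_ch
        # One convolution produces all four gates at once.
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        i, f, o, g = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o), torch.tanh(g)
        c = f * c + i * g
        h = o * torch.tanh(c)
        return h, c

    def init_state(self, batch, size, device):
        zeros = torch.zeros(batch, self.hid_ch, *size, device=device)
        return zeros, zeros.clone()


def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 4, stride=2, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(0.2, inplace=True),
    )


class FusionForecastGenerator(nn.Module):
    """Encoder-decoder generator with per-modality encoders and a ConvLSTM branch."""

    def __init__(self, base=32, hid=128):
        super().__init__()
        # Separate encoders for the RGB (3-channel) and thermal (1-channel) streams.
        self.enc_vis = nn.Sequential(conv_block(3, base), conv_block(base, base * 2))
        self.enc_thr = nn.Sequential(conv_block(1, base), conv_block(base, base * 2))
        self.lstm = ConvLSTMCell(base * 4, hid)  # runs over the fused features
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(hid, base * 2, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        self.head_vis = nn.Conv2d(base, 3, 3, padding=1)  # forecast RGB frame
        self.head_thr = nn.Conv2d(base, 1, 3, padding=1)  # forecast thermal frame

    def forward(self, vis_seq, thr_seq):
        # vis_seq: (B, T, 3, H, W); thr_seq: (B, T, 1, H, W)
        B, T, _, H, W = vis_seq.shape
        state = self.lstm.init_state(B, (H // 4, W // 4), vis_seq.device)
        for t in range(T):  # roll the ConvLSTM over the fused feature sequence
            fused = torch.cat(
                [self.enc_vis(vis_seq[:, t]), self.enc_thr(thr_seq[:, t])], dim=1
            )
            state = self.lstm(fused, state)
        feat = self.dec(state[0])  # decode the final hidden state
        return torch.tanh(self.head_vis(feat)), torch.tanh(self.head_thr(feat))


# Usage: forecast the next visible-thermal pair from 4 past frames.
gen = FusionForecastGenerator()
vis = torch.randn(2, 4, 3, 64, 64)
thr = torch.randn(2, 4, 1, 64, 64)
next_vis, next_thr = gen(vis, thr)
print(next_vis.shape, next_thr.shape)  # (2, 3, 64, 64) and (2, 1, 64, 64)

In a conditional GAN training loop, a PatchGAN-style discriminator would receive the concatenated visible-thermal pair (real or forecast) together with the conditioning frames and score overlapping patches rather than the whole image; that discriminator is omitted here for brevity.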