Transfer Learning in Autonomous Driving Using Real-World Samples.

2021 
The Sim2Real gap is a topic that has been receiving a great deal of attention lately. Many Artificial Intelligence techniques, for example Reinforcement Learning, require millions of iterations to achieve satisfactory performance. This requirement often forces these techniques to train solely in simulation. If the gap between the simulated environment and the target environment is too wide, however, the trained agents lose performance when deployed. Bridging this gap reduces the performance loss during deployment, in turn improving the effectiveness of these agents. This paper proposes a new technique to tackle this issue. The technique focuses on the use of demonstration samples gathered in the target environment and builds on two transfer learning fundamentals, Domain Randomization and Domain Adaptation. By combining the advantages of both, agents are able to transfer training performance to the target environment more successfully. Experimental results show a strong decrease in performance loss during deployment when the agent is exposed to the demonstration samples during training. The proposed technique describes a methodology that we believe can be applied in fields other than autonomous driving in order to improve transfer learning performance.
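The abstract does not spell out the training mechanics, but one plausible reading is that real-world demonstration transitions are mixed into the agent's training data alongside domain-randomized simulation rollouts. The minimal Python sketch below illustrates that idea under those assumptions; the names (`MixedReplayBuffer`, `randomize_domain`, `demo_ratio`) and the specific randomized parameters are illustrative, not taken from the paper.

```python
import random
from collections import deque
from dataclasses import dataclass

import numpy as np


@dataclass
class Transition:
    state: np.ndarray
    action: int
    reward: float
    next_state: np.ndarray
    done: bool


class MixedReplayBuffer:
    """Replay buffer that mixes domain-randomized simulation rollouts with a
    fixed set of real-world demonstration transitions (hypothetical sketch)."""

    def __init__(self, capacity: int, demos: list, demo_ratio: float = 0.25):
        self.sim_buffer = deque(maxlen=capacity)  # filled online from simulation
        self.demos = demos                        # target-environment demonstrations
        self.demo_ratio = demo_ratio              # fraction of each batch drawn from demos

    def add_sim(self, transition: Transition) -> None:
        self.sim_buffer.append(transition)

    def sample(self, batch_size: int) -> list:
        # Each batch exposes the learner to both randomized simulation data
        # and real target-environment data.
        n_demo = int(batch_size * self.demo_ratio)
        n_sim = batch_size - n_demo
        batch = random.sample(self.demos, min(n_demo, len(self.demos)))
        batch += random.sample(list(self.sim_buffer), min(n_sim, len(self.sim_buffer)))
        random.shuffle(batch)
        return batch


def randomize_domain(base_params: dict, rng: np.random.Generator) -> dict:
    """Domain randomization: perturb simulator parameters each episode so the
    policy cannot overfit to a single simulated appearance/dynamics setting."""
    return {
        "friction": base_params["friction"] * rng.uniform(0.8, 1.2),
        "lighting": base_params["lighting"] + rng.uniform(-0.3, 0.3),
        "camera_noise": rng.uniform(0.0, 0.05),
    }
```

In such a setup, each episode would be rolled out in a freshly randomized simulator while every gradient update samples a mixed batch, so the policy is regularized by randomization and simultaneously anchored to the target domain by the demonstrations; the actual mechanism used in the paper may differ.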