End-to-End Deep Learning Applied in Autonomous Navigation using Multi-Cameras System with RGB and Depth Images.

2019 
The present work demonstrates how the response of an autonomous navigation system based on 'End-to-End' deep learning principles is directly affected by the configuration of its input images. A methodology was developed to work with RGB and depth images, obtained through a Microsoft Kinect V2 sensor. Three cameras were used in the experiment, and their images were concatenated or grouped to generate new and different input configurations for the vision system. To develop the presented methodology, two support and validation systems were implemented. Through computer simulation, it was possible to test the first approaches and select the most important ones. To validate the proposed methodology and solutions in real-world situations, a 1/4-scale automotive vehicle was prototyped. Finally, the experiments show the importance of multi-camera systems for better performance of autonomous navigation systems based on the End-to-End learning approach, achieving an average error of 2.41 degrees in the best configuration tested, with three RGB cameras.
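The concatenation of frames from the three cameras can be sketched as follows. This is a minimal illustration, not the paper's implementation: the frame resolution and the two grouping strategies (side-by-side vs. channel stacking) are assumptions, since the abstract only states that camera images were "concatenated or grouped".

```python
import numpy as np

# Hypothetical frame size; the paper does not specify the input resolution.
H, W = 120, 160

# Simulated frames from three RGB cameras (e.g. left, center, right).
left = np.zeros((H, W, 3), dtype=np.uint8)
center = np.zeros((H, W, 3), dtype=np.uint8)
right = np.zeros((H, W, 3), dtype=np.uint8)

# One possible grouping: place frames side by side along the width axis.
wide_input = np.concatenate([left, center, right], axis=1)  # shape (H, 3*W, 3)

# Another possible grouping: stack frames along the channel axis.
deep_input = np.concatenate([left, center, right], axis=2)  # shape (H, W, 9)

print(wide_input.shape, deep_input.shape)
```

Either tensor can then be fed to a convolutional network that regresses the steering angle; a depth frame from the Kinect V2 could be appended as an extra channel in the same way.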