An Experimental Study of the Tracking Ability of a Deep Q-Network under Multi-Objective Behaviour Using a Mobile Robot with LiDAR

2021 
Reinforcement Learning (RL) has long attracted attention because it can be applied to real robots relatively easily. In Q-Learning, one of the classic RL methods, a Q-table over a discretized (grid) environment is maintained and updated, so a very large Q-table is required to express continuous states such as the smooth movements of a robot arm. Moreover, when the numbers of states and actions are large, the computation cannot be performed in real time. The Deep Q-Network (DQN), on the other hand, uses a convolutional neural network to estimate the Q-value itself and thus obtains an approximate function of the Q-value. Because this computation is insensitive to the number of discrete states, the method has attracted attention in recent years. However, DQN appears to have inherited Q-Learning's weakness at multitasking and at following a moving goal point. In this paper, the authors attempt to improve the multi-objective execution of DQN by dynamically changing the exploration ratio, known as epsilon. As a verification experiment in a real environment, the improved DQN was applied to a mobile crawler equipped with an NVIDIA Jetson NX and a 2D LiDAR to verify its object tracking ability against a moving target position. As a result, the authors confirmed that the proposed method improves on this weak point.
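
The abstract does not give the authors' actual epsilon schedule; the following is a minimal Python sketch of epsilon-greedy action selection with a dynamically adjusted epsilon, assuming a standard exponential decay that is boosted whenever the target moves. All names and parameter values (eps_start, eps_min, decay, eps_boost) are hypothetical illustrations, not the paper's implementation.

    import random

    def select_action(q_values, epsilon):
        # Epsilon-greedy: take a random action with probability epsilon,
        # otherwise exploit the action with the highest estimated Q-value.
        if random.random() < epsilon:
            return random.randrange(len(q_values))
        return max(range(len(q_values)), key=lambda a: q_values[a])

    def dynamic_epsilon(step, eps_start=1.0, eps_min=0.05, decay=0.99,
                        goal_moved=False, eps_boost=0.3):
        # Hypothetical schedule: decay epsilon over time, but raise it
        # again when the target position changes so the agent re-explores.
        eps = max(eps_min, eps_start * decay ** step)
        return max(eps, eps_boost) if goal_moved else eps

    # Toy usage: Q-values for four discrete crawler actions
    # (e.g. forward, backward, turn left, turn right).
    q_values = [0.10, 0.70, 0.30, 0.20]
    for step in (0, 100, 200):
        eps = dynamic_epsilon(step, goal_moved=(step == 200))
        action = select_action(q_values, eps)
        print("step=%d epsilon=%.3f action=%d" % (step, eps, action))

The design idea is that a purely decaying epsilon converges toward exploitation of a fixed goal, whereas re-raising epsilon on goal movement lets the agent rediscover the new target, which matches the tracking behaviour the paper aims to improve.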