An Improved Q-Learning Algorithm for Path Planning in Maze Environments

2021 
Path planning is the problem of finding an optimal path through a given environment, and it has become an important benchmark for intelligent learning algorithms. In AI-based path planning, the earliest and most deeply studied issue is intelligent obstacle avoidance: an agent must successfully avoid all obstacles or traps in an unknown environment. Compared with other learning methods, reinforcement learning (RL) has inherent advantages for path planning. Unlike most machine learning methods, RL is an active learning method that requires no labeled supervision. It can not only achieve effective obstacle avoidance, but also find the optimal path through an unfamiliar environment, such as a maze, after repeated trials. Q-Learning is recognized as one of the most typical RL algorithms. It is simple and practical, but it suffers from the significant disadvantage of slow convergence. This paper proposes an algorithm called ε-Q-Learning, which improves the traditional Q-Learning algorithm by introducing a Dynamic Search Factor technique. Experiments show that, compared with existing Q-Learning algorithms, ε-Q-Learning finds better optimal paths at a lower search cost.
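To make the idea concrete, the following is a minimal sketch of tabular Q-learning on a small grid maze with a search factor ε that decays over episodes (exploring heavily at first, then exploiting the learned values). It illustrates the general mechanism the abstract describes; the maze, reward values, and decay schedule are illustrative assumptions, not the paper's exact ε-Q-Learning method.

```python
import random

# Hypothetical 4x4 maze: 0 = free cell, 1 = obstacle.
MAZE = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [1, 0, 0, 0],
]
START, GOAL = (0, 0), (3, 3)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, action):
    """Deterministic maze transition with a simple reward scheme."""
    r, c = state[0] + action[0], state[1] + action[1]
    if not (0 <= r < 4 and 0 <= c < 4) or MAZE[r][c] == 1:
        return state, -1.0          # bump into wall/obstacle: stay, penalty
    if (r, c) == GOAL:
        return (r, c), 10.0         # reached the goal
    return (r, c), -0.1             # ordinary move cost

def train(episodes=500, alpha=0.5, gamma=0.9, seed=0):
    random.seed(seed)
    Q = {(r, c): [0.0] * 4 for r in range(4) for c in range(4)}
    for ep in range(episodes):
        # "Dynamic" search factor: decays linearly, floored at 0.05.
        eps = max(0.05, 1.0 - ep / episodes)
        s = START
        for _ in range(100):
            if random.random() < eps:
                a = random.randrange(4)                       # explore
            else:
                a = max(range(4), key=lambda i: Q[s][i])      # exploit
            s2, reward = step(s, ACTIONS[a])
            # Standard Q-learning update.
            Q[s][a] += alpha * (reward + gamma * max(Q[s2]) - Q[s][a])
            s = s2
            if s == GOAL:
                break
    return Q

def greedy_path(Q, limit=20):
    """Follow the learned greedy policy from START toward GOAL."""
    s, path = START, [START]
    while s != GOAL and len(path) < limit:
        a = max(range(4), key=lambda i: Q[s][i])
        s, _ = step(s, ACTIONS[a])
        path.append(s)
    return path
```

After training, `greedy_path(train())` recovers a shortest obstacle-free route of six moves through this maze; tuning how fast ε decays trades exploration cost against convergence speed, which is the trade-off the paper's Dynamic Search Factor targets.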