Effect of Initial Conditioning of Reinforcement Learning Agents on Feedback Control Tasks over Continuous State and Action Spaces

2014 
Reinforcement Learning (RL) methods have been proposed as an alternative approach to feedback control problems. These algorithms require little input from the system designer and can adapt their behavior to the dynamics of the system. Nevertheless, one of the issues when tackling a feedback control task with continuous state and action spaces from scratch is the enormous amount of interaction with the system required for the agent to learn an acceptable policy. In this paper, we empirically measure the performance gain achieved by performing a conditioning training phase in which the agents learn from randomly set PID controllers, on two feedback control problems: the speed control of an underwater vehicle and the pitch control of an airplane.
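The conditioning phase described in the abstract could be realised, for example, by rolling out randomly tuned PID controllers on the plant and letting the agent pre-train on the recorded transitions before interacting on its own. The Python sketch below is illustrative only: the `env` interface, the gain ranges, and the `collect_conditioning_data` helper are assumptions for exposition, not the authors' implementation.

```python
import numpy as np


class PIDController:
    """Textbook PID controller with fixed gains and sample time."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def act(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def collect_conditioning_data(env, setpoint, episodes=50, steps=200, dt=0.05):
    """Roll out randomly set PID controllers and record (s, a, r, s') tuples
    that an RL agent can pre-train on before learning from its own actions.

    `env` is assumed to expose reset() -> state and step(action) ->
    (next_state, reward, done); this interface is hypothetical.
    """
    transitions = []
    for _ in range(episodes):
        # Random gains: the paper conditions the agent on randomly set PIDs,
        # so each episode uses a freshly sampled controller.
        pid = PIDController(kp=np.random.uniform(0.1, 5.0),
                            ki=np.random.uniform(0.0, 1.0),
                            kd=np.random.uniform(0.0, 1.0),
                            dt=dt)
        state = env.reset()
        for _ in range(steps):
            action = pid.act(setpoint, measurement=state[0])
            next_state, reward, done = env.step(action)
            transitions.append((state, action, reward, next_state))
            state = next_state
            if done:
                break
    return transitions
```

Under these assumptions, the resulting transition set plays the role of demonstration data: the agent's value function or policy is fitted to it first, and ordinary online RL then continues from that conditioned starting point.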