An Empirical Study of Actor-Critic Methods for Feedback Controllers of Ball-Screw Drivers
2013
In this paper we study the use of Reinforcement Learning Actor-Critic methods to learn the control of a ball-screw feed drive. We have tested three different actors: Q-value-based, Policy Gradient, and CACLA actors. We have paid special attention to the sensitivity to suboptimal learning-gain tuning. As a benchmark, we have used randomly-initialized PID controllers. CACLA provides a stable control comparable to the best heuristically tuned PID controller, despite its lack of knowledge of the actual error value.
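To make the CACLA approach concrete, below is a minimal sketch of the standard CACLA update rule (van Hasselt's Continuous Actor-Critic Learning Automaton): the critic learns a value function by TD(0), and the actor is moved toward the explored action only when the temporal-difference error is positive. The linear feature map, learning rates, and noise scale are illustrative assumptions, not the paper's actual ball-screw setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features = 4
v_w = np.zeros(n_features)    # critic weights: V(s) = v_w @ phi(s)
a_w = np.zeros(n_features)    # actor weights:  a(s) = a_w @ phi(s)
alpha_v, alpha_a = 0.1, 0.05  # learning rates (assumed values)
gamma, sigma = 0.95, 0.3      # discount factor, Gaussian exploration noise

def phi(s):
    # Hypothetical feature map for a scalar state.
    return np.array([1.0, s, s**2, np.sin(s)])

def select_action(s):
    # Deterministic actor output plus Gaussian exploration.
    return a_w @ phi(s) + rng.normal(0.0, sigma)

def update(s, action, reward, s_next):
    # Critic: TD(0) update of the value-function weights.
    f = phi(s)
    td = reward + gamma * (v_w @ phi(s_next)) - v_w @ f
    v_w[:] = v_w + alpha_v * td * f
    # Actor (CACLA rule): move toward the explored action
    # only if it turned out better than expected (td > 0).
    if td > 0:
        a_w[:] = a_w + alpha_a * (action - a_w @ f) * f
    return td
```

Note the key property the abstract alludes to: the actor update uses only the sign of the TD error, not the magnitude of the tracking error itself, which is why CACLA can remain stable without knowing the actual error value.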