Q-learning with Nearest Neighbors
2018
We consider model-free reinforcement learning for infinite-horizon discounted Markov Decision Processes (MDPs) with a continuous state space and unknown transition kernel, when only a single sample path of the system, generated under an arbitrary policy, is available. We consider the Nearest Neighbor Q-Learning (NNQL) algorithm, which learns the optimal Q-function using a nearest neighbor regression method. As the main contribution, we provide a tight finite-sample analysis of the convergence rate. In particular, for MDPs with a $d$-dimensional state space and discount factor $\gamma \in (0,1)$, given an arbitrary sample path with "covering time" $L$, we establish that the algorithm is guaranteed to output an $\epsilon$-accurate estimate of the optimal Q-function using $\tilde{O}\big(L/(\epsilon^3(1-\gamma)^7)\big)$ samples. For instance, for a well-behaved MDP, the covering time of the sample path under the purely random policy scales as $\tilde{O}(1/\epsilon^d)$, so the sample complexity scales as $\tilde{O}(1/\epsilon^{d+3})$. Indeed, we establish a lower bound showing that a dependence of $\tilde{\Omega}(1/\epsilon^{d+2})$ is necessary.
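The abstract does not spell out the update rule, but the nearest-neighbor idea can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not the paper's exact NNQL: states live in $[0,1]^d$, Q-values are maintained on a fixed grid of center points, each observed transition updates only the single nearest center (the paper instead uses nearest-neighbor regression with epoch-wise aggregation), and a simple $1/n$ step size is used.

```python
import numpy as np

def nearest_center(state, centers):
    """Index of the center closest to `state` (Euclidean distance)."""
    return int(np.argmin(np.linalg.norm(centers - state, axis=1)))

def nnql_sketch(path, centers, n_actions, gamma=0.9):
    """Q-learning updates along a sample path of (s, a, r, s') tuples.

    Q is maintained only at the center points; each transition updates
    the entry of the nearest center of s, with the Bellman target
    computed at the nearest center of s'.
    """
    Q = np.zeros((len(centers), n_actions))
    counts = np.zeros((len(centers), n_actions))  # visits per (center, action)
    for s, a, r, s_next in path:
        i = nearest_center(s, centers)
        j = nearest_center(s_next, centers)
        counts[i, a] += 1
        alpha = 1.0 / counts[i, a]            # diminishing step size (assumed)
        target = r + gamma * Q[j].max()       # one-sample Bellman target
        Q[i, a] += alpha * (target - Q[i, a])
    return Q

# Toy usage on a hypothetical 1-d random-walk MDP with 2 actions.
rng = np.random.default_rng(0)
centers = np.linspace(0.0, 1.0, 11).reshape(-1, 1)  # covering grid, spacing 0.1
s = np.array([0.5])
path = []
for _ in range(10_000):
    a = int(rng.integers(2))
    drift = 0.1 if a == 1 else -0.1
    s_next = np.clip(s + drift + 0.05 * rng.standard_normal(1), 0.0, 1.0)
    path.append((s, a, float(s_next[0]), s_next))   # reward favors moving right
    s = s_next
Q = nnql_sketch(path, centers, n_actions=2)
```

With grid spacing $h$, the number of centers grows as $(1/h)^d$; this covering-number effect is what drives the $1/\epsilon^d$ factor in the covering time, and hence the $\tilde{O}(1/\epsilon^{d+3})$ sample complexity quoted above.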