Quantum agents in the Gym: a variational quantum algorithm for deep Q-learning.

2021 
Quantum machine learning (QML) has been identified as one of the key fields that could reap advantages from near-term quantum devices, next to optimization and quantum chemistry. Research in this area has focused primarily on variational quantum algorithms (VQAs), and several proposals to enhance supervised, unsupervised and reinforcement learning (RL) algorithms with VQAs have been put forward. Of the three, RL is the least studied, and it remains an open question whether VQAs can be competitive with state-of-the-art classical algorithms based on neural networks (NNs) even on simple benchmark tasks. In this work, we introduce a training method for parametrized quantum circuits (PQCs) to solve RL tasks for discrete and continuous state spaces based on the deep Q-learning algorithm. To evaluate our model, we use it to solve two benchmark environments from the OpenAI Gym, Frozen Lake and Cart Pole. We provide insight into why the performance of a VQA-based Q-learning algorithm crucially depends on the observables of the quantum model and show how to choose suitable observables based on the RL task at hand. We compare the performance of our model to that of an NN for agents that need a similar time to convergence, and find that our quantum model needs approximately one-third of the parameters of the classical model to solve the Cart Pole environment in a similar number of episodes on average. We also show how recent separation results between classical and quantum agents for policy-gradient RL can be adapted to quantum Q-learning agents, yielding a quantum speed-up for Q-learning. This work paves the way towards new ideas on how a quantum advantage may be obtained for real-world problems in the future.
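For illustration, the sketch below shows one way a PQC-based Q-function of the kind described in the abstract could look for Cart Pole, written with PennyLane and NumPy. This is a minimal, hedged example rather than the authors' exact architecture: the data re-uploading layout, the number of layers, and the choice of Pauli-Z product observables per action are assumptions made here to illustrate the abstract's point that the observables must be matched to the RL task.

# A minimal, illustrative sketch (not the authors' exact circuit) of a PQC
# Q-function for Cart Pole. Layer layout, depth, and observables are
# assumptions chosen for illustration only.
import pennylane as qml
import numpy as np

n_qubits = 4          # one qubit per Cart Pole state dimension (assumption)
n_layers = 5          # illustrative circuit depth
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def q_values(state, weights):
    """Return per-action Q-values as expectation values of chosen observables."""
    for layer in range(n_layers):
        # Re-upload the classical state into single-qubit rotations.
        for w in range(n_qubits):
            qml.RX(state[w], wires=w)
        # Trainable single-qubit rotations.
        for w in range(n_qubits):
            qml.RY(weights[layer, w, 0], wires=w)
            qml.RZ(weights[layer, w, 1], wires=w)
        # Entangling ring of CNOTs.
        for w in range(n_qubits):
            qml.CNOT(wires=[w, (w + 1) % n_qubits])
    # One observable per action (here: Z0*Z1 for "left", Z2*Z3 for "right").
    # The abstract's point: the choice and range of these observables must
    # suit the scale of the optimal Q-values for the task at hand.
    return [
        qml.expval(qml.PauliZ(0) @ qml.PauliZ(1)),
        qml.expval(qml.PauliZ(2) @ qml.PauliZ(3)),
    ]

weights = np.random.uniform(0, 2 * np.pi, size=(n_layers, n_qubits, 2))
state = np.array([0.02, -0.01, 0.03, 0.04])   # example Cart Pole observation
print(q_values(state, weights))

In a full deep Q-learning loop these expectation values would be trained against temporal-difference targets (with the observables rescaled to the Q-value range the task requires); that training loop is omitted here for brevity.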