Learning Transferable Concepts in Deep Reinforcement Learning
2021
While humans and animals learn incrementally during their lifetimes and exploit their experience to solve new tasks, standard deep reinforcement learning methods specialize in solving only one task at a time and, as a result, the information they acquire is hardly reusable in new situations. Here, we introduce a new perspective on the problem of leveraging prior knowledge to solve future unknown tasks. We show that learning discrete concept-like representations of sensory inputs can provide a high-level abstraction that is shared across multiple tasks, thus facilitating the transfer of information. In particular, we show that such representations can be learned by self-supervision, following an information-theoretic approach, and that they improve sample efficiency by providing prior policies that guide the policy learning process. Our method learns concepts in locomotion tasks that reduce the number of optimization steps in both known and unknown tasks, opening a new path toward endowing artificial agents with generalization abilities.
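The abstract does not spell out the information-theoretic objective. As an illustrative sketch only (not the paper's implementation), discrete concept-like representations can be learned by maximizing a mutual-information proxy I(Z; S) = H(Z) - H(Z|S) between sensory inputs S and a categorical concept variable Z: the encoder is pushed to use all concepts (high marginal entropy) while assigning each input decisively to one concept (low conditional entropy). All names, the toy data, and the linear encoder below are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 2-D "sensory inputs" drawn from K well-separated modes.
K, D, N = 4, 2, 400
centers = rng.normal(scale=3.0, size=(K, D))
states = np.concatenate([c + rng.normal(scale=0.3, size=(N // K, D)) for c in centers])
X = np.hstack([states, np.ones((N, 1))])  # affine features (bias column)

W = rng.normal(scale=0.1, size=(D + 1, K))  # assumed linear-softmax concept encoder

def softmax(logits):
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def mi_objective(W):
    p = softmax(X @ W)                       # q(z|s): categorical concept posterior
    pz = p.mean(axis=0)                      # q(z): marginal over the batch
    h_z = -np.sum(pz * np.log(pz + 1e-9))    # H(Z): encourages using all concepts
    h_z_s = -np.mean(np.sum(p * np.log(p + 1e-9), axis=1))  # H(Z|S): decisiveness
    return h_z - h_z_s                       # proxy for I(Z; S), bounded by log K

init = mi_objective(W)

# Crude finite-difference ascent, only to show the objective is optimizable.
eps, lr = 1e-4, 2.0
for _ in range(200):
    base = mi_objective(W)
    grad = np.zeros_like(W)
    for i in range(D + 1):
        for j in range(K):
            W2 = W.copy()
            W2[i, j] += eps
            grad[i, j] = (mi_objective(W2) - base) / eps
    W += lr * grad

final = mi_objective(W)
print(init, final)  # the objective increases toward its upper bound log(K)
```

By Jensen's inequality the proxy is non-negative and never exceeds log K, so the printed values stay in [0, log 4]; in the paper's setting the learned concept assignments would then serve as high-level abstractions shared across tasks.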
References: 76
Citations: 0