Agent-Centric Representations for Multi-Agent Reinforcement Learning
2021
Object-centric representations have recently enabled significant progress in
tackling relational reasoning tasks. By building a strong object-centric
inductive bias into neural architectures, recent efforts have improved
generalization and data efficiency of machine learning algorithms for these
problems. One problem class involving relational reasoning that still remains
under-explored is multi-agent reinforcement learning (MARL). Here we
investigate whether object-centric representations are also beneficial in the
fully cooperative MARL setting. Specifically, we study two ways of
incorporating an agent-centric inductive bias into our RL algorithm:
(1) introducing an agent-centric attention module with explicit connections
across agents; (2) adding an agent-centric unsupervised predictive objective
(i.e., one that does not use action labels), used either as an auxiliary loss
for MARL or as the basis of a pre-training step. We evaluate these approaches
on the Google Research
Football environment as well as DeepMind Lab 2D. Empirically, agent-centric
representation learning leads to the emergence of more complex cooperation
strategies between agents as well as enhanced sample efficiency and
generalization.
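The first of the two mechanisms, an attention module with explicit connections across agents, can be illustrated with a minimal sketch. The code below is a generic scaled dot-product attention over per-agent embeddings, written in NumPy; the function name, projection matrices `Wq`/`Wk`/`Wv`, and shapes are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def cross_agent_attention(agent_embs, Wq, Wk, Wv):
    """Each agent attends to every agent (itself included) and receives a
    weighted mixture of their value projections.

    agent_embs: (n_agents, d) per-agent feature vectors.
    Wq, Wk, Wv: (d, d_k) projection matrices (hypothetical parameters).
    Returns: (n_agents, d_k) agent-centric context vectors.
    """
    Q = agent_embs @ Wq
    K = agent_embs @ Wk
    V = agent_embs @ Wv
    # Scaled dot-product scores between all agent pairs: (n_agents, n_agents).
    scores = (Q @ K.T) / np.sqrt(K.shape[-1])
    # Numerically stable softmax over the "attended-to" agent axis.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V
```

In this sketch, the (n_agents, n_agents) weight matrix is what makes the bias agent-centric: relational structure between agents is computed explicitly rather than left implicit in a flat concatenation of observations.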