Multi-Actor-Attention-Critic Reinforcement Learning for Central Place Foraging Swarms
2021
A swarm of multiple agents with relatively low cost, decentralized control, and robustness can complete a foraging task more efficiently than a single advanced robot. Although many foraging algorithms for multi-robot systems are efficient, most are pre-designed or poorly adaptive to different environments, since the parameters of the foraging algorithm must be re-evolved for each new environment. In addition, designing an efficient collision-avoidance strategy for multiple agents remains a challenge. To address these issues, we apply the multi-actor-attention-critic (MAAC) reinforcement learning method to multiple foraging agents and train a foraging strategy for multiple simulated agents. We compare our approach with two existing multi-robot foraging algorithms, the Central Place Foraging Algorithm (CPFA) and the Distributed Deterministic Spiral Algorithm (DDSA). Experimental results demonstrate that our approach outperforms both algorithms. We also show that our approach performs better at avoiding obstacles and adapting to different environments.
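The core of MAAC is a centralized critic in which each agent's Q-value is computed by attending over the other agents' encoded observation-action pairs. The following is a minimal, simplified sketch of that attention-critic idea, assuming a PyTorch setting; the module structure, dimensions, and names are illustrative assumptions, not the authors' implementation (which uses multi-head attention and per-agent heads).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionCritic(nn.Module):
    """Simplified single-head attention critic in the spirit of MAAC (sketch only)."""

    def __init__(self, n_agents, obs_dim, act_dim, hidden_dim=64):
        super().__init__()
        self.n_agents = n_agents
        # Encoder of each agent's (observation, action) pair
        self.encoder = nn.Linear(obs_dim + act_dim, hidden_dim)
        # Shared attention projections: query, key, value
        self.query = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.key = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.value = nn.Linear(hidden_dim, hidden_dim, bias=False)
        # Q head over an agent's own encoding plus its attended context
        self.q_head = nn.Linear(2 * hidden_dim, act_dim)

    def forward(self, obs, acts):
        # obs: (batch, n_agents, obs_dim); acts: (batch, n_agents, act_dim)
        e = F.relu(self.encoder(torch.cat([obs, acts], dim=-1)))          # (B, N, H)
        q, k, v = self.query(e), self.key(e), self.value(e)
        scores = torch.matmul(q, k.transpose(1, 2)) / e.size(-1) ** 0.5   # (B, N, N)
        # Mask the diagonal so agent i attends only to agents j != i
        mask = torch.eye(self.n_agents, dtype=torch.bool, device=obs.device)
        scores = scores.masked_fill(mask, float('-inf'))
        attn = torch.softmax(scores, dim=-1)
        context = torch.matmul(attn, v)                                   # (B, N, H)
        # Per-agent Q-values conditioned on own encoding and attended context
        return self.q_head(torch.cat([e, context], dim=-1))               # (B, N, act_dim)
```

In this sketch the attention weights let each foraging agent's critic focus on the teammates most relevant to its current reward, which is what makes the method scale to larger swarms without hand-tuned coordination rules.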