Coordinated Sensing Coverage in Sensor Networks using Distributed Reinforcement Learning

2006 
A multi-agent system (MAS) approach to wireless sensor networks (WSNs) comprising sensor-actuator nodes is very promising, as it has the potential to tackle the resource constraints inherent in these networks by efficiently coordinating the activities among the nodes. In this paper, we consider the coordinated sensing coverage problem and study the behavior and performance of four distributed reinforcement learning (DRL) algorithms: (i) fully distributed Q-learning, (ii) Distributed Value Function (DVF), (iii) Optimistic DRL, and (iv) Frequency Maximum Q-learning (FMQ). We present results from simulation studies and an actual implementation of these DRL algorithms on Crossbow Mica2 motes, and compare their performance in terms of incurred communication and computational costs, energy consumption, and the achieved level of sensing coverage. Issues such as convergence to local or global optima, as well as speed of convergence, are also considered. The implementation results show that the DVF agents outperform the other agents in terms of both convergence and energy consumption.
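As a rough illustration of the Distributed Value Function scheme the abstract highlights, the sketch below applies one DVF-style update, in which an agent blends its local reward with neighbors' value estimates weighted by a mixing function. This is a minimal sketch assuming the commonly cited Schneider et al. formulation; the function and variable names (`dvf_update`, `neighbor_values`, `weights`) are illustrative and not taken from the paper.

```python
def dvf_update(Q_i, state, action, reward, next_state,
               neighbor_values, weights, alpha=0.1, gamma=0.9):
    """One DVF-style update for agent i's local Q-table.

    Q_i             : dict mapping (state, action) -> value
    neighbor_values : dict mapping neighbor id j -> V_j(next_state)
    weights         : dict mapping neighbor id j -> mixing weight f(i, j)

    Update rule (assumed form):
        Q_i(s, a) <- (1 - alpha) * Q_i(s, a)
                     + alpha * (r + gamma * sum_j f(i, j) * V_j(s'))
    """
    weighted_v = sum(weights[j] * neighbor_values[j] for j in neighbor_values)
    old = Q_i.get((state, action), 0.0)
    Q_i[(state, action)] = (1 - alpha) * old + alpha * (reward + gamma * weighted_v)
    return Q_i[(state, action)]


# Example: an agent covering cell "A" takes action "sense", earns reward 1,
# and mixes in two neighbors' value estimates with equal weight.
q = {}
new_q = dvf_update(q, "A", "sense", 1.0, "A",
                   neighbor_values={1: 2.0, 2: 0.0},
                   weights={1: 0.5, 2: 0.5})
```

Because each agent only needs its neighbors' current value estimates, an update like this can be computed locally on a mote after a single neighborhood message exchange, which is consistent with the communication-cost comparison the paper describes.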