Visual surveillance coverage: strategies and metrics

2005 
Many sensor systems, such as security cameras and satellite imaging systems, face the problem of where to point their sensors at any given time. With directional control of the sensor, the amount of space available to cover far exceeds the sensor's field of view. Given a task domain and a set of constraints, we seek coverage strategies that achieve effective area coverage of the environment. We develop metrics that measure the quality of the strategies and provide a basis for comparison. In addition, we explore what it means for an area to be "covered" and how that is affected by the domain, the sensor constraints, and the algorithms. We built a testbed in which we implement and run various sensor coverage strategies and measure their performance. We modeled the domain of a camera mounted on pan and tilt servos, with appropriate constraints and time delays on movement. Next, we built several coverage strategies for selecting where the camera should look at any given time, based on concepts such as force-mass systems, scripted movements, and the time since an area was last viewed. Finally, we describe several metrics with which we can compare the effectiveness of different coverage strategies. These metrics are based on factors such as how well the whole space is covered, how relevant the covered areas are to the domain, how much time is spent acquiring data, how much time is wasted moving the servos, and how well the strategies detect new objects moving through the space.
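
As an illustration of the kind of strategy the abstract describes, the Python sketch below (not the paper's implementation; the grid resolution, field of view, servo speed, and dwell time are assumed values chosen for the example) greedily points a pan-tilt camera at the region whose time since last viewing is largest, charges movement time in proportion to the angular distance traveled by the servos, and reports mean staleness as one simple coverage metric.

# Minimal sketch of a "time since last viewed" coverage strategy for a
# pan-tilt camera.  All numeric parameters are illustrative assumptions.

PAN_RANGE = (-90.0, 90.0)     # assumed pan limits, degrees
TILT_RANGE = (-30.0, 30.0)    # assumed tilt limits, degrees
FOV = 20.0                    # assumed square field of view, degrees
CELL = 10.0                   # grid resolution, degrees
SERVO_SPEED = 60.0            # assumed servo speed, degrees per second
DWELL = 0.5                   # time spent acquiring data at each pose, seconds

def make_grid():
    """Discretize the pan/tilt space into cells, each tracking its last-view time."""
    cells = []
    pan = PAN_RANGE[0]
    while pan <= PAN_RANGE[1]:
        tilt = TILT_RANGE[0]
        while tilt <= TILT_RANGE[1]:
            cells.append({"pan": pan, "tilt": tilt, "last_seen": 0.0})
            tilt += CELL
        pan += CELL
    return cells

def stalest_cell(cells, now):
    """Greedy strategy: look at the cell with the largest time since last viewing."""
    return max(cells, key=lambda c: now - c["last_seen"])

def run(duration=60.0):
    cells = make_grid()
    pose = (0.0, 0.0)
    t = 0.0
    while t < duration:
        target = stalest_cell(cells, t)
        # Moving the servos wastes time proportional to the angular distance.
        move = max(abs(target["pan"] - pose[0]), abs(target["tilt"] - pose[1]))
        t += move / SERVO_SPEED + DWELL
        pose = (target["pan"], target["tilt"])
        # Every cell inside the field of view counts as freshly covered.
        for c in cells:
            if abs(c["pan"] - pose[0]) <= FOV / 2 and abs(c["tilt"] - pose[1]) <= FOV / 2:
                c["last_seen"] = t
    # One possible coverage metric: mean staleness over the whole space.
    mean_staleness = sum(t - c["last_seen"] for c in cells) / len(cells)
    print(f"mean staleness after {t:.1f}s: {mean_staleness:.1f}s")

if __name__ == "__main__":
    run()

Varying the field of view, servo speed, or dwell time in this sketch shows the trade-off the abstract alludes to between time wasted moving the servos and how fresh the coverage of the whole space remains.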