Task-driven RGB-Lidar Fusion for Object Tracking in Resource-Efficient Autonomous System

2021 
Autonomous mobile systems such as vehicles or robots are equipped with multiple sensor modalities, including Lidar, RGB, and Radar. The fusion of multi-modal information can enhance task accuracy, but indiscriminate sensing and fusion across all modalities increases demand on available system resources. This paper presents a task-driven approach to input fusion that minimizes utilization of resource-heavy sensors and demonstrates its application to Visual-Lidar fusion for object tracking and path planning. The proposed spatiotemporal sampling algorithm activates Lidar only at regions-of-interest identified by analyzing the visual input and reduces the Lidar 'base frame rate' according to the kinematic state of the system. This significantly reduces Lidar usage, in terms of data sensed/transferred and potentially power consumed, without a severe reduction in performance compared to both a baseline decision-level fusion and state-of-the-art deep multi-modal fusion.
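To make the spatiotemporal sampling idea concrete, the sketch below shows one plausible way such a scheduler could be structured: image-space regions-of-interest from an RGB detector are mapped to Lidar azimuth sectors (spatial gating), and an adaptive base frame rate derived from ego speed gates when the Lidar is triggered (temporal gating). This is a minimal illustration only; the class names, parameter values, and the pixel-to-azimuth mapping are assumptions for exposition and are not taken from the paper.

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class ROI:
    """Image-space region of interest from the RGB detector (pixel column bounds)."""
    x_min: int
    x_max: int


@dataclass
class SamplerConfig:
    """Illustrative parameters; the paper's actual values are not reproduced here."""
    max_rate_hz: float = 10.0         # full Lidar frame rate
    min_rate_hz: float = 2.0          # reduced 'base frame rate' when nearly static
    speed_for_max_rate: float = 10.0  # ego speed (m/s) at which the full rate is used
    image_width: int = 1280           # RGB image width in pixels
    lidar_fov_deg: float = 360.0      # Lidar horizontal field of view


def base_frame_rate(ego_speed_mps: float, cfg: SamplerConfig) -> float:
    """Scale the Lidar base frame rate with the kinematic state (here, ego speed)."""
    alpha = min(abs(ego_speed_mps) / cfg.speed_for_max_rate, 1.0)
    return cfg.min_rate_hz + alpha * (cfg.max_rate_hz - cfg.min_rate_hz)


def lidar_sectors(rois: List[ROI], cfg: SamplerConfig) -> List[Tuple[float, float]]:
    """Map image ROIs to Lidar azimuth sectors so only those regions are scanned.

    Assumes a simple proportional mapping from pixel column to azimuth angle,
    which stands in for whatever camera-to-Lidar calibration the real system uses.
    """
    deg_per_px = cfg.lidar_fov_deg / cfg.image_width
    return [(r.x_min * deg_per_px, r.x_max * deg_per_px) for r in rois]


def should_fire_lidar(t_now: float, t_last_fire: float, ego_speed_mps: float,
                      rois: List[ROI], cfg: SamplerConfig) -> bool:
    """Temporal gating: fire only when ROIs exist and the adaptive period has elapsed."""
    if not rois:
        return False
    period_s = 1.0 / base_frame_rate(ego_speed_mps, cfg)
    return (t_now - t_last_fire) >= period_s


if __name__ == "__main__":
    cfg = SamplerConfig()
    rois = [ROI(x_min=400, x_max=560)]          # one detection from the RGB frame
    if should_fire_lidar(t_now=1.0, t_last_fire=0.7, ego_speed_mps=6.0,
                         rois=rois, cfg=cfg):
        print("Scan sectors (deg):", lidar_sectors(rois, cfg))
```

Under this reading, Lidar data volume falls for two reasons: only the azimuth sectors covering detected objects are sampled, and the trigger period stretches whenever the platform moves slowly, which is consistent with the resource savings the abstract claims.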