BridgeLoc: Bridging Vision-Based Localization for Robots

2017 
In this paper, we study vision-based localization for robots. We anticipate that numerous mobile robots will serve or interact with humans in indoor scenarios such as healthcare, entertainment, and public service. Such scenarios entail accurate and scalable indoor visual robot localization, the subject of this work. Most existing vision-based localization approaches suffer from low localization accuracy and poor scalability due to the limited effective range and detection accuracy of visual environmental features. In light of the wide indoor deployment of infrastructural cameras, this paper proposes BRIDGELOC, a novel vision-based indoor robot localization system that integrates robots' onboard cameras with infrastructural cameras. BRIDGELOC develops three key technologies: robot and infrastructural camera view bridging, rotation-symmetric visual tag design, and continuous localization based on robots' visual and motion sensing. Our system bridges the views of robots' and infrastructural cameras to accurately localize robots. We use visual tags with rotation-symmetric patterns to greatly improve scalability. Our continuous localization enables robot localization in areas without visual tags or infrastructural camera coverage. We implement our system and build a prototype robot using commercial off-the-shelf hardware. Our real-world evaluation validates BRIDGELOC's promise for indoor robot localization.
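To make the continuous-localization component concrete, the following is a minimal sketch, not the paper's implementation: it propagates a 2D pose from the robot's motion sensing (dead reckoning) and blends in an absolute pose fix whenever a visual tag or an infrastructural-camera observation provides one. The `Pose2D` and `DeadReckoningLocalizer` names, the 2D pose model, and the weighted correction step are illustrative assumptions, not details taken from BRIDGELOC.

```python
import math
from dataclasses import dataclass


@dataclass
class Pose2D:
    x: float      # meters, world frame
    y: float      # meters, world frame
    theta: float  # radians, heading in world frame


class DeadReckoningLocalizer:
    """Propagates a 2D pose from motion sensing and accepts absolute
    pose fixes (e.g., from a visual tag or an infrastructural camera)."""

    def __init__(self, initial_pose: Pose2D):
        self.pose = initial_pose

    def predict(self, delta_dist: float, delta_theta: float) -> None:
        """Advance the pose by the distance and heading change reported
        by the robot's motion sensors since the last update."""
        self.pose.theta += delta_theta
        self.pose.x += delta_dist * math.cos(self.pose.theta)
        self.pose.y += delta_dist * math.sin(self.pose.theta)

    def correct(self, fix: Pose2D, weight: float = 1.0) -> None:
        """Blend in an absolute fix; weight=1.0 replaces the estimate,
        smaller weights smooth out noisy fixes."""
        self.pose.x += weight * (fix.x - self.pose.x)
        self.pose.y += weight * (fix.y - self.pose.y)
        # Wrap the heading error into (-pi, pi] before blending.
        err = math.atan2(math.sin(fix.theta - self.pose.theta),
                         math.cos(fix.theta - self.pose.theta))
        self.pose.theta += weight * err


# Example: drive forward 0.5 m on odometry alone, then apply a
# tag-derived absolute fix once one becomes visible.
loc = DeadReckoningLocalizer(Pose2D(0.0, 0.0, 0.0))
loc.predict(delta_dist=0.5, delta_theta=0.0)
loc.correct(Pose2D(0.52, 0.01, 0.02))
print(loc.pose)
```

In a full system one would typically replace this simple blend with a probabilistic filter (e.g., an EKF or particle filter) so that odometry drift and fix noise are weighted by their uncertainties; the sketch only illustrates the structure of predicting between fixes and correcting when coverage is available.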