SeDAR: Reading floorplans like a human
2019
The use of human-level semantic information to aid robotic tasks has recently become an important area for both Computer Vision and Robotics. This has been enabled by advances in Deep Learning that allow consistent and robust semantic understanding. Leveraging this semantic vision of the world has allowed human-level understanding to naturally emerge from many different approaches. In particular, the use of semantic information to aid localisation and reconstruction has been at the forefront of both fields.

Like robots, humans also require the ability to localise within a structure. To aid this, humans have designed high-level semantic maps of our structures called floorplans. We are extremely good at localising in them, even with limited access to the depth information used by robots, because we focus on the distribution of semantic elements rather than geometric ones. Evidence of this is that humans can normally localise in a floorplan that has not been scaled properly. In order to grant this ability to robots, localisation approaches must leverage the same semantic information humans use.

In this paper, we present a novel method for semantically enabled global localisation. Our approach relies on the semantic labels present in the floorplan. Deep Learning is leveraged to extract semantic labels from RGB images, which are compared against the floorplan for localisation. While our approach is able to use range measurements if available, we demonstrate that they are unnecessary, as we achieve results comparable to the state of the art without them.
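The abstract describes scoring candidate poses by comparing semantic labels observed from RGB imagery against the labels stored in the floorplan. As a minimal illustrative sketch only (not the paper's implementation; the toy grid map, the label set, and the exhaustive pose search standing in for proper Monte Carlo sampling are all assumptions), the following raycasts into a labelled floorplan grid and ranks poses by semantic agreement, with no range measurements involved:

```python
import math

# Toy floorplan: 0 = free space, 1 = wall, 2 = door, 3 = window.
# (Illustrative only; the paper uses real architectural floorplans
# and CNN-predicted semantic labels.)
FLOORPLAN = [
    [1, 1, 2, 1, 1],
    [1, 0, 0, 0, 3],
    [1, 0, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [1, 1, 1, 1, 1],
]

def raycast_label(plan, x, y, angle, step=0.1, max_dist=10.0):
    """Return the semantic label of the first occupied cell hit by a ray."""
    d = 0.0
    while d < max_dist:
        cx = int(round(x + d * math.cos(angle)))
        cy = int(round(y + d * math.sin(angle)))
        if not (0 <= cy < len(plan) and 0 <= cx < len(plan[0])):
            return None  # ray left the map without hitting anything
        if plan[cy][cx] != 0:
            return plan[cy][cx]
        d += step
    return None

def semantic_likelihood(plan, pose, observed, n_rays=8):
    """Score a pose by the fraction of rays whose expected label
    matches the corresponding observed label."""
    x, y, theta = pose
    matches = 0
    for i in range(n_rays):
        a = theta + 2 * math.pi * i / n_rays
        if raycast_label(plan, x, y, a) == observed[i]:
            matches += 1
    return matches / n_rays

def localise(plan, observed, n_rays=8):
    """Exhaustive search over free cells: a stand-in for the particle
    sampling a real Monte Carlo localiser would perform."""
    best, best_score = None, -1.0
    for y in range(len(plan)):
        for x in range(len(plan[0])):
            if plan[y][x] != 0:
                continue
            s = semantic_likelihood(plan, (x, y, 0.0), observed, n_rays)
            if s > best_score:
                best, best_score = (x, y), s
    return best, best_score
```

Note that the likelihood here depends only on which semantic class each ray hits, not on how far away it is; this mirrors the abstract's claim that the distribution of semantic elements, rather than metric depth, is what drives localisation.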