Ranging information from ultra-wideband (UWB) ranging radios can be fused with a ground robot's other on-board sensors to improve the accuracy of its navigation estimates. However, existing ranging-aided navigation methods require the locations of the ranging nodes to be known, which is impractical in time-critical situations, dynamic cluttered environments, and collaborative navigation applications. This paper describes a new ranging-aided navigation approach that does not require the locations of the ranging radios to be known. Our approach formulates relative pose constraints from ranging readings, based on the geometric relationships between each stationary ranging node and two ranging antennas mounted on the moving robot across time. Our experiments show that, across a variety of scenarios with ranging nodes placed at unknown locations, our approach uses the ranging information to substantially improve the ground robot's estimated navigation accuracy. We analyze our performance and compare it with a traditional ranging-aided method that requires mapping the positions of the ranging nodes. We also demonstrate the applicability of our approach to collaborative navigation in large-scale unknown environments by using ranging information from one mobile robot to improve the navigation estimates of another; this application does not require the installation of ranging nodes at fixed locations.
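To make the two-antenna constraint concrete, the following is a minimal sketch, assuming a planar robot with two UWB antennas at known body-frame offsets and a single stationary node at an unknown position. The antenna offsets, pose parameterization, and numeric values are illustrative assumptions, not the paper's implementation; the point is only that the residual couples the relative pose and the (free) node position without requiring the node's location in advance.

```python
import numpy as np

# Antenna offsets in the robot body frame (illustrative values; planar case).
ANTENNAS_BODY = np.array([[0.3, 0.2], [0.3, -0.2]])

def antenna_world(pose, antenna_body):
    """Transform a body-frame antenna offset into the world frame for a planar pose (x, y, yaw)."""
    x, y, yaw = pose
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s], [s, c]])
    return np.array([x, y]) + R @ antenna_body

def residuals(params, ranges_t1, ranges_t2):
    """params = [dx, dy, dyaw, node_x, node_y].
    The pose at time t1 is fixed at the origin; the relative pose at t2 and the
    node position are both free variables, so the node's location never needs
    to be known a priori."""
    rel_pose, node = params[:3], params[3:]
    res = []
    for i, ant in enumerate(ANTENNAS_BODY):
        p1 = antenna_world(np.zeros(3), ant)   # antenna position at t1
        p2 = antenna_world(rel_pose, ant)      # antenna position at t2
        res.append(np.linalg.norm(p1 - node) - ranges_t1[i])
        res.append(np.linalg.norm(p2 - node) - ranges_t2[i])
    return np.array(res)

# Synthetic check: ranges simulated for a node at (5, 3) before and after a motion.
true_node = np.array([5.0, 3.0])
true_rel = np.array([1.0, 0.5, 0.1])
r1 = [np.linalg.norm(antenna_world(np.zeros(3), a) - true_node) for a in ANTENNAS_BODY]
r2 = [np.linalg.norm(antenna_world(true_rel, a) - true_node) for a in ANTENNAS_BODY]
# The residual vanishes at the true parameters; in a full system, residuals like
# this would be combined with odometry factors in a joint optimization.
print(residuals(np.concatenate([true_rel, true_node]), r1, r2))  # ~[0, 0, 0, 0]
```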
Understanding the perceived scene during navigation enables intelligent robot behaviors. Current vision-based semantic SLAM (Simultaneous Localization and Mapping) systems provide these capabilities. However, their performance degrades in visually-degraded environments, which are common in critical robotic applications such as search and rescue missions. In this paper, we present SIGNAV, a real-time semantic SLAM system designed to operate in perceptually-challenging situations. To improve navigation robustness in dark environments, SIGNAV leverages a multi-sensor navigation architecture that fuses vision with additional sensing modalities, including an inertial measurement unit (IMU), LiDAR, and wheel odometry. A new 2.5D semantic segmentation method is also developed that combines images and LiDAR depth maps to generate semantic labels for 3D mapped points in real time. We demonstrate the navigation accuracy of SIGNAV in a variety of indoor environments under both normal lighting and dark conditions. SIGNAV also provides semantic scene understanding capabilities in visually-degraded environments. We also show how semantic information benefits SIGNAV's performance.
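As a rough illustration of how image-based semantic labels can be attached to LiDAR-derived 3D points, the sketch below projects LiDAR points into the camera image, builds a sparse depth map, and looks up per-pixel class ids from a 2D segmentation output. The camera intrinsics, extrinsics, image size, and synthetic inputs are placeholder assumptions for the sake of a runnable example, not SIGNAV's actual 2.5D segmentation method.

```python
import numpy as np

# Hypothetical camera intrinsics, image size, and LiDAR-to-camera extrinsics.
K = np.array([[525.0, 0.0, 320.0],
              [0.0, 525.0, 240.0],
              [0.0, 0.0, 1.0]])
T_CAM_LIDAR = np.eye(4)
H, W = 480, 640

def project_lidar(points_lidar):
    """Project Nx3 LiDAR points into the image plane; keep points that land in view."""
    pts = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_CAM_LIDAR @ pts.T).T[:, :3]
    in_front = pts_cam[:, 2] > 0.1                       # discard points behind the camera
    uvw = (K @ pts_cam[in_front].T).T
    uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < W) & (uv[:, 1] >= 0) & (uv[:, 1] < H)
    return uv[valid], pts_cam[in_front][valid]

def label_points(points_lidar, semantic_image):
    """Attach the per-pixel class id from a 2D segmentation output to each projected 3D point."""
    uv, pts_cam = project_lidar(points_lidar)
    sparse_depth = np.zeros((H, W))
    sparse_depth[uv[:, 1], uv[:, 0]] = pts_cam[:, 2]     # sparse depth map from LiDAR
    labels = semantic_image[uv[:, 1], uv[:, 0]]          # label lookup at projected pixels
    return pts_cam, labels, sparse_depth

# Usage with synthetic data: random points in front of the camera and random class ids.
pts = np.random.uniform([-2.0, -2.0, 1.0], [2.0, 2.0, 8.0], size=(1000, 3))
seg = np.random.randint(0, 5, size=(H, W))
pts3d, labels, depth = label_points(pts, seg)
print(pts3d.shape, labels.shape, depth.shape)
```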