    Visual localization and mapping based on road constraints
    Citations: 0 · References: 0 · Related Papers: 10
    Abstract:
    Conventional SLAM only requires building sparse maps for localization, whereas unmanned vehicles must also understand road edges, i.e., have a semantic-level understanding, to drive safely. In addition, unmanned vehicles are more sensitive to lateral error than to longitudinal error, so they require SLAM algorithms with higher lateral accuracy. We extend the ORB-SLAM3 algorithm by introducing satellite maps as prior knowledge: corners in the satellite map are used to initialize the odometry, remove accumulated error, and correct earlier poses, while the results of particle filtering and lane-line detection further optimize the localization output of ORB-SLAM3 and produce maps with lane-level semantic information. Our experiments show that the algorithm significantly reduces cumulative error without loop closure, improves localization accuracy, and yields lane-line maps of broad engineering value.
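    As an illustrative aside (not the paper's implementation), the idea of using lane-line detection to correct lateral drift can be sketched as a one-dimensional Kalman update; the `fuse_lateral` helper and all numbers are hypothetical:

```python
def fuse_lateral(prior_offset, prior_var, measured_offset, meas_var):
    """One scalar Kalman update: fuse the lateral offset predicted from
    odometry with the lateral offset measured from detected lane lines."""
    K = prior_var / (prior_var + meas_var)            # Kalman gain
    fused = prior_offset + K * (measured_offset - prior_offset)
    fused_var = (1.0 - K) * prior_var
    return fused, fused_var

# Hypothetical numbers: odometry has drifted to 0.8 m from the lane
# centre (variance 0.5), while the lane detector reports 0.1 m
# (variance 0.05); the fused estimate moves strongly toward the
# lane measurement, shrinking the lateral error and its variance.
offset, var = fuse_lateral(0.8, 0.5, 0.1, 0.05)
```

    Because the lane measurement is far more certain than the drifted prior, the gain is close to one and the fused offset lands near the detected value.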
    Keywords:
    Orb (optics)
    Odometer
    Semantic mapping
    Identification
    Line (geometry)
    This paper addresses the simultaneous localization and mapping (SLAM) of a robot in an unknown environment. Several techniques proposed in the literature to solve this problem employ different representations of the environment, either points or lines together with their geometric position information. The objective of this paper is a solution based on the Extended Kalman Filter (EKF) for a single robot. The algorithm developed in our work is EKF-SLAM, implemented experimentally on a Pioneer 3-AT mobile robot equipped with a 2D laser rangefinder. The results obtained using line-based EKF-SLAM show improvements in accuracy and smoothness compared to point-based EKF-SLAM and odometry alone.
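    A minimal sketch of the EKF measurement update at the core of such a method, assuming a 2D pose and a range-bearing observation of a single known landmark (full EKF-SLAM also keeps the landmarks in the state vector; the values below are illustrative):

```python
import numpy as np

def ekf_update(mu, Sigma, z, landmark, R):
    """EKF measurement update for a 2D pose mu = [x, y, theta] observing
    a known landmark with a range-bearing sensor. Simplified sketch: in
    full EKF-SLAM the landmark coordinates are part of the state too."""
    dx, dy = landmark[0] - mu[0], landmark[1] - mu[1]
    q = dx**2 + dy**2
    r = np.sqrt(q)
    z_hat = np.array([r, np.arctan2(dy, dx) - mu[2]])    # predicted measurement
    H = np.array([[-dx / r, -dy / r,  0.0],              # measurement Jacobian
                  [ dy / q, -dx / q, -1.0]])
    S = H @ Sigma @ H.T + R                              # innovation covariance
    K = Sigma @ H.T @ np.linalg.inv(S)                   # Kalman gain
    innov = z - z_hat
    innov[1] = (innov[1] + np.pi) % (2 * np.pi) - np.pi  # wrap bearing angle
    mu_new = mu + K @ innov
    Sigma_new = (np.eye(3) - K @ H) @ Sigma
    return mu_new, Sigma_new

mu = np.array([0.0, 0.0, 0.0])       # pose prior
Sigma = np.eye(3) * 0.5              # pose covariance
R = np.diag([0.1, 0.05])             # sensor noise
z = np.array([5.0, 0.1])             # measured range and bearing
mu2, Sigma2 = ekf_update(mu, Sigma, z, landmark=(5.0, 0.0), R=R)
```

    The update shrinks the covariance along the observed directions, which is where the accuracy gain over raw odometry comes from.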
    Odometer
    Smoothness
    Line (geometry)
    Unmanned ground vehicle
    The main requirement of a mobile robot system is the ability to localize itself accurately while simultaneously creating a map of the unknown environment. Simultaneous localization and mapping (SLAM) is required for autonomous navigation. The SLAM framework is based on a Rao-Blackwellized Particle Filter (RBPF) applied through the GMapping algorithm. A 2D LIDAR scanner and an odometer are used as the main sensors in the simulated robot model. The RBPF algorithm fuses the LIDAR and odometer data to create a 2D map of the simulated environment and to localize and navigate the robot within the unknown environment. The experimental implementation is carried out in the Robot Operating System (ROS) and the Gazebo simulator.
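    Each particle in an RBPF carries its own occupancy grid, typically updated in log-odds along every LIDAR beam. A hedged sketch of that per-beam update, with illustrative log-odds constants and a hand-picked straight beam rather than a real ray trace:

```python
import numpy as np

L_FREE, L_OCC = -0.4, 0.85   # log-odds increments (illustrative values)

def update_ray(grid, cells, hit):
    """Update a log-odds occupancy grid along one LIDAR beam.
    `cells` are the grid indices the beam traverses in order; the last
    one is the endpoint. If `hit`, the endpoint saw an obstacle."""
    for (i, j) in cells[:-1]:
        grid[i, j] += L_FREE               # traversed cells look free
    i, j = cells[-1]
    grid[i, j] += L_OCC if hit else L_FREE
    return grid

def prob(log_odds):
    """Convert log-odds back to occupancy probability."""
    return 1.0 - 1.0 / (1.0 + np.exp(log_odds))

grid = np.zeros((10, 10))                  # log-odds 0 == probability 0.5
beam = [(0, 0), (0, 1), (0, 2), (0, 3)]    # straight beam hitting at (0, 3)
grid = update_ray(grid, beam, hit=True)
```

    Accumulating increments rather than overwriting probabilities lets repeated scans reinforce or revise each cell, which is why log-odds is the standard representation in grid-based SLAM.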
    Odometer
    Mobile Robot Navigation
    Monte Carlo localization
    The visual odometry studied in this paper belongs to the front end of visual SLAM and is a pose-estimation approach to localization. The camera captures the surrounding environment through its visual sensor, compares the feature information of consecutive frames, and estimates the camera's displacement. By continuously tracking features across the image sequence, the camera's position is updated incrementally and its motion is recovered. In this paper, ORB features with FAST corner detection are used to extract feature points efficiently, and BRIEF descriptors describe the detected points to speed up feature matching. To address mismatched feature points, the RANSAC algorithm is introduced: the detected points are brought into the estimation and only those satisfying the given conditions are retained. The gray centroid method compensates for an inherent shortcoming of the FAST detector, which provides no orientation or scale description. For feature correspondence, this paper uses epipolar geometry and the P3P method to solve the 2D-2D and 3D-2D matching problems. Finally, Bundle Adjustment is applied to optimize the remaining error, and the camera trajectory is displayed in MATLAB. This paper plays a useful auxiliary role in the automatic design of …
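    The RANSAC step mentioned above can be sketched in a self-contained way; assuming (for illustration only, not the paper's setup) that matched keypoints differ by a pure 2D translation, with some gross mismatches mixed in:

```python
import numpy as np

def ransac_translation(src, dst, iters=200, thresh=0.5, rng=None):
    """Reject mismatched feature correspondences with RANSAC: sample one
    match, hypothesise the 2D translation it implies, and keep the model
    that the largest number of matches agrees with."""
    if rng is None:
        rng = np.random.default_rng(0)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        k = rng.integers(len(src))
        t = dst[k] - src[k]                          # translation hypothesis
        err = np.linalg.norm(src + t - dst, axis=1)  # residual per match
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refine the translation on the consensus set
    t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)
    return t, best_inliers

# 20 correct matches shifted by (2, -1) plus 5 gross mismatches.
data_rng = np.random.default_rng(1)
src = data_rng.uniform(0, 100, (25, 2))
dst = src + np.array([2.0, -1.0])
dst[20:] += data_rng.uniform(20, 40, (5, 2))         # simulated mismatches
t, inliers = ransac_translation(src, dst)
```

    In a real pipeline the hypothesised model would be an essential matrix or homography rather than a translation, but the sample-score-refine loop is the same.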
    Orb (optics)
    Odometer
    Feature (linguistics)
    This paper presents a SLAM technique that does not use odometer information. It is based on the HECTOR SLAM method from Technische Universität Darmstadt, but uses different hardware from that originally proposed and, finally, no IMU device. The method relies on modified settings of HECTOR SLAM and manages to optimize it for COTS hardware.
    Odometer
    Laser Scanning
    Tracking (education)
    Citations (2)
    Simultaneous localization and mapping (SLAM) in unknown GPS-denied environments is a very challenging problem due to complex environmental factors and the lack of prior knowledge of such environments. The performance of standard SLAM methods depends on odometer measurements, which, however, are unlikely to be reliable in a challenging operating environment. In this paper, a novel odometer-free approach is introduced that lets an unmanned ground vehicle (UGV) perform SLAM using only LIDAR/SONAR scans in the form of discrete point clouds. The proposed odometer-free SLAM (OF-SLAM) approach can precisely align successive sensor scans without involving other auxiliary information, e.g., odometer readings. By converting the accurately aligned point clouds into continuous local grid maps using kernel tricks, OF-SLAM creates a dynamically updated global map of the surrounding environment and accurately localizes the UGV on that map. Simulation experiments verify the validity and effectiveness of OF-SLAM and demonstrate that the proposed approach is an attractive alternative for UGVs performing SLAM and exploring complex unknown environments.
    Odometer
    Unmanned ground vehicle
    Occupancy grid mapping
    RANSAC
    Ground truth
    Citations (0)
    Abstract: In order to improve the positioning accuracy and system reliability of visual odometry in outdoor environments, this paper proposes an unmanned-vehicle positioning system using a monocular camera and an inertial measurement unit (IMU). The system uses an improved ORB algorithm to extract image feature points and establishes a tightly coupled visual-inertial odometry system based on a sliding window and nonlinear optimization. Experiments verify that it can adapt to changing lighting conditions and complete the positioning task for unmanned vehicles in the outdoor environment.
    Odometer
    Orb (optics)
    Sliding window protocol
    Monocular vision
    Monocular
    This paper presents an experimental evaluation of monocular ORB-SLAM applied to underwater scenarios. It is investigated as an alternative SLAM method with minimal instrumentation compared to approaches that integrate additional sensors such as inertial and acoustic sensors. ORB-SLAM creates a 3D map from image frames and estimates the position of the robot using a feature-based front end and a graph-based back end. The performance of ORB-SLAM is evaluated through experiments in different settings with varying lighting, visibility, and water dynamics. Results show good performance given the right conditions and demonstrate that ORB-SLAM can work well in the underwater environment. Based on our findings, the paper outlines possible enhancements that should further improve the algorithm's performance.
    Orb (optics)
    Monocular
    Visibility
    Position (finance)
    Feature (linguistics)
    FastSLAM is a well-known algorithm for simultaneous localization and mapping (SLAM). There are two main variants, FastSLAM 1.0 and FastSLAM 2.0. However, their execution speed is slow due to the superabundant comparisons against every existing landmark. Computationally efficient SLAM (CESLAM) was therefore presented to deal with this problem and achieve real-time processing. Nevertheless, large errors may well occur, because the original CESLAM uses only odometer information to estimate the robot's pose in each particle. Therefore, this paper utilizes not only the odometer information but also the measurement information from the sensors. Finally, simulation results illustrate that the modified CESLAM algorithm effectively improves the accuracy of localization and mapping.
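    The core idea of weighting odometry-propagated particles by a sensor measurement can be sketched in one dimension; the scenario, landmark position, and noise value below are hypothetical, not taken from the CESLAM paper:

```python
import numpy as np

def weight_particles(particles, measured_range, landmark, sigma=0.2):
    """Weight odometry-propagated 1D particle poses by how well each one
    explains a range measurement to a known landmark (Gaussian sensor
    model), then normalise. This is the step that lets measurements
    correct odometry drift instead of trusting odometry alone."""
    expected = np.abs(particles - landmark)   # range each particle predicts
    w = np.exp(-0.5 * ((expected - measured_range) / sigma) ** 2)
    return w / w.sum()

# Hypothetical 1D example: odometry believes the robot is near x = 2.0,
# but the true pose is x = 1.5; the sensor measures 3.5 m to a landmark
# at x = 5.0, so particles near x = 1.5 receive the highest weights.
particles = np.linspace(1.0, 3.0, 21)
w = weight_particles(particles, measured_range=3.5, landmark=5.0)
estimate = float((w * particles).sum())
```

    After weighting, a resampling step would concentrate particles where the weights are high, pulling the pose estimate back toward the measurement-consistent region.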
    Odometer
    Citations (1)
    To bring down the number of traffic accidents and increase people's mobility, companies such as Robot Engineering Systems (RES) are trying to put automated vehicles on the road. RES is developing the WEpod, a shuttle capable of autonomously navigating through mixed traffic. This research has been done in cooperation with RES to improve the localization capabilities of the WEpod. The WEpod currently localizes using its GPS and lidar sensors. These have proven to be not accurate and reliable enough to safely navigate through traffic. Therefore, other methods of localization and mapping have been investigated. The primary method investigated in this research is monocular Simultaneous Localization and Mapping (SLAM). Based on literature and practical studies, ORB-SLAM was chosen as the SLAM implementation. Unfortunately, ORB-SLAM is unable to initialize when applied to WEpod images. Literature has shown that this problem can be solved by adding depth information to the inputs of ORB-SLAM. Obtaining depth information for the WEpod images is not a trivial task: the sensors on the WEpod cannot produce the required dense depth maps. A Convolutional Neural Network (CNN) can, however, be used to create the depth maps. This research investigates whether adding a depth-estimating CNN solves the initialization problem and increases the tracking accuracy of monocular ORB-SLAM. A well-performing CNN is chosen and combined with ORB-SLAM. Images pass through the depth-estimating CNN to obtain depth maps; these depth maps together with the original images are used in ORB-SLAM, keeping the whole setup monocular. ORB-SLAM with the CNN is first tested on the KITTI dataset, chosen because monocular ORB-SLAM initializes on KITTI images and ground-truth depth maps can be obtained for them. Monocular ORB-SLAM's tracking accuracy has been compared to ORB-SLAM with ground-truth depth maps and to ORB-SLAM with estimated depth maps.
This comparison shows that adding estimated depth maps increases the tracking accuracy of ORB-SLAM, though not as much as the ground-truth depth images. The same setup is tested on WEpod images. The CNN is fine-tuned on 7481 KITTI images as well as on 642 WEpod images. The performance on WEpod images of both CNN versions is compared, and each is used in combination with ORB-SLAM. The CNN fine-tuned on the WEpod images does not perform well and misses details in the estimated depth maps; nevertheless, this is enough to solve the initialization problem of ORB-SLAM. The combination of ORB-SLAM and the KITTI fine-tuned CNN yields better tracking accuracy than ORB-SLAM with the WEpod fine-tuned CNN. It has been shown that the initialization problem on WEpod images is solved and that the tracking accuracy is increased. These results show that the initialization problem of monocular ORB-SLAM on WEpod images is solved by adding the CNN, making it applicable to improving the current localization methods on the WEpod. Using only this setup for localization on the WEpod is not yet possible; more research is necessary. Adding this setup to the WEpod's current localization methods could improve its localization, making it safer for the WEpod to navigate through traffic. This research sets the next step toward creating a fully autonomous vehicle that reduces traffic accidents and increases people's mobility.
    Orb (optics)
    Initialization
    Monocular
    Citations (1)
    [Figure: Monocular Visual Localization in Tsukuba Challenge 2015. Left: localization result inside the map created by ORB-SLAM. Right: position tracking at the starting point.] For the 2015 Tsukuba Challenge, we realized an implementation of vision-based localization built on ORB-SLAM. Our method combined mapping based on ORB-SLAM and Velodyne LIDAR SLAM and utilized these maps in a localization process using only a monocular camera. We also applied a sensor-fusion method combining the odometer and ORB-SLAM across all maps. The combined method delivered better accuracy than the original ORB-SLAM, which suffered from scale ambiguity and map distance distortion. This paper reports on our experience using ORB-SLAM for visual localization and describes the difficulties encountered.
    Orb (optics)
    Odometer
    Monocular
    Bundle adjustment
    Citations (36)