Abstract. PNT stands for Positioning, Navigation, and Timing. Space-based PNT refers to the capabilities enabled by GNSS, and enhanced by Ground- and Space-Based Augmentation Systems (GBAS and SBAS), which provide position, velocity, and timing information to an unlimited number of users around the world, allowing every user to operate in the same reference system and timing standard. Such information has become increasingly critical to the security, safety, prosperity, and overall quality of life of many citizens. As a result, space-based PNT is now widely recognized as an essential element of the global information infrastructure. This paper discusses the importance of the availability and continuity of PNT information, whose application, scope, and significance have exploded in the past 10–15 years. A paradigm shift in the navigation solution has been observed in recent years, manifested by an evolution from traditional single-sensor solutions to multi-sensor solutions and, ultimately, to collaborative navigation and layered sensing using non-traditional sensors and techniques, the so-called signals of opportunity. A joint working group under the auspices of the International Federation of Surveyors (FIG) and the International Association of Geodesy (IAG), entitled ‘Ubiquitous Positioning Systems’, investigated the use of Collaborative Positioning (CP) through several field trials over the past four years. In this paper, the concept of CP is discussed in detail and selected results of these experiments are presented. It is demonstrated that CP is a viable solution when a ‘network’ or ‘neighbourhood’ of users is to be positioned/navigated together, as it increases the accuracy, integrity, availability, and continuity of the PNT information for all users.
High-accuracy and high-efficiency 3D sensing and the associated data-processing techniques are urgently needed for today's roadway inventory, infrastructure health monitoring, autonomous driving, connected vehicles, urban modeling, and smart cities. 3D geospatial data acquired by digital photogrammetry or laser scanning (LiDAR) systems have become one of the most critical data sources supporting the above-mentioned applications. While progress has been made in applying 3D sensory data to applications related to intelligent transportation systems (ITS), such as road network extraction, platform localization, obstacle avoidance, high-definition map generation, and transportation infrastructure inventory, many essential questions remain regarding the processing and understanding of such massive 3D datasets in ITS-related applications. The authors have selected four articles for review in this Special Issue. A summary of these articles is outlined below.
Practical experience has shown that the standard deviations (STD) obtained from kinematic GPS processing software may not always reflect the actual error due to the lack of information about functional correlation. Neglecting the physical correlation between epochs, systematic errors, and improperly modeled parameters may cause incorrect estimation of the standard deviation. A number of GPS processing software packages consider the variance-covariance matrix of the observations without correlation or simply deal with the diagonal components only. As a consequence, the a posteriori standard deviations of the estimated coordinates could be too optimistic and may not represent the actual quality of the estimated coordinates. This is particularly important when the improperly estimated STDs are used as a priori values for GPS/IMU (Global Positioning System/inertial measurement unit) Kalman filtering, and consequently define the georeferencing error budget of imaging sensors, such as airborne digital cameras, LiDAR (Light Detection and Ranging), and IfSAR (interferometric synthetic aperture radar). The primary objective of the research presented in this paper is to assess the STD reliability of kinematic GPS processing using the NGS KARS and Applanix POSGPS software packages. In this research, the scale factor (SF) was used as a reliability indicator for the software-generated STDs. To assess the STD reliability of kinematic GPS processing, two data sets from the San Andreas and San Jacinto Faults LiDAR Mapping project were used. Since there is no easily available absolute reference for an airborne trajectory, one base station was used as a rover, called the simulated rover, to create kinematic solutions with respect to several other reference stations at varying separations from the simulated rover. The two daily data sets from the LiDAR Mapping project were processed in kinematic mode using L3 (ionosphere-free) observations. Single-baseline solutions from the NGS KARS and Applanix POSGPS software and multi-baseline (network) solutions from Applanix POSGPS were obtained, and the scale factor was computed by comparing the software-generated STD, called the formal error, to the weighted standard deviation. The reliability of the STDs computed from NGS KARS and Applanix POSGPS was analyzed.
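The abstract does not give the exact formula behind the scale factor, so the sketch below is only an illustration of the idea: compare an empirical (weighted) standard deviation of the simulated-rover coordinate errors against the formal STD reported by the processing software. The inverse-variance weighting, the simulated error levels, and the helper name scale_factor are assumptions for the example, not the paper's definition.

```python
import numpy as np

def scale_factor(coord_errors, formal_stds):
    """Illustrative reliability indicator: ratio of an empirical, inverse-variance
    weighted standard deviation of the simulated-rover coordinate errors to the
    mean formal STD reported by the GPS processing software. Values well above 1
    suggest the formal STDs are too optimistic."""
    coord_errors = np.asarray(coord_errors, dtype=float)
    formal_stds = np.asarray(formal_stds, dtype=float)
    weights = 1.0 / formal_stds**2                        # weight epochs by reported precision
    weighted_std = np.sqrt(np.sum(weights * coord_errors**2) / np.sum(weights))
    return weighted_std / np.mean(formal_stds)

# Example: errors scattering at ~3 cm while the software reports 1 cm -> SF near 3.
rng = np.random.default_rng(0)
errors = rng.normal(0.0, 0.03, size=1000)
stds = np.full(1000, 0.01)
print(f"scale factor = {scale_factor(errors, stds):.2f}")
```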
Cooperative networks of low-cost unmanned aerial vehicles (UAVs) are attracting researchers because of their potential to enhance UAV performance. Cooperative networks can be used in many applications, including assisted guidance and navigation, surveillance, search and rescue, disaster management, defense, mapping, precision agriculture, and mineral exploration. Such cooperative networks of UAVs can act as ad hoc networks and share information among different network nodes, which makes the nodes more robust and efficient for the intended purpose. The location of UAVs is traditionally determined using a global navigation satellite system (GNSS), which limits the use of UAVs in regions that lack GNSS coverage. However, the location of UAVs can be determined even in GNSS-denied environments through a cooperative network if a few of the nodes have access to GNSS; this is achieved by sharing information among the nodes of the network. Information sharing in a cooperative network further improves the positioning accuracy of the nodes in cases where GNSS is available to all nodes. This study investigated a mathematical model and operational framework for cooperative localization of UAVs using GNSS, microelectromechanical systems (MEMS) inertial navigation systems (INS), and ultra-wideband (UWB) sensors under different architectures. The paper briefly discusses the practical feasibility of different distributed architectures and provides a comparison of distributed and centralized architectures. The proposed network was analyzed using numerical simulation, and changes in performance with respect to different parameters were investigated. The simulation results show that the centralized architecture generally provided higher localization accuracy than the distributed architecture. It was also observed that reliable and consistent localization can be achieved, irrespective of the size of the network, by using a cooperative approach, even if only four nodes in the network have GNSS access, provided there is good connectivity among the nodes. Further, the simulation results demonstrate that the cooperative approach benefits all nodes in terms of improved localization accuracy even when all nodes have access to GNSS.
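As a rough illustration of the cooperative idea (not the paper's GNSS/MEMS-INS/UWB filter or either of its architectures), the sketch below localizes one GNSS-denied node from UWB ranges to four GNSS-equipped neighbour nodes using a Gauss-Newton multilateration step; the node geometry, noise level, and function name are assumed for the example.

```python
import numpy as np

def localize_from_ranges(anchors, ranges, x0, iters=10):
    """Gauss-Newton multilateration: estimate a 3D position from range
    measurements to cooperative nodes whose positions are known (e.g. via GNSS)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        diff = x - anchors                           # (N, 3) vectors to anchors
        pred = np.linalg.norm(diff, axis=1)          # predicted ranges
        J = diff / pred[:, None]                     # Jacobian of range w.r.t. position
        dx, *_ = np.linalg.lstsq(J, ranges - pred, rcond=None)
        x += dx
    return x

# Four GNSS-equipped nodes act as anchors for one GNSS-denied node.
anchors = np.array([[0, 0, 50], [100, 0, 60], [0, 100, 55], [100, 100, 45]], float)
truth = np.array([40.0, 60.0, 52.0])
rng = np.random.default_rng(1)
ranges = np.linalg.norm(anchors - truth, axis=1) + rng.normal(0.0, 0.1, 4)  # ~10 cm UWB noise
print(localize_from_ranges(anchors, ranges, x0=[50.0, 50.0, 50.0]))
```

In a centralized architecture this kind of solution is computed at a single node from all shared measurements, whereas in a distributed architecture each node fuses only the information available from its neighbours, which is one reason the two can differ in achievable accuracy.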
Abstract. Place recognition, or loop closure detection, is a technique for recognizing landmarks and/or scenes previously visited by a mobile sensing platform in an area. The technique is a key function for robust Simultaneous Localization and Mapping (SLAM) in any environment, including global positioning system (GPS) denied environments, because it enables global optimization that compensates for the drift of dead-reckoning navigation systems. Place recognition in 3D point clouds is a challenging task that is traditionally handled with the aid of other sensors, such as cameras and GPS. Unfortunately, visual place recognition techniques may be affected by changes in illumination and texture, and GPS may perform poorly in urban areas. To mitigate this problem, state-of-the-art Convolutional Neural Network (CNN)-based 3D descriptors may be applied directly to 3D point clouds. In this work, we investigated the performance of different classification strategies utilizing a cutting-edge CNN-based 3D global descriptor (PointNetVLAD) for the place recognition task on the Oxford RobotCar dataset.
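For readers unfamiliar with descriptor-based place recognition, the minimal retrieval step below shows how a PointNetVLAD-style global descriptor of a query submap can be matched against a database of descriptors by nearest-neighbour search; the distance threshold and function name are illustrative assumptions, and the classification strategies compared in the paper go beyond this simple rule.

```python
import numpy as np

def recognize_place(query_desc, db_descs, dist_threshold=0.6):
    """Return the index of the best-matching database submap, or None if the
    closest global descriptor is not similar enough (i.e. a new place).
    query_desc: (D,) descriptor of the current point-cloud submap
    db_descs:   (N, D) descriptors of previously visited submaps"""
    dists = np.linalg.norm(db_descs - query_desc, axis=1)  # Euclidean descriptor distances
    best = int(np.argmin(dists))
    return best if dists[best] < dist_threshold else None

# A recognized index can then be passed to the SLAM back-end as a loop-closure candidate.
```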
Aerial triangulation controlled by GPS observations in the aircraft has been established as a precise method of photogrammetric point determination without the need for ground control. If GPS observations are available for blocks of aerial photos, the aerial triangulation can be carried out without any ground control points. Unfortunately, this method cannot be applied to single flight lines, since the GPS observations do not recover the roll angle of the aircraft; therefore, ground control is mandatory for GPS-controlled strip triangulation. This paper investigates GPS-controlled strip triangulation using known linear features on the ground that are approximately parallel to the flight line. The described technique models the linear feature in the images by low-order polynomials and forces the known line on the ground onto this function, so that the roll angle can be determined. We investigate the effects of different GPS measurement accuracies, both in the air and on the ground, on the results. Experiments using simulated and real data are presented. We also show that this new technique is useful for mapping railroads.
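The abstract only outlines the technique, but its first step can be sketched as follows: edge points measured along the imaged linear feature are approximated by a low-order polynomial, whose coefficients then provide the function onto which the known ground line is forced inside the adjustment (the adjustment constraint itself is not shown here). The polynomial order, simulated measurements, and function name are assumptions for illustration.

```python
import numpy as np

def fit_linear_feature(img_x, img_y, order=2):
    """Least-squares fit of a low-order polynomial y = f(x) to edge points
    measured along a linear feature (e.g. a railroad) in one image. The
    coefficients are what the known ground line is later constrained against."""
    return np.polyfit(img_x, img_y, deg=order)

# Example: points digitized along an imaged track, slightly curved and noisy.
x = np.linspace(-500.0, 500.0, 25)
y = 0.02 * x + 1e-5 * x**2 + np.random.default_rng(3).normal(0.0, 0.5, x.size)
coeffs = fit_linear_feature(x, y)
print("polynomial coefficients (highest order first):", coeffs)
```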
Full-waveform LiDAR data have been available for many years, yet airborne topographic surveying has only recently begun to discover their potential. Forestry and earth science applications have traditionally used waveform processing for many years, but topographic mapping has just started exploring the benefits of the waveform. The potential advantages are improved point cloud generation, better object surface characterization, and support for object classification. However, there are several implementation and performance issues, such as the availability of waveform processing tools and waveform compression methods, that should be addressed before applications can take full advantage of waveform data. This paper provides an overview of the waveform application potential in both airborne and mobile LiDAR mapping applications.
This chapter first presents an overview of the need for strip adjustment and the common processing techniques. Next, a list of error sources for the misregistration between strips is provided. The chapter moves on to a discussion of the selection of overlapping areas to ensure a reliable adjustment outcome. Presented as fundamentals for strip adjustment, a number of methods for the representation, interpolation, and matching of surfaces are described. In light of these fundamentals, the chapter then provides a detailed review of different strip adjustment techniques in chronological order. The chapter concludes with a summary of the properties and limitations of the reported adjustment techniques and their future development.
Abstract. In this paper, the use of waveform data in urban areas is studied. Full waveform is generally used in non-urban areas, where it can provide a better vertical-structure description of vegetation compared to discrete-return systems. However, the waveform could potentially be useful for classification in urban areas, where classification methods can be extended to include parameters derived from waveform analysis. Besides common properties also sensed by multi-echo systems (intensity, number of returns), the shape of the waveform depends on physical properties of the reflecting surface, such as material, angle of incidence, etc. The main goal of this investigation is to identify relevant parameters derived from the waveform that are related to surface material or object class. This paper uses two waveform parameterization approaches: Gaussian shape fitting and the discrete wavelet transform. The two classification methods tested are supervised Bayes classification and unsupervised Self-Organizing Map (SOM) classification. The results of these methods were compared to each other and to manual classification. The initial conclusion is that, although waveform data contain classification information, the waveform shape by itself is not sufficient to perform classification in urban regions and should therefore be combined with the point cloud geometry.
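As a small illustration of the Gaussian shape-fitting branch (the wavelet branch and the actual classifiers are not shown), the sketch below fits a single Gaussian pulse to one simulated echo with scipy.optimize.curve_fit; the sampling, noise level, and initial guess are assumptions, and real waveforms generally require detecting and fitting several overlapping echoes.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, amp, mu, sigma):
    """Single Gaussian pulse model for one echo in the recorded waveform."""
    return amp * np.exp(-0.5 * ((t - mu) / sigma) ** 2)

# Simulated one-echo waveform (e.g. 1 ns sample bins) with additive noise.
t = np.arange(0.0, 60.0)
waveform = gaussian(t, 120.0, 30.0, 3.0) + np.random.default_rng(2).normal(0.0, 2.0, t.size)

# Fit the pulse; the recovered amplitude and width are the kind of shape
# parameters that can serve as per-point classification features.
p0 = [waveform.max(), float(t[np.argmax(waveform)]), 2.0]
(amp, mu, sigma), _ = curve_fit(gaussian, t, waveform, p0=p0)
print(f"amplitude={amp:.1f}, position={mu:.2f} samples, width={sigma:.2f} samples")
```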