Abstract PNT stands for Positioning, Navigation, and Timing. Space-based PNT refers to the capabilities enabled by GNSS, and enhanced by Ground and Space-based Augmentation Systems (GBAS and SBAS), which provide position, velocity, and timing information to an unlimited number of users around the world, allowing every user to operate in the same reference system and timing standard. Such information has become increasingly critical to the security, safety, prosperity, and overall quality of life of many citizens. As a result, space-based PNT is now widely recognized as an essential element of the global information infrastructure. This paper discusses the importance of the availability and continuity of PNT information, whose application, scope and significance have exploded in the past 10–15 years. A paradigm shift in the navigation solution has been observed in recent years. It has been manifested by an evolution from traditional single sensor-based solutions, to multiple sensor-based solutions and ultimately to collaborative navigation and layered sensing, using non-traditional sensors and techniques – so-called signals of opportunity. A joint working group under the auspices of the International Federation of Surveyors (FIG) and the International Association of Geodesy (IAG), entitled ‘Ubiquitous Positioning Systems’, investigated the use of Collaborative Positioning (CP) through several field trials over the past four years. In this paper, the concept of CP is discussed in detail and selected results of these experiments are presented. It is demonstrated here that CP is a viable solution if a ‘network’ or ‘neighbourhood’ of users is to be positioned/navigated together, as it increases the accuracy, integrity, availability, and continuity of the PNT information for all users.
Practical experience has shown that the standard deviations (STD) obtained from kinematic GPS processing software may not always reflect the actual error, due to the lack of information about functional correlation. Neglecting the physical correlation between epochs, systematic errors and improperly modeled parameters may cause incorrect estimation of the standard deviation. A number of GPS processing software packages consider the variance-covariance matrix of the observations without correlation, or simply deal with the diagonal components only. As a consequence, the a posteriori standard deviations of the estimated coordinates could be too optimistic and may not represent the actual quality of the estimated coordinates. This is particularly important when the improperly estimated STDs are used as a priori values for GPS/IMU (Global Positioning System/Inertial Measurement Unit) Kalman filtering, and consequently define the georeferencing error budget of imaging sensors, such as airborne digital cameras, LiDAR (Light Detection and Ranging) and IfSAR (Interferometric Synthetic Aperture Radar). The primary objective of the research presented in this paper is to assess the STD reliability of kinematic GPS processing using the NGS KARS and Applanix POSGPS software packages. In this research, the scale factor (SF) was used as a reliability indicator for software-generated STDs. In order to assess the STD reliability of kinematic GPS processing, two data sets from the San Andreas and San Jacinto Faults LiDAR Mapping project were used. Since there is no easily available absolute reference for an airborne trajectory, in our investigations one base station was used as a rover, called the simulated rover, to create kinematic solutions with respect to several other reference stations with varying separations from the simulated rover. The two daily data sets from the LiDAR Mapping project were processed in kinematic mode using L3 (ion-free) observations.
The single-baseline solutions from the NGS KARS and Applanix POSGPS software and the multi-baseline (network) solutions from Applanix POSGPS were obtained, and the scale factor was computed by comparing the software-generated STD, called the formal error, to the weighted standard deviation. The reliability of the STDs computed from NGS KARS and Applanix POSGPS was analyzed.
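The scale factor idea described above can be sketched as follows. The abstract does not spell out the exact SF formula, so this is a minimal illustration under one plausible definition: the RMS of the coordinate errors normalized by the software-reported formal STDs, so that SF ≈ 1 indicates realistic formal errors and SF ≫ 1 indicates overly optimistic ones.

```python
import math

def scale_factor(errors, formal_stds):
    """One plausible scale factor (SF) definition: RMS of the
    coordinate errors (true minus estimated position) normalized
    by the software-reported (formal) STDs. SF close to 1 means
    the formal errors are realistic; SF >> 1 means the reported
    STDs are too optimistic."""
    n = len(errors)
    return math.sqrt(sum((e / s) ** 2
                         for e, s in zip(errors, formal_stds)) / n)

# If the formal STDs match the actual error level, SF is ~1:
sf_realistic = scale_factor([0.012, -0.024, 0.018],
                            [0.012, 0.024, 0.018])

# If the actual errors are twice the reported STDs, SF is ~2,
# i.e. the formal errors are optimistic by a factor of two:
sf_optimistic = scale_factor([0.040, -0.044],
                             [0.020, 0.022])
```

A simulated-rover setup, as in the paper, makes this computable in practice: the base station's known coordinates supply the "true" errors that an airborne trajectory cannot.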
Kinematic orbit determination of Low Earth Orbiters (LEO) based on GPS observables offers a viable alternative to the predominantly dynamic orbit determination approaches. Since kinematic orbit determination (OD) requires neither dynamic force models nor the physical properties of the LEO, the OD procedures are much simpler and computationally more efficient than the dynamic OD approach. However, the quality of kinematic OD strongly depends on the quality and continuity of the GPS data, and on the geometry between the GPS satellites, the LEO and the ground stations. The main purpose of this paper is to present the analysis of the GPS data quality and the data prescreening procedures for the kinematic orbit determination of the CHAMP satellite (German CHAllenging Minisatellite Payload, launched on July 15, 2000). Since, as mentioned above, the accuracy of the kinematic orbit determination highly depends on the configuration of the GPS satellites as well as the ground stations, the discussion and test results related to various configurations are presented. Since the kinematic method displays a strong dependence on data quality and continuity, data prescreening is of major importance. The prescreening consists mainly of cycle slip (CS) detection for CHAMP and the ground stations. Two test quantities, namely the one-way ionospheric residuals of the phase and phase/range linear combinations, as well as the wide-lane combination, are used for CS detection. While the ground stations show less than 3% cycle slips, CHAMP shows a much higher number (about 5.5%) over a 24-hour period, based on the test data analyzed here. It should be mentioned, however, that the classical methods of CS detection were found not fully reliable for a LEO moving very fast (CHAMP: ~7.6 km/s) in the middle of the ionospheric layer. In order to analyze the reliability of the classical method of CS detection in that case, a dynamic solution was used as a true reference. The CS analysis, their effect on the continuity and quality of the orbit, and the effects of the geometry as well as the elevation cutoff and processing batch length on the final kinematic orbit accuracy are presented in this paper.
In global navigation satellite system precise positioning, double differencing of the observations is the common approach that allows for significant reduction of correlated atmospheric effects. However, with growing distance between the receivers, tropospheric errors decorrelate, causing large residual errors affecting the carrier phase ambiguity resolution and positioning quality. This is especially true in the case of height differences between the receivers. In addition, the accuracy achieved by using standard atmosphere models is usually unsatisfactory when the tropospheric conditions at the receiver locations are significantly different from the standard atmosphere. This paper presents an evaluation of three different approaches to troposphere modeling: (a) neglecting the troposphere, (b) using a standard atmosphere model, and (c) estimating tropospheric delays at the reference station network and providing interpolated tropospheric corrections to the user. All these solutions were repeated with various constraints imposed on the tropospheric delays in the least-squares adjustment. The quality of each solution was evaluated by analyzing the residual height errors calculated by comparing the estimated results to the reference coordinates. Several permanent GPS stations of the EUPOS (European Position Determination System) active geodetic network located in the Carpathian Mountains were selected as a test reference network. The distances between the reference stations ranged from 64 to 122 km. The KRAW station served as a simulated user receiver located inside the reference network. The user receiver ellipsoidal height is 267 m and the reference station heights range from 277 to 647 m. The results show that regardless of station height differences, it is recommended to model the tropospheric delays at the reference stations and interpolate them to the user receiver location.
The most noticeable influence of the residual (unmodeled) tropospheric errors is observed in the station height component. In many cases, mismodeling of the troposphere disrupts ambiguity resolution and, therefore, prevents the user from obtaining an accurate position.
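Approach (c) above, interpolating network-derived tropospheric corrections to the user, can be sketched in a few lines. The abstract does not state which interpolation scheme was used, so inverse-distance weighting serves here purely as a placeholder; a real network solution would likely also account for the station-to-user height difference, which the abstract identifies as the dominant factor.

```python
import math

def interpolate_ztd(user_xy, stations):
    """Interpolate zenith tropospheric delays (ZTD, meters) that
    were estimated at the reference stations to the user location.
    Inverse-distance weighting (IDW) is used as a simple placeholder
    interpolator; the paper's actual scheme may differ, e.g. by
    modeling the height dependence of the delay explicitly.

    stations: iterable of (x, y, ztd) tuples in a local planar frame.
    """
    num = den = 0.0
    for x, y, ztd in stations:
        d = math.hypot(x - user_xy[0], y - user_xy[1])
        if d < 1e-9:          # user coincides with a station
            return ztd
        w = 1.0 / d           # weight falls off with distance
        num += w * ztd
        den += w
    return num / den

# A user halfway between two stations gets the mean of their ZTDs:
ztd_user = interpolate_ztd((50_000.0, 0.0),
                           [(0.0, 0.0, 2.30),
                            (100_000.0, 0.0, 2.40)])
```

The correction is then applied to the user's double-differenced observations in place of (or as an a priori constraint on) an estimated tropospheric parameter, which is what allows reliable ambiguity resolution over the 64–122 km baselines described above.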
Full waveform LiDAR data have been available for many years, yet applications have only recently started discovering their potential in airborne topographic surveying. Forestry and earth sciences applications have traditionally used waveform processing for many years, but topographic mapping has just started exploring the benefits of waveform data. The potential advantages are improved point cloud generation, better object surface characterization, and support for object classification. However, there are several implementation and performance issues, such as the availability of waveform processing tools and waveform compression methods, that should be addressed before applications can take full advantage of the availability of waveform data. The paper provides an overview of the waveform application potential in both airborne and mobile LiDAR mapping applications.
Technological advances in positioning and imaging sensors, combined with the explosion in wireless mobile communication systems that occurred during the last decade of the twentieth century, practically redefined and substantially extended the concept of mobile mapping. The advent of the first mobile mapping systems (MMS) in the early 1990s initiated the process of establishing modern, virtually ground-control-free photogrammetry and digital mapping. By the end of the last decade, mobile mapping technology had made remarkable progress, evolving from rather simple land-based systems to more sophisticated, real-time multitasking and multisensor systems, operational in land and airborne environments. New specialized systems, based on modern imaging sensors, such as CCD (charge-coupled device) cameras, lidar (Light Detection and Ranging) and hyperspectral/multispectral scanners, are being developed, aimed at automatic data acquisition for geoinformatics, thematic mapping, land classification, terrain modeling, emergency response, homeland security, etc. This paper provides an overview of the mobile mapping concept, with a special emphasis on the MMS paradigm shift from the post-mission to near-real-time systems that occurred in the past few years. A short review of the direct georeferencing concept is given, and the major techniques (sensors) used for platform georegistration, as well as the primary radiolocation techniques based on wireless networks, are presented. An overview of the major imaging sensors and the importance of multisensor system calibration are also provided. Future perspectives of mobile mapping and its extension towards telegeoinformatics are also discussed. Some examples of mobile geospatial technology used in automatic object recognition, real-time highway centerline mapping, thematic mapping, and city modeling with lidar and multispectral imagery are included.
3D Flash LADAR cameras provide ranging information at a high frame rate, offering the opportunity to navigate from imaged features. This requires the identification of common features from multiple images acquired from different positions. It is necessary to determine whether the features exhibit a static condition, such that moving elements in the imagery are not utilized as part of the navigation solution. Extending previous research (ION April 2006, ION September 2006) on the utilization of implicit polynomials for feature extraction, a new algorithmic approach for the extraction of static and non-static features is presented.