Accurate identification of accident-prone areas and understanding of the influential urban environmental factors for ride-hailing vehicles are of paramount importance for ensuring road safety. While past research has predominantly focused on the influence of ride-hailing drivers’ behaviors on road accidents, the role of various urban environmental factors has long been understudied. To address this gap, this work presents an interpretable machine learning approach to predict accident-prone areas for ride-hailing vehicles based on intertwined urban environmental features. The task of identifying accident-prone areas is formulated as a binary classification problem. Accident-prone areas in the cities are predicted with two ensemble decision tree models, and crowdsourced accident datasets from two major Chinese cities are used for model validation. The results demonstrate that the models achieve good accuracy (over 0.7) in identifying accident-prone/non-prone areas. Notably, our methodology delivers a high true positive rate (0.74) coupled with a low miss rate, attesting to the model's practicality for accurately predicting accident-prone areas. The model interpretation results pinpoint topographical features, such as elevation and slope, as critical influencers of ride-hailing road accidents. The cross-region transferability analysis reveals that feature importance and model transferability vary among cities, suggesting that the urban environmental determinants of ride-hailing accidents are not uniform and that localized model adjustments are necessary for accurate accident-prone area prediction. The data-driven machine learning models presented provide a useful tool for predicting accident-prone areas, and the findings reveal novel insights into the influence of intertwined urban features on ride-hailing accidents, facilitating integrated urban design towards road safety improvement and risk mitigation.
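The binary-classification formulation above can be sketched with a small, self-contained ensemble of decision stumps that votes grid cells "accident-prone" or not. This is a minimal illustration, not the paper's implementation: the two features (elevation, slope) and all labeled cells are made up, and the abstract's actual models are more capable ensemble decision trees.

```python
# Minimal sketch: a bagged ensemble of one-split decision trees (stumps)
# classifying city grid cells as accident-prone (1) or not (0) from two
# hypothetical environmental features: (elevation in m, slope in degrees).
import random

def train_stump(data):
    """Pick the single feature/threshold/direction split with fewest errors."""
    best = None
    for f in range(2):
        for thresh in sorted({x[f] for x, _ in data}):
            for sign in (1, -1):
                errs = sum(1 for x, y in data
                           if (1 if sign * (x[f] - thresh) > 0 else 0) != y)
                if best is None or errs < best[0]:
                    best = (errs, f, thresh, sign)
    return best[1:]  # (feature, threshold, sign)

def predict_stump(stump, x):
    f, thresh, sign = stump
    return 1 if sign * (x[f] - thresh) > 0 else 0

def train_ensemble(data, n_trees=15, seed=0):
    """Bagging: each stump is fit on a bootstrap resample of the data."""
    rng = random.Random(seed)
    return [train_stump([rng.choice(data) for _ in data])
            for _ in range(n_trees)]

def predict(ensemble, x):
    """Majority vote over all stumps."""
    votes = sum(predict_stump(s, x) for s in ensemble)
    return 1 if 2 * votes >= len(ensemble) else 0

# Toy labeled cells (entirely invented for illustration).
cells = [((10, 1), 0), ((15, 2), 0), ((12, 1), 0), ((20, 3), 0),
         ((80, 9), 1), ((95, 12), 1), ((70, 8), 1), ((110, 15), 1)]
model = train_ensemble(cells)
print(predict(model, (90, 10)))  # high elevation/slope cell -> likely 1
```

Real ensembles (e.g. gradient-boosted trees) grow deeper trees and weight them, but the voting structure is the same, and per-feature split statistics are what enables the feature-importance interpretation the abstract describes.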
Indoor positioning has attracted much attention in recent years. Wi-Fi, geomagnetic and inertial sensors are commonly used sources for reconstructing pedestrians' trajectories, but the error drift of inertial sensors and labor-intensive fingerprint construction constrain their popularity. The purpose of this paper is to design a Wi-Fi, geomagnetic and pedestrian dead-reckoning (PDR) based indoor positioning system that is accurate, practical and less labor-intensive. To achieve this goal, we design an indoor positioning system built on a walking-surveyed Wi-Fi fingerprint database and a corner reference trajectory-geomagnetic database (CRTDB). First, we propose a trajectory optimization algorithm that uses landmark observations to refine historical PDR positions via the Gauss-Newton algorithm, and then construct the walking-surveyed Wi-Fi fingerprint database and the CRTDB from the optimized PDR trajectories. Second, we propose a CRTDB matching algorithm combining Dynamic Time Warping (DTW) and the Pearson Correlation Coefficient (PCC) to find the coordinates of observed corners. Third, we propose a positioning system that fuses PDR, Wi-Fi fingerprints and the CRTDB via Kalman filtering (KF). Finally, we evaluate the performance and effectiveness of the proposed algorithms on two open datasets. The experimental results show that with the proposed CRTDB optimization, the median error improves by more than 33% and 27%, respectively, compared with PDR and Wi-Fi fusion.
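The DTW-and-PCC matching step can be illustrated with a short sketch: an observed geomagnetic-magnitude sequence is scored against stored corner reference sequences, and the best-scoring corner is returned. All names and data here are hypothetical, and the combined score is one simple way to merge the two measures, not necessarily the paper's.

```python
# Sketch of DTW + Pearson-correlation matching against a corner database.
# Sequences are 1-D magnetic magnitudes (uT); values are invented.
def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two sequences."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

def pearson(a, b):
    """Pearson correlation; assumes equal-length, non-constant sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def match_corner(observed, database):
    """database: {corner_id: reference_sequence}. Lower score = better:
    small DTW distance and correlation near +1 both shrink the score."""
    return min(database,
               key=lambda c: dtw_distance(observed, database[c])
                             * (1.0 - pearson(observed, database[c])))

corners = {"A": [40, 42, 47, 55, 47, 42],   # magnitude rises then falls
           "B": [40, 38, 33, 28, 33, 38]}   # magnitude dips then recovers
obs = [41, 43, 48, 54, 46, 41]              # resembles corner A's signature
print(match_corner(obs, corners))           # -> A
```

Once a corner is matched, its stored coordinates serve as a landmark observation for the Kalman-filter fusion stage.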
Ubiquitous indoor positioning technology plays an important role in providing indoor location-based services (iLBS) to the public. At this stage, crowdsourced multi-modal data fusion is regarded as an effective way to realize ubiquitous indoor positioning, especially for large-scale indoor spaces, based on public daily-life trajectories and local positioning stations. An effective uncertainty-error evaluation method for daily-life trajectories is therefore key to generating a high-quality crowdsourced navigation database and further improving the performance of the final multi-source fusion system. To solve this problem, this paper proposes a deep-learning approach for autonomously evaluating the uncertainty error of crowdsourced daily-life trajectories by learning and analyzing motion features extracted from pedestrian trajectories from both spatial and temporal perspectives. A novel deep-learning structure that accounts for the spatiotemporal characteristics of trajectories is designed, and related spatiotemporal features are extracted to form its input vector. Real-world experimental results on trajectory datasets generated in large-scale indoor scenarios indicate that the proposed structure can autonomously evaluate the uncertainty error of crowdsourced trajectories and generate navigation databases much more accurately than existing state-of-the-art algorithms.
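The feature-extraction step feeding such a network can be sketched as follows. The specific features (mean speed, speed variance, mean turn angle, path length) and the function name are hypothetical placeholders for the spatiotemporal features the abstract refers to; the point is only that a timestamped trajectory is reduced to a fixed-length input vector.

```python
# Hypothetical sketch: turn a crowdsourced trajectory of timestamped 2-D
# points into a small fixed-length spatiotemporal feature vector that a
# downstream network could score for uncertainty.
import math

def trajectory_features(traj):
    """traj: list of (t, x, y) samples, t strictly increasing.
    Returns [mean speed, speed variance, mean |turn angle| (rad), path length]."""
    dists, speeds, turns = [], [], []
    for i in range(1, len(traj)):
        t0, x0, y0 = traj[i - 1]
        t1, x1, y1 = traj[i]
        d = math.hypot(x1 - x0, y1 - y0)
        dists.append(d)
        speeds.append(d / (t1 - t0))
        if i >= 2:
            _, xp, yp = traj[i - 2]
            h_prev = math.atan2(y0 - yp, x0 - xp)   # previous segment heading
            h_curr = math.atan2(y1 - y0, x1 - x0)   # current segment heading
            # Wrap the heading difference into (-pi, pi] before taking |.|
            diff = math.atan2(math.sin(h_curr - h_prev),
                              math.cos(h_curr - h_prev))
            turns.append(abs(diff))
    mean_speed = sum(speeds) / len(speeds)
    speed_var = sum((s - mean_speed) ** 2 for s in speeds) / len(speeds)
    mean_turn = sum(turns) / len(turns) if turns else 0.0
    return [mean_speed, speed_var, mean_turn, sum(dists)]

straight = [(0, 0, 0), (1, 1, 0), (2, 2, 0), (3, 3, 0)]
zigzag = [(0, 0, 0), (1, 1, 0), (2, 1, 1), (3, 2, 1), (4, 2, 2)]
print(trajectory_features(straight))  # -> [1.0, 0.0, 0.0, 3.0]
print(trajectory_features(zigzag))    # large mean turn (right angles)
```

A smooth, steadily walked trajectory yields low variance and small turn angles, which is the kind of regularity an uncertainty-scoring network can learn to associate with low trajectory error.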
To address the problems of traditional home control networks, such as complex cabling, blind communication and high power consumption, this paper designs and implements an embedded smart home system based on ZigBee and GPRS. The system is built on the ARM11 embedded processor S3C6410 running the ARM-Linux real-time operating system. It establishes a home wireless network that wirelessly links the controller and home intelligent modules via ZigBee, and connects the controller to mobile phones via the GPRS network. Tests show that the system offers good real-time performance and high reliability, is easy to extend and operate, and has broad market application potential.
Constructing colorized point clouds from mobile laser scanning and images is a fundamental task in surveying and mapping. It is also an essential prerequisite for building digital twins for smart cities. However, existing public datasets are either relatively small in scale or lack accurate geometrical and color ground truth. This paper documents a multisensorial dataset named PolyU-BPCoMA which is distinctively positioned towards mobile colorized mapping. The dataset incorporates 3D LiDAR, spherical imaging, GNSS and IMU resources on a backpack platform. Color checker boards are pasted in each surveyed area as targets, and ground-truth data are collected by an advanced terrestrial laser scanner (TLS). 3D geometrical and color information can be recovered from the colorized point clouds produced by the backpack system and the TLS, respectively. Accordingly, we provide an opportunity to benchmark the mapping and colorization accuracy of a mobile multisensorial system simultaneously. The dataset is approximately 800 GB in size, covering both indoor and outdoor environments. The dataset and development kits are available at https://github.com/chenpengxin/PolyU-BPCoMa.git.