Traffic analysis is crucial for urban operations and planning, yet dense urban traffic data beyond loop detectors remain scarce. We present a large-scale floating vehicle dataset of per-street-segment traffic information, Metropolitan Segment Traffic Speeds from Massive Floating Car Data in 10 Cities (MeTS-10), available for 10 global cities at a 15-minute resolution, with collection periods ranging between 108 and 361 days in 2019–2021 and covering more than 1500 square kilometers per metropolitan area. MeTS-10 features traffic speed information at all street levels, from main arterials to local streets, for Antwerp, Bangkok, Barcelona, Berlin, Chicago, Istanbul, London, Madrid, Melbourne, and Moscow. The dataset leverages the industrial-scale floating vehicle Traffic4cast data, with speeds and vehicle counts provided in a privacy-preserving spatio-temporal aggregation. We detail the efficient matching approach that maps the data to the OpenStreetMap (OSM) road graph. We evaluate the dataset by comparing it with publicly available stationary vehicle detector data (for Berlin, London, and Madrid) and the Uber traffic speed dataset (for Barcelona, Berlin, and London). The comparison highlights differences across datasets in spatio-temporal coverage and variations in the reported traffic caused by the binning method. MeTS-10 enables novel, city-wide analysis of mobility and traffic patterns for ten major world cities, overcoming current limitations of spatially sparse vehicle detector data. The large spatial and temporal coverage offers an opportunity to join MeTS-10 with other datasets, such as traffic surveys in traffic planning studies or vehicle detector data in traffic control settings.
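The privacy-preserving spatio-temporal aggregation described above can be illustrated with a minimal sketch: probe records are grouped per street segment into 15-minute bins, and bins with too few probes are suppressed. All names, the `min_count` threshold, and the toy probe records below are hypothetical illustrations, not the actual MeTS-10 pipeline.

```python
from collections import defaultdict

def aggregate_speeds(probes, bin_minutes=15, min_count=3):
    """Aggregate floating-car probe records into per-segment time bins.

    probes: iterable of (segment_id, timestamp_minutes, speed_kmh).
    Bins with fewer than `min_count` probes are suppressed, mimicking a
    privacy-preserving aggregation (threshold chosen for illustration).
    Returns {(segment_id, bin_index): (mean_speed_kmh, probe_count)}.
    """
    bins = defaultdict(list)
    for seg, t, speed in probes:
        bins[(seg, int(t // bin_minutes))].append(speed)
    return {
        key: (sum(v) / len(v), len(v))
        for key, v in bins.items()
        if len(v) >= min_count
    }

probes = [
    ("A", 0, 30.0), ("A", 5, 40.0), ("A", 14, 50.0),  # segment A, bin 0
    ("A", 16, 60.0),                                   # segment A, bin 1 (single probe)
    ("B", 2, 20.0), ("B", 3, 22.0), ("B", 4, 24.0),    # segment B, bin 0
]
result = aggregate_speeds(probes)
# Segment A's bin 1 is suppressed (count < min_count); bin 0 averages 40.0 km/h.
```

The single-probe bin is dropped, which shows why such aggregation both protects individual vehicles and creates the coverage gaps discussed in the dataset comparison.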
Modeling place functions from a computational perspective is a prevalent research topic. Trajectory embedding, a neural-network-based dimensionality reduction technique, makes it possible to place locations with similar social functions close together in the embedding space, provided the places share a similar chronological context as part of a trajectory. Embedding similarity has previously been proposed as a new metric for measuring the similarity of place functions. This study explores whether this approach remains meaningful for geographical units at a much finer geographical granularity than in previous studies. In addition, it investigates whether geographical distance influences the embedding similarity. Empirical evaluations based on a large vehicle trajectory dataset confirm that embedding similarity can serve as a proxy metric for place functions. However, the results also show that the embedding similarity is still bounded by distance at the local scale.
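The embedding-similarity metric mentioned above is typically computed as cosine similarity between learned place vectors. The sketch below assumes hypothetical, hand-picked embedding vectors (the names `mall_a`, `mall_b`, `park` and their values are illustrative only, not learned from data) to show how functionally similar places score higher than dissimilar ones.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical 3-d embeddings; two shopping malls share a similar
# chronological context in trajectories, so their vectors point alike.
emb = {
    "mall_a": [0.9, 0.1, 0.2],
    "mall_b": [0.8, 0.2, 0.1],
    "park":   [0.1, 0.9, 0.3],
}

sim_same_function = cosine_similarity(emb["mall_a"], emb["mall_b"])
sim_diff_function = cosine_similarity(emb["mall_a"], emb["park"])
```

Under this toy setup, `sim_same_function` exceeds `sim_diff_function`, which is the behavior the study tests at a finer spatial granularity.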
Simplification of building footprints is an essential task in topographic map generalization from large to medium scales. Traditional rule- or constraint-based algorithms commonly require cartographers to enumerate and formalize as many scenarios as possible. Recently, some studies have introduced deep learning to image map generalization, whose outputs, however, may exhibit deformed boundaries due to the purely image-based input. Vector maps are thus a reasonable way to avoid such issues because of their accurate, object-based geometric representation. However, few existing studies have aimed to simplify buildings in vector maps with the help of neural networks. Building simplification in vector maps can be regarded as the joint contribution of two elementary operations on the vertices of building polygons: removing redundant vertices and moving the retained vertices. This research proposes a multi-task learning method with graph convolutional neural networks. The proposed method formulates the building simplification problem as a joint task of node removal classification and node movement regression. A multi-task graph convolutional neural network model (MT_GCNN) is developed to learn node removal and movement simultaneously. The model was evaluated on a map of Stuttgart, Germany, containing 8494 buildings generalized from the source scale of 1:5,000 to the target scale of 1:10,000. The experimental results show that, compared to the ground-truth target buildings, the proposed method can generate 80% of the buildings with positional errors of less than 0.2 m, 95% with a shape difference under 0.5, and around 98% with an IoU-based area difference under 0.1, thus demonstrating the feasibility of the proposed method. The code is available at: https://github.com/chouisgiser/MapGeneralizer.
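The multi-task formulation above, one shared graph representation feeding a node-removal classification head and a node-movement regression head, can be sketched in a few lines of NumPy. This is a minimal illustration of the architecture pattern, not the actual MT_GCNN: the layer sizes, random weights, and toy 4-vertex polygon are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gcn_layer(A, X, W):
    """One graph-convolution layer: row-normalized adjacency (with
    self-loops) times node features times weights, followed by ReLU."""
    A_hat = A + np.eye(A.shape[0])               # add self-loops
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))     # row normalization
    return np.maximum(D_inv @ A_hat @ X @ W, 0.0)

# Toy building polygon: 4 vertices connected in a ring.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 3))                      # per-vertex features (e.g. x, y, turn angle)
H = gcn_layer(A, X, rng.normal(size=(3, 8)))     # shared node representation

# Two task heads sharing H (multi-task learning):
remove_logits = H @ rng.normal(size=(8, 1))      # node-removal classification head
move_offsets  = H @ rng.normal(size=(8, 2))      # node-movement (dx, dy) regression head
```

In training, a classification loss on `remove_logits` and a regression loss on `move_offsets` would be summed, so both tasks shape the shared graph representation.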
Enterprise culture is created, cultivated, and maintained over years of long-term practice by an enterprise's leadership together with its general staff; it comprises core factors such as the enterprise's values, beliefs, norms, and traditional style, and it grows out of the enterprise itself. Once an enterprise has formed its own corporate culture, that culture acquires a corresponding stability and independence and, in turn, plays a strongly active role in the enterprise's guidance, cohesion, and incentive functions. This study conducted a questionnaire survey on the enterprise culture construction of the Hebei Hengshui rubber industry group, analyzed the problems existing in its enterprise culture construction, and proposed corresponding improvement strategies, in order to improve the enterprise culture construction of the Hebei Hengshui rubber industry, enhance the comprehensive competitiveness of the Hengshui rubber industry, and contribute to the development of the Hebei rubber industry and the economy as a whole.
Poverty is a primary obstacle to achieving sustainable development. Therefore, exploring the spatiotemporal dynamics and causes of poverty is of great significance to sustainable poverty reduction in the "post poverty alleviation era" in China. This paper used multisource big data on 2022 counties in China from 2000 to 2015 to establish a comprehensive evaluation framework for exploring the multidimensional poverty situation in China. The results showed the following: There is obvious spatiotemporal heterogeneity in multidimensional poverty, with a typical stair-like gradient from high in the west to low in the east, and the poverty level in state-designated poverty counties is higher and intensified over time. The spatial differentiation of multidimensional poverty is driven by multiple factors: geographical conditions have a stronger impact on state-designated poverty counties, while natural endowment and human resources have a stronger effect on non-state-designated poverty counties. Moreover, regional poverty causes were relatively stable before 2015, but the spatial agglomeration of poverty in some regions of the Northwest, the Northeast, and the Yangtze River Economic Belt changed significantly after 2015. These findings can help policymakers better target plans to eliminate various types of poverty in different regions.
One promising way to accelerate transformer training is to reuse small pretrained models to initialize the transformer, as their existing representation power facilitates faster model convergence. Previous works designed expansion operators to scale up pretrained models to the target model size before training. Yet model functionality is difficult to preserve when scaling a transformer in all dimensions at once. Moreover, maintaining the pretrained optimizer states for weights is critical for model scaling, whereas the new weights added during expansion lack these states in pretrained models. To address these issues, we propose TripLe, which partially scales a model before training, while growing the remaining new parameters during training by copying both the warmed-up weights and the optimizer states from existing weights. As such, the new parameters introduced during training obtain their own training states. Furthermore, by serializing the scaling of model width and depth, the functionality of each expansion can be preserved. We evaluate TripLe in both single-trial model scaling and multi-trial neural architecture search (NAS). Due to the fast training convergence of TripLe, the proxy accuracy from TripLe reveals model quality better than from-scratch training does in multi-trial NAS. Experiments show that TripLe outperforms from-scratch training and knowledge distillation (KD) in both training time and task performance. TripLe can also be combined with KD to achieve an even higher task accuracy. For NAS, the model obtained from TripLe outperforms DeiT-B in task accuracy with a 69% reduction in parameter size and FLOPs.
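The key idea of growing new parameters by copying warmed-up weights together with their optimizer states can be sketched as follows. This is a simplified illustration of the concept, not TripLe's actual expansion operator: the function name, the row-replication scheme, and the toy Adam moment arrays are assumptions.

```python
import numpy as np

def grow_width(weight, adam_m, adam_v, new_out):
    """Grow a weight matrix's output dimension from `weight.shape[0]` to
    `new_out` by replicating existing rows together with their Adam
    first/second moment estimates, so the newly added parameters start
    with warmed-up optimizer states rather than zeros (a sketch of the
    idea described for TripLe)."""
    old_out = weight.shape[0]
    idx = np.arange(new_out - old_out) % old_out   # which rows to replicate
    grow = lambda a: np.concatenate([a, a[idx]], axis=0)
    return grow(weight), grow(adam_m), grow(adam_v)

# Toy layer with 3 output units, grown to 5 during training.
w = np.arange(6, dtype=float).reshape(3, 2)
m = np.full((3, 2), 0.1)    # Adam first-moment estimate
v = np.full((3, 2), 0.01)   # Adam second-moment estimate
w2, m2, v2 = grow_width(w, m, v, new_out=5)
# Rows 3 and 4 are copies of rows 0 and 1, including their optimizer states.
```

Copying the moment estimates alongside the weights is what gives the new parameters sensible per-parameter learning-rate scaling immediately, instead of the cold start that zero-initialized optimizer states would cause.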