Very high-resolution remote sensing images hold great promise for ground observation tasks, enabling highly competitive image-processing solutions for land cover classification. To address the difficulty convolutional neural networks (CNNs) have in exploiting contextual information for land cover classification of remote sensing images, and the limitations of the vision transformer (ViT) family in capturing local details and spatial information, we propose a local feature acquisition and global context understanding network (LFAGCU). Specifically, we design a multidimensional, multichannel convolutional module as a local feature extractor that captures local information and spatial relationships within images. In parallel, we introduce a global feature learning module that uses multiple sets of multi-head attention to model global semantic information and abstract the overall feature representation of remote sensing images. Validation, comparative analyses, and ablation experiments on three publicly available datasets of different scales demonstrate that LFAGCU effectively locates category attribute information of remote sensing regions and generalizes well. Code is available at https://github.com/lzp-lkd/LFAGCU.
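As a minimal illustrative sketch (not the authors' implementation), the block below pairs a convolutional local-feature extractor with multi-head self-attention for global context, in the spirit of the LFAGCU design; the module names, channel counts, and head counts are assumptions.

```python
# Hedged sketch of a hybrid local-conv + global-attention block (PyTorch).
# Hyperparameters and structure are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn


class LocalFeatureExtractor(nn.Module):
    """Depthwise + pointwise convolutions to capture local detail and spatial relations."""
    def __init__(self, channels):
        super().__init__()
        self.depthwise = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)
        self.pointwise = nn.Conv2d(channels, channels, 1)
        self.norm = nn.BatchNorm2d(channels)
        self.act = nn.GELU()

    def forward(self, x):
        return self.act(self.norm(self.pointwise(self.depthwise(x)))) + x


class GlobalContextBlock(nn.Module):
    """Multi-head self-attention over flattened spatial positions for global semantics."""
    def __init__(self, channels, num_heads=4):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x):
        b, c, h, w = x.shape
        tokens = self.norm(x.flatten(2).transpose(1, 2))      # (B, H*W, C)
        attn_out, _ = self.attn(tokens, tokens, tokens)
        return x + attn_out.transpose(1, 2).reshape(b, c, h, w)


class HybridBlock(nn.Module):
    """Local details first, then global context aggregation."""
    def __init__(self, channels, num_heads=4):
        super().__init__()
        self.local = LocalFeatureExtractor(channels)
        self.globl = GlobalContextBlock(channels, num_heads)

    def forward(self, x):
        return self.globl(self.local(x))


if __name__ == "__main__":
    feats = torch.randn(2, 64, 32, 32)          # dummy feature map
    print(HybridBlock(64)(feats).shape)         # torch.Size([2, 64, 32, 32])
```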
Because the cumulative error of an inertial measurement unit (IMU) causes its accuracy to diverge, a single IMU is difficult to use on its own for vehicle positioning. This paper therefore proposes an IMU pose state estimation algorithm based on a modulation long short-term memory network combined with the unscented Kalman filter (ML-UKF). First, the algorithm improves the memory mechanism of the LSTM network by using a Modulation LSTM and establishes the IMU state and observation models. Then, to adapt the deep learning component to the UKF, an equally spaced sigma point sampling method is proposed. Finally, the IMU pose state estimation performance is verified experimentally. Results show that the root mean square error of the ML-UKF algorithm decreases by 65.43% relative to the state of the art, confirming the effectiveness of the proposed algorithm.
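For context, the sketch below shows the standard scaled unscented-transform sigma-point construction that a UKF builds on; the paper's equal-spacing sampling and Modulation LSTM are not reproduced here, and the toy state dimension, process model, and noise values are assumptions.

```python
# Hedged NumPy sketch of the standard unscented transform underlying a UKF.
# The ML-UKF's equal-spacing sigma sampling is NOT implemented here.
import numpy as np


def sigma_points(mean, cov, alpha=1e-3, beta=2.0, kappa=0.0):
    """Generate 2n+1 scaled sigma points and their mean/covariance weights."""
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    sqrt_cov = np.linalg.cholesky((n + lam) * cov)          # columns span the spread

    points = np.vstack([mean, mean + sqrt_cov.T, mean - sqrt_cov.T])
    w_mean = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    w_cov = w_mean.copy()
    w_mean[0] = lam / (n + lam)
    w_cov[0] = w_mean[0] + (1 - alpha**2 + beta)
    return points, w_mean, w_cov


def unscented_transform(points, w_mean, w_cov, noise_cov):
    """Recover mean and covariance after propagating sigma points through a model."""
    mean = w_mean @ points
    diff = points - mean
    cov = diff.T @ (w_cov[:, None] * diff) + noise_cov
    return mean, cov


if __name__ == "__main__":
    x = np.array([0.0, 1.0, 0.1])               # toy IMU-like state: position, velocity, bias
    P = np.eye(3) * 0.01
    pts, wm, wc = sigma_points(x, P)
    pred_pts = pts + 0.1 * pts[:, [1, 2, 2]]    # placeholder process model (assumption)
    x_pred, P_pred = unscented_transform(pred_pts, wm, wc, np.eye(3) * 1e-4)
    print(x_pred, P_pred.shape)
```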
LED luminaires have become the mainstream choice for road lighting thanks to their durability, fast response, controllability, energy efficiency, and environmental friendliness. To evaluate highways lit by LED luminaires, we recently developed on-site measurements of the photometric characteristics of the lane and the luminaires, based on luminance images, illuminance, and spectral illuminance distributions; these measurements are evaluated as uniformity, colorimetry, and glare parameters under different lamppost heights and spacings in an experimental expressway field. We applied an image luminance measurement device to achieve on-site, real-time road lighting evaluation, particularly for expressways. Preliminary results were obtained from these experiments and will be applied to developing standards and specifications for expressway road lighting.
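As a small illustrative sketch, the code below computes two uniformity metrics commonly derived from a measured road-surface luminance grid (overall uniformity as minimum over average, longitudinal uniformity as min/max along a lane centreline); the grid values and lane layout are illustrative assumptions, not measurement data from these experiments.

```python
# Hedged sketch of luminance uniformity metrics from a measured grid (NumPy).
import numpy as np


def overall_uniformity(luminance):
    """U0: minimum luminance over the whole measurement grid divided by its average."""
    return luminance.min() / luminance.mean()


def longitudinal_uniformity(luminance, centreline_row):
    """Ul: min/max luminance along one lane centreline (a single grid row)."""
    line = luminance[centreline_row, :]
    return line.min() / line.max()


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy luminance grid (cd/m^2): rows across the carriageway, columns along it.
    grid = 1.0 + 0.3 * rng.random((6, 30))
    print(f"U0 = {overall_uniformity(grid):.2f}")
    print(f"Ul (lane 1) = {longitudinal_uniformity(grid, centreline_row=1):.2f}")
```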
Precise and robust extrinsic parameter calibration is fundamental for LiDAR-camera multi-modal sensing applications. However, most existing methods assume that the sensors share the same orientation, which limits their effectiveness in extracting and aligning features across different viewing angles in multi-angle sensing scenarios. Moreover, the calibration accuracy of existing methods is insufficient for high-performance applications. To address these limitations, we propose a novel automatic extrinsic parameter calibration method based on a spherical target. We introduce the Curvature Consistency Spherical Detection (CCSD) algorithm for recognizing the sphere in the LiDAR point cloud. CCSD leverages the sphere's structural attributes, enabling robust detection under noise and partial occlusion. To improve sphere detection in the camera image, we present an enhanced ellipse detection technique and compensate for the eccentricity error arising from spherical projection, based on the principle of perspective transformation. Extensive simulations and real-world experiments demonstrate the proposed method's superiority in accuracy and practicality over state-of-the-art (SOTA) methods.
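As a hedged baseline sketch, the code below fits a sphere centre and radius to LiDAR points by an algebraic least-squares fit, a common preliminary step when locating a spherical calibration target; this is not the paper's CCSD algorithm, and the synthetic target parameters and noise level are assumptions.

```python
# Hedged sketch: algebraic least-squares sphere fit to (possibly partial) LiDAR points.
import numpy as np


def fit_sphere(points):
    """Fit centre c and radius r to Nx3 points by solving |p - c|^2 = r^2 linearly."""
    A = np.hstack([2.0 * points, np.ones((points.shape[0], 1))])
    b = np.sum(points**2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    centre, k = sol[:3], sol[3]                 # k = r^2 - |c|^2
    radius = np.sqrt(k + centre @ centre)
    return centre, radius


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_centre, true_radius = np.array([1.5, -0.4, 2.0]), 0.25   # assumed target
    # Simulate a noisy, partially visible spherical cap as a LiDAR might observe it.
    phi = rng.uniform(0, np.pi / 2, 500)
    theta = rng.uniform(0, 2 * np.pi, 500)
    pts = true_centre + true_radius * np.column_stack(
        [np.sin(phi) * np.cos(theta), np.sin(phi) * np.sin(theta), np.cos(phi)]
    )
    pts += rng.normal(scale=0.002, size=pts.shape)
    c, r = fit_sphere(pts)
    print("centre:", np.round(c, 3), "radius:", round(float(r), 3))
```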