Abstract This paper proposes a novel distance measurement method using superresolved light-field images. We first superresolve the light field from a hybrid cross-resolution input using a method that combines PatchMatch with a convolutional neural network, taking advantage of both approaches: it exploits the similarity between images and handles challenging scenes effectively. The scaling factor is up to eight times, much larger than ordinary superresolution scaling factors, while our method still maintains satisfactory accuracy. Building on the traditional idea of focus ranging, a light-field 3D measurement method is established: a focus (sharpness) evaluation function suited to light-field data is selected to evaluate imaging quality, and a triangle-barycenter interpolation method is proposed to measure 3D scenes. Experimental results demonstrate that the proposed method not only improves the quality of the reconstructed high-resolution light field but is also capable of distance measurement, indicating that light-field imaging is a promising 3D measurement technique.
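The triangle-barycenter interpolation step can be illustrated with the standard barycentric-coordinate scheme: a query point inside a triangle is expressed as a weighted combination of the three vertices, and the vertex depths are blended with the same weights. The abstract does not give the paper's exact formulation, so the function names and the depth-interpolation use below are illustrative assumptions:

```python
import numpy as np

def barycentric_weights(p, a, b, c):
    """Barycentric coordinates (u, v, w) of point p in triangle (a, b, c)."""
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01          # zero only for a degenerate triangle
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    u = 1.0 - v - w
    return u, v, w

def interpolate_depth(p, a, b, c, depths):
    """Blend the three vertex depth values with the barycentric weights of p."""
    u, v, w = barycentric_weights(*(np.asarray(q, float) for q in (p, a, b, c)))
    return u * depths[0] + v * depths[1] + w * depths[2]
```

At the triangle's centroid the three weights are equal, so the interpolated depth is simply the mean of the vertex depths; at a vertex it reproduces that vertex's depth exactly.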
Composite fibers made from aramid III and hydroxylated multiwalled carbon nanotubes (MWCNTs-OH) combine the excellent mechanical and electrical properties of both components, resulting in strong antistatic performance. However, to maximize the overall performance of the composite fibers, it is essential to ensure homogeneous dispersion of the MWCNTs-OH within the aramid III spinning solution and to optimize the compatibility between the two constituents. To this end, this investigation successfully dispersed the MWCNTs-OH in the spinning solution and probed the dispersion mechanism using molecular dynamics simulations. Composite fibers containing 2.4 wt% MWCNTs-OH were then fabricated by wet spinning. These fibers displayed a uniform texture and a tensile strength of 1.210 GPa, a noteworthy enhancement of 113.25% over the unmodified fiber. With respect to thermal behavior, the fibers exhibited a mass loss of 21.24% over the temperature range 0°C–538°C. From 538°C to 800°C, the mass loss fell to 10.31%, a substantial 71.03% reduction compared with the unmodified state. Remarkably, even above 800°C, the composite fibers retained a residual mass of 68.45%, a notable 61.17% increase over their unmodified counterpart. In terms of electrical properties, the fibers exhibited a specific resistance (ρ) of 3.330 × 10⁹ Ω·cm, demonstrating effective antistatic behavior. In summary, the antistatic composite fibers studied in this paper can effectively mitigate static-electricity hazards in various applications, including military protection and engineering equipment in both military and civilian fields.
Autonomous navigation and localization are the foundations of unmanned intelligent systems; continuous, stable, and reliable positioning services in unknown environments are therefore especially important. GNSS cannot localize continuously in complex environments because of weak signals, poor penetration, and susceptibility to interference, while visual navigation and localization provide only relative positions. To address this, this paper proposes a GNSS-aided visual dynamic localization method that can provide global localization services in unknown environments. Taking three frames of images and their corresponding GNSS coordinates as constraint data, the transformation matrix between the GNSS coordinate system and the world coordinate system is obtained through Horn's coordinate transformation. The relative positions of the subsequent image sequences in the world coordinate system are then obtained through epipolar geometry constraints, homography matrix transformations, and 2D–3D position-and-orientation solving, which ultimately yields the global positions of unmanned carriers in the GNSS coordinate system when GNSS is temporarily unavailable. Both dataset validation and field-data validation showed that the GNSS initial-assisted positioning algorithm is applicable where GNSS signals are intermittent: it provides global positioning coordinates with high accuracy over short periods, although it drifts when used for a long period of time.
We further compared the errors of the GNSS initial-assisted and GNSS continuous-assisted positioning systems. The results showed that the accuracy of the continuous-assisted system was two to three times better than that of the initial-assisted system, proving that the GNSS continuous-assisted positioning algorithm can maintain positioning accuracy over long periods, with good reliability and applicability in unknown environments.
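The Horn coordinate transformation referenced above solves the absolute-orientation problem: given matched point sets in the world frame and the GNSS frame, find the similarity transform (scale, rotation, translation) that best aligns them. A minimal SVD-based sketch follows (the Kabsch/Umeyama variant of Horn's closed-form solution; the function name and the use of NumPy are assumptions, and the paper's exact implementation may differ):

```python
import numpy as np

def horn_similarity_transform(src, dst):
    """Estimate (s, R, t) with dst ≈ s * R @ src_i + t for matched 3D points,
    in the least-squares sense (absolute orientation, solved via SVD)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    # Cross-covariance between the centered sets; its SVD gives the rotation.
    H = src_c.T @ dst_c
    U, S, Vt = np.linalg.svd(H)
    D = np.eye(3)
    if np.linalg.det(Vt.T @ U.T) < 0:      # guard against a reflection solution
        D[2, 2] = -1.0
    R = Vt.T @ D @ U.T
    s = np.trace(np.diag(S) @ D) / (src_c ** 2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t
```

With at least three non-collinear correspondences (as in the three-frame constraint described above) the transform is fully determined; extra points simply tighten the least-squares fit.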
When repairing holes in a point cloud, the indeterminate boundaries of fragmentary regions at the edge of the cloud make repair difficult. To address this, this article proposes a method for repairing fragmentary areas at the edge of a point cloud based on image edge extraction and least-squares support vector machines (LS-SVM). After registering the point cloud with the corresponding image, sub-pixel edges are extracted from the image. The training points and sub-pixel edges are then projected onto a constructed characteristic plane to determine the boundary and positions for re-sampling. Finally, the surface equation of the fragmentary area is fitted with LS-SVM to accomplish the repair. Experimental results demonstrate that the method achieves accurate, fine repair.
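The LS-SVM fit reduces to solving a single linear system (Suykens' formulation), which is what makes it attractive for surface fitting. A minimal regression sketch with an RBF kernel is shown below, assuming the fragmentary surface is fitted as z = f(x, y); the kernel choice, parameter values, and function names are illustrative, not the article's exact setup:

```python
import numpy as np

def rbf_kernel(A, B, sigma=0.5):
    """Gaussian RBF kernel matrix between row-wise point sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_fit(X, y, gamma=100.0, sigma=0.5):
    """Solve the LS-SVM regression system:
        [ 0   1^T          ] [ b     ]   [ 0 ]
        [ 1   K + I/gamma  ] [ alpha ] = [ y ]
    """
    n = len(X)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[1:], sol[0]            # alpha, b

def lssvm_predict(Xq, X, alpha, b, sigma=0.5):
    """Evaluate f(x) = sum_i alpha_i * K(x, x_i) + b at query points Xq."""
    return rbf_kernel(Xq, X, sigma) @ alpha + b
```

The re-sampled positions determined on the characteristic plane would then be passed through `lssvm_predict` to generate the missing surface points.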
Abstract The unmanned aerial vehicle (UAV) and the unmanned ground vehicle (UGV) can complete complex tasks through information sharing, ensuring the mission execution capability of multi-platform unmanned systems. At the same time, cooperative navigation can use the information interaction between multi-platform sensors to improve the relative navigation and positioning accuracy of the entire system. To address the deviation of the system model caused by gross errors in sensor measurements or by strong manoeuvres in complex environments, a robust adaptive UGV-UAV relative navigation and positioning algorithm is proposed. In this paper, UGV-UAV relative navigation and positioning is studied based on an inertial measurement unit (IMU) and vision. Based on an analysis of the relative kinematics model and the sensor measurement models, a leader (UGV)-follower (UAV) relative navigation model is established. In the implementation of the relative navigation and positioning algorithm, a robust adaptive scheme is combined with the non-linear Kalman filter (extended Kalman filter, EKF) to dynamically adjust the system state parameters. Finally, a mathematical simulation of the accompanying and landing process in a UGV-UAV cooperative scene is carried out. The relative position, velocity, and attitude errors computed by the EKF, Robust-EKF, and Robust-Adaptive-EKF algorithms are compared and analyzed for two cases, in which the noise obeys a Gaussian and a non-Gaussian distribution, respectively. The results show that under non-Gaussian noise, the accuracy of the Robust-Adaptive-EKF algorithm is improved by about two to three times over the EKF and Robust-EKF, almost reaching the relative navigation accuracy obtained under Gaussian noise.
The robust adaptive relative navigation and positioning algorithm proposed in this paper adapts well to the uncertainty and deviation of system modelling in complex environments, offering higher accuracy and stronger robustness.
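A common way to realize the robustness described above is to down-weight measurements whose normalized innovation is implausibly large, e.g. by inflating the measurement variance with a Huber weight inside the EKF update. The sketch below is a generic one-dimensional illustration of that idea, not the paper's exact Robust-Adaptive-EKF; the state is [position, velocity] with a scalar position measurement, and all names and parameter values are assumptions:

```python
import numpy as np

def robust_kf_step(x, P, z, F, Q, H, R, k=1.345):
    """One predict + robust-update step of a Kalman filter.
    The measurement variance R is inflated by a Huber weight when the
    normalized innovation exceeds the threshold k, so outliers (gross
    measurement errors) pull the state estimate far less."""
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Innovation and its variance (scalar measurement).
    y = float(z - H @ x)
    S = float(H @ P @ H.T + R)
    nu = abs(y) / np.sqrt(S)               # normalized innovation
    w = 1.0 if nu <= k else k / nu         # Huber weight in (0, 1]
    S_r = float(H @ P @ H.T + R / w)       # inflated innovation variance
    # Update with the robust gain.
    K = (P @ H.T) / S_r
    x = x + K.flatten() * y
    P = (np.eye(len(x)) - np.outer(K, H)) @ P
    return x, P
```

With a very large threshold `k` the weight is always 1 and the step reduces to a standard (extended) Kalman update, which makes the outlier suppression easy to compare directly.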