Previously, we proposed a multithread active vision system that virtually realizes multiple pan-tilt tracking cameras by rapidly switching viewpoints for the vibration sensing of large-scale structures. We also developed a system using a galvanometer mirror that can switch among 500 different viewpoints in 1 s. However, the measurement rate at each observation point is low, and the time density of the data is not always sufficient. In addition, the system requires strong multiple illuminations because retroreflective markers are attached to the object being observed. In this study, we propose a multiple vibration distribution synthesis method for vibration analysis that increases the sampling rate of each observation point in the multithread active vision system; the system is also modified to require only a single illumination by using corner cubes as markers. Several dynamics-based inspection experiments are conducted on a 4-m-long truss-structure bridge model. The proposed method and system are verified via a high-order modal analysis, which was impossible with the previous method and system.
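The synthesis idea can be illustrated with a simple equivalent-time interleaving sketch. This is a hedged illustration under assumptions, not the paper's algorithm: if an observation point is revisited at a low rate in each switching cycle, but successive cycles sample it with a known offset of one high-rate period, the offset sequences can be merged into a single series with a higher effective sampling rate.

```python
import numpy as np

# Hedged illustration (assumed scheme, not the authors' implementation):
# merge K low-rate samplings of one observation point, each offset by one
# high-rate sampling period, into a single K-times-faster series.

def synthesize_high_rate(sequences, offsets):
    """sequences[k] holds samples taken at times (n*K + offsets[k]) * dt,
    where K = len(sequences). Returns the interleaved high-rate series."""
    K = len(sequences)
    n = min(len(s) for s in sequences)      # use the common length
    out = np.empty(n * K)
    for seq, off in zip(sequences, offsets):
        out[off::K] = np.asarray(seq[:n])   # slot each sequence into place
    return out
```

Under this assumption, three viewpoint-switching passes over the same point, each at one third of the camera frame rate, would yield a full-rate vibration waveform for that point.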
We investigate the effect of appearance variations on the detectability of vibration feature extraction with pixel-level digital filters for high-frame-rate videos. In particular, we consider robust vibrating-object tracking, which clearly differs from conventional appearance-based object tracking with spatial pattern recognition in a high-quality image region of a certain size. For 512 × 512 videos of a rotating fan located at different positions and orientations, captured at 2000 frames per second with different lens settings, we verify how many pixels are extracted as vibrating regions by pixel-level digital filters. The effectiveness of dynamics-based vibration features is demonstrated by examining their robustness against changes in the aperture size and focal condition of the camera lens, the apparent size and orientation of the tracked object, and its rotational frequency, as well as the complexity and movement of background scenes. Tracking experiments on a flying multicopter with rotating propellers are also described to verify the robustness of localization under complex imaging conditions in outdoor scenarios.
Previously, we proposed a multithread active vision system that can virtually realize multiple pan-tilt tracking cameras by rapidly switching viewpoints for vibration measurement. However, the measurement rate at each observation point is low, and the time density of the measurement data is not always sufficient. In this study, we propose a multiple vibration distribution synthesis method for vibration analysis that increases the sampling rate of each observation point in the multithread active vision system. The proposed method is verified through a high-order modal analysis of a 4-m-long truss-structure bridge model, which was impossible with the previous method.
We developed a motion-blur-free video camera that shoots non-blurred videos of unstable, fast-moving objects by implementing an improved actuator-driven frame-by-frame intermittent tracking method on a high-speed vision platform and an external field-programmable gate array (FPGA) board. With our tracking method, the camera frame timing is controlled so that the speed of the camera viewpoint coincides with the apparent speed of the target object during the camera exposure time. Our motion-blur-free video camera can shoot non-blurred 1024 × 1024 images of fast-moving objects at 750 fps for target speeds of up to 7.5 m/s in one direction. Compared with video recorded without tracking, our method reduces image degradation due to motion blur to 1/10 or less without shortening the exposure time. Its performance was verified experimentally with several fast-moving objects on a high-speed conveyor belt system.
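The timing principle above can be sketched in a few lines. This is a minimal illustration under assumed names and a small-angle approximation, not the authors' FPGA implementation: during each exposure the viewpoint pans at the target's apparent angular rate, so the residual blur scales with the fraction of apparent motion that tracking fails to cancel.

```python
# Hedged sketch of frame-by-frame intermittent tracking timing.
# All names and parameters are illustrative assumptions, not from the paper.

def pan_rate_rad_s(target_speed_px_s, pixel_pitch_m, focal_length_m):
    """Small-angle pan rate (rad/s) that cancels the target's apparent
    motion on the sensor during the exposure: omega ~ v_image / f."""
    return target_speed_px_s * pixel_pitch_m / focal_length_m

def residual_blur_px(target_speed_px_s, exposure_s, tracking_gain):
    """Residual blur (pixels) when tracking cancels a fraction
    `tracking_gain` (0..1) of the apparent motion during the exposure;
    tracking_gain = 0 corresponds to a fixed (non-tracking) camera."""
    return target_speed_px_s * exposure_s * (1.0 - tracking_gain)
```

Under this sketch, the reported 1/10-or-less blur reduction corresponds to a tracking gain of at least 0.9 over the exposure, with no change to the exposure time itself.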
In this paper, we propose a concept of vision-based vibration source localization that extracts vibrating image regions using pixel-level digital filters in a high-frame-rate (HFR) video. The method can detect periodic changes in the audio-frequency range in the image intensities at pixels of vibrating objects. Owing to the acute directivity of the optical image sensor, our HFR-vision-based method can localize a vibration source more accurately than acoustic source localization methods. By applying pixel-level digital filters to clipped region-of-interest (ROI) images, in which the center position of a vibrating object is tracked at a fixed position, our method reduces the latency effect of a digital filter, which may otherwise degrade the localization accuracy in vibration source tracking. Pixel-level digital filters for 128 × 128 ROI images, tracked within 512 × 512 input images, are implemented on a 1000-frames/s vision platform that can measure vibration distributions at 100 Hz or higher. Our tracking system keeps a vibrating object at the center of the camera view in real time by controlling a pan-tilt active vision system. We present several experimental tracking results for objects vibrating at high frequencies, which cannot be observed by standard video cameras or the naked human eye, including a flying quadcopter with rotating propellers, and demonstrate sub-degree-level angular directivity in vibration source localization, which is more acute than the directivity of a few or more degrees achieved by acoustic source localization.
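The pixel-level filtering idea can be sketched as follows. This is a minimal software illustration under assumed parameters, not the vision platform's actual filter design: each pixel's intensity time series is band-pass filtered (here crudely, as the difference of two exponential moving averages), and pixels whose band-limited temporal energy exceeds a threshold are flagged as belonging to a vibrating region.

```python
import numpy as np

# Hedged sketch of per-pixel vibration detection (illustrative band-pass,
# not the authors' pixel-level digital filter implementation).

def vibrating_mask(frames, fps, f_lo, f_hi, thresh):
    """frames: (T, H, W) array of intensities sampled at `fps`.
    Band-pass each pixel's time series as the difference of a fast and a
    slow exponential moving average, then threshold the RMS of the
    filtered signal to mark vibrating pixels."""
    a_hi = np.exp(-2.0 * np.pi * f_hi / fps)   # fast EMA coefficient
    a_lo = np.exp(-2.0 * np.pi * f_lo / fps)   # slow EMA coefficient
    ema_fast = frames[0].astype(float)
    ema_slow = frames[0].astype(float)
    energy = np.zeros(frames.shape[1:])
    for t in range(1, frames.shape[0]):
        x = frames[t].astype(float)
        ema_fast = a_hi * ema_fast + (1.0 - a_hi) * x
        ema_slow = a_lo * ema_slow + (1.0 - a_lo) * x
        band = ema_fast - ema_slow             # band-limited component
        energy += band * band
    rms = np.sqrt(energy / (frames.shape[0] - 1))
    return rms > thresh
```

In this sketch a static background pixel produces zero band-limited energy, while a pixel whose intensity oscillates within the pass band crosses the threshold, which is the dynamics-based cue the method relies on instead of spatial appearance.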
To realize aerial manipulation using a drone, the position of the center of gravity must be controlled. To fly safely, the parameters used for this control must be obtained before takeoff. However, it is not easy to determine these parameters in advance because of changes in parts, batteries, or the objects to be transported. Therefore, in this research, we propose a method to estimate these parameters before flight and demonstrate its effectiveness experimentally.
In coal mines, dynamic disasters such as rock bursts seriously threaten the safety of mining activities. Exploring the dynamic behaviors and disaster characteristics of the impact failure process of coal is the basis and prerequisite for monitoring and warning of rock bursts. In this context, impact failure tests of coal were carried out under different axial static loads and impact velocities to analyze the dynamic behaviors and acoustic emission (AE) response characteristics of coal. The results show that the dynamic behaviors of coal under combined dynamic and static loads differ significantly from those under static loads, and the stress-strain curve displays double peaks without an obvious compaction stage. The dynamic strength and peak strain both vary as quadratic functions of the axial static load. When the coal damage intensifies instantaneously, the AE count and energy parameters both exhibit pulse-like increases and reach their peak values. The damage effect of the axial static load on coal, though limited, has an extreme point. In contrast, the impact velocity strengthens the response of the AE signals and is linearly related to the peak values of AE count and energy; it plays a leading role in the damage to the samples and sets a critical point for coal failure and fracture. Compared with the analysis of stress and strain, the responses of the AE signals are more accurate and reliable. Based on the AE response characteristics, the damage evolution process of coal under combined dynamic and static loads can be identified more accurately, revealing the moment of coal damage and the characteristics of coal failure. These results support the further application of AE monitoring methods to early warning of rock burst disasters at coal mining sites.
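The reported load dependencies can be written schematically. The coefficients below are fit parameters not given in the abstract; the forms are only a hedged restatement of the stated quadratic and linear relationships.

```latex
% Quadratic dependence of dynamic strength (and, analogously, peak strain)
% on the axial static load P_s; the extreme point of the damage effect
% corresponds to the vertex of the parabola:
\sigma_d(P_s) = a\,P_s^2 + b\,P_s + c, \qquad P_s^{\ast} = -\frac{b}{2a}
% Linear dependence of the peak AE count/energy on the impact velocity v
% (coefficients k, m are fitted, not reported here):
AE_{\max}(v) = k\,v + m
```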