Over the past few years, power transmission tower accidents have occurred frequently, seriously affecting electric power safety in our country and causing great economic losses. To improve the efficiency of tower safety inspection and protect the safety of maintenance staff, tower-climbing robots have been proposed to replace human workers in tower inspection. In this paper, the design and module selection of tower-climbing robots are compared and discussed. From this comparison we identify an ideal tower-robot design and module model, providing theoretical guidance for the manufacture of tower-climbing robots.
How to check and assess the whole process of Test and Evaluation (T&E) of equipment software is one of the major challenges currently facing equipment software development. To address this problem, this paper proposes development strategies for T&E assessment of equipment software in terms of overall design, data-driven methods, and the construction of assessment systems and methods. The T&E assessment process for equipment software was then checked to verify development ability as well as design and organizational ability. In this way, the quality assessment loop for equipment software, together with the assessment loop covering its design, development, and finalization, can be closed, so that T&E assessment provides support and traction for equipment software. Moreover, development strategies for T&E assessment of equipment software were formulated for operational and in-service use, which can ultimately and substantially improve the quality and operational effectiveness of equipment software.
Artistic Image Aesthetic Assessment (AIAA) is an emerging paradigm that predicts the aesthetic score of an artistic image as a proxy for popular aesthetic taste. Previous AIAA methods take a single image as input to predict its aesthetic score. However, most existing AIAA methods fail dramatically on artistic images whose subjective votes have large variance when only a single image is available. People are good at employing multiple similar references to make relative comparisons. Motivated by the observation that people consider similar semantics and a specific artistic style to keep voting results consistent, we present a novel Semantic and Style based Multiple Reference learning (SSMR) framework to mimic this natural process. Our novelty is mainly two-fold: (a) a Similar Reference Index Generation (SRIG) module that considers the artistic attributes of semantics and style to generate the indices of reference images; (b) a Multiple Reference Graph Reasoning (MRGR) module that employs a graph convolutional network (GCN) to initialize the graph and reason over it by adjusting edge weights according to the intrinsic relationships among multiple images. Our evaluation on the benchmark BAID, VAPS, and TAD66K datasets demonstrates that the proposed SSMR outperforms state-of-the-art AIAA methods.
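A minimal sketch of the graph-reasoning idea described above: blend semantic and style similarities into edge weights, then propagate the features of the query image and its references with a symmetrically normalized adjacency, as in a standard GCN layer (the learned weight matrix and nonlinearity are omitted). The similarity inputs and the `alpha` blending coefficient are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def reference_graph_step(features, semantic_sim, style_sim, alpha=0.5):
    """One propagation step over a reference graph (illustrative sketch).

    features: (N, D) node features for the query image and its references.
    semantic_sim, style_sim: (N, N) pairwise similarity matrices; their
    construction and the alpha blend are assumptions for illustration.
    """
    # Edge weights blend semantic similarity with style similarity.
    A = alpha * semantic_sim + (1.0 - alpha) * style_sim
    A = A + np.eye(A.shape[0])               # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
    A_hat = D_inv_sqrt @ A @ D_inv_sqrt      # symmetric normalization
    return A_hat @ features                  # aggregate neighbor features
```

With this normalization, each node's new feature is a similarity-weighted mixture of its own feature and those of its references, which is the intuition behind adjusting edge weights to reflect relationships among images.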
We study the reinforcement learning problem of complex action control in Multi-player Online Battle Arena (MOBA) 1v1 games. This problem involves far more complicated state and action spaces than traditional 1v1 games such as Go and the Atari series, which makes it very difficult to find any policy with human-level performance. In this paper, we present a deep reinforcement learning framework that tackles this problem from the perspectives of both system and algorithm. Our system has low coupling and high scalability, which enables efficient exploration at large scale. Our algorithm includes several novel strategies, including control-dependency decoupling, action masking, target attention, and dual-clip PPO, with which our proposed actor-critic network can be trained effectively in our system. Tested on the MOBA game Honor of Kings, our AI agent, called Tencent Solo, can defeat top professional human players in full 1v1 games.
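The dual-clip PPO objective mentioned above has a simple closed form: for negative advantages, the standard PPO clipped surrogate is additionally bounded from below by c·A, so that a very large policy ratio cannot dominate the gradient. A minimal NumPy sketch follows; the hyperparameter values `eps=0.2` and `dual_clip=3.0` are illustrative assumptions, not the paper's reported settings.

```python
import numpy as np

def dual_clip_ppo_objective(ratio, advantage, eps=0.2, dual_clip=3.0):
    """Per-sample dual-clip PPO objective (to be maximized).

    ratio: pi_new(a|s) / pi_old(a|s); advantage: estimated advantage A.
    eps and dual_clip (c > 1) are illustrative hyperparameter choices.
    """
    # Standard PPO clipped surrogate.
    surrogate = np.minimum(ratio * advantage,
                           np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage)
    # Dual clip: when A < 0, bound the objective from below by c * A so a
    # huge ratio on a bad action cannot produce an unbounded loss.
    return np.where(advantage < 0.0,
                    np.maximum(surrogate, dual_clip * advantage),
                    surrogate)
```

For positive advantages this reduces exactly to vanilla PPO clipping; the extra `max` only activates on negative-advantage samples.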
This paper studies the free vibration characteristics of a rotating variable-cross-section flexible beam mounted on a rigid hub, focusing especially on the coupling effects of the hub's dynamics on the rotating beam. Firstly, we employ Hamilton's principle to derive the governing equation of motion of the Euler-Bernoulli beam and the equation of the hub's dynamics, with the coupling between the hub and the beam taken into account in the modeling. The coupling includes the effects of the centrifugal force and the geometric nonlinearity of the beam's deformation. Secondly, the Fourier transform is used to convert the partial differential equation into an ordinary differential equation, which can then be solved efficiently with the finite difference method to obtain the natural frequencies and mode shapes of the rotating beam. Furthermore, we validate our model by comparing the results of the reduced model with those in the existing literature, and then investigate how the natural frequencies and mode shapes of the rotating beam are affected by the geometry of the beam's cross-section, the rotation speed, and the inertia ratio between the hub and the beam. We conclude that the steady-state response of the coupled nonlinear hub-beam system depends not only on the inherent vibration characteristics of the flexible beam in structural dynamics, but is also closely related to the external dynamic environment.
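For orientation, the transverse vibration of a rotating variable-cross-section Euler-Bernoulli beam is commonly governed by an equation of the following textbook form; this is a standard sketch with centrifugal stiffening, not necessarily the paper's exact coupled hub-beam model:

$$\rho A(x)\,\frac{\partial^2 w}{\partial t^2}
+ \frac{\partial^2}{\partial x^2}\!\left(EI(x)\,\frac{\partial^2 w}{\partial x^2}\right)
- \frac{\partial}{\partial x}\!\left(T(x)\,\frac{\partial w}{\partial x}\right) = 0,$$

with the centrifugal tension

$$T(x) = \int_x^{L} \rho A(\xi)\,\Omega^2\,(r_h + \xi)\,\mathrm{d}\xi,$$

where $w(x,t)$ is the transverse deflection, $\rho A(x)$ the mass per unit length, $EI(x)$ the bending stiffness, $\Omega$ the rotation speed, $r_h$ the hub radius, and $L$ the beam length. The $T(x)$ term is what makes the natural frequencies rise with rotation speed and couples the beam's response to the hub's motion.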
We present JueWu-SL, the first supervised-learning-based artificial intelligence (AI) program that achieves human-level performance in playing multiplayer online battle arena (MOBA) games. Unlike prior attempts, we integrate both the macro-strategy and the micromanagement of MOBA game playing into neural networks in a supervised, end-to-end manner. Tested on Honor of Kings, the most popular MOBA at present, our AI performs competitively at the level of High King players in standard 5v5 games.
This paper reports on the NTIRE 2023 Quality Assessment of Video Enhancement Challenge, held in conjunction with the New Trends in Image Restoration and Enhancement (NTIRE) workshop at CVPR 2023. The challenge addresses a major problem in the field of video processing, namely video quality assessment (VQA) for enhanced videos. It uses the VQA Dataset for Perceptual Video Enhancement (VDPVE), which contains a total of 1211 enhanced videos: 600 videos with color, brightness, and contrast enhancement; 310 deblurred videos; and 301 deshaked videos. The challenge attracted 167 registered participants in total. During the development phase, 61 participating teams submitted prediction results, for a total of 3168 submissions; during the final testing phase, 37 participating teams made 176 submissions. Finally, 19 participating teams submitted their models and fact sheets detailing the methods they used. Some methods achieved better results than the baseline methods, and the winning methods demonstrated superior prediction performance.
Although deep salient object detection (SOD) has achieved remarkable progress, deep SOD models are extremely data-hungry, requiring large-scale pixel-wise annotations to deliver such promising results. In this paper, we propose a novel yet effective method for SOD, coined SODGAN, which can generate an unlimited number of high-quality image-mask pairs from only a few labeled examples, and these synthesized pairs can replace the human-labeled DUTS-TR dataset to train any off-the-shelf SOD model. Its contribution is three-fold. 1) Our proposed diffusion embedding network addresses the manifold mismatch and is tractable for latent code generation, matching the ImageNet latent space more closely. 2) For the first time, our proposed few-shot saliency mask generator can synthesize an unlimited number of accurate, image-synchronized saliency masks from a few labeled examples. 3) Our proposed quality-aware discriminator can select high-quality synthesized image-mask pairs from a noisy synthetic data pool, improving the quality of the synthetic data. For the first time, our SODGAN tackles SOD with synthetic data directly generated from a generative model, which opens up a new research paradigm for SOD. Extensive experimental results show that a saliency model trained on our synthetic data achieves $98.4\%$ of the F-measure of the same model trained on DUTS-TR. Moreover, our approach achieves new SOTA performance among semi- and weakly-supervised methods, and even outperforms several fully-supervised SOTA methods. Code is available at https://github.com/wuzhenyubuaa/SODGAN
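For context on the metric quoted above, the F-measure in salient object detection is the weighted harmonic mean of precision and recall on the predicted saliency mask. A minimal sketch on binary masks follows; the choice $\beta^2 = 0.3$ is the common SOD convention and is an assumption here, since the abstract does not state it.

```python
import numpy as np

def f_measure(pred_mask, gt_mask, beta_sq=0.3):
    """F-beta score on binary saliency masks (illustrative sketch).

    beta_sq = 0.3 follows the common SOD convention (an assumption here),
    which weights precision more heavily than recall.
    """
    tp = np.logical_and(pred_mask, gt_mask).sum()
    precision = tp / max(pred_mask.sum(), 1)   # guard against empty masks
    recall = tp / max(gt_mask.sum(), 1)
    if precision + recall == 0:
        return 0.0
    return (1 + beta_sq) * precision * recall / (beta_sq * precision + recall)
```

The reported $98.4\%$ figure is then a ratio of two such scores: the F-measure of the synthetically trained model divided by that of the DUTS-TR-trained model.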