As a common approach to simplifying network graphs, large graph sampling can significantly reduce the size of large graph data. In this paper, related work is surveyed from the following perspectives: random graph sampling techniques, feature-driven large graph sampling techniques, evaluation metrics for large graph sampling, and applications of large graph sampling techniques. Firstly, random graph sampling is categorized into three types: random node, random edge, and random walk sampling. Secondly, feature-driven large graph sampling techniques are discussed, including topology-preserving, community-structure-preserving, dynamic network association, and semantic association feature-driven large graph sampling. Thirdly, evaluation metrics for large graph sampling techniques are introduced, including topological metrics, visual perception metrics, and feature-driven metrics. Finally, applications of large graph sampling techniques in social networks, geographic traffic, biomedicine, and deep learning are summarized, and future directions for large graph sampling methods are outlined.
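The three random sampling families named above can be sketched on a toy adjacency-list graph; the function names and the example graph are illustrative, not taken from any particular library.

```python
import random

def random_node_sample(adj, k, seed=0):
    """Random node sampling: keep k uniformly chosen nodes
    and the subgraph they induce."""
    rng = random.Random(seed)
    kept = set(rng.sample(sorted(adj), k))
    return {u: [v for v in adj[u] if v in kept] for u in kept}

def random_edge_sample(edges, k, seed=0):
    """Random edge sampling: keep k uniformly chosen edges."""
    rng = random.Random(seed)
    return rng.sample(sorted(edges), k)

def random_walk_sample(adj, start, steps, seed=0):
    """Random walk sampling: collect the nodes visited by a
    simple random walk of the given length."""
    rng = random.Random(seed)
    node, visited = start, {start}
    for _ in range(steps):
        nbrs = adj[node]
        if not nbrs:
            break
        node = rng.choice(nbrs)
        visited.add(node)
    return visited

# Toy undirected 5-node graph (a path plus one chord).
adj = {0: [1], 1: [0, 2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3]}
edges = [(0, 1), (1, 2), (1, 3), (2, 3), (3, 4)]
print(len(random_node_sample(adj, 3)))    # 3 nodes survive
print(len(random_edge_sample(edges, 2)))  # 2 edges survive
print(random_walk_sample(adj, 0, 10))     # some subset of the nodes
```

Feature-driven techniques differ from these mainly in *which* nodes, edges, or walks are preferred (e.g. biasing toward high-degree nodes to preserve topology), rather than in the sampling skeleton itself.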
Previous approaches based on the quadric error metric allow fast and accurate geometric simplification of meshes, but they do not account for rendering schemes or texture content. A more recent framework, image-driven simplification, uses image errors to decide which portions of a model to simplify. Image-driven simplification is sensitive to rendering schemes and texture content and can produce simplified models with better visual quality; however, it cannot handle models with highly complex visibility. In this paper, we introduce a rendering error metric and combine it with the quadric error metric. Benefits of the rendering error metric include higher-quality simplified meshes, better-preserved geometric and color boundaries, and simplification that is sensitive to texture content. In contrast to the original image-driven simplification algorithm, our algorithm samples the model locally per edge collapse to compute the rendering error metric and is independent of specific viewpoints, so it can handle any kind of model, even highly self-occluded ones. We demonstrate the efficiency of the algorithm on a variety of meshes with normal, color, and texture attributes.
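The quadric error metric side of this combination can be sketched as follows; the rendering error term (local per-edge image sampling) is omitted, and the plane and vertex data are illustrative. Each plane with unit normal contributes a fundamental quadric K_p = p pᵀ, and a vertex's error is vᵀQv for the homogeneous vertex v.

```python
def plane_quadric(a, b, c, d):
    """Fundamental error quadric K_p = p p^T for the plane
    ax + by + cz + d = 0 (normal assumed unit length)."""
    p = (a, b, c, d)
    return [[p[i] * p[j] for j in range(4)] for i in range(4)]

def add_quadrics(Q1, Q2):
    """Quadrics of incident planes are simply summed per vertex."""
    return [[Q1[i][j] + Q2[i][j] for j in range(4)] for i in range(4)]

def quadric_error(Q, x, y, z):
    """Error v^T Q v for the homogeneous vertex v = (x, y, z, 1):
    the sum of squared distances to the accumulated planes."""
    v = (x, y, z, 1.0)
    return sum(v[i] * Q[i][j] * v[j] for i in range(4) for j in range(4))

# Vertex whose incident faces lie on the planes z = 0 and x = 0:
Q = add_quadrics(plane_quadric(0, 0, 1, 0), plane_quadric(1, 0, 0, 0))
print(quadric_error(Q, 0, 0, 0))  # 0.0: the point lies on both planes
print(quadric_error(Q, 1, 0, 2))  # 5.0: squared distances 1 + 4
```

In a full simplifier this error ranks candidate edge collapses; the paper's contribution is to add a per-collapse rendering error term to that ranking.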
To improve software reliability early in development, this paper proposes an efficient algorithm for identifying fault-prone software modules. Based on each module's complexity metrics, the algorithm uses a modified cascade-correlation algorithm as a neural network classifier to select the fault-prone modules. Finally, by analyzing the algorithm's application in the MAP project, the paper demonstrates its advantages.
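A minimal sketch of the classification setup, with a single perceptron standing in for the paper's modified cascade-correlation network (which grows hidden units incrementally and is not reproduced here); the complexity metrics and module data are hypothetical.

```python
def train(samples, epochs=200, lr=0.1):
    """Train a single perceptron on (metrics, label) pairs.
    Stand-in for the cascade-correlation classifier."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    """1 = predicted fault-prone, 0 = not."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Hypothetical modules: (scaled lines of code, scaled cyclomatic
# complexity) -> fault-prone label.
modules = [([0.10, 0.07], 0), ([0.12, 0.10], 0), ([0.15, 0.13], 0),
           ([0.90, 0.83], 1), ([1.10, 1.00], 1), ([0.95, 0.93], 1)]
w, b = train(modules)
print(predict(w, b, [0.95, 0.93]))  # 1: flagged fault-prone
print(predict(w, b, [0.10, 0.07]))  # 0: not flagged
```

Cascade-correlation would replace this single unit with a network that adds hidden units one at a time, each trained to correlate with the residual error.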
In this work, we propose a one-step leapfrog hybrid implicit-explicit finite-difference time-domain (HIE-FDTD) method for bodies of revolution (BOR), together with an implementation of its Convolutional Perfectly Matched Layer (CPML) absorbing boundary condition. In this method, the implicit difference is applied in the angular direction, yet all the resulting update equations remain explicit. Moreover, the stability condition of the proposed method is relaxed: analytical analysis shows that the time step is determined only by the smaller of the spatial increments Δρ and Δz. A scattering example is provided to demonstrate the new algorithm, and the relative reflection error of the CPML is reported in comparison with the Mur absorbing boundary condition.
The high computational complexity of Super-Resolution (SR) is a central concern in many imaging applications, which involve solving huge sparse linear systems. Such systems are usually solved with iterative methods such as Conjugate Gradient (CG). However, in most variational Bayesian SR algorithms the CG method converges slowly because the coefficient matrix is ill-conditioned, leading to long execution times. In this paper, we propose the Preconditioned Conjugate Gradient (PCG) method to address this problem and analyze the performance of different PCG preconditioners: Jacobi and incomplete Cholesky decomposition (IC). Experimental results demonstrate that the new method achieves speedups over the traditional one while maintaining the high visual quality of the reconstructed HR image, with the IC preconditioner performing best.
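A minimal sketch of PCG with the Jacobi (diagonal) preconditioner on a small dense SPD system; real SR systems are huge and sparse, and the IC variant would replace the diagonal inverse with solves against an incomplete Cholesky factor.

```python
def pcg_jacobi(A, b, tol=1e-10, max_iter=100):
    """Preconditioned Conjugate Gradient with M = diag(A),
    for a small dense symmetric positive definite system."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                                  # r = b - A x with x = 0
    Minv = [1.0 / A[i][i] for i in range(n)]  # Jacobi preconditioner
    z = [Minv[i] * r[i] for i in range(n)]    # z = M^{-1} r
    p = z[:]
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [Minv[i] * r[i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x

# Small SPD test system: [[4, 1], [1, 3]] x = [1, 2].
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = pcg_jacobi(A, b)
print(x)  # approximately [1/11, 7/11]
```

The preconditioner only changes the `Minv` application; a better approximation of A⁻¹ (such as IC) reduces the iteration count at the cost of a more expensive per-iteration solve, which matches the trade-off the abstract reports.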
This paper presents a three-step design method to improve the performance of repetitive control systems. Performance in rejecting high-frequency harmonic disturbances is enhanced by extending the bandwidth of the low-pass filter Q in the repetitive module. In this method, an interim feedback controller K' is first designed to ensure the realizability of the Q design in the repetitive module. The Q filter is then designed to satisfy the robust stability of the system. Finally, the feedback controller K' is redesigned as K to guarantee overall robust performance. This method, referred to as the "K'-Q-K" procedure, is applied to active vibration control of a Hexapod. Simulation results demonstrate the improved performance of the proposed approach.
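The internal-model principle behind the repetitive module can be sketched in a toy discrete simulation, assuming Q = 1, a unit learning gain, and an idealized unity plant (the actual K'-Q-K design is not reproduced): after one disturbance period, the stored control action cancels the periodic disturbance.

```python
import math

N = 8                       # disturbance period in samples
steps = 4 * N
# Periodic disturbance d[k] with period N.
d = [math.sin(2 * math.pi * k / N) for k in range(steps)]
u = [0.0] * steps           # repetitive control action
e = [0.0] * steps           # tracking error (reference = 0)
for k in range(steps):
    if k >= N:
        # Repetitive update u[k] = Q(u[k-N] + e[k-N]) with Q = 1.
        u[k] = u[k - N] + e[k - N]
    e[k] = -(u[k] + d[k])   # idealized unity plant: y = u + d
print(max(abs(v) for v in e[N:]))  # near zero after one period
```

With a real plant, Q must roll off at high frequency to keep the loop robustly stable, which is exactly why extending Q's bandwidth, as the paper does, trades robustness margin for better high-frequency harmonic rejection.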
This paper presents a simultaneous autoregressive (SAR) analysis method to describe the unknown signal-to-noise ratio (SNR) and texture features of low-quality real video frames when ground-truth images are not available. Real video images degraded by factors such as electronic noise, oversaturated pixels, motion blur, and compression artifacts often yield poor motion registration estimates, which makes the performance of existing video super-resolution (VSR) algorithms lower than expected, and it is hard to estimate the SNR of low-quality real frames without any prior knowledge. To solve this problem, we establish a connection between the SAR hyperparameters and the SNR of real images and derive an expression for their relationship. Using the proposed method, well-registered low-quality real video frames can be selected to decrease the root mean squared error (RMSE) of motion estimation and thereby improve VSR reconstruction; anomalous low-quality frames whose SAR hyperparameter values are inconsistent with the others are considered for removal. Synthetic experiments were designed to illustrate how the SAR hyperparameter values vary with the synthesis parameters. To further demonstrate the effectiveness of the proposed method, real low-quality videos captured under different conditions were tested in VSR reconstruction experiments. The VSR reconstruction results obtained with the SAR prior analysis have sharper edges and fewer ringing artifacts than the original results, indicating that the proposed method helps obtain better motion registration estimates for low-quality real video images.
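The frame-removal step can be sketched as a simple consistency test on per-frame hyperparameter values; the values below are hypothetical, the paper's SAR hyperparameter estimation is not reproduced, and a median/MAD outlier rule stands in for whatever consistency criterion the method actually uses.

```python
def flag_anomalous_frames(hyperparams, k=3.0):
    """Flag frames whose (pre-computed, hypothetical) SAR hyperparameter
    estimates deviate from the median by more than k median absolute
    deviations; such frames are candidates for removal before VSR."""
    vals = sorted(hyperparams)
    n = len(vals)
    med = vals[n // 2] if n % 2 else 0.5 * (vals[n // 2 - 1] + vals[n // 2])
    devs = sorted(abs(v - med) for v in hyperparams)
    mad = devs[n // 2] if n % 2 else 0.5 * (devs[n // 2 - 1] + devs[n // 2])
    cutoff = k * (mad if mad > 0 else 1e-12)
    return [i for i, v in enumerate(hyperparams) if abs(v - med) > cutoff]

# Hypothetical per-frame hyperparameter estimates; frame 4 is
# inconsistent with the rest (e.g. badly registered or corrupted).
alphas = [0.92, 0.95, 0.93, 0.94, 3.10, 0.96]
print(flag_anomalous_frames(alphas))  # [4]
```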