Remote sensing image segmentation plays an important role in satellite image research. However, when dealing with non-linear relationships of high spatial complexity, the accuracy of ground-object segmentation is often low. This paper therefore proposes two deep learning methods to improve the accuracy of ground-object segmentation. First, an ensemble segmentation method based on a UNET ensemble network is developed: several trained UNET models are combined with a weighted average to obtain the UNET ensemble model. Second, the convolutional block attention module (CBAM) is combined with UNET to obtain a UNET attention model. Experimental results show that the improved UNET variants achieve higher segmentation accuracy.
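The weighted-average fusion step can be sketched as follows; this is a minimal illustration, not the paper's actual models — the probability maps, weights, and toy values are assumed placeholders for the outputs of several trained UNET networks.

```python
import numpy as np

def ensemble_predict(prob_maps, weights):
    """Combine per-pixel class-probability maps from several trained
    segmentation models with a weighted average, then take the argmax.

    prob_maps: list of arrays, each shaped (H, W, num_classes)
    weights:   one scalar per model (e.g. proportional to validation accuracy)
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                            # normalise so weights sum to 1
    stacked = np.stack(prob_maps)              # (num_models, H, W, C)
    fused = np.tensordot(w, stacked, axes=1)   # weighted average over models
    return fused.argmax(axis=-1)               # per-pixel class labels

# toy example: two "models" voting on a 2x2 image with 2 classes
m1 = np.array([[[0.9, 0.1], [0.2, 0.8]],
               [[0.6, 0.4], [0.3, 0.7]]])
m2 = np.array([[[0.7, 0.3], [0.6, 0.4]],
               [[0.3, 0.7], [0.2, 0.8]]])
labels = ensemble_predict([m1, m2], weights=[0.5, 0.5])
```

In practice the weights would be chosen from each model's validation performance rather than set equal.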
According to the properties of DEM data, this paper presents a multi-resolution DEM modeling method based on wavelet analysis, built on a multi-resolution analysis algorithm. The paper then analyzes the principles for choosing the most suitable wavelet function and boundary-extension mode. It also provides a way to choose the best resolution depending on the viewpoint and viewing angle, and suggests controlling a threshold to build the data pyramid. Experiments show that the method is efficient and robust.
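A resolution pyramid of the kind described can be sketched with a one-level Haar-style averaging step, repeated per level; this is a simplified stand-in for the paper's wavelet decomposition, and the 4x4 elevation grid is an invented toy example.

```python
import numpy as np

def haar_pyramid(dem, levels):
    """Build a coarse-to-fine DEM pyramid: each 2x2 elevation block is
    replaced by its mean (the Haar approximation band), halving the
    resolution at every level."""
    pyramid = [dem]
    for _ in range(levels):
        d = pyramid[-1]
        # average non-overlapping 2x2 blocks
        coarse = 0.25 * (d[0::2, 0::2] + d[1::2, 0::2]
                         + d[0::2, 1::2] + d[1::2, 1::2])
        pyramid.append(coarse)
    return pyramid

grid = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 elevation grid
pyr = haar_pyramid(grid, levels=2)               # 4x4 -> 2x2 -> 1x1
```

A renderer would then pick the pyramid level whose resolution matches the current viewpoint distance, and a threshold on the discarded detail coefficients would decide how deep the pyramid is built.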
Borescope inspection of the interior of turbine engines is an important technology for the routine damage detection of aeronautic engines. Because the observed borescope image is formed under point-light illumination and the quantum nature of light is not ideal, a borescope image acquired through a charge-coupled device (CCD) is contaminated by white Gaussian noise. To address this, a spatially adaptive, context-based wavelet-shrinkage denoising method for borescope images is presented. The spatially adaptive wavelet threshold is selected through context modeling, which was used in our earlier borescope image compression coder to adapt the probability model. Each wavelet coefficient is modeled as a Gibbs field distribution, and context modeling is used to estimate the threshold for each coefficient. The method is based on an overcomplete, non-subsampled wavelet representation, which yields better results than an orthogonal transform. Experimental results show that spatially adaptive wavelet thresholding yields significantly improved visual quality as well as lower mean squared error (MSE) compared with the method of Chang.
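The shrinkage operation at the core of such methods is soft thresholding of the wavelet detail coefficients. The sketch below uses a single global threshold for simplicity, whereas the paper estimates a separate threshold per coefficient via context modeling; the input values are illustrative.

```python
import numpy as np

def soft_threshold(coeffs, t):
    """Soft-threshold wavelet detail coefficients: zero out magnitudes
    below t and shrink the rest toward zero by t."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

# toy detail coefficients: small values are likely noise
noisy = np.array([-3.0, -0.5, 0.2, 1.5, 4.0])
denoised = soft_threshold(noisy, t=1.0)
```

In the context-based variant, `t` would be replaced by an array of per-coefficient thresholds estimated from each coefficient's local neighborhood.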
We present NeFF, a 3D neural scene representation estimated from captured images. Neural radiance fields (NeRF) have demonstrated excellent performance for image-based photo-realistic free-viewpoint rendering. However, one limitation of current NeRF-based methods is the shape-radiance ambiguity: without any regularization, an incorrect shape may explain the training set very well yet generalize poorly to novel views. This degeneration becomes particularly evident when fewer input views are provided. We propose an explicit regularization that avoids the ambiguity by introducing Neural Feature Fields, which map spatial locations to view-independent features. We synthesize feature maps by projecting the feature fields into images with the same volume rendering techniques NeRF uses, and obtain an auxiliary loss that encourages correct view-independent geometry. Experimental results demonstrate that our method is more robust when dealing with sparse input views.
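The volume rendering step shared by the radiance and feature branches composites per-sample values along a ray with the standard NeRF weights w_i = T_i * (1 - exp(-sigma_i * delta_i)), where T_i is the accumulated transmittance. This sketch computes only those weights; the densities and spacings are toy values, not outputs of any trained model.

```python
import numpy as np

def volume_render_weights(sigmas, deltas):
    """NeRF-style compositing weights along one ray:
    w_i = T_i * (1 - exp(-sigma_i * delta_i)),
    where T_i = prod_{j<i} exp(-sigma_j * delta_j) is the transmittance
    up to sample i."""
    alphas = 1.0 - np.exp(-sigmas * deltas)   # per-sample opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    return trans * alphas

sigmas = np.array([0.0, 2.0, 5.0])   # densities at three ray samples
deltas = np.array([0.1, 0.1, 0.1])   # spacing between samples
w = volume_render_weights(sigmas, deltas)
```

A rendered pixel (or feature vector) is then the weight-sum of the per-sample colors (or features), which is what lets the auxiliary feature loss constrain the same weights, and hence the same geometry, as the color loss.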
To meet the delay requirements of VR panoramic video transmission, and drawing on MEC server resources at the edge to reduce computation and transmission delay, an optimal partitioning method for the MEC collaborative space based on maximum-minimum distance clustering is proposed; the method addresses the difficulty of determining the initial cluster centers when classifying MEC nodes by distance. Within the "user-MEC/base station-cloud" task-offloading architecture for edge-side VR video transmission, and building on this collaborative space division scheme, the boundaries of the MEC collaborative subspaces are dynamically adjusted according to resource usage in different regions and time periods. The goal is to ensure that edge terminals can complete users' task requests to the maximum extent, improve the transmission efficiency and quality of VR panoramic video, and reduce energy consumption, providing users with a smooth and immersive VR video experience.
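Maximum-minimum distance clustering sidesteps the initial-center problem by promoting, at each step, the point farthest from all existing centers. The sketch below follows the classical algorithm with a relative stopping threshold theta; the coordinates stand in for hypothetical MEC/base-station locations and are not from the paper.

```python
import numpy as np

def max_min_centers(points, theta=0.5):
    """Pick cluster centres with the maximum-minimum-distance rule:
    start from one point, take the farthest point as the second centre,
    then repeatedly promote the point whose distance to its nearest
    centre is largest, stopping once that max-min distance drops below
    theta times the first centre-to-centre distance."""
    centers = [points[0]]
    d0 = np.linalg.norm(points - centers[0], axis=1)
    centers.append(points[d0.argmax()])      # second centre: farthest point
    base = d0.max()
    while True:
        # each point's distance to its nearest existing centre
        dists = np.min(
            [np.linalg.norm(points - c, axis=1) for c in centers], axis=0)
        i = dists.argmax()
        if dists[i] < theta * base:
            break
        centers.append(points[i])
    return np.array(centers)

# toy MEC/base-station coordinates forming three well-separated groups
pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 0.0],
                [5.1, 0.1], [0.0, 5.0], [0.1, 5.1]])
centers = max_min_centers(pts, theta=0.5)
```

Each remaining node would then be assigned to its nearest center, giving the initial partition of the MEC collaborative space that the dynamic boundary adjustment later refines.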
Cloud simulation contributes to simulation systems by providing on-demand, anywhere simulation services. Load balancing is a critical issue for cloud simulation, because an overloaded node may slow down the whole system. To achieve high performance in cloud simulation, this paper proposes a two-stage load balancing method. The first stage targets virtual machine (VM) load balancing by using a heuristic algorithm to allocate federates to VMs before the simulation starts. In the second stage, VMs are dynamically migrated between physical machines (PMs) to keep the PM loads balanced during the simulation. Experiments show that the two-stage load balancing method significantly improves the efficiency of the HLA system on the Cloud Simulation Platform (CSP).
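The abstract does not specify which heuristic the first stage uses; a common choice for this kind of static allocation is the greedy longest-processing-time rule, sketched below as an assumed illustration. The per-federate loads are invented toy values.

```python
def allocate_federates(loads, num_vms):
    """Greedy LPT-style heuristic: sort federate loads in descending
    order and always place the next federate on the currently
    least-loaded VM, keeping VM loads balanced."""
    vm_load = [0.0] * num_vms
    placement = {}
    for fed, load in sorted(enumerate(loads), key=lambda x: -x[1]):
        vm = min(range(num_vms), key=lambda v: vm_load[v])
        vm_load[vm] += load
        placement[fed] = vm
    return placement, vm_load

loads = [7.0, 5.0, 4.0, 3.0, 2.0, 1.0]   # per-federate CPU demand (toy)
placement, vm_load = allocate_federates(loads, num_vms=2)
```

The second stage would then monitor PM utilization at run time and trigger live VM migration whenever a PM's load drifts past a threshold.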
The incentive mechanism is an important means of management, and it can be implemented in a variety of ways, such as objective incentives, emotional incentives, and ideal incentives. Starting from the significance of the incentive mechanism in the teaching process, this paper describes the main methods for implementing the incentive mechanism in college and university physical education, and offers further reflections on the incentive mechanism in order to provide a reference for the sound development of physical education management. An incentive, in the literal sense, is something that inspires and encourages; using the incentive mechanism in college physical education teaching means stimulating students' learning enthusiasm through encouragement and motivating them to participate actively in physical education, with the aim of turning passive learning into active learning and stimulating their creativity in the process of participation.
As the Internet of Things (IoT) rapidly develops and spreads, IoT devices generate large amounts of network traffic, which requires reliable IoT traffic intrusion detection techniques to continuously improve network security mechanisms. Most existing machine learning and deep learning intrusion detection methods for IoT rely on complex feature reduction and feature selection techniques; the resulting models cannot focus adaptively on important features and have poor global modeling capability for high-dimensional sequential features. To remove the dependence on feature preprocessing, let the model focus adaptively on important features, and further enhance global feature extraction, this paper proposes a Transformer-based IoT intrusion detection method called TransIDS. By introducing a multi-headed self-attention mechanism, TransIDS gains remarkable global modeling capability: it can extract multiple global temporal features while adaptively adjusting its attention to high-dimensional features. To overcome the undesirable effects of imbalanced datasets, we adopt label smoothing, which adds noise to the sample labels to avoid over-reliance on the training samples and enhances the generalization ability of the model. Finally, the performance of the proposed method is verified on the TON-IoT standard dataset collected in a real environment, and the experimental results show that the proposed method achieves superior recognition performance compared with other advanced methods. In addition, we investigate the effect of hyperparameters on detection performance.
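Label smoothing mixes the one-hot target with a uniform distribution, target_k = (1 - eps) * onehot_k + eps / K, so the model is never pushed to assign probability 1 to a single class. The sketch below shows the resulting loss for one sample; the logits are toy values, not from TransIDS.

```python
import numpy as np

def smoothed_cross_entropy(logits, target, eps=0.1):
    """Cross-entropy against a label-smoothed target distribution:
    the one-hot label is mixed with a uniform distribution over the
    K classes, target_k = (1 - eps) * onehot_k + eps / K."""
    k = logits.shape[-1]
    m = logits.max()                                   # for numerical stability
    logp = logits - m - np.log(np.sum(np.exp(logits - m)))  # log-softmax
    soft = np.full(k, eps / k)
    soft[target] += 1.0 - eps
    return -np.sum(soft * logp)

logits = np.array([2.0, 0.5, -1.0])                    # toy class scores
loss_smooth = smoothed_cross_entropy(logits, target=0, eps=0.1)
loss_hard = smoothed_cross_entropy(logits, target=0, eps=0.0)
```

With eps = 0 this reduces to ordinary cross-entropy; with eps > 0 a confident correct prediction still incurs a small penalty, which discourages overfitting to the majority classes of an imbalanced training set.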