In water resource pollution detection, the collected data are not filtered, so the missed-detection rate is high and the accuracy of detection results is limited. Therefore, a water resource pollution detection method based on a convolutional neural network is proposed. The water pollution data are fed into the convolutional neural network, the average fitness value is calculated, an optimal rule set is generated, and the network is trained on the pollution detection data, realizing accurate monitoring of water resource pollution. Experimental results show that the method achieves high detection accuracy, a low missed-detection rate, and high detection efficiency.
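As a rough illustration of the kind of network this abstract describes (not the authors' implementation), the sketch below shows a small 1-D convolutional classifier over windows of water-quality sensor readings; the channel count, window length, and network depth are all assumptions.

```python
# Minimal sketch (not the paper's implementation): a 1-D CNN that classifies
# windows of water-quality measurements as polluted or clean. The input layout
# (8 sensor channels, 64 time steps) and the network depth are assumptions.
import torch
import torch.nn as nn

class PollutionCNN(nn.Module):
    def __init__(self, in_channels=8, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # global pooling over the time axis
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):              # x: (batch, channels, time)
        h = self.features(x).squeeze(-1)
        return self.classifier(h)

model = PollutionCNN()
dummy = torch.randn(4, 8, 64)          # 4 windows of sensor readings
print(model(dummy).shape)              # torch.Size([4, 2])
```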
Balancing model size, segmentation accuracy, and inference speed is a key challenge in image semantic segmentation. This paper introduces a novel lightweight semantic segmentation network, CACNet (Class-Aware Context Network), featuring the innovative Class-Aware Context Enhancement Module (CACEM). CACEM is designed to explicitly intertwine category and context information, addressing the shortcomings of traditional convolutional networks in capturing and encoding inter-category relationships. It operates by normalizing pixel probability distributions via softmax, mapping pixels to categories, and generating new feature maps that accurately encapsulate these relationships. Additionally, the network exploits multi-scale context information through dilated convolutions, followed by upsampling to blend this context with single-channel category information. This process, enhanced by a Fourier adaptive attention mechanism, allows CACNet to capture intricate feature structures and manipulate features in the frequency domain for improved segmentation accuracy. On the Cityscapes and CamVid datasets, CACNet achieves competitive accuracies of 70.8 and 74.6, respectively, with a compact model size of 0.52M and an inference speed of over 58 FPS on a GTX 2080Ti GPU. This blend of compactness, speed, and accuracy positions CACNet as an efficient choice for resource-constrained environments.
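The following sketch is one possible interpretation of a class-aware context block in the spirit of CACEM, not the authors' code: per-pixel class probabilities are obtained with a 1x1 convolution and softmax, per-class context vectors are computed by probability-weighted pooling, and each pixel is enriched with the context of the classes it likely belongs to. The channel count and the fusion layer are assumptions.

```python
# Sketch of a class-aware context block (an interpretation, not CACEM itself):
# class probabilities weight a per-class pooling of features, and the pooled
# class context is redistributed to every pixel before a 1x1 fusion conv.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClassAwareContext(nn.Module):
    def __init__(self, channels, num_classes):
        super().__init__()
        self.cls_head = nn.Conv2d(channels, num_classes, kernel_size=1)
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x):                                 # x: (B, C, H, W)
        b, c, h, w = x.shape
        prob = F.softmax(self.cls_head(x), dim=1)         # (B, K, H, W)
        prob_flat = prob.flatten(2)                       # (B, K, HW)
        feat_flat = x.flatten(2).transpose(1, 2)          # (B, HW, C)
        # per-class context: probability-weighted average of pixel features
        ctx = torch.bmm(prob_flat, feat_flat)             # (B, K, C)
        ctx = ctx / (prob_flat.sum(-1, keepdim=True) + 1e-6)
        # distribute class context back to the pixels
        pix_ctx = torch.bmm(prob_flat.transpose(1, 2), ctx)   # (B, HW, C)
        pix_ctx = pix_ctx.transpose(1, 2).reshape(b, c, h, w)
        return self.fuse(torch.cat([x, pix_ctx], dim=1))

block = ClassAwareContext(channels=64, num_classes=19)    # 19 Cityscapes classes
print(block(torch.randn(2, 64, 32, 64)).shape)            # torch.Size([2, 64, 32, 64])
```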
At present, image classification models have become an essential component of detection systems. However, existing identification models primarily rely on convolutional neural networks or transformer models to capture the features of training images; these achieve acceptable accuracy on a single objective but struggle to handle multiple objectives, yielding unsatisfactory optimization results. In this work, we use a self-supervised mechanism to guide the optimization of a traditional convolutional neural network. First, we construct a contrastive learning module to generate objective labels for the training data, which is used to reduce the feature-extraction loss across multiple tasks. A convolutional neural network is then built to classify the objects in the images. Extensive experimental results and comparative analysis against existing classification models show that the proposed model achieves multi-object image classification with more than 85% identification accuracy at a reasonable response cost.
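As a hedged sketch of the self-supervised component described here (not the authors' implementation), the code below pairs a small CNN encoder with a SimCLR-style contrastive (NT-Xent) loss computed between two augmented views of a batch; the encoder depth, projection size, and temperature are illustrative assumptions.

```python
# Contrastive pre-training sketch (assumptions throughout, not the paper's code):
# a small CNN encoder with a projection head is trained with an NT-Xent loss
# between two views of the same batch; the encoder can later feed a classifier.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, out_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.proj = nn.Linear(64, out_dim)    # projection head for the contrastive loss

    def forward(self, x):
        return self.proj(self.backbone(x))

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss between two augmented views of a batch."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)          # (2N, D)
    sim = z @ z.t() / temperature                        # scaled cosine similarities
    n = z1.size(0)
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool), float("-inf"))
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

encoder = Encoder()
view1, view2 = torch.randn(8, 3, 32, 32), torch.randn(8, 3, 32, 32)
loss = nt_xent(encoder(view1), encoder(view2))
loss.backward()                                          # one pre-training step
print(float(loss))
```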
At present, image feature extraction and classification have been widely used to detect objects in numerous applications. Existing classification approaches concentrate primarily on machine learning models or on training a convolutional neural network, which can achieve acceptable extraction and classification accuracy. However, these methods overlook the fact that graph convolutional operations can also be used for image feature extraction and classification. In this work, we extend the graph convolutional neural network to image feature extraction and classification, obtaining an acceptable classification accuracy of approximately 95%. First, we use a clustering algorithm to merge multiple image pixels into a graph node, with the adjacency matrix built from pixel-value features. The resulting graph is then trained with a graph convolutional neural network to identify the class of each image. Extensive experiments and comparisons with traditional machine learning models show that the proposed model successfully identifies the subjects in images with the highest classification accuracy and reasonable computational cost.
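A minimal sketch of the pixels-to-graph idea, under stated assumptions rather than the paper's pipeline: pixels are grouped into nodes by a crude k-means on colour, edges connect nodes with similar mean colour, and a two-layer graph convolution with a symmetrically normalized adjacency produces an image-level prediction. The cluster count, similarity threshold, and layer sizes are illustrative.

```python
# Image-to-graph sketch (illustrative only): k-means on pixel colours forms the
# nodes, a thresholded cosine-similarity matrix forms the edges, and a two-layer
# GCN of the form A_hat @ relu(A_hat @ X @ W0) @ W1 with a mean readout gives
# image-level class logits.
import torch
import torch.nn as nn
import torch.nn.functional as F

def image_to_graph(img, num_nodes=16, sim_threshold=0.9):
    """img: (3, H, W) -> node features (num_nodes, 3) and normalized adjacency."""
    pixels = img.reshape(3, -1).t()                      # (HW, 3)
    centers = pixels[torch.randperm(pixels.size(0))[:num_nodes]].clone()
    for _ in range(10):                                  # crude k-means on colours
        assign = torch.cdist(pixels, centers).argmin(dim=1)
        for k in range(num_nodes):
            mask = assign == k
            if mask.any():
                centers[k] = pixels[mask].mean(dim=0)
    sim = F.cosine_similarity(centers.unsqueeze(1), centers.unsqueeze(0), dim=-1)
    adj = torch.maximum((sim > sim_threshold).float(), torch.eye(num_nodes))  # keep self-loops
    d_inv_sqrt = torch.diag(adj.sum(dim=1).pow(-0.5))
    return centers, d_inv_sqrt @ adj @ d_inv_sqrt        # A_hat = D^-1/2 A D^-1/2

class SimpleGCN(nn.Module):
    def __init__(self, in_dim=3, hidden=32, num_classes=10):
        super().__init__()
        self.w0 = nn.Linear(in_dim, hidden, bias=False)
        self.w1 = nn.Linear(hidden, num_classes, bias=False)

    def forward(self, x, a_hat):
        h = F.relu(a_hat @ self.w0(x))
        h = a_hat @ self.w1(h)
        return h.mean(dim=0)                             # readout: average over nodes

img = torch.rand(3, 32, 32)
x, a_hat = image_to_graph(img)
logits = SimpleGCN()(x, a_hat)
print(logits.shape)                                      # torch.Size([10])
```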