With the rapid development of monitoring systems, large volumes of high-resolution water quality measurements have accumulated, making it unrealistic to extract water quality anomaly features manually from such extensive river environment information. In this study, a hybrid anomaly detection framework combining prediction-based and classification-based data-driven methods is developed to provide a scientific basis for river pollution identification. In the first stage, a Variational Mode Decomposition-Back Propagation Neural Network (VMD-BPNN) model analyzes real-time water quality variation tendencies; in the second stage, a Support Vector Data Description (SVDD) algorithm captures multi-dimensional water quality anomaly characteristics. The hybrid framework is then applied to the Kansas River in the United States to verify its pollution identification performance against different anomaly detection methods and across various anomaly-level scenarios. The framework achieves a maximum Area Under the Curve (AUC) value of 0.932 under a two-dimensional anomaly detection pattern, with True Positive Rate (TPR) and False Positive Rate (FPR) values of 0.861 and 0.142, respectively. The results indicate that the hybrid framework provides effective river pollution identification with dynamically determined warning thresholds, and that a vigorous anomaly detection pattern can further improve performance by accounting for the cumulative interactions among multi-dimensional water quality parameters.
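The classification stage described above can be illustrated with a minimal sketch. The abstract does not specify the implementation, so the example below uses scikit-learn's `OneClassSVM` with an RBF kernel (an SVDD-equivalent formulation) on synthetic two-dimensional "residuals"; all data and parameter values are hypothetical, not from the study.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Illustrative two-dimensional residuals (observed minus predicted
# water quality values); normal conditions cluster near zero.
normal = rng.normal(0.0, 0.1, size=(200, 2))
anomalous = rng.normal(1.0, 0.1, size=(10, 2))

# A one-class SVM with an RBF kernel is the standard SVDD-equivalent
# formulation; nu bounds the expected fraction of training outliers.
svdd = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(normal)

labels = svdd.predict(anomalous)  # -1 marks points outside the description
print((labels == -1).mean())
```

In a two-stage pipeline like the one described, the first-stage prediction model would supply the expected values whose residuals are fed to the boundary description.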
By mining data published on social networks, hidden information can be discovered, including the private details of individuals and organizations, so protecting the privacy of individuals and organizations on social networks has attracted growing research attention. Based on the practical need to protect both edge-sensitive and vertex-sensitive attributes, we propose a new personalized k-anonymity privacy-preserving technique that reduces the distortion introduced when social network data are processed for privacy. Experimental results show that the personalized k-anonymity algorithm can effectively prevent neighborhood attacks on the graph, background knowledge attacks, and homogeneity attacks by anonymizing vertices and edges and by using an influence matrix based on background knowledge. Diversity of vertex-sensitive attributes is also achieved, and personalized privacy protection requirements can be met by tuning parameters such as k.
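The paper's personalized algorithm also anonymizes graph vertices and edges; the sketch below shows only the basic k-anonymity property on tabular quasi-identifiers, which the graph variants generalize. The records, generalization rules, and attribute names are all hypothetical.

```python
from collections import Counter

# Toy records: (age, zip code, sensitive attribute). Values are illustrative.
records = [
    (23, "47677", "flu"), (27, "47602", "flu"),
    (35, "47905", "cold"), (36, "47909", "cold"),
    (24, "47673", "cough"), (39, "47906", "flu"),
]

def generalize(age, zipcode):
    # Coarsen quasi-identifiers: 10-year age bands, 3-digit zip prefix.
    lo = age // 10 * 10
    return (f"{lo}-{lo + 9}", zipcode[:3] + "**")

def is_k_anonymous(rows, k):
    # k-anonymity holds when every generalized quasi-identifier
    # group contains at least k indistinguishable records.
    groups = Counter(generalize(age, zipcode) for age, zipcode, _ in rows)
    return all(count >= k for count in groups.values())

print(is_k_anonymous(records, 2))  # each group here has 3 members
```

Stronger guarantees such as attribute diversity, as mentioned in the abstract, additionally constrain the sensitive values within each group rather than only the group sizes.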
Characteristic extraction and anomaly analysis based on the frequency spectrum can provide crucial support for source apportionment of PM2.5 pollution. In this study, an effective source apportionment framework combining Fast Fourier Transform (FFT)- and Continuous Wavelet Transform (CWT)-based spectral analyses with the Positive Matrix Factorization (PMF) receptor model is developed for spectrum characteristic extraction and source contribution assessment. The framework is applied to Beijing during the winter heating period using data with 1-h time resolution. The spectrum characteristics of anomaly frequency, location, duration, and intensity of PM2.5 pollution can be captured to gain an in-depth understanding of source-oriented information and provide necessary indicators for reliable PMF source apportionment. The combined analysis demonstrates that secondary inorganic aerosols make a relatively high contribution (50.59 %) to PM2.5 pollution during the winter heating period in Beijing, followed by biomass burning, vehicle emission, coal combustion, road dust, industrial process, and firework emission sources, accounting for 15.01 %, 11.00 %, 10.70 %, 5.31 %, 3.88 %, and 3.51 %, respectively. The source apportionment results suggest that combining frequency spectrum characteristics with source apportionment provides consistent rationales for understanding the temporal evolution of PM2.5 pollution, identifying potential source types, and quantifying the related contributions.
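The FFT-based step can be sketched on a synthetic hourly series: extract the dominant spectral peak, which for diurnally driven pollution would sit at the 24-h period. The signal below is a purely illustrative stand-in for the 1-h resolution Beijing data, not the study's measurements.

```python
import numpy as np

# Synthetic hourly "PM2.5" series: a 24-h diurnal cycle plus noise.
hours = np.arange(24 * 30)  # 30 days of hourly samples
series = (80 + 30 * np.sin(2 * np.pi * hours / 24)
          + np.random.default_rng(1).normal(0, 5, hours.size))

# FFT of the de-meaned signal; keep the positive-frequency half.
spectrum = np.abs(np.fft.rfft(series - series.mean()))
freqs = np.fft.rfftfreq(series.size, d=1.0)  # cycles per hour

# The strongest spectral peak gives the dominant periodicity.
dominant_period_h = 1.0 / freqs[spectrum.argmax()]
print(round(dominant_period_h))
```

A CWT-based analysis, as in the framework, would additionally localize when such anomalies occur and how long they last, which a global FFT cannot resolve.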
Cataract, the leading cause of blindness worldwide, is a focal concern in blindness prevention. Its diagnosis primarily relies on observing lens opacification under slit-lamp examination, coupled with best-corrected visual acuity assessment. With the rapid evolution of artificial intelligence, the ophthalmic domain has increasingly incorporated AI technologies; however, research on cataracts remains relatively limited. This study employs computer vision segmentation techniques to obtain precise images of cataractous lens nuclei and uses deep learning methods for training and validation, yielding commendable graded diagnostic accuracy. The application of computer vision for meticulous region-of-interest imaging of the cataractous lens nucleus, coupled with well-tuned deep learning methods, demonstrates notable efficacy in achieving superior diagnostic outcomes.
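The abstract does not detail the segmentation pipeline; as a minimal stand-in, the sketch below thresholds a synthetic grayscale "slit-lamp" image and crops the bounding box of the bright nucleus region, producing the kind of region-of-interest crop that would then be fed to a deep learning classifier. Image, threshold, and geometry are all hypothetical.

```python
import numpy as np

# Toy grayscale "slit-lamp" image: dark background with a bright
# circular "lens nucleus" region (purely synthetic stand-in).
h, w = 64, 64
yy, xx = np.mgrid[:h, :w]
image = np.where((yy - 32) ** 2 + (xx - 32) ** 2 <= 10 ** 2,
                 200, 20).astype(np.uint8)

def segment_roi(img, threshold=128):
    # Threshold the image and crop the bounding box of the bright
    # region: a minimal stand-in for lens-nucleus segmentation.
    ys, xs = np.nonzero(img > threshold)
    return img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

roi = segment_roi(image)
print(roi.shape)  # cropped nucleus region, ready for a CNN classifier
```

In practice, learned segmentation models rather than a fixed intensity threshold would isolate the nucleus before grading.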