In recent years, X-ray low-dose computed tomography (LDCT) has garnered widespread attention because it significantly reduces the risk of patient radiation exposure. However, LDCT images often contain substantial noise, which adversely affects diagnostic quality. To mitigate this, a plethora of LDCT denoising methods have been proposed. Among them, deep learning (DL) approaches have emerged as the most effective, owing to their robust feature extraction capabilities. Yet the prevalent supervised training paradigm is often impractical because of the difficulty of acquiring paired low-dose and normal-dose CT scans in clinical settings. Consequently, unsupervised and self-supervised deep learning methods have been introduced for LDCT denoising, showing considerable potential for clinical application. The efficacy of these methods hinges on their training strategies, yet there appears to be no comprehensive review of such strategies. Our review aims to address this gap, offering insights and guidance for researchers and practitioners. Based on their training strategies, we categorize the LDCT denoising methods into six groups: (i) cycle consistency-based, (ii) score matching-based, (iii) statistical-characteristics-of-noise-based, (iv) similarity-based, (v) LDCT synthesis model-based, and (vi) hybrid methods. For each category, we delve into the theoretical underpinnings, training strategies, strengths, and limitations. We also summarize the open-source code of the reviewed methods. Finally, the review concludes with a discussion of open issues and future research directions.
The World Health Organization states that early diagnosis is essential to increasing the cure rate for breast cancer, which poses a danger to women's health worldwide. However, the efficacy and cost limitations of conventional diagnostic techniques increase the possibility of misdiagnosis. In this work, we present a breast cancer diagnosis approach based on a hybrid quantum-classical convolutional neural network (QCCNN), with the goal of exploiting quantum computing's parallelism and high-dimensional data processing power to increase diagnostic efficiency and accuracy. When working with large-scale and complicated datasets, classical convolutional neural networks (CNNs) and other machine learning techniques generally demand large amounts of computational resources and time, and their restricted generalization capacity makes it challenging to maintain consistent performance across multiple datasets. To address these issues, this paper adds a quantum convolutional layer to a classical convolutional neural network to exploit quantum computing for improved learning efficiency and processing speed. Simulation experiments on three breast cancer datasets, GBSG, SEER, and WDBC, validate the robustness and generalization of the QCCNN, which significantly outperforms CNN and logistic regression models in classification accuracy. This study not only provides a novel method for breast cancer diagnosis but also promotes the development of medical diagnostic technology.
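A quantum convolutional layer of the kind described above is commonly realized by sliding a small parameterized circuit over the input, a so-called "quanvolution". The following is a minimal sketch under assumptions not stated in the abstract (4 qubits for a 2x2 patch, RY angle encoding, one fixed random variational layer, and Pauli-Z readout), simulated directly with NumPy statevectors rather than on quantum hardware:

```python
import numpy as np

def quanvolution_patch(patch, n_layers=1, seed=0):
    """Encode a 2x2 image patch (values assumed scaled to [0, 1]) into a
    4-qubit state via RY rotations, entangle with a ring of CNOTs, and
    read out <Z> on each qubit. Gate choices here are illustrative."""
    rng = np.random.default_rng(seed)
    n = 4
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0  # |0000>

    def ry(theta):
        c, s = np.cos(theta / 2), np.sin(theta / 2)
        return np.array([[c, -s], [s, c]], dtype=complex)

    def apply_1q(gate, q, psi):
        # contract the 2x2 gate with qubit axis q of the statevector
        psi = psi.reshape([2] * n)
        psi = np.tensordot(gate, psi, axes=([1], [q]))
        return np.moveaxis(psi, 0, q).reshape(-1)

    def apply_cnot(ctrl, tgt, psi):
        # flip the target axis on the slice where the control qubit is |1>
        psi = psi.reshape([2] * n).copy()
        idx = [slice(None)] * n
        idx[ctrl] = 1
        axis = tgt if tgt < ctrl else tgt - 1
        psi[tuple(idx)] = np.flip(psi[tuple(idx)], axis=axis)
        return psi.reshape(-1)

    # angle-encode the pixel intensities
    for q, x in enumerate(np.asarray(patch, dtype=float).ravel()):
        state = apply_1q(ry(np.pi * x), q, state)
    # fixed random variational layer(s): rotations + entangling ring
    for _ in range(n_layers):
        for q in range(n):
            state = apply_1q(ry(rng.uniform(0, 2 * np.pi)), q, state)
        for q in range(n):
            state = apply_cnot(q, (q + 1) % n, state)
    # <Z_q> expectation value for each qubit
    probs = np.abs(state) ** 2
    z = np.array([1.0, -1.0])
    feats = []
    for q in range(n):
        sign = z[np.indices([2] * n)[q]].reshape(-1)
        feats.append(float(np.sum(probs * sign)))
    return feats
```

Each patch thus yields one feature per qubit; stacking these features over all patches produces the quantum feature map that the classical layers then consume.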
In this paper, we use the ideas of quantum graph cascade coding, information security, and quantum random coding to construct a noise-immune channel. Using the normal forms of tree and forest graphs, we build large diagonal matrices or blocks in order to perform quantum coding operations under multiple degrees of freedom. By calculating and analyzing the channel capacities of different concatenated codes, we derive formulas for the multi-channel coherent information of different noise channels under multiple coding degrees of freedom. With these formulas, we can quickly compute the coherent information of various concatenated codes in a channel, approximate the channel capacity, estimate the noise margin for transmitting quantum states through the channel, and compare the anti-noise performance of different cascaded codes under different channel parameters. We thus obtain the security regions in which different noise channels can transmit quantum information.
In recent years, with the rapid development of quantum computing technology, the fusion of quantum computing and machine learning is becoming a research hotspot in the field of machine learning. This article explores the impact of the depth and width of quantum convolutional layers on image classification tasks in quantum-classical hybrid convolutional neural networks. To this end, a model combining parameterized quantum circuits and classical neural networks is designed, and a series of experiments is conducted on the MNIST dataset to assess the specific effects of different quantum convolutional layer configurations on model performance. The results indicate that simply increasing the depth or width of quantum convolutional layers does not guarantee performance improvement and may even degrade performance. Therefore, when designing quantum convolutional layers, choices should be made according to the actual needs of the application scenario. Finally, based on these findings, a multidimensional optimization strategy is proposed to enhance the overall performance of the model. This research not only provides important guidance for the design and optimization of quantum-classical hybrid convolutional neural networks but also offers new research perspectives for researchers in the field of quantum machine learning.
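In a typical hardware-efficient ansatz (an assumption on our part; the abstract does not specify the circuit family), "width" corresponds to the number of qubits and "depth" to the number of repeated rotation-plus-entanglement layers. A small sketch of how the trainable parameter count scales with each:

```python
def ansatz_param_count(n_qubits, depth, rotations_per_qubit=1):
    """Trainable angles in a layered hardware-efficient ansatz: every
    layer applies `rotations_per_qubit` parameterized single-qubit
    rotations to each qubit, followed by a parameter-free entangling
    block, so the count grows linearly in both width and depth."""
    return depth * n_qubits * rotations_per_qubit

# Widening and deepening both grow the parameter count linearly, so the
# count alone cannot explain the non-monotonic accuracy the study
# reports; trainability effects (e.g. barren plateaus) also matter.
table = {(w, d): ansatz_param_count(w, d)
         for w in (2, 4, 8) for d in (1, 2, 3)}
```

The `table` comprehension enumerates the width/depth grid one would sweep in such an experiment; the specific values (2, 4, 8 qubits; 1 to 3 layers) are illustrative, not configurations from the paper.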
Federated learning is a method that trains a neural network model on distributed data, which can effectively solve the "data island" problem. However, it suffers from a severe privacy disclosure problem. Although existing secure aggregation schemes solve the privacy disclosure problem in federated learning, they incur substantial additional computation and communication costs. This paper proposes an efficient federated learning secure aggregation scheme (EFLSAS) to address this problem in an edge computing context. First, the edge server takes the communication delay between devices participating in federated learning as the edge weight of the fully connected device graph and sparsifies this graph into a minimum spanning tree (MST), to reduce the communication delay between devices. Second, following the MST topology, each device broadcasts information through neighboring devices instead of the edge server, reducing the communication overhead of the edge server. Finally, extensive experiments show that with ten devices, the proposed EFLSAS reduces the running time of federated learning by 28.2% compared with the traditional secure aggregation scheme SA, without reducing the security level or model accuracy of federated learning.
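The MST construction in the first step can be sketched with Prim's algorithm over the delay-weighted device graph; the example delay matrix and the tie-breaking order are illustrative assumptions, not details from the paper:

```python
import heapq

def mst_topology(delays):
    """Build a minimum-spanning-tree broadcast topology from a symmetric
    matrix of pairwise communication delays using Prim's algorithm.
    Returns the tree edges (parent, child) and their total delay."""
    n = len(delays)
    visited = [False] * n
    edges, total = [], 0.0
    heap = [(0.0, 0, -1)]  # (delay, node, parent); start from device 0
    while heap:
        d, u, p = heapq.heappop(heap)
        if visited[u]:
            continue
        visited[u] = True
        if p >= 0:  # skip the artificial root entry
            edges.append((p, u))
            total += d
        for v in range(n):
            if not visited[v] and v != u:
                heapq.heappush(heap, (delays[u][v], v, u))
    return edges, total

# Hypothetical delays (ms) between four participating devices.
delays = [[0, 1, 4, 3],
          [1, 0, 2, 5],
          [4, 2, 0, 6],
          [3, 5, 6, 0]]
tree, total_delay = mst_topology(delays)
```

With n devices the tree has n - 1 edges, so each round of masked-model broadcasting traverses far fewer links than the fully connected graph's n(n - 1)/2, which is where the delay reduction comes from.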
Earthquake-triggered landslides frequently occur in active mountain areas, posing great threats to human lives and public infrastructure. Fast and accurate mapping of coseismic landslides is important for earthquake disaster emergency rescue and landslide risk analysis. Machine learning methods provide automatic solutions for landslide detection that are more efficient than manual landslide mapping, and deep learning technologies are attracting increasing interest for this task. Convolutional neural networks (CNNs) are among the most widely used deep learning frameworks for landslide detection. In practice, however, the performance of existing CNN-based landslide detection models is still far from practical application. Recently, Transformers have achieved better performance in many computer vision tasks, which provides a great opportunity for improving the accuracy of landslide detection. We therefore explore whether Transformers can outperform CNNs in the landslide detection task. Specifically, we build a new dataset for identifying coseismic landslides and employ the Transformer-based semantic segmentation model SegFormer to identify them. SegFormer leverages Transformer layers to obtain a receptive field much larger than a CNN's, introduces overlapped patch embedding to capture the interaction of adjacent image patches, and adopts a simple MLP decoder with sequence reduction to improve efficiency. The semantic segmentation results of SegFormer are further improved by image processing operations that distinguish different landslide instances and remove invalid holes. Extensive experiments compare SegFormer with popular CNN-based models, including HRNet, DeepLabV3, Attention-UNet, U2Net, and FastSCNN. SegFormer improves the mIoU, IoU, and F1 score of landslide detection by 2.2%, 5%, and 3%, respectively, and reduces the pixel-wise classification error rate by 14%. Both quantitative evaluation and visualization results show that Transformers are capable of outperforming CNNs in landslide detection.
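The instance-separation step mentioned above is typically implemented with connected-component labeling on the binary segmentation mask. A minimal plain-Python sketch, assuming 4-connectivity and a `min_size` threshold for discarding spurious tiny regions (both illustrative choices, not parameters from the paper):

```python
from collections import deque

def label_landslide_instances(mask, min_size=2):
    """Label 4-connected foreground components of a binary mask and
    discard components smaller than `min_size` pixels. Returns a label
    grid where 0 means background or a discarded region; each surviving
    component (landslide instance) gets a distinct positive label."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for i in range(h):
        for j in range(w):
            if mask[i][j] and labels[i][j] == 0:
                next_label += 1
                comp = [(i, j)]            # pixels of this component
                labels[i][j] = next_label
                queue = deque(comp)
                while queue:               # BFS flood fill
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and labels[ny][nx] == 0):
                            labels[ny][nx] = next_label
                            comp.append((ny, nx))
                            queue.append((ny, nx))
                if len(comp) < min_size:   # treat tiny regions as noise
                    for y, x in comp:
                        labels[y][x] = 0
    return labels
```

Hole removal works analogously by labeling the background and filling enclosed background components below a size threshold; the same flood-fill core applies.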