Automatic sleep stage classification plays an essential role in sleep quality measurement and sleep disorder diagnosis. Although many approaches have been developed, most use only single-channel electroencephalogram signals for classification. Polysomnography (PSG) provides multiple channels of signal recording, enabling the use of an appropriate method to extract and integrate the information from different channels to achieve higher sleep staging performance. We present a transformer encoder-based model, MultiChannelSleepNet, for automatic sleep stage classification with multichannel PSG data, whose architecture is built on transformer encoders for single-channel feature extraction and multichannel feature fusion. In the single-channel feature extraction block, transformer encoders extract features from time-frequency images of each channel independently. Based on our integration strategy, the feature maps extracted from each channel are fused in the multichannel feature fusion block, where another set of transformer encoders further captures joint features and a residual connection preserves the original information from each channel. Experimental results on three publicly available datasets demonstrate that our method achieves higher classification performance than state-of-the-art techniques. MultiChannelSleepNet is an efficient method to extract and integrate the information from multichannel PSG data, which facilitates precision sleep staging in clinical applications.
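The two-stage idea (per-channel feature extraction, then joint fusion with a residual connection) can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's implementation: the `self_attention` helper stands in for a full transformer encoder (no multi-head attention, feed-forward sublayer, or layer normalization), and the concatenation-plus-residual fusion is only a plausible reading of the integration strategy described above.

```python
import numpy as np

def self_attention(x):
    # single-head scaled dot-product self-attention; a stand-in for a
    # transformer encoder layer (real encoders add multi-head projections,
    # a feed-forward sublayer, and layer norm)
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x

def fuse_channels(channel_feats):
    # concatenate per-channel feature maps along the feature axis, refine
    # them jointly, and keep a residual connection to the concatenated input
    joint = np.concatenate(channel_feats, axis=-1)
    refined = self_attention(joint)
    return refined + joint  # residual preserves per-channel information

rng = np.random.default_rng(0)
channels = [rng.standard_normal((16, 8)) for _ in range(3)]  # 3 PSG channels
feats = [self_attention(c) for c in channels]  # single-channel extraction
fused = fuse_channels(feats)                   # multichannel fusion
print(fused.shape)  # (16, 24)
```

A classifier head would then operate on the fused representation; the residual term means each channel's own features survive the joint refinement unchanged in scale.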
Convolutional neural networks (CNNs) have demonstrated remarkable capability in extracting deep semantic features from images, leading to significant advancements in various image processing tasks. This success has also opened up new possibilities for change detection (CD) in remote sensing applications. However, unlike conventional image recognition tasks, the performance of AI models in CD heavily relies on the method used to fuse the features from the two different phases of the image. Existing deep learning-based methods for CD typically fuse features of bi-temporal images using difference or concatenation techniques. However, these approaches often fail to prioritize potential change areas adequately and neglect the rich contextual information essential for discerning subtle changes, potentially leading to slower convergence and reduced accuracy. To tackle this challenge, we propose a novel feature fusion approach called the Feature-Difference Attention-based Feature Fusion CD Network (FDA-FFNet). This method enhances feature fusion by incorporating a Feature-Difference Attention-based Feature Fusion Module (FDA-FFM), enabling a more focused analysis of change areas. Additionally, a Deep Supervised Attention Module (DSAM) is implemented for cascading refinement of change areas under deep supervision. Furthermore, an atrous Spatial Pyramid Pooling-Fast (SPPF) module is employed to efficiently acquire multi-scale object information. The proposed method is evaluated on two publicly available datasets, namely the WHU-CD and LEVIR-CD datasets. Compared to state-of-the-art CD methods, the proposed method outperforms them on all metrics, with an IoU of 92.49% and 85.56%, respectively. The codes are available at https://github.com/pwg111/FDAFFNet.
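The core intuition of difference-driven attention can be sketched as follows. This is a hypothetical numpy illustration, not the FDA-FFM as published: the absolute bi-temporal feature difference is squashed into a spatial gate that reweights the plain concatenation fusion, so positions with larger feature differences (likely change areas) are emphasized.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fda_fusion(f1, f2):
    # illustrative sketch of difference-based attention fusion: the
    # absolute feature difference highlights likely change areas and
    # gates the concatenated bi-temporal features
    diff = np.abs(f1 - f2)                              # (H, W, C)
    attn = sigmoid(diff.mean(axis=-1, keepdims=True))   # (H, W, 1) gate
    fused = np.concatenate([f1, f2], axis=-1)           # plain concat fusion
    return fused * attn                                 # emphasize changes

rng = np.random.default_rng(1)
f_t1 = rng.standard_normal((8, 8, 4))  # phase-1 feature map
f_t2 = rng.standard_normal((8, 8, 4))  # phase-2 feature map
out = fda_fusion(f_t1, f_t2)
print(out.shape)  # (8, 8, 8)
```

Because the gate lies in (0, 1), unchanged regions are attenuated rather than zeroed out, so contextual information around a change is still available to later layers.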
Human activity area extraction, a popular research topic, refers to mining meaningful location clusters from raw activity data. However, the varying densities of large-scale spatial data pose a challenge for existing extraction methods. This research proposes a novel area extraction framework (ELV) aimed at tackling this challenge by using clustering with an adaptive distance parameter and a re-segmentation strategy with noise recovery. Firstly, a distance parameter is adaptively calculated to cluster high-density points, which reduces the uncertainty introduced by subjective human factors. Secondly, the remaining points are assigned according to the spatial characteristics of the clustered points for a more reasonable judgment of noise points. Then, to address the varying-density problem, a re-segmentation strategy is designed to segment the appropriate clusters into low- and high-density clusters. Lastly, the noise points produced in the re-segmentation step are recovered to reduce unnecessary noise. Compared with other algorithms, ELV showed better performance on real-life datasets and reached 0.42 on the Silhouette coefficient (SC) indicator, an improvement of more than 16.67%. ELV ensures reliable clustering results, especially when the density differences of the activity points are large, and can be valuable in applications such as location prediction and recommendation.
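One common way to make the distance parameter data-driven rather than hand-set is to derive it from nearest-neighbour distances. The sketch below is only an illustration of that general idea (ELV's actual formula is not reproduced here): the parameter is taken as the mean distance to each point's k-th nearest neighbour, so it automatically shrinks for dense activity data and grows for sparse data.

```python
import numpy as np

def adaptive_eps(points, k=3):
    # illustrative adaptive distance parameter (not ELV's exact rule):
    # mean distance from each point to its k-th nearest neighbour
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d.sort(axis=1)          # column 0 is the zero self-distance
    return d[:, k].mean()

rng = np.random.default_rng(2)
dense = rng.normal(0.0, 0.05, size=(30, 2))   # tightly packed activity area
sparse = rng.normal(5.0, 1.00, size=(30, 2))  # widely scattered points
print(adaptive_eps(dense), adaptive_eps(sparse))
```

Running a density-based clusterer separately with each derived parameter is one way to handle the varying-density problem the abstract describes, since a single global threshold would either merge the sparse region into noise or over-split the dense one.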
Change detection (CD) using deep learning techniques is a prominent topic in the field of remote sensing (RS). However, existing methods require large amounts of labeled samples for supervised learning, which are time-consuming and labor-intensive to produce. To address this challenge, semi-supervised learning methods that utilize a limited number of labeled samples along with a large pool of unlabeled samples have emerged as a compelling solution. We propose a novel semi-supervised CD (SSCD) network, namely STCRNet, that combines self-training and consistency regularization. During the self-training phase, STCRNet selects unlabeled samples with reliable pseudo-labels based on their prediction stability across different training epochs and the consistency between class activation maps and prediction results within the model. Then, we apply data augmentation to the reliable samples and enforce consistency regularization on the augmented samples using the pseudo-labels to enhance the network's robustness. Moreover, feature consistency regularization is applied to the remaining unlabeled samples with image perturbations, thereby broadening the feature space and improving the model's generalization performance. Experimental results on two widely used datasets demonstrate that STCRNet achieves state-of-the-art performance, especially with a very small proportion (5%–10%) of labeled samples. STCRNet presents a promising solution for SSCD.
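The "prediction stability across epochs" criterion can be sketched concretely. The snippet below is an illustrative simplification, not STCRNet itself: a sample's pseudo-label is kept only when the fraction of epochs agreeing with the final-epoch prediction reaches a threshold (the paper additionally checks CAM/prediction consistency, which is omitted here, and the 0.9 threshold is an assumption).

```python
import numpy as np

def select_reliable(epoch_preds, min_agree=0.9):
    # epoch_preds: (epochs, samples) hard class predictions per epoch.
    # Keep a sample when the share of epochs that agree with the
    # final-epoch prediction reaches min_agree (illustrative criterion).
    final = epoch_preds[-1]
    agree = (epoch_preds == final).mean(axis=0)  # per-sample stability
    mask = agree >= min_agree
    return final[mask], mask

preds = np.array([  # 5 epochs x 4 unlabeled samples
    [1, 0, 1, 0],
    [1, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 0, 1, 1],
    [1, 0, 1, 0],
])
labels, mask = select_reliable(preds)
print(mask)    # [ True False  True False]
print(labels)  # [1 1]
```

Samples 0 and 2 are perfectly stable and get pseudo-labels; samples 1 and 3 flipped once (0.8 agreement) and are held back for the feature-consistency branch instead.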
Change detection (CD) is an important application of remote sensing (RS) technology, which discovers changes by comparing bi-temporal RS images. The rise of deep learning provides new solutions for CD. However, due to insufficient extraction and utilization of deep features from RS images, existing deep learning-based CD methods struggle to fully integrate such features, resulting in unstable performance and, in particular, low sensitivity to multi-scale changes. In this letter, a multi-scale feature fusion CD network (MSFF-CDNet) is proposed to enhance feature fusion by integrating a mask-guided change fusion module (MGCF) that fuses the consistency and difference of multi-scale features. In addition, a CD refinement module (CDRM) is implemented to help the encoding-decoding structure achieve CD at a finer scale. By training with a hybrid loss function, MSFF-CDNet is able to learn the transformation relationships between bi-temporal RS images and their change maps. Furthermore, a deep supervised learning strategy further improves the fitting performance and robustness. The method is evaluated on two open-source datasets (i.e., the CDD and LEVIR-CD datasets). Compared to state-of-the-art CD methods, the proposed method outperforms them on all metrics, with IoU reaching 92.39% and 85.89%, respectively. The codes are available at https://github.com/WangLukang/MSCD.
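The notion of fusing both "consistency" and "difference" across scales can be made concrete with a small pyramid sketch. This is an illustrative stand-in for MGCF-style fusion, not the published module: at each scale the absolute difference (change evidence) and the sum (shared content) of the bi-temporal features are kept side by side, and average pooling produces the next, coarser scale.

```python
import numpy as np

def pool2(x):
    # 2x2 average pooling to build the next, coarser scale
    h, w, c = x.shape
    return x.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def multiscale_change_features(f1, f2, scales=2):
    # illustrative sketch: at every scale, keep both the difference
    # (change evidence) and the sum (consistency) of the two phases
    feats = []
    for _ in range(scales):
        feats.append(np.concatenate([np.abs(f1 - f2), f1 + f2], axis=-1))
        f1, f2 = pool2(f1), pool2(f2)
    return feats

rng = np.random.default_rng(3)
a = rng.standard_normal((8, 8, 4))  # phase-1 features
b = rng.standard_normal((8, 8, 4))  # phase-2 features
pyramid = multiscale_change_features(a, b)
print([f.shape for f in pyramid])  # [(8, 8, 8), (4, 4, 8)]
```

A decoder would consume this pyramid from coarse to fine, which is what lets small and large changes be detected at their natural scales instead of a single resolution.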
Change detection (CD) in remote sensing (RS) imagery is a pivotal method for detecting changes in the Earth’s surface, finding wide applications in urban planning, disaster management, and national security. Recently, deep learning (DL) has experienced explosive growth and, with its superior capabilities in feature learning and pattern recognition, it has introduced innovative approaches to CD. This review explores the latest techniques, applications, and challenges in DL-based CD, examining them through the lens of various learning paradigms, including fully supervised, semi-supervised, weakly supervised, and unsupervised. Initially, the review introduces the basic network architectures for CD methods using DL. Then, it provides a comprehensive analysis of CD methods under different learning paradigms, summarizing commonly used frameworks. Additionally, an overview of publicly available datasets for CD is offered. Finally, the review addresses the opportunities and challenges in the field, including: (a) incomplete supervised CD, encompassing semi-supervised and weakly supervised methods, which is still in its infancy and requires further in-depth investigation; (b) the potential of self-supervised learning, offering significant opportunities for Few-shot and One-shot Learning of CD; (c) the development of Foundation Models, with their multi-task adaptability, providing new perspectives and tools for CD; and (d) the expansion of data sources, presenting both opportunities and challenges for multimodal CD. These areas suggest promising directions for future research in CD. In conclusion, this review aims to assist researchers in gaining a comprehensive understanding of the CD field.
Temporal lobe epilepsy (TLE) is defined as the sporadic occurrence of spontaneous recurrent seizures, and its pathogenesis is complex. SHP-2 (Src homology 2-containing protein tyrosine phosphatase 2) is a widely expressed cytosolic tyrosine phosphatase that participates in the regulation of inflammation, angiogenesis, gliosis, neurogenesis and apoptosis, suggesting a potential role of SHP-2 in TLE. Therefore, we investigated the expression patterns of SHP-2 in the epileptogenic brain tissue of intractable TLE patients and the various effects of treatment with the SHP-2-specific inhibitor SHP099 in a pilocarpine model. Western blotting and immunohistochemistry results confirmed that SHP-2 expression was upregulated in the temporal neocortex of patients with TLE. Double-labeling experiments revealed that SHP-2 was highly expressed in neurons, astrocytes, microglia and vascular endothelial cells in the epileptic foci of TLE patients. In the pilocarpine-induced C57BL/6 mouse model, SHP-2 upregulation in the hippocampus began one day after status epilepticus (SE), reached a peak at 21 days and then remained at a significantly high level until day 60. Similarly, we found a remarkable increase in SHP-2 expression at 1, 7, 21 and 60 days post-SE in the temporal neocortex. In addition, we showed that SHP099 increased reactive gliosis, the release of IL-1β, neuronal apoptosis and neuronal loss, while reducing neurogenesis and albumin leakage. Taken together, the increased expression of SHP-2 in the epileptic zone may be involved in the process of TLE.
Change detection (CD) using deep learning techniques is a trending topic in the field of remote sensing. However, most existing networks require pixel-level labels for supervised learning, and labeling all changed pixels in multi-temporal images is difficult and time-consuming. To address this challenge, we propose a novel framework for weakly supervised change detection (WSCD), namely CS-WSCDNet, which can achieve pixel-level results by training on samples with image-level labels. Specifically, the framework is built upon the localization capability of class activation mapping (CAM) and the powerful zero-shot segmentation ability of a foundation model, i.e., the segment anything model (SAM). After training an image-level classifier to identify whether changes have occurred in an image pair, CAM is utilized to roughly localize the regions of change in the image pair. Subsequently, SAM is employed to optimize these rough regions and generate pixel-level pseudo-labels for changed objects. These pseudo-labels are then used to train a CD model at the pixel level. To evaluate the effectiveness of CS-WSCDNet, experiments are conducted on two high-resolution remote sensing datasets. The results show that the proposed framework not only achieves state-of-the-art (SOTA) performance on WSCD tasks but also demonstrates the potential of weakly supervised learning in the field of CD. The demo codes are available at https://github.com/WangLukang/CS-WSCDNet.
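The first stage of the pipeline, turning a class activation map into a rough change region, can be sketched as a normalize-and-threshold step. This is an illustrative simplification, not the published code: the 0.5 threshold is an assumption, and the subsequent SAM refinement that converts these rough regions into pixel-level pseudo-labels is not run here.

```python
import numpy as np

def cam_to_rough_mask(cam, thresh=0.5):
    # rough change localization from a class activation map: min-max
    # normalize, then keep the strong activations (threshold is
    # illustrative; CS-WSCDNet refines these regions with SAM afterwards)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam >= thresh

cam = np.zeros((8, 8))
cam[2:5, 2:5] = 1.0          # classifier activation over a changed object
mask = cam_to_rough_mask(cam)
print(mask.sum())  # 9
```

In the full framework this rough mask (or a box/point prompt derived from it) seeds the zero-shot segmenter, whose sharper output becomes the pseudo-label used to train the pixel-level CD model.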