Automatic sleep stage classification plays an essential role in sleep quality measurement and sleep disorder diagnosis. Although many approaches have been developed, most use only single-channel electroencephalogram signals for classification. Polysomnography (PSG) provides multiple channels of signal recording, enabling appropriate methods to extract and integrate information from different channels and thereby achieve higher sleep staging performance. We present MultiChannelSleepNet, a transformer encoder-based model for automatic sleep stage classification with multichannel PSG data; its architecture uses transformer encoders for both single-channel feature extraction and multichannel feature fusion. In the single-channel feature extraction block, transformer encoders extract features from the time-frequency image of each channel independently. The feature maps extracted from each channel are then fused in the multichannel feature fusion block according to our integration strategy; in this block, another set of transformer encoders captures joint features, and a residual connection preserves the original information from each channel. Experimental results on three publicly available datasets demonstrate that our method achieves higher classification performance than state-of-the-art techniques. MultiChannelSleepNet is an efficient method for extracting and integrating information from multichannel PSG data, which facilitates precision sleep staging in clinical applications.
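A minimal PyTorch sketch of this encode-then-fuse idea may help fix the architecture in mind. All dimensions, layer counts, and the five-class output head are illustrative assumptions, not the authors' exact configuration:

```python
# Sketch of per-channel transformer encoding followed by fusion with a residual
# connection, as described in the abstract. Shapes and hyperparameters are assumed.
import torch
import torch.nn as nn

def make_encoder(d_model=128, n_heads=8, n_layers=2):
    layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
    return nn.TransformerEncoder(layer, n_layers)

class MultiChannelSleepNetSketch(nn.Module):
    def __init__(self, n_channels=3, n_freq=128, n_time=29, d_model=128, n_classes=5):
        super().__init__()
        self.proj = nn.Linear(n_freq, d_model)          # embed time-frequency columns
        # one encoder per PSG channel: independent single-channel feature extraction
        self.channel_encoders = nn.ModuleList(make_encoder(d_model)
                                              for _ in range(n_channels))
        # fusion encoder operating on the concatenated per-channel token sequences
        self.fusion_encoder = make_encoder(d_model)
        self.head = nn.Linear(n_channels * n_time * d_model, n_classes)

    def forward(self, x):                               # x: (batch, channel, time, freq)
        feats = [enc(self.proj(x[:, c]))                # per-channel transformer features
                 for c, enc in enumerate(self.channel_encoders)]
        fused_in = torch.cat(feats, dim=1)              # concatenate along the token axis
        fused = self.fusion_encoder(fused_in) + fused_in  # residual keeps channel info
        return self.head(fused.flatten(1))              # sleep stage logits

logits = MultiChannelSleepNetSketch()(torch.randn(4, 3, 29, 128))
print(logits.shape)  # torch.Size([4, 5])
```

The residual connection around the fusion encoder mirrors the abstract's point that joint features should not overwrite the original per-channel information.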
A crucial task in data science is to distill the meaningful information embedded in high-dimensional data into a low-dimensional set of features that can represent the original data at different levels. Wavelet analysis is a pervasive method for decomposing time-series signals into a few levels with detailed temporal resolution. However, the resulting wavelet coefficients are intertwined and over-represented across levels for each sample and across different samples within one population. Here, using neuroscience data of simulated spikes, experimental spikes, calcium imaging signals, and human electrocorticography signals, we leveraged conditional mutual information between wavelets for feature selection. The selected features were verified to be meaningful: they decoded the stimulus or condition with high accuracy while using only a small feature set. These results provide a new way of applying wavelet analysis to extract essential features of the dynamics of spatiotemporal neural data, which can then support novel machine learning model designs based on representative features.
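To make the selection idea concrete, here is a minimal sketch that decomposes each trial with a discrete wavelet transform and greedily picks coefficients by a relevance-minus-redundancy score, an mRMR-style proxy for conditional mutual information. The estimator, wavelet, and parameters are our assumptions, not necessarily those used in the paper:

```python
# Wavelet decomposition + greedy, CMI-flavored feature selection (mRMR-style proxy).
import numpy as np
import pywt
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

def wavelet_features(signals, wavelet="db4", level=4):
    """Flatten multi-level wavelet coefficients into one feature vector per trial."""
    return np.array([np.concatenate(pywt.wavedec(s, wavelet, level=level))
                     for s in signals])

def greedy_select(X, y, k=5):
    relevance = mutual_info_classif(X, y, random_state=0)   # I(feature; label)
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(X.shape[1]):
            if j in selected:
                continue
            # redundancy: mean MI between the candidate and already-chosen features
            red = np.mean([mutual_info_regression(X[:, [j]], X[:, s])[0]
                           for s in selected])
            if relevance[j] - red > best_score:
                best, best_score = j, relevance[j] - red
        selected.append(best)
    return selected

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 60)                              # two stimulus conditions
signals = rng.standard_normal((60, 256)) + y[:, None]   # toy condition-shifted traces
print(greedy_select(wavelet_features(signals), y))
```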
To address the computational intensity and deployment difficulties of weed detection models, this study proposes a lightweight model for detecting weeds in maize fields based on YOLOv8s. Firstly, a lightweight network, designated Dualconv High Performance GPU Net (D-PP-HGNet), was constructed on the foundation of the High Performance GPU Net (PP-HGNet) framework: Dualconv was introduced to reduce the required computation and achieve a lightweight design, and an Adaptive Feature Aggregation Module (AFAM) and Global Max Pooling were incorporated to augment the extraction of salient features in complex scenarios. The new network was then used to reconstruct the YOLOv8s backbone. Secondly, a four-stage inverted residual moving block (iRMB) was employed to construct a lightweight iDEMA module, which replaced the original C2f feature extraction module in the Neck to improve model performance and accuracy. Finally, Dualconv was employed instead of conventional convolution for downsampling, further diminishing the network load. The new model was fully verified on the established field weed dataset. The test results showed a notable improvement in detection performance compared with YOLOv8s: accuracy improved from 91.2% to 95.8%, recall from 87.9% to 93.2%, and mAP@0.5 from 90.8% to 94.5%. Furthermore, the computational load and model size were reduced to 12.7 GFLOPs and 9.1 MB, respectively, decreases of 57.4% and 59.2% relative to the original model. Compared with prevalent target detection models such as Faster R-CNN, YOLOv5s, and YOLOv8l, the new model showed superior accuracy with a lighter-weight design. The model proposed in this paper effectively reduces the hardware cost required for accurate weed identification in maize fields with limited resources.
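As a concrete illustration of the Dualconv idea, the sketch below follows one common formulation of DualConv: a grouped 3x3 convolution in parallel with a 1x1 pointwise convolution, with the two outputs summed. The group count, activation, and placement in the backbone are illustrative assumptions, not the authors' exact design:

```python
# DualConv-style block: cheap grouped 3x3 spatial filtering plus full cross-channel
# mixing via a 1x1 convolution, summed before normalization and activation.
import torch
import torch.nn as nn

class DualConv(nn.Module):
    def __init__(self, c_in, c_out, stride=1, groups=4):
        super().__init__()
        # grouped 3x3 conv: spatial filtering at a fraction of the full-conv cost
        self.conv3 = nn.Conv2d(c_in, c_out, 3, stride, 1, groups=groups, bias=False)
        # 1x1 conv: inexpensive mixing across all input channels
        self.conv1 = nn.Conv2d(c_in, c_out, 1, stride, 0, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.SiLU()  # YOLOv8's default activation

    def forward(self, x):
        return self.act(self.bn(self.conv3(x) + self.conv1(x)))

x = torch.randn(1, 64, 80, 80)
print(DualConv(64, 128, stride=2)(x).shape)  # torch.Size([1, 128, 40, 40])
```

With stride 2, a block like this can stand in for a conventional downsampling convolution, which is how the abstract describes the final substitution.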
The olivocerebellar circuitry is important for conveying both motor and non-motor information from the inferior olive (IO) to the cerebellar cortex. Several methods are currently established to observe the dynamics of the olivocerebellar circuitry, largely by recording the complex spike activity of cerebellar Purkinje cells; however, these techniques can be technically challenging to apply in vivo and are not always feasible in freely behaving animals. Here, we developed a method for the direct, accessible, and robust recording of climbing fiber (CF) Ca2+ signals based on optical fiber photometry. We first verified the IO stereotactic coordinates and the organization of contralateral CF projections using tracing techniques, then injected Ca2+ indicators optimized for axonal labeling, followed by optical fiber-based recordings. We demonstrated this method by recording CF Ca2+ signals in lobule IV/V of the cerebellar vermis and comparing the resulting signals in freely moving mice. We found various movement-evoked CF Ca2+ signals, but the onset of exploratory-like behaviors, including rearing and tiptoe standing, was highly synchronous with the recorded CF activity. Thus, we have established a robust and accessible method to record CF Ca2+ signals in freely behaving mice, which will extend the toolbox for studying cerebellar function and related disorders.
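Photometry traces of the kind described here are typically summarized as ΔF/F and aligned to behavior onsets before assessing synchrony. The sketch below illustrates that standard analysis; the sliding-percentile baseline, window lengths, and sampling rate are our assumptions, not details from the paper:

```python
# Standard photometry analysis sketch: ΔF/F against a sliding percentile baseline,
# then peri-event alignment to behavior onsets (e.g., rearing).
import numpy as np

def dff(trace, fs=100, win_s=30, pct=10):
    """ΔF/F with a sliding low-percentile baseline F0."""
    half = int(win_s * fs / 2)
    f0 = np.array([np.percentile(trace[max(0, i - half):i + half + 1], pct)
                   for i in range(len(trace))])
    return (trace - f0) / f0

def peri_event(trace, onsets, fs=100, pre_s=2, post_s=4):
    """Stack ΔF/F snippets around each behavior onset; average axis 0 for a PETH."""
    pre, post = int(pre_s * fs), int(post_s * fs)
    return np.array([trace[t - pre:t + post] for t in onsets
                     if t - pre >= 0 and t + post <= len(trace)])

fs = 100
raw = 1 + 0.02 * np.random.randn(60 * fs)   # toy 60 s photometry trace
onsets = np.array([10, 25, 40]) * fs        # toy rearing onsets (in samples)
aligned = peri_event(dff(raw, fs), onsets, fs)
print(aligned.shape)                         # (3, 600)
```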
Two-photon Ca2+ imaging is a leading technique for recording neuronal activities in vivo with cellular or subcellular resolution. However, during experiments, the images often suffer from corruption due to complex noise. The analysis of Ca2+ imaging data therefore requires preprocessing steps, such as denoising, to extract biologically relevant information. We present an approach that restores imaging data through image denoising performed by a neural network combining spatiotemporal filtering and model-blind learning. Tests with synthetic and real two-photon Ca2+ imaging datasets demonstrate that the proposed approach enables efficient restoration of imaging data. In addition, quantitative evaluation of denoising quality shows that the proposed approach outperforms current state-of-the-art methods. Our method thus provides an invaluable tool for denoising two-photon Ca2+ imaging data by model-blind spatiotemporal processing.
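The following sketch illustrates one way model-blind spatiotemporal denoising can work, in the spirit of DeepInterpolation-style training: a small 3D CNN predicts each frame from its temporal neighbors, so independent noise, being unpredictable, averages out. The architecture and training settings are our assumptions, not the paper's:

```python
# Model-blind denoising sketch: predict the center frame from surrounding frames;
# the target frame itself is never shown to the network.
import torch
import torch.nn as nn

class BlindDenoiser(nn.Module):
    def __init__(self, n_neighbors=4):  # 2 frames before + 2 after the target
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
            # final kernel collapses the temporal axis into one predicted frame
            nn.Conv3d(16, 1, (n_neighbors, 3, 3), padding=(0, 1, 1)),
        )

    def forward(self, neighbors):            # (batch, 1, n_neighbors, H, W)
        return self.net(neighbors)[:, :, 0]  # predicted center frame (batch, 1, H, W)

movie = torch.rand(1, 1, 100, 64, 64)        # toy noisy Ca2+ movie: (B, C, T, H, W)
model = BlindDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for t in range(2, 98):                       # one pass over frames as a training sketch
    nbrs = movie[:, :, [t - 2, t - 1, t + 1, t + 2]]   # exclude the target frame
    loss = nn.functional.mse_loss(model(nbrs), movie[:, :, t])
    opt.zero_grad()
    loss.backward()
    opt.step()
```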
This study presents an improved Informed RRT* algorithm that integrates a dynamic shrinkage-threshold node selection mechanism with an adaptive goal-biased strategy, aiming to reduce the number of iterations and accelerate convergence. To resolve node redundancy during Informed RRT* sampling, a node selection mechanism based on a dynamic shrinkage threshold is developed: by dynamically evaluating the distance between each newly generated node and the existing tree against the selection threshold, redundant nodes are eliminated, enhancing spatial exploration efficiency. To address blind exploration and convergence delays, an adaptive goal-biased strategy guides the directional expansion of the search tree toward the target region, thereby improving convergence behavior. Systematic simulations demonstrate the effectiveness of the proposed algorithm across multiple scenarios, and comparative experiments show that the two key techniques significantly speed up the initial-path generation of Informed RRT*. Moreover, the proposed method shows good adaptability and stability in different environments, demonstrating its potential and advantages for path planning.
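The two mechanisms are easy to isolate on a toy 2D planner, as in the sketch below: a new node is rejected if it lands within a shrinking distance threshold of the existing tree, and the goal is sampled with a probability that grows over iterations. The decay schedules and constants are illustrative assumptions, and the RRT* rewiring and informed-set sampling steps are omitted for brevity:

```python
# Toy RRT-style planner illustrating (1) dynamic shrinkage-threshold node rejection
# and (2) an adaptive goal bias, on a 10x10 workspace with one rectangular obstacle.
import math
import random

def plan(start, goal, is_free, n_iter=2000, step=0.5):
    nodes, parent = [start], {0: None}
    thresh0, bias0 = 0.4, 0.1
    for i in range(n_iter):
        thresh = thresh0 * math.exp(-3 * i / n_iter)   # shrinking rejection threshold
        bias = bias0 + 0.3 * i / n_iter                # goal bias grows over time
        sample = goal if random.random() < bias else (random.uniform(0, 10),
                                                      random.uniform(0, 10))
        j = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        d = math.dist(nodes[j], sample)
        new = (nodes[j][0] + step * (sample[0] - nodes[j][0]) / d,
               nodes[j][1] + step * (sample[1] - nodes[j][1]) / d) if d > step else sample
        # reject nodes too close to the tree: they add little new coverage
        if min(math.dist(n, new) for n in nodes) < thresh or not is_free(new):
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = j
        if math.dist(new, goal) < step:
            return nodes, parent                        # initial path found
    return nodes, parent

nodes, _ = plan((1, 1), (9, 9),
                is_free=lambda p: not (4 < p[0] < 6 and 2 < p[1] < 8))
print(len(nodes))
```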
Two-photon Ca2+ imaging is a widely used technique for investigating brain functions across multiple spatial scales. However, the recording of neuronal activities is affected by brain movement during tasks in which the animal is behaving. Although post hoc image registration is the commonly used remedy, recent developments in online neuroscience experiments require real-time image processing with efficient motion correction, posing new challenges in neuroinformatics. We propose a fast and accurate motion correction method based on image density features to address the problem of imaging animals during behavior. The method first robustly estimates and clusters density features from two-photon images; it then exploits the temporal correlation in imaging data to update the features of consecutive frames with efficient calculations. Motion artifacts can thus be corrected quickly and accurately by matching the features and obtaining the transformation parameters for the raw images. Based on this efficient strategy, our algorithm yields promising computational efficiency on imaging datasets at scales ranging from dendritic spines to neuronal populations. Furthermore, we show that the proposed method outperforms other methods in both computational speed and correction quality. In summary, we provide a powerful tool for motion correction of two-photon Ca2+ imaging data, which may facilitate online imaging experiments in the future.
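A minimal sketch of the feature-based correction idea follows: extract bright-region centroids as density features, match them to the reference frame's features by nearest neighbor, and take the median displacement as the frame shift. The percentile threshold and pure-translation model are our simplifying assumptions, and the paper's incremental frame-to-frame feature updating is omitted:

```python
# Feature-based rigid motion correction sketch: centroid features + median shift.
import numpy as np
from scipy import ndimage

def density_features(frame, pct=95):
    """Centroids of bright connected regions as simple density features."""
    labels, n = ndimage.label(frame > np.percentile(frame, pct))
    return np.array(ndimage.center_of_mass(frame, labels, range(1, n + 1)))

def estimate_shift(ref_feats, feats):
    # match each feature to its nearest reference feature; median is outlier-robust
    d = np.linalg.norm(feats[:, None] - ref_feats[None], axis=2)
    matches = ref_feats[np.argmin(d, axis=1)]
    return np.median(matches - feats, axis=0)

rng = np.random.default_rng(0)
ref = ndimage.gaussian_filter(rng.random((128, 128)), 3)   # toy reference frame
moved = np.roll(ref, (4, -2), axis=(0, 1))                 # frame displaced by (4, -2)
shift = estimate_shift(density_features(ref), density_features(moved))
corrected = ndimage.shift(moved, shift)                    # re-register to the reference
print(np.round(shift))                                      # ≈ [-4.  2.]
```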
Quantitative and mechanistic understanding of learning and long-term memory at the level of single neurons in living brains requires highly demanding techniques. A specific need is to precisely label a single cell whose firing output has been pinpointed amid a functionally characterized large population of neurons through the learning process, and then to investigate the distribution and properties of its dendritic inputs. Here, we present an integrated method of daily two-photon Ca2+ imaging of neuronal populations throughout an auditory associative learning course, followed by targeted single-cell loose-patch recording and plasmid electroporation for enhanced chronic Ca2+ imaging of dendritic spines in the targeted cell. Our method provides a unique solution to this demand, opening a solid path toward the core questions of how learning and long-term memory are physiologically implemented at the level of single neurons and synapses.
In vivo two-photon Ca2+ imaging is a powerful tool for recording neuronal activities during perceptual tasks and has been increasingly applied to behaving animals in acute or chronic experiments. However, the auditory cortex is not easily accessible to imaging because of the overlying temporal muscles, the arteries around the ears, and its lateral location. Here, we report a protocol for two-photon Ca2+ imaging in the auditory cortex of head-fixed behaving mice. Using a custom-made head fixation apparatus and a head-rotated fixation procedure, we achieved two-photon imaging, in combination with targeted cell-attached recordings, of auditory cortical neurons in behaving mice. Using synthetic Ca2+ indicators, we recorded Ca2+ transients at multiple scales, including neuronal populations, single neurons, dendrites, and single spines, in the auditory cortex during behavior. Furthermore, using genetically encoded Ca2+ indicators (GECIs), we monitored neuronal dynamics over days throughout the process of associative learning. We thus achieved two-photon functional imaging at multiple scales in the auditory cortex of behaving mice, extending the toolbox for investigating the neural basis of audition-related behaviors.