A large-scale neural network training framework for generalized estimation of single-trial population dynamics

2021 
Large-scale recordings of neural activity are providing new opportunities to study network-level dynamics. However, the sheer volume of data and its dynamical complexity are critical barriers to uncovering and interpreting these dynamics. Deep learning methods are a promising approach due to their ability to uncover meaningful relationships from large, complex, and noisy datasets. When applied to high-D spiking data from motor cortex (M1) during stereotyped behaviors, they offer improvements in the ability to uncover dynamics and their relation to subjects' behaviors on a millisecond timescale. However, applying such methods to less-structured behaviors, or in brain areas that are not well-modeled by autonomous dynamics, is far more challenging, because deep learning methods often require careful hand-tuning of complex model hyperparameters (HPs). Here we demonstrate AutoLFADS, a large-scale, automated model-tuning framework that can characterize dynamics in diverse brain areas without regard to behavior. AutoLFADS uses distributed computing to train dozens of models simultaneously while using evolutionary algorithms to tune HPs in a completely unsupervised way. This enables accurate inference of dynamics out-of-the-box on a variety of datasets, including data from M1 during stereotyped and free-paced reaching, somatosensory cortex during reaching with perturbations, and frontal cortex during cognitive timing tasks. We present a cloud software package and comprehensive tutorials that enable new users to apply the method without needing dedicated computing resources.
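To make the population-based, evolutionary HP search described above more concrete, the following is a minimal Python sketch of that general idea: a population of workers trains in parallel, and periodically low-scoring workers copy and perturb the hyperparameters of high-scoring ones. All names, hyperparameters, ranges, and the scoring function here are illustrative placeholders, not the AutoLFADS implementation or its defaults.

```python
import random

# Hypothetical search space for a few LFADS-style hyperparameters.
# Names and ranges are illustrative only.
HP_RANGES = {
    "learning_rate": (1e-4, 1e-2),
    "dropout": (0.0, 0.6),
    "kl_weight": (1e-6, 1e-3),
}


def sample_hps():
    """Draw an initial hyperparameter set uniformly from each range."""
    return {name: random.uniform(lo, hi) for name, (lo, hi) in HP_RANGES.items()}


def perturb(hps, scale=0.2):
    """'Explore' step: jitter each HP multiplicatively and clip to its range."""
    new = {}
    for name, value in hps.items():
        lo, hi = HP_RANGES[name]
        new[name] = min(hi, max(lo, value * random.uniform(1 - scale, 1 + scale)))
    return new


def train_and_score(hps, steps):
    """Placeholder for training a model for `steps` and returning an
    unsupervised validation score (e.g., held-out spike likelihood).
    Here it is a toy surrogate with a single optimum."""
    return -sum(
        (value - (lo + hi) / 2) ** 2
        for (lo, hi), value in zip(HP_RANGES.values(), hps.values())
    )


def population_based_search(pop_size=20, generations=10, steps_per_gen=1000):
    """Evolve a population of HP configurations; in a real framework each
    worker would train its own model in parallel on separate hardware."""
    population = [{"hps": sample_hps(), "score": float("-inf")} for _ in range(pop_size)]
    for gen in range(generations):
        for worker in population:
            worker["score"] = train_and_score(worker["hps"], steps_per_gen)
        ranked = sorted(population, key=lambda w: w["score"], reverse=True)
        cutoff = max(1, pop_size // 4)
        # Exploit: the bottom quartile copies HPs (and, in practice, model
        # weights) from the top quartile, then explores by perturbing them.
        for loser in ranked[-cutoff:]:
            winner = random.choice(ranked[:cutoff])
            loser["hps"] = perturb(winner["hps"])
        print(f"generation {gen}: best score {ranked[0]['score']:.4f}")
    return max(population, key=lambda w: w["score"])


if __name__ == "__main__":
    best = population_based_search()
    print("best hyperparameters:", best["hps"])
```

The key design choice this sketch illustrates is that model selection is driven entirely by a score computable from the neural data itself, so no behavioral labels or manual HP tuning are required.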