Modeling task-based fMRI data via deep belief network with neural architecture search.

2020 
Abstract
It has been shown that deep neural networks are powerful and flexible models that can be applied to fMRI data, with representation ability superior to traditional methods. However, the design of neural network architectures remains a challenge: because of the high dimensionality of fMRI volume images, manually designing a network model is time-consuming and rarely optimal. To address this problem, we propose an unsupervised neural architecture search (NAS) framework for a deep belief network (DBN) that models volumetric fMRI data, named NAS-DBN. The NAS-DBN framework is based on Particle Swarm Optimization (PSO), in which a swarm of candidate architectures evolves and converges to a feasible optimal solution. Experiments showed that the proposed NAS-DBN framework can quickly find a robust DBN architecture, yielding a hierarchical organization of functional brain networks (FBNs) and their temporal responses. Compared with 3 manually designed DBNs, the proposed NAS-DBN achieved the lowest testing loss of 0.0197, an overall performance improvement of up to 47.9%. For each task, the NAS-DBN identified 260 FBNs, including task-specific FBNs and resting-state networks (RSNs), which show high overlap rates with general linear model (GLM) derived templates and independent component analysis (ICA) derived RSN templates. The average overlap rate between NAS-DBN and GLM on 20 task-specific FBNs is as high as 0.536, indicating a performance improvement of up to 63.9% with respect to network modeling. In addition, we showed that the NAS-DBN can simultaneously generate temporal responses that closely resemble the task designs, and we observed that widespread overlaps between FBNs from different layers of the NAS-DBN model form a hierarchical organization of FBNs. Our NAS-DBN framework contributes an effective, unsupervised NAS method for modeling volumetric task fMRI data.
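The abstract describes a PSO-based search in which a swarm of candidate DBN architectures evolves toward the configuration with the lowest testing loss. The sketch below illustrates such a loop under stated assumptions: each particle encodes three hidden-layer widths, and the fitness function is a stand-in for training a DBN and measuring its test reconstruction loss. The encoding, bounds, PSO coefficients, and the evaluate_dbn placeholder are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of PSO over DBN hidden-layer widths (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def evaluate_dbn(widths, X_train=None, X_test=None):
    """Placeholder fitness: in the real framework this would train a DBN with
    the given hidden-layer widths and return its reconstruction loss on held-out
    fMRI volumes. Here a cheap proxy is substituted so the sketch runs end to end."""
    target = np.array([400.0, 200.0, 100.0])  # hypothetical "good" configuration
    return float(np.mean((np.asarray(widths) - target) ** 2)) * 1e-6

def pso_nas(n_particles=10, n_iters=30, dims=3, lo=50.0, hi=600.0,
            w=0.7, c1=1.5, c2=1.5):
    # Initialize particle positions (layer widths) and velocities.
    pos = rng.uniform(lo, hi, size=(n_particles, dims))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.array([evaluate_dbn(p) for p in pos])
    g = pbest[np.argmin(pbest_f)].copy()   # global best architecture
    g_f = pbest_f.min()
    for _ in range(n_iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        # Standard PSO velocity update: inertia + cognitive + social terms.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = np.clip(pos + vel, lo, hi)
        f = np.array([evaluate_dbn(p) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        if f.min() < g_f:
            g, g_f = pos[np.argmin(f)].copy(), f.min()
    return np.round(g).astype(int), g_f

best_widths, best_loss = pso_nas()
print("best hidden-layer widths:", best_widths, "proxy loss:", best_loss)
```

In the actual NAS-DBN setting, the fitness evaluation would be the expensive step (training each candidate DBN on volumetric task fMRI data), while the swarm update itself is cheap, which is why the search is reported to converge quickly once a robust architecture is found.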