Temporal Information Guided Generative Adversarial Networks for Stimuli Image Reconstruction from Human Brain Activities

2021 
Understanding how the human brain works has attracted increasing attention in both neuroscience and machine learning. Previous studies have used autoencoders and generative adversarial networks (GANs) to improve the quality of stimuli image reconstruction from functional Magnetic Resonance Imaging (fMRI) data. However, these methods mainly focus on learning relevant features between two different modalities of data, i.e., stimuli images and fMRI, while ignoring the temporal information in fMRI data, which leads to sub-optimal performance. To address this issue, we propose a temporal information guided GAN (TIGAN) to reconstruct visual stimuli from human brain activity. Specifically, the proposed method consists of three key components: 1) an image encoder that maps stimuli images into a latent space, 2) a Long Short-Term Memory (LSTM) generator for fMRI feature mapping, which captures the temporal information in fMRI data, and 3) a discriminator for image reconstruction, which pushes the reconstructed image to be more similar to the original image. In addition, to better measure the relationship between the two modalities (i.e., fMRI and natural images), we leverage a pairwise ranking loss that ranks stimuli images against fMRI so that strongly associated pairs appear at the top and weakly related ones at the bottom. Experimental results on real-world datasets show that the proposed TIGAN outperforms several state-of-the-art image reconstruction approaches.
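The abstract does not specify implementation details, so the following PyTorch sketch is only a minimal illustration of the three-component design it describes (image encoder, LSTM generator over fMRI time series, discriminator). The image size (64×64 grayscale), voxel count, latent dimension, and all layer shapes are hypothetical placeholders, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class ImageEncoder(nn.Module):
    """Maps stimulus images into a latent space (hypothetical 64x64 grayscale input)."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class LSTMGenerator(nn.Module):
    """Encodes an fMRI time series (batch, time, voxels) with an LSTM to capture
    temporal information, then decodes a reconstructed image from the latent code."""
    def __init__(self, n_voxels=4000, latent_dim=128):
        super().__init__()
        self.lstm = nn.LSTM(n_voxels, 256, batch_first=True)
        self.to_latent = nn.Linear(256, latent_dim)
        self.decode = nn.Sequential(
            nn.Linear(latent_dim, 64 * 16 * 16),
            nn.ReLU(),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),   # 32 -> 64
            nn.Tanh(),
        )

    def forward(self, fmri_seq):
        out, _ = self.lstm(fmri_seq)     # (batch, time, 256)
        z = self.to_latent(out[:, -1])   # last hidden state -> image latent space
        return self.decode(z), z

class Discriminator(nn.Module):
    """Scores images as real (original stimuli) or fake (reconstructions);
    outputs a raw logit, e.g. for use with BCEWithLogitsLoss."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 1),
        )

    def forward(self, x):
        return self.net(x)
```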
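The pairwise ranking loss can be illustrated with a standard margin-based (hinge) formulation over matched and mismatched (image, fMRI) pairs within a batch, which scores strongly associated pairs above weakly related ones. This is a common cross-modal ranking formulation; the paper's exact loss may differ, and the margin value here is an assumption.

```python
import torch
import torch.nn.functional as F

def pairwise_ranking_loss(img_latents, fmri_latents, margin=0.2):
    """Bidirectional margin ranking loss: each matched (image, fMRI) pair should
    score higher than any mismatched pair in the batch by at least `margin`."""
    img = F.normalize(img_latents, dim=1)
    fmri = F.normalize(fmri_latents, dim=1)
    scores = img @ fmri.t()                 # (batch, batch) cosine similarities
    positives = scores.diag().view(-1, 1)   # matched pairs lie on the diagonal
    # Hinge costs: rank fMRI given an image, and images given an fMRI.
    cost_img = (margin + scores - positives).clamp(min=0)
    cost_fmri = (margin + scores - positives.t()).clamp(min=0)
    # Matched pairs should not penalize themselves: zero the diagonal.
    mask = torch.eye(scores.size(0), dtype=torch.bool, device=scores.device)
    cost_img = cost_img.masked_fill(mask, 0)
    cost_fmri = cost_fmri.masked_fill(mask, 0)
    return cost_img.sum() + cost_fmri.sum()
```

In training, `img_latents` would come from the image encoder and `fmri_latents` from the LSTM generator's latent output, so the ranking term aligns the two modalities in the shared latent space.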