Integrating Space, Time, and Orientation in Spiking Neural Networks: A Case Study on Multimodal Brain Data Modeling

2018 
Recent progress in noninvasive brain data sampling technologies has facilitated the simultaneous acquisition of multiple modalities of brain data, such as functional magnetic resonance imaging (fMRI), electroencephalography (EEG), and diffusion tensor imaging (DTI). Despite the potential benefits of integrating multiple modalities of brain data into a single predictive model, this area of research remains largely unexplored owing to a lack of methodological advances. The difficulty in fusing multiple modalities of brain data within one model lies in the heterogeneous temporal and spatial characteristics of the data sources. Recent advances in spiking neural network systems, however, provide the flexibility to incorporate multidimensional information within a single model. This paper proposes a novel unsupervised learning algorithm for fusing temporal, spatial, and orientation information in a spiking neural network architecture, which can be used to understand and perform predictive modeling with multimodal data. The proposed algorithm is evaluated both qualitatively and quantitatively on synthetically generated data to characterize its behavior and its ability to exploit spatial, temporal, and orientation information within the model, leading to improved pattern recognition performance along with robust interpretability of the brain data. Furthermore, a case study is presented that builds a computational model to discriminate between people with schizophrenia who do and do not respond to monotherapy with the antipsychotic clozapine.
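To make the fused use of the three information sources concrete, the following is a minimal illustrative sketch, not the paper's published algorithm: an unsupervised, STDP-like weight update in which the temporal spike-timing term is modulated by the spatial distance between two neurons and by the alignment of the connection with a local fibre-orientation vector (for example, one derived from DTI). All names and parameter values here (fused_stdp_update, A_PLUS, A_MINUS, TAU, LAMBDA_DIST) are hypothetical.

    # Illustrative sketch only: an unsupervised STDP-like rule whose learning
    # rate is modulated by spatial distance and fibre-orientation alignment.
    import numpy as np

    A_PLUS, A_MINUS = 0.01, 0.012   # potentiation / depression amplitudes
    TAU = 20.0                      # STDP time constant (ms)
    LAMBDA_DIST = 5.0               # spatial decay constant (mm)

    def fused_stdp_update(dt_spike, pos_pre, pos_post, orientation):
        """Return the weight change for one synapse.

        dt_spike     -- t_post - t_pre, spike-time difference in ms (temporal term)
        pos_pre/post -- 3D coordinates of the two neurons (spatial term)
        orientation  -- unit vector of local fibre orientation (orientation term)
        """
        # Temporal factor: classic exponential STDP window.
        if dt_spike >= 0:
            temporal = A_PLUS * np.exp(-dt_spike / TAU)
        else:
            temporal = -A_MINUS * np.exp(dt_spike / TAU)

        # Spatial factor: nearby neurons influence each other more strongly.
        direction = pos_post - pos_pre
        dist = np.linalg.norm(direction)
        spatial = np.exp(-dist / LAMBDA_DIST)

        # Orientation factor: connections aligned with the fibre direction
        # are reinforced more (|cosine| of the angle between the two vectors).
        align = abs(np.dot(direction / (dist + 1e-9), orientation))

        return temporal * spatial * align  # multiplicative fusion of all three

    # Example: a pre-before-post spike pair on a short, well-aligned connection.
    fibre = np.array([0.9, 0.45, 0.0])
    dw = fused_stdp_update(
        dt_spike=8.0,
        pos_pre=np.array([0.0, 0.0, 0.0]),
        pos_post=np.array([2.0, 1.0, 0.0]),
        orientation=fibre / np.linalg.norm(fibre),
    )

In this sketch the three factors are combined multiplicatively, so a synapse is strengthened only when spike timing, spatial proximity, and orientation alignment all support it; other fusion schemes (for example, additive weighting) are equally plausible under the same assumptions.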