Suitability of features of deep convolutional neural networks for modeling somatosensory information processing

2019 
Deep Learning (DL) has recently led to great excitement and success in AI and has attracted further attention because the features extracted in the early layers of deep networks have properties similar to those of real neurons in the primary visual cortex. Understanding cortical mechanisms of sensory information processing is important for improving DL systems as well as for developing more realistic simulations of cortical systems. Just as representing speech in the time-frequency domain makes it possible to use DL systems trained on images for speech recognition, tactile information processing can also be modeled and studied using DL. Tactile stimulators, such as those invented by Cortical Metrics, are efficient tools for quantitative sensory testing. To model more accurately the features extracted during tactile information processing in the somatosensory cortex, we propose a novel unsupervised DL method using transfer learning and the principle of contextual guidance. Our approach helps describe the goal of sensory coding in early cortical areas: low-level stimulus features that serve as behaviorally useful building blocks for high-level, behaviorally significant features must rely on principles, such as contextual guidance and transfer utility, that apply to any sensory modality, whether visual, auditory, or tactile. We show that the emergent features offer higher classification accuracy than AlexNet on the Caltech-101 dataset and on textures resembling the tactile stimuli processed by the S1 somatosensory area. Our computational modeling approach can help improve (i) Cortical Metrics approaches, (ii) sensorimotor cortical models, and (iii) deep hybrids of unsupervised and supervised networks.
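The abstract does not specify the authors' exact pipeline, but a minimal sketch of the kind of transfer-learning baseline it compares against (a pretrained AlexNet reused as a feature extractor and fine-tuned on a texture or Caltech-101-style dataset) might look like the following; the dataset path and class count are placeholders, not details from the paper.

```python
# Illustrative sketch only (not the authors' code): transfer learning with a
# pretrained AlexNet as a frozen feature extractor for texture classification.
import torch
import torch.nn as nn
from torchvision import models, transforms, datasets

# Load AlexNet pretrained on ImageNet and freeze its weights.
alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
for p in alexnet.parameters():
    p.requires_grad = False

# Replace the final classifier layer with one sized for the target classes
# (hypothetical: 101 classes, as in Caltech-101).
num_classes = 101
alexnet.classifier[6] = nn.Linear(4096, num_classes)

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical image-folder dataset of texture stimuli.
dataset = datasets.ImageFolder("textures/", transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# Only the new classification head is trained.
optimizer = torch.optim.Adam(alexnet.classifier[6].parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

alexnet.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(alexnet(images), labels)
    loss.backward()
    optimizer.step()
```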