Learning Cross-Modal Visual-Tactile Representation Using Ensembled GANs

2019 
In this study, the authors develop a deep learning model that converts visual information into tactile information, so that, after training, different texture images can be mapped to tactile signals close to the real tactile sensation. The study focuses on classifying the visual information of different images and producing the corresponding tactile feedback output. A training model based on ensembled generative adversarial networks is proposed, characterized by simple training and stable, reliable results. Moreover, whereas previous methods judged the tactile output only by subjective human perception, this study also provides an objective, quantitative evaluation system to verify the model's performance. The experimental results show that the learning model can transform the visual information of an image into tactile information close to the real tactile sensation, and they also verify the soundness of the tactile evaluation method.
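To make the ensembling idea concrete, below is a minimal, hypothetical sketch in PyTorch of an ensembled GAN generator for visual-to-tactile translation. The layer sizes, the tactile signal length `TACTILE_LEN`, the feature dimension `feat_dim`, and the output-averaging ensembling strategy are all illustrative assumptions, not the authors' exact design.

```python
# Hypothetical sketch: an ensemble of GAN generators mapping visual
# features of a texture image to a 1-D tactile signal. All architecture
# details are assumed for illustration.
import torch
import torch.nn as nn

TACTILE_LEN = 128  # assumed length of the generated tactile waveform


class Generator(nn.Module):
    """Maps an image-feature vector to a 1-D tactile signal."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, TACTILE_LEN), nn.Tanh(),
        )

    def forward(self, feat):
        return self.net(feat)


class Discriminator(nn.Module):
    """Scores a tactile signal as real or generated."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(TACTILE_LEN, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1),
        )

    def forward(self, x):
        return self.net(x)


class EnsembledGAN(nn.Module):
    """Averages the outputs of several generator members; in an actual
    setup each member would be trained adversarially against its own
    discriminator before ensembling (an assumption here)."""
    def __init__(self, n_members=3, feat_dim=256):
        super().__init__()
        self.members = nn.ModuleList(
            Generator(feat_dim) for _ in range(n_members)
        )

    def forward(self, feat):
        outs = torch.stack([g(feat) for g in self.members])
        return outs.mean(dim=0)


if __name__ == "__main__":
    feats = torch.randn(4, 256)       # stand-in visual features for 4 images
    tactile = EnsembledGAN()(feats)   # (4, TACTILE_LEN) tactile signals
    print(tactile.shape)
```

Averaging several independently trained generators is one common way to stabilize GAN outputs, which is consistent with the abstract's claim of simple training and stable results, though the paper's actual combination rule may differ.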