Verbally Assisting Virtual-Environment Tactile Maps: A Cross-Linguistic and Cross-Cultural Study

2014 
The Verbally Assisting Virtual-Environment Tactile Maps (VAVETaM) approach proposes to increase the effectiveness of tactile maps by realizing an intelligent multi-modal tactile map system that generates assisting utterances to support the acquisition of survey knowledge from virtual tactile maps. Two experiments conducted in German, one with blindfolded sighted people and one with blind and visually impaired people, show that both types of participants benefit from verbal assistance. In this paper, we report an experiment testing the adaptation of the German prototype for use by Chinese native speakers. This study shows that the VAVETaM system can be adapted to the Chinese language with comparably small effort. The Chinese participants' achievement in acquiring survey knowledge is comparable to that of the participants in the German study. This supports the view that human processing of representationally multi-modal information is comparable across different cultures and languages.