Tracking multimodal cohesion in Audio Description: Examples from a Dutch audio description corpus

2019 
One of the central questions addressed by multimodality research, the principal conceptual framework for analysing audiovisual texts, is how the different modes of such texts (visual, verbal, aural) combine to create supplementary meaning, over and above the meanings conveyed by the individual constituents. Ensuring that this multimodal interaction, or multimodal cohesion, remains intact is a key challenge in the practice of audiovisual translation (AVT), and particularly in Audio Description (AD) for the blind and visually impaired. The present article therefore studies the functioning of multimodal cohesion in audio-described texts by analysing the types of interaction between descriptive units and sound effects in a selection of Dutch audio-described films and series. The article begins with a detailed description of the methodology, which is based on multimodal transcription, and concludes with an overview of the types of multimodal cohesive relations identified.