Multi-channel Transformers for Multi-articulatory Sign Language Translation
2020
Sign languages use multiple asynchronous information channels
(articulators), not just the hands but also the face and body, which
computational approaches often ignore. In this paper we tackle the
multi-articulatory sign language translation task and propose a novel
multi-channel transformer architecture. The proposed architecture allows
both the inter- and intra-channel contextual relationships between
different sign articulators to be modelled within the transformer
network itself, while also maintaining channel-specific information. We
evaluate our approach on
the RWTH-PHOENIX-Weather-2014T dataset and report competitive
translation performance. Importantly, we overcome the reliance on gloss
annotations which underpin other state-of-the-art approaches, thereby
removing the need for expensive curated datasets.
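The abstract does not spell out the attention layout, but the idea of modelling intra- and inter-channel context while keeping one stream per articulator can be sketched roughly as follows. This is a hypothetical NumPy illustration, not the paper's implementation: the channel names (`hands`, `face`, `body`), the single-head attention, and the residual fusion are all assumptions for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # scaled dot-product attention (single head, no learned projections)
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores, axis=-1) @ v

def multichannel_layer(channels):
    """One hypothetical multi-channel encoder step.

    Each channel (e.g. hands, face, body) first attends to itself
    (intra-channel context), then attends over the other channels
    (inter-channel context). One output stream is kept per channel,
    so channel-specific information is preserved.
    """
    out = []
    for i, x in enumerate(channels):
        intra = attention(x, x, x)  # context within this articulator
        others = np.concatenate(
            [c for j, c in enumerate(channels) if j != i], axis=0)
        inter = attention(intra, others, others)  # context across articulators
        out.append(intra + inter)  # residual-style fusion, per channel
    return out

# Toy usage: three articulator channels, 5 frames of 8-dim features each
rng = np.random.default_rng(0)
hands, face, body = (rng.normal(size=(5, 8)) for _ in range(3))
fused = multichannel_layer([hands, face, body])
```

Keeping the per-channel outputs separate (rather than concatenating them into one sequence) is what the abstract refers to as maintaining channel-specific information.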