Decomposed Attention: Self-Attention with Linear Complexities.

2019 
Recent works have applied self-attention to various fields in computer vision and natural language processing. However, the memory and computational demands of existing self-attention operations grow quadratically with the spatiotemporal size of the input. This prohibits the application of self-attention to large inputs, e.g., long sequences, high-definition images, or large videos. To remedy this drawback, this paper proposes a novel decomposed attention (DA) module with substantially lower memory and computational consumption. This resource efficiency allows more widespread and flexible application. Empirical evaluations on object recognition demonstrate these advantages: DA-augmented models achieved state-of-the-art performance for object recognition on MS-COCO 2017 and significant improvement for image classification on ImageNet. Further, the resource efficiency of DA brings self-attention to fields where prohibitively high costs have so far prevented its application; a state-of-the-art result for stereo depth estimation on the Scene Flow dataset exemplifies this.
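The abstract does not spell out the exact DA formulation, but the contrast between quadratic and linear complexity can be illustrated with a common reordering trick: instead of materializing the n-by-n attention map softmax(QKᵀ)V, one normalizes Q and K separately and computes KᵀV first, so the intermediate is only d_k-by-d_v. The sketch below is a minimal NumPy illustration of that general idea; the function names, the per-axis softmax normalization, and the shapes are assumptions for illustration, not the paper's verified method.

```python
import numpy as np

def softmax(x, axis):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def standard_attention(Q, K, V):
    # O(n^2) memory and compute: materializes the n-by-n attention map.
    scores = softmax(Q @ K.T / np.sqrt(Q.shape[-1]), axis=-1)  # (n, n)
    return scores @ V                                           # (n, d_v)

def linear_attention(Q, K, V):
    # Hypothetical reordering (not necessarily the paper's DA module):
    # normalize Q over features and K over positions, then compute K^T V
    # first, so only a (d_k, d_v) "context" matrix is ever materialized.
    q = softmax(Q, axis=-1)   # softmax over the feature dimension
    k = softmax(K, axis=0)    # softmax over the n positions
    context = k.T @ V         # (d_k, d_v) -- independent of n
    return q @ context        # (n, d_v), linear in n

# Example: sequence length 4096 with 64-dimensional keys and values.
n, d_k, d_v = 4096, 64, 64
Q, K, V = (np.random.randn(n, d) for d in (d_k, d_k, d_v))
print(linear_attention(Q, K, V).shape)  # (4096, 64)
```

With this reordering, memory and compute scale as O(n·d_k·d_v) rather than O(n²), which is what makes attention over long sequences, high-definition images, or cost-volume-style inputs (as in stereo depth estimation) tractable.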