Should Semantic Vector Composition be Explicit? Can it be Linear?

2021 
Vector representations have become a central element in semantic language modelling, leading to mathematical overlaps with many fields including quantum theory. Compositionality is a core goal for such representations: given representations for 'wet' and 'fish', how should the concept 'wet fish' be represented? This position paper surveys this question from two points of view. The first considers the question of whether an explicit mathematical representation can be successful using only tools from within linear algebra, or whether other mathematical tools are needed. The second considers whether semantic vector composition should be explicitly described mathematically, or whether it can be a model-internal side-effect of training a neural network. This paper is intended as a survey and motivation for discussion, and does not claim to give definitive answers to the questions posed. We speculate that these questions are related, and that the nonlinear operators used in implicitly compositional language models may inform explicit compositional modelling.
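As an illustration of the kind of explicit composition the abstract refers to, the sketch below (not taken from the paper; the toy embeddings and function names are hypothetical) contrasts a purely linear composition rule, vector addition, with a simple nonlinear alternative, the elementwise product, for representing 'wet fish' from vectors for 'wet' and 'fish'.

```python
import numpy as np

# Hypothetical toy embeddings for 'wet' and 'fish' (illustrative only).
wet = np.array([0.8, 0.1, 0.3])
fish = np.array([0.2, 0.9, 0.4])

def compose_additive(u, v):
    """Linear composition: represent the phrase as the sum u + v."""
    return u + v

def compose_multiplicative(u, v):
    """A simple nonlinear alternative: the elementwise (Hadamard) product."""
    return u * v

print("wet fish (additive):      ", compose_additive(wet, fish))
print("wet fish (multiplicative):", compose_multiplicative(wet, fish))
```

Both rules are explicit and commutative; by contrast, composition learned implicitly inside a neural language model is typically nonlinear and order-sensitive, which is the tension the paper surveys.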