Using top-down gating to optimally balance shared versus separated task representations
2021
Human adaptive behavior requires continually learning and performing a wide variety of tasks, often with very little practice. To accomplish this, it is crucial to separate neural representations of different tasks in order to avoid interference. At the same time, sharing neural representations supports generalization and allows faster learning. Therefore, a crucial challenge is to find an optimal balance between shared versus separated representations. Typically, models of human cognition employ top-down gating signals to separate task representations, but there has been surprisingly little systematic computational investigation of how such gating is best implemented. We identify and systematically evaluate two crucial features of gating signals. First, top-down input can be processed in an additive or multiplicative manner. Second, the gating signals can be adaptive (learned) or non-adaptive (random). We cross these two features, resulting in four gating models, which are tested on a variety of input datasets and tasks with different degrees of stimulus-action mapping overlap. The multiplicative adaptive gating model outperforms all other models in terms of accuracy. Moreover, this model develops hidden units that optimally share representations between tasks. Specifically, unlike the binary approach of currently popular latent-state models, it exploits partial overlap between tasks.
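The abstract contrasts additive versus multiplicative processing of the top-down task signal, and adaptive (learned) versus non-adaptive (random, fixed) gating weights. The sketch below is a minimal illustration of that distinction, not the authors' exact architecture; the layer sizes, weight initialization, and the sigmoid gate are illustrative assumptions.

```python
# Minimal sketch of additive vs. multiplicative top-down gating of a hidden layer.
# Whether W_task is trained (adaptive) or kept fixed at random values (non-adaptive)
# corresponds to the second factor crossed in the paper.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_tasks = 10, 32, 4                # illustrative sizes

W_in = rng.normal(0, 0.1, (n_hidden, n_in))        # stimulus -> hidden weights
W_task = rng.normal(0, 0.1, (n_hidden, n_tasks))   # task/context -> hidden weights

def relu(x):
    return np.maximum(0.0, x)

def hidden_additive(x, task_onehot):
    """Task signal is added to the hidden pre-activation."""
    return relu(W_in @ x + W_task @ task_onehot)

def hidden_multiplicative(x, task_onehot):
    """Task signal multiplicatively gates the hidden activity."""
    gate = 1.0 / (1.0 + np.exp(-(W_task @ task_onehot)))   # per-unit gate in (0, 1)
    return gate * relu(W_in @ x)

x = rng.normal(size=n_in)       # example stimulus
task = np.eye(n_tasks)[2]       # one-hot code for the third task
print(hidden_additive(x, task).shape, hidden_multiplicative(x, task).shape)
```

Under this formulation, a multiplicative gate can silence individual hidden units for some tasks while leaving them active for others, which is one way partially overlapping task representations can emerge.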