Similarity, kernels, and the fundamental constraints on cognition

2016 
Abstract

Kernel-based methods, and in particular the so-called kernel trick, which is used in statistical learning theory as a means of avoiding expensive high-dimensional computations, have broad and constructive implications for the cognitive and brain sciences. An equivalent and complementary view of kernels as a measure of similarity highlights their effectiveness in low-dimensional and low-complexity learning and generalization — tasks that are indispensable in cognitive information processing. In this survey, we seek (i) to highlight some parallels between kernels in machine learning on the one hand and similarity in psychology and neuroscience on the other hand, (ii) to sketch out new research directions arising from these parallels, and (iii) to clarify some aspects of the way kernels are presented and discussed in the literature that may have affected their perceived relevance to cognition. In particular, we aim to resolve the tension between the view of kernels as a method of raising the dimensionality, and the various requirements of reducing dimensionality for cognitive purposes. We identify four fundamental constraints that apply to any cognitive system that is charged with learning from the statistics of its world, and argue that kernel-like neural computation is particularly suited to serving such learning and decision-making needs, while simultaneously satisfying these constraints.
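As a minimal illustration of the kernel trick mentioned in the abstract (this sketch is not from the paper itself): for the homogeneous quadratic kernel k(x, y) = (x · y)², evaluating the kernel directly in the low-dimensional input space gives the same result as first mapping the inputs through an explicit feature map φ into a higher-dimensional space and taking the inner product there. The function names (`kernel`, `phi`) are illustrative choices.

```python
import math

def dot(a, b):
    """Inner product of two same-length sequences."""
    return sum(ai * bi for ai, bi in zip(a, b))

def kernel(x, y):
    """Quadratic kernel k(x, y) = (x . y)^2, computed directly in 2-D."""
    return dot(x, y) ** 2

def phi(x):
    """Explicit feature map into 3-D: phi(x) = (x1^2, sqrt(2)*x1*x2, x2^2).
    The kernel trick lets us avoid ever constructing this representation."""
    x1, x2 = x
    return (x1 * x1, math.sqrt(2) * x1 * x2, x2 * x2)

x, y = (1.0, 2.0), (3.0, 4.0)
direct = kernel(x, y)            # computed in the 2-D input space
explicit = dot(phi(x), phi(y))   # computed in the 3-D feature space
print(direct, explicit)          # both equal 121.0
```

This equivalence is what lets a kernel double as a similarity measure on the original inputs while implicitly operating in a richer feature space, the dual view the abstract builds on.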