Towards a bio-inspired evaluation methodology for motion estimation models

2010 
Offering a proper evaluation methodology is essential for continued progress in modelling the neural mechanisms of visual information processing. Currently, the evaluation of motion estimation models lacks a methodology for comparing their performance against the visual system. Here, we set the basis for such a new benchmark methodology, based on human visual performance as measured in psychophysics, ocular following and neurobiology. This benchmark will enable comparisons between different kinds of models, but will also challenge current motion estimation models and better characterize their properties with respect to visual cortex performance. To do so, we propose a database of image sequences taken from the neuroscience and psychophysics literature. In this article, we focus on two aspects of motion estimation: the dynamics of motion integration and the respective influence of 1D versus 2D cues. Since motion models may operate on different kinds of motion representations and scales, we define two general readouts based on a global motion estimate. These readouts, namely eye movements and perceived motion, will serve as a reference to compare simulated and experimental data. We evaluate the performance of several models on this data to establish the current state of the art. The models chosen for comparison have very different properties and internal mechanisms, such as feedforward normalisation of V1 and MT processing and recurrent feedback. As a whole, we provide here the basis for a valuable evaluation methodology to unravel the fundamental mechanisms of the visual cortex in motion perception. Our database is freely available on the web together with scoring instructions and results at http://www-sop.inria.fr/neuromathcomp/software/motionpsychobench
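The abstract mentions two global readouts (eye movements and perceived motion) used to compare simulated and experimental data. Below is a minimal sketch of what such a comparison could look like, assuming a model that outputs a dense flow field per frame; the function names, pooling scheme and RMSE score are illustrative assumptions, not the benchmark's actual scoring procedure.

```python
# Hypothetical sketch: pooling a model's dense motion field into a single
# global-motion readout and scoring it against a reference trace.
# All names (pool_global_motion, score_against_reference, ...) are
# illustrative assumptions, not part of the published benchmark.
import numpy as np

def pool_global_motion(flow, weights=None):
    """Collapse a dense flow field (H x W x 2) into one global (vx, vy) vector.

    `weights` (H x W) could emphasise, e.g., high-confidence or high-contrast
    regions; uniform pooling is used when omitted.
    """
    h, w, _ = flow.shape
    if weights is None:
        weights = np.ones((h, w))
    weights = weights / weights.sum()
    # Weighted spatial average over both image dimensions -> shape (2,)
    return np.tensordot(weights, flow, axes=([0, 1], [0, 1]))

def score_against_reference(model_trace, reference_trace):
    """Root-mean-square error between simulated and reference global-motion
    traces, each of shape (T, 2): one 2D velocity vector per frame."""
    model_trace = np.asarray(model_trace, dtype=float)
    reference_trace = np.asarray(reference_trace, dtype=float)
    sq_err = np.sum((model_trace - reference_trace) ** 2, axis=1)
    return float(np.sqrt(np.mean(sq_err)))

# Toy example: a model's flow output over T frames compared to a reference
# readout (e.g., a smoothed ocular-following velocity trace); both synthetic.
T, H, W = 20, 64, 64
rng = np.random.default_rng(0)
flows = rng.normal(loc=(1.0, 0.2), scale=0.1, size=(T, H, W, 2))
model_trace = np.array([pool_global_motion(f) for f in flows])
reference_trace = np.tile([1.0, 0.0], (T, 1))
print("RMSE vs. reference:", score_against_reference(model_trace, reference_trace))
```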