Audio localization for robots using parallel cerebellar models

2018 
A robot audio localization system is presented that combines the outputs of multiple adaptive filter models of the cerebellum to calibrate a robot's audio map for various acoustic environments. The system is inspired by the MOdular Selection for Identification and Control (MOSAIC) framework. This study extends our previous work, which used multiple cerebellar models to determine the acoustic environment in which a robot is operating. Here, the system selects a set of models and combines their outputs in proportion to the likelihood that each is responsible for calibrating the audio map as the robot moves between different acoustic environments, or contexts. The system selected an appropriate set of models, achieving better performance than both a single model trained in all contexts (including novel contexts) and a baseline GCC-PHAT sound source localization algorithm. The main contribution of this work is the combination of multiple calibrators, which allows a robot operating in the field to adapt to a range of different acoustic environments. The best performance was observed when the presence of a Responsibility Predictor was simulated.
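The MOSAIC-style combination described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: each model's responsibility is taken as proportional to a Gaussian likelihood of its prediction error (the noise scale `sigma` and the uniform prior are assumptions), and the calibrator outputs are blended by those normalized weights.

```python
import numpy as np

def responsibilities(errors, sigma=1.0, priors=None):
    """Soft responsibility of each model (MOSAIC-style):
    a Gaussian likelihood of each model's prediction error,
    multiplied by a prior and normalized to sum to 1."""
    errors = np.asarray(errors, dtype=float)
    if priors is None:
        # Assumed uniform prior over models.
        priors = np.ones_like(errors) / errors.size
    lik = priors * np.exp(-errors**2 / (2.0 * sigma**2))
    return lik / lik.sum()

def combine_outputs(outputs, errors, sigma=1.0):
    """Blend the calibrator outputs in proportion to the
    responsibility assigned to each model."""
    lam = responsibilities(errors, sigma)
    return lam @ np.asarray(outputs, dtype=float)

# A model with a much smaller prediction error dominates the blend;
# with equal errors the combination reduces to a plain average.
print(responsibilities([0.0, 5.0]))
print(combine_outputs([10.0, 20.0], [0.5, 0.5]))
```

A Responsibility Predictor, as simulated in the paper, would supply the `priors` term from contextual cues before the prediction errors are available, letting the system switch models proactively rather than reactively.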