Mathematical models of decision making and learning

2008 
Abstract: Computational models of reinforcement learning have recently been applied to the analysis of brain imaging and neural recording data to identify neural correlates of specific decision-making processes, such as the valuation of candidate actions and the parameters of value learning. For such model-based analysis paradigms, however, selecting an appropriate model is crucial. In this study we analyze choice learning in rats performing a task with stochastic rewards. We show that "Q-learning," a standard reinforcement learning algorithm, does not adequately capture the features of the observed choice behavior. We therefore propose a generalized reinforcement learning (GRL) algorithm that incorporates the negative effect of reward loss and the forgetting of values of actions not chosen. Using a Bayesian estimation method for time-varying parameters, we demonstrate that the GRL algorithm can predict an animal's choice behavior as efficiently as the best Markov model. The results suggest the usefulness of GRL for model-based analysis of the neural processes involved in decision making.
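The GRL update described in the abstract can be illustrated with a minimal sketch for a two-choice task. The parameter names and default values below (alpha for the learning rate, kappa for the negative value assigned to reward loss, phi for the forgetting rate, beta for the softmax inverse temperature) are assumptions for illustration and are not taken from the paper; setting kappa = 0 and phi = 0 recovers a standard Q-learning update.

```python
import numpy as np

def grl_update(q, action, reward, alpha=0.3, kappa=0.5, phi=0.1):
    """One trial of a generalized RL (GRL) update -- illustrative sketch.

    q      : array of current action values
    action : index of the chosen action
    reward : 1 if the choice was rewarded, 0 otherwise
    alpha  : learning rate (assumed value)
    kappa  : magnitude of the negative value assigned to reward loss (assumed)
    phi    : forgetting rate for unchosen actions (assumed)
    """
    q = q.copy()
    # Reward loss is treated as a negative outcome (-kappa) rather than 0.
    outcome = 1.0 if reward else -kappa
    # Delta-rule update of the chosen action's value.
    q[action] += alpha * (outcome - q[action])
    # Values of unchosen actions decay toward zero (forgetting).
    for a in range(len(q)):
        if a != action:
            q[a] *= 1.0 - phi
    return q

def softmax_choice(q, beta=3.0, rng=None):
    """Softmax (Boltzmann) action selection over the current values."""
    rng = rng or np.random.default_rng()
    p = np.exp(beta * (q - q.max()))  # subtract max for numerical stability
    p /= p.sum()
    return rng.choice(len(q), p=p)

# Example: simulate a two-choice task where option 0 pays off with p = 0.7.
q = np.zeros(2)
rng = np.random.default_rng(0)
for _ in range(200):
    a = softmax_choice(q, rng=rng)
    r = int(rng.random() < (0.7 if a == 0 else 0.3))
    q = grl_update(q, a, r)
print(q)  # the value of the richer option should end up higher
```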