Reinforcement learning and instance-based learning approaches to modeling human decision making in a prognostic foraging task

2015 
Procedural memory and episodic memory are known to be distinct, and both underlie the performance of many tasks. Reinforcement learning (RL) and instance-based learning (IBL) represent common approaches to modeling procedural and episodic memory, respectively. In this work, we apply a neural model utilizing RL dynamics and an ACT-R model utilizing IBL productions to the task of modeling human decision making in a prognostic foraging task. The task was derived from a geospatial intelligence domain in which agents must choose among information sources to more accurately predict the actions of an adversary. Results from both models are compared to human data and suggest that information gain is an important component in modeling decision-making behavior using either memory system; the procedural memory approach has a small but significant advantage over the episodic memory approach in fitting human data. Finally, we discuss the interactions of multiple memory systems in complex decision-making tasks.
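
As an illustrative aid to the abstract, the Python sketch below contrasts the two modeling styles it names: a delta-rule RL value update (procedural memory) versus an ACT-R-style instance blending rule (episodic memory) for choosing among information sources, with information gain treated as the learning signal. The class names, parameter values (learning rate, memory decay, softmax temperature), and the simplified activation weighting are assumptions for illustration only, not the paper's actual models.

import math
import random

def softmax_choice(values, temperature=0.25, rng=random):
    # Boltzmann (softmax) selection over option values; temperature is an assumed value.
    exps = [math.exp(v / temperature) for v in values]
    total = sum(exps)
    r, cumulative = rng.random(), 0.0
    for i, e in enumerate(exps):
        cumulative += e / total
        if r < cumulative:
            return i
    return len(values) - 1

class RLForager:
    # Procedural-memory sketch: one cached value per information source,
    # nudged toward the observed information gain after each consultation.
    def __init__(self, n_sources, alpha=0.1):
        self.q = [0.0] * n_sources
        self.alpha = alpha

    def choose(self):
        return softmax_choice(self.q)

    def update(self, source, info_gain):
        self.q[source] += self.alpha * (info_gain - self.q[source])

class IBLForager:
    # Episodic-memory sketch: one stored instance per past decision; each
    # source is valued by blending stored outcomes, weighted by a
    # recency-based (power-law) activation in the spirit of ACT-R.
    def __init__(self, n_sources, decay=0.5):
        self.instances = [[] for _ in range(n_sources)]  # lists of (time, info_gain)
        self.decay = decay
        self.t = 1

    def blended_value(self, source):
        history = self.instances[source]
        if not history:
            return 0.0  # neutral prior for unexplored sources
        weights = [(self.t - t_i) ** (-self.decay) for t_i, _ in history]
        total = sum(weights)
        return sum(w * g for w, (_, g) in zip(weights, history)) / total

    def choose(self):
        return softmax_choice([self.blended_value(s) for s in range(len(self.instances))])

    def update(self, source, info_gain):
        self.instances[source].append((self.t, info_gain))
        self.t += 1

In both sketches a higher information gain makes a source more likely to be consulted again, which is the shared component the abstract identifies as important across the two memory systems.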