Exact Maximum Entropy Inverse Optimal Control for modeling human attention switching and control

2016 
Maximum Causal Entropy (MCE) Inverse Optimal Control (IOC) has become an effective tool for modeling human behavior in many control tasks. Its advantage over classic techniques for estimating human policies is the transferability of the inferred objectives: behavior can be predicted in variations of the control task by policy computation using a relaxed optimality criterion. However, exact policy inference is often computationally intractable in control problems with imperfect state observation. In this work, we present a model class for human control of two tasks of which only one can be perfectly observed at a time, requiring attention switching. We show how efficient and exact objective and policy inference via MCE can be performed for these control problems. Both MCE-IOC and Maximum Causal Likelihood (MCL)-IOC, a variant of the original MCE approach, as well as Direct Policy Estimation (DPE), are evaluated using simulated and real behavioral data. Both prediction error and generalization over changes in the control process are considered in the evaluation. The results show a clear advantage of both IOC methods over DPE, especially in transfer over variations of the control process. MCE and MCL performed similarly when trained on a large set of simulated data, but differed significantly on small sets and on real data.
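For a fully observed finite MDP, the "relaxed optimality criterion" used for MCE policy computation corresponds to soft value iteration, which yields a stochastic policy rather than a deterministic optimum. The following is a minimal illustrative sketch assuming a tabular MDP with a known reward vector; the paper's actual setting additionally involves imperfect state observation and attention switching, which this sketch omits:

```python
import numpy as np

def soft_value_iteration(P, r, gamma=0.9, n_iters=500):
    """Maximum-causal-entropy (soft) value iteration for a small tabular MDP.

    P : (A, S, S) transition tensor, P[a, s, s2] = Pr(s2 | s, a)
    r : (S,) state reward vector (e.g. an objective inferred by MCE-IOC)
    Returns a stochastic policy pi of shape (S, A) with
    pi(a | s) = exp(Q(s, a) - V(s)).
    """
    A, S, _ = P.shape
    V = np.zeros(S)
    for _ in range(n_iters):
        # Q(s, a) = r(s) + gamma * E_{s'}[V(s')]
        Q = r[:, None] + gamma * np.einsum('ast,t->sa', P, V)
        # soft Bellman backup: V(s) = log sum_a exp(Q(s, a))
        m = Q.max(axis=1, keepdims=True)
        V = (m + np.log(np.exp(Q - m).sum(axis=1, keepdims=True)))[:, 0]
    return np.exp(Q - V[:, None])
```

Unlike hard value iteration, the log-sum-exp backup spreads probability over all actions in proportion to their soft values, which is what makes the resulting likelihood of observed human behavior differentiable for objective inference.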