Goal-oriented action planning in partially observable stochastic domains
2012
Partially Observable Markov Decision Processes (POMDPs) provide a rich framework for sequential decision-making under uncertainty in stochastic domains. This paper presents a probabilistic conditional planning problem for Goal-Oriented Action Planning based on POMDPs, called p-GOAP. In p-GOAP, we are interested in finding a plan that maximizes goal satisfaction subject to the plan's cost not exceeding a given threshold. To compute maximum goal satisfaction, we discuss a speed-up technique that alleviates the computational complexity by splitting the algorithm into two phases: a greedy algorithm and a recursive process. Finally, p-GOAP is applied to cognitive reappraisal for deliberate emotion.
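The abstract's core optimization — maximize goal satisfaction while keeping total cost under a threshold, using a greedy first phase — can be illustrated with a minimal sketch. All names (`greedy_plan`, the `(name, satisfaction_prob, cost)` tuple shape) and the independence assumption on action outcomes are hypothetical, not taken from the paper:

```python
def greedy_plan(actions, cost_threshold):
    """Cost-bounded greedy selection sketch.

    actions: list of (name, satisfaction_prob, cost) tuples, where
    satisfaction_prob is the probability the action contributes to
    goal satisfaction and cost is its execution cost.

    Greedily admits actions by satisfaction-per-unit-cost ratio while
    the running total stays within cost_threshold.
    """
    plan, total_cost, fail_prob = [], 0.0, 1.0
    # Consider actions in order of satisfaction gained per unit cost.
    for name, p, cost in sorted(actions, key=lambda a: a[1] / a[2], reverse=True):
        if total_cost + cost <= cost_threshold:
            plan.append(name)
            total_cost += cost
            # Assuming independent outcomes (an illustrative simplification),
            # overall satisfaction is 1 minus the product of failure probs.
            fail_prob *= (1.0 - p)
    return plan, total_cost, 1.0 - fail_prob

# Example: with a budget of 3.0, the high-ratio cheap actions are chosen.
plan, cost, satisfaction = greedy_plan(
    [("a", 0.9, 2.0), ("b", 0.5, 1.0), ("c", 0.8, 5.0)], 3.0
)
```

In the paper's described scheme, such a greedy pass would only be the first phase; a recursive process then refines the result, which this sketch does not attempt to model.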
Keywords:
- Computer science
- Markov decision process
- Automated planning and scheduling
- Planning Domain Definition Language
- Greedy algorithm
- Probabilistic logic
- Markov process
- Machine learning
- Goal orientation
- Artificial intelligence
- Partially observable Markov decision process
- Computational complexity theory
- Distributed computing
- Mathematical optimization