A reward-based approach for preference modeling: A case study

2017 
Abstract Most reasoning for decision making in daily life is based on preferences. As with other kinds of reasoning, many formalisms attempt to capture preferences; however, none of them captures all the subtleties of human reasoning. In this paper we analyze how to formalize the preferences expressed by humans and how to reason with them to produce rankings. In particular, we show that qualitative preferences are best represented with a combination of reward logics and conditional logics. We propose a new algorithm based on ideas of similarity between objects commonly used in case-based reasoning. We find that the new approach produces rankings close to those expressed by users.
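To make the case-based-reasoning idea concrete, the following is a minimal, hypothetical sketch of ranking by similarity to previously preferred objects; the attribute representation and the matching-attributes similarity measure are illustrative assumptions, not the algorithm actually proposed in the paper.

```python
def similarity(a, b):
    """Fraction of attributes on which two objects agree
    (a simple assumed similarity measure)."""
    keys = set(a) | set(b)
    return sum(a.get(k) == b.get(k) for k in keys) / len(keys)

def rank_by_preference(candidates, preferred_cases):
    """Order candidates by average similarity to the user's
    previously preferred cases, most similar first."""
    def score(obj):
        return sum(similarity(obj, p) for p in preferred_cases) / len(preferred_cases)
    return sorted(candidates, key=score, reverse=True)

# Illustrative data: objects as attribute dictionaries.
preferred = [{"colour": "red", "size": "small"}]
candidates = [
    {"colour": "blue", "size": "large"},
    {"colour": "red", "size": "small"},
    {"colour": "red", "size": "large"},
]
ranking = rank_by_preference(candidates, preferred)
```

Under this sketch, the candidate identical to the preferred case ranks first and the one sharing no attributes ranks last; the paper's contribution is to combine such similarity-driven rankings with reward and conditional logics.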