
Preference learning

Preference learning is a subfield of machine learning that builds predictive models from observed preference information. Viewed as a supervised learning problem, it trains on a set of items for which preferences over labels or over other items are known, and predicts the preferences for new items. While the concept of preference learning has existed for some time in fields such as economics, it is a relatively new topic in artificial intelligence research, and several workshops over the past decade have been devoted to preference learning and related topics.

The main tasks in preference learning concern problems in 'learning to rank'. According to the type of preference information observed, the book Preference Learning categorizes them into three main problems: label ranking, instance ranking, and object ranking.

In label ranking, the model is given an instance space $X = \{x_i\}$ and a finite set of labels $Y = \{y_i \mid i = 1, 2, \dots, k\}$. The preference information is given in the form $y_i \succ_x y_j$, indicating that for instance $x$ the label $y_i$ is preferred to $y_j$. A set of such preferences is used as training data, and the task is to predict a preference ranking over the labels for any instance. Conventional classification problems can be generalized within the label ranking framework: if a training instance $x$ is labeled as class $y_i$, this implies $y_i \succ_x y_j$ for all $j \neq i$. In the multi-label case, $x$ is associated with a set of labels $L \subseteq Y$, so the model can extract the preference set $\{y_i \succ_x y_j \mid y_i \in L,\, y_j \in Y \setminus L\}$. A preference model is trained on this information, and the classification result for an instance is simply its top-ranked label.
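As a concrete illustration of the multi-label reduction above, the following Python sketch extracts the preference pairs $\{y_i \succ_x y_j \mid y_i \in L,\, y_j \in Y \setminus L\}$ for a single instance. The label names and the helper function are hypothetical; only the extraction logic mirrors the construction described in the text.

```python
# Minimal sketch: the preference pairs induced by a multi-label instance.
# Label names and the helper are hypothetical illustrations.

def label_preferences(relevant, all_labels):
    """Return pairs (y_i, y_j) with y_i in L and y_j in Y \\ L,
    each encoding the preference y_i > y_j for this instance."""
    relevant = set(relevant)
    irrelevant = set(all_labels) - relevant
    return [(y_i, y_j) for y_i in relevant for y_j in irrelevant]

Y = ["sports", "politics", "science", "music"]   # label set Y
L = ["sports", "music"]                          # labels attached to instance x

for y_i, y_j in label_preferences(L, Y):
    print(f"{y_i} is preferred to {y_j} for this instance")
```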
Instance ranking also involves an instance space $X$ and a label set $Y$. Here the labels carry a fixed order $y_1 \succ y_2 \succ \dots \succ y_k$, and each training instance $x_l$ is associated with a label $y_l$. Given a set of labeled instances as training data, the goal is to find the ranking order for a new set of instances.

Object ranking is similar to instance ranking, except that no labels are associated with the instances. Given pairwise preference information of the form $x_i \succ x_j$, the model must find a ranking order over the instances.

There are two practical representations of the preference information $A \succ B$. One is to assign $A$ and $B$ two real numbers $a$ and $b$ such that $a > b$. The other is to assign a binary value $V(A, B) \in \{0, 1\}$ to every pair $(A, B)$, denoting whether $A \succ B$ or $B \succ A$. Corresponding to these two representations, two different techniques are applied in the learning process.

If a mapping from the data to real numbers can be found, ranking the data reduces to ranking the real numbers; such a mapping is called a utility function. For label ranking, the mapping is a function $f: X \times Y \rightarrow \mathbb{R}$ such that $y_i \succ_x y_j \Rightarrow f(x, y_i) > f(x, y_j)$. For instance ranking and object ranking, the mapping is a function $f: X \rightarrow \mathbb{R}$.
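The utility-function technique for object ranking can be sketched briefly. The Python example below is not taken from any particular source; the synthetic data, the linear model, and the hinge-style loss on score differences are illustrative choices. It fits a utility $f(x) = w \cdot x$ so that preferred objects receive higher scores, then ranks objects by sorting those scores, which is one common way to realize the mapping $f: X \rightarrow \mathbb{R}$ described above.

```python
# Minimal sketch (illustrative, not a reference implementation): learn a
# linear utility f(x) = w.x from pairwise preferences x_i > x_j, then rank
# objects by sorting their scores.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic objects; a hidden utility is used only to generate preferences.
X = rng.normal(size=(50, 5))
true_w = rng.normal(size=5)

pairs = []                                   # (i, j) means object i is preferred to object j
for _ in range(300):
    i, j = rng.choice(len(X), size=2, replace=False)
    pairs.append((i, j) if X[i] @ true_w > X[j] @ true_w else (j, i))

# Fit w so that f(x_i) > f(x_j) for observed preferences, using a hinge loss
# on the score difference (x_i - x_j) . w.
w = np.zeros(X.shape[1])
lr = 0.05
for _ in range(200):
    grad = np.zeros_like(w)
    for i, j in pairs:
        if (X[i] - X[j]) @ w < 1.0:          # margin violated
            grad -= X[i] - X[j]
    w -= lr * grad / len(pairs)

ranking = np.argsort(-(X @ w))               # indices sorted by learned utility, best first
print("Top 5 objects by learned utility:", ranking[:5].tolist())
```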

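The second representation stores a binary value $V(A, B)$ for every pair. The excerpt does not describe the corresponding learning technique in detail, so the sketch below is only one plausible instantiation: a logistic model on feature differences approximates $V$, and items are then ordered by counting predicted pairwise wins. The aggregation step is an assumption of this example rather than something stated above, and all data and parameters are illustrative.

```python
# Minimal sketch (one plausible instantiation, not taken from the text):
# approximate the binary preference predicate V(A, B) in {0, 1} with a
# logistic model on feature differences, then order items by the number of
# predicted pairwise wins.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 4))                     # items to be ranked
true_w = rng.normal(size=4)                      # hidden utility, used only to label pairs

# Observed pairs (a, b, v): v = 1 if a is preferred to b, else 0.
pairs = []
for _ in range(200):
    a, b = rng.choice(len(X), size=2, replace=False)
    pairs.append((a, b, int(X[a] @ true_w > X[b] @ true_w)))

# Train the pairwise model on the feature difference x_a - x_b.
w = np.zeros(X.shape[1])
lr = 0.1
for _ in range(200):
    for a, b, v in pairs:
        d = X[a] - X[b]
        p = 1.0 / (1.0 + np.exp(-(d @ w)))       # predicted probability that a > b
        w += lr * (v - p) * d                    # logistic-regression gradient step

def V(a, b):
    """Predicted binary preference: 1 if item a is ranked above item b."""
    return int((X[a] - X[b]) @ w > 0)

# Aggregate pairwise predictions into an ordering by counting wins.
wins = [sum(V(a, b) for b in range(len(X)) if b != a) for a in range(len(X))]
ordering = sorted(range(len(X)), key=lambda a: -wins[a])
print("Items ordered by predicted pairwise wins:", ordering)
```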
[ "Machine learning", "Data mining", "Artificial intelligence", "preference", "Contrast set learning" ]