Tuesday, June 5, 2012

Learning to Rank: From Pairwise Approach to Listwise Approach

In the previous paper, Support Vector Learning for Ordinal Regression, a pairwise method is used to solve the ordinal learning problem. However, the pairwise approach induces several problems.
First, the objective of learning is formalized as minimizing errors in the classification of document pairs, rather than minimizing errors in the ranking of documents. Second, the training process is computationally costly, as the number of document pairs is very large. Third, the assumption that the document pairs are generated i.i.d. is too strong. Fourth, the number of generated document pairs varies largely from query to query, which results in a model biased toward queries with more document pairs.
Thus, in this paper, they propose the listwise approach.
The problem setting is as follows.

In training, a set of queries Q = (q(1), q(2), ..., q(m)) is given.
Each query q(i) is associated with a list of documents d(i) = (d(i)1, d(i)2, ..., d(i)n(i)).
Each list of documents d(i) is associated with a list of judgments (scores) y(i) = (y(i)1, y(i)2, ..., y(i)n(i)). The judgment y(i)j represents the relevance degree of d(i)j to q(i).
A feature vector x(i)j = ψ(q(i), d(i)j) is created from each query-document pair.
Note that q(i) appears in the ψ function. Without it, the ranking list would always be the same no matter what the query is.

We want to find a ranking function f, which outputs a score for each feature vector.
For the list of feature vectors x(i), we obtain a list of scores z(i) = (f(x(i)1), ..., f(x(i)n(i))).

The objective of learning is formalized as minimization of the total losses with respect to the training data:

min_f Σ_{i=1}^{m} L(y(i), z(i))

where L is the listwise loss function.

To measure the loss between two different rankings, they first convert each ranking into a probability distribution, and then measure the loss using Cross Entropy.
They use the following definition to convert a list of scores into a probability distribution over permutations. Given a score list s = (s1, ..., sn) and a permutation π of the n documents, the probability of π is

P_s(π) = Π_{j=1}^{n} [ φ(s_π(j)) / Σ_{k=j}^{n} φ(s_π(k)) ]

where s_π(j) is the score of the document ranked at position j and φ is an increasing, strictly positive function.
However, deriving this probability distribution exactly requires summing over all n! permutations, which is a lot of computational overhead.
Thus, instead of calculating this full distribution, they calculate the top one probability, the probability that a document is ranked first.
The top one probability can be derived much more easily; it reduces to a closed form:

P_s(j) = φ(s_j) / Σ_{k=1}^{n} φ(s_k)
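The paper shows that the top one probability of document j collapses to the closed form φ(s_j) / Σ_k φ(s_k). A quick numeric check in Python (with made-up scores, taking φ = exp) that sums permutation probabilities directly:

```python
import math
from itertools import permutations

def perm_prob(scores, pi):
    # P_s(pi) with phi = exp: product over positions j of
    # exp(s_pi(j)) / sum over k >= j of exp(s_pi(k))
    p = 1.0
    n = len(pi)
    for j in range(n):
        num = math.exp(scores[pi[j]])
        den = sum(math.exp(scores[pi[k]]) for k in range(j, n))
        p *= num / den
    return p

def top_one(scores, j):
    # marginal probability that document j is ranked first:
    # sum of P_s(pi) over all permutations pi that put j first
    n = len(scores)
    return sum(perm_prob(scores, pi)
               for pi in permutations(range(n))
               if pi[0] == j)

s = [3.0, 1.0, 2.0]  # made-up scores for three documents
# top_one(s, j) matches the closed form exp(s_j) / sum_k exp(s_k)
```

The brute-force sum over 3! permutations agrees with the closed form, which is why computing the top one probabilities costs O(n) instead of O(n!).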
With this representation, we can use Cross Entropy as the metric:

L(y(i), z(i)) = -Σ_{j=1}^{n(i)} P_y(i)(j) log P_z(i)(j)
And the φ function is defined as the exponential function, so the top one probability of the score list becomes

P_z(i)(j) = exp(f(x(i)j)) / Σ_{k=1}^{n(i)} exp(f(x(i)k))

i.e., the softmax of the scores.
Then, they use the gradient descent method to solve this optimization problem.
The ranking function fw is based on a Neural Network model with parameters w.
Given a feature vector x(i)j, fw(x(i)j) assigns a score to it.
Thus, for a query q(i), the score list is z(i)(fw) = (fw(x(i)1), fw(x(i)2), ..., fw(x(i)n(i))).
For simplicity, they use a linear Neural Network model in the experiments: fw(x(i)j) = w · x(i)j.
That is, they learn a linear combination of features that minimizes the Cross Entropy loss.
And they show that their listwise method really performs better than the other methods.
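The whole listwise training loop described above can be sketched in a few lines of NumPy (a minimal sketch, not the authors' implementation; the data, learning rate, and function names are made up):

```python
import numpy as np

def top_one_prob(scores):
    # with phi = exp, the top one probabilities are the softmax of the scores
    e = np.exp(scores - scores.max())
    return e / e.sum()

def listnet_loss_grad(w, X, y):
    # X: (n_docs, n_features) for one query; y: relevance judgments
    p_y = top_one_prob(y)        # target distribution from the judgments
    p_z = top_one_prob(X @ w)    # model distribution from scores w . x
    loss = -np.sum(p_y * np.log(p_z))   # cross entropy between the two
    grad = X.T @ (p_z - p_y)            # its gradient w.r.t. w
    return loss, grad

# made-up query with 3 documents and 2 features
X = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
y = np.array([2.0, 1.0, 0.0])
w = np.zeros(2)
for _ in range(200):             # plain gradient descent
    _, g = listnet_loss_grad(w, X, y)
    w -= 0.1 * g
```

After training, the learned scores X @ w rank the documents in the same order as the judgments y.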

Support vector learning for ordinal regression

In this paper, they study the problem of ordinal regression, which is very different from traditional learning problems such as classification and metric regression.

In the traditional classification problem, the loss function is defined as 0-1 loss.
In regression estimation, the loss function takes the full metric structure into account.
The ordinal regression problem, which this paper wants to solve, shares properties of both classification and metric regression: like classification, the outcome space is a finite set, and like metric regression, there exists an ordering among its elements.

The detail of the problem is defined as follows.
The input space X is a subset of R^n, with objects represented by feature vectors x = (x1, ..., xn)^T.
Y denotes the ordered set of ranks.
Given the training set S, we want to find a mapping function that maps points to ranks:

h : X → Y
The error cost function is defined as

l_pref(x1, x2, y1, y2; h) = 1 if sign(h(x1) - h(x2)) ≠ sign(y1 - y2), and 0 otherwise.

That is, l_pref is 1 if and only if the ordering of h(x1) and h(x2) disagrees with the ordering given by y1 and y2.
The expected cost can be computed as

R_pref(h) = E[ l_pref(x1, x2, y1, y2; h) ]

where the expectation is taken over pairs of independently drawn examples.
However, this expected cost is hard to minimize directly.
Thus, we first convert S into another set S', which represents the pairwise rankings: every pair of training examples (x1, y1) and (x2, y2) with y1 ≠ y2 contributes the difference vector x1 - x2 with the binary label sign(y1 - y2).
Theorem 1 in the paper states that the expected cost on S is proportional to the expected 0-1 classification loss on S'.
This means that for every projection function, the 0-1 loss on S' is proportional to the expected cost on S. The minimum-cost projection function found on the set S' is therefore the same as the one found on the set S. Thus, the ordinal problem is converted into a classification problem and can be solved by an SVM.
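The conversion from S to the pairwise set S' can be sketched as follows (a minimal Python sketch with made-up data; `to_pairwise` is a hypothetical helper name — any linear classifier trained on the resulting pairs then yields the projection vector w):

```python
import numpy as np
from itertools import combinations

def to_pairwise(X, y):
    # Build S' = { (x_i - x_j, sign(y_i - y_j)) : y_i != y_j }
    diffs, labels = [], []
    for i, j in combinations(range(len(y)), 2):
        if y[i] != y[j]:
            diffs.append(X[i] - X[j])
            labels.append(np.sign(y[i] - y[j]))
    return np.array(diffs), np.array(labels)

# made-up data: three 1-D points with ranks 1, 2, 3
X = np.array([[0.0], [1.0], [2.0]])
y = np.array([1, 2, 3])
D, z = to_pairwise(X, y)  # 3 difference vectors with binary labels
```

Note that the number of pairs grows quadratically with the number of training points, which is exactly the computational cost the listwise paper above complains about.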

●Solving by using SVM
We want to find a projection vector w and some thresholds (biases) that map the projected data into ranks:

a point x is assigned rank ri if and only if θ(r(i-1)) < w · x ≤ θ(ri).
Then, we can give the SVM some degree of violation (slack variables ξ) and solve for w:

minimize (1/2)||w||^2 + C Σ ξ  subject to  sign(y1 - y2) · (w · (x1 - x2)) ≥ 1 - ξ for every pair in S', with ξ ≥ 0.
After finding w, we can determine a threshold θ(ri) for each rank boundary by minimizing a secondary objective under the constraints above.
After doing this, we obtain the final result shown in the paper's figure: the data are projected onto w, and the thresholds θ(ri) partition the projected values into ranks.
When a new data point arrives, we simply project it using w and determine its rank by checking the thresholds θ(ri).
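The prediction rule can be sketched as follows (hypothetical values for w and the thresholds):

```python
import numpy as np

def predict_rank(x, w, thetas):
    # rank r is assigned when theta(r-1) < w . x <= theta(r);
    # searchsorted finds the first threshold the score does not exceed
    score = np.dot(w, x)
    return int(np.searchsorted(thetas, score)) + 1  # ranks start at 1

w = np.array([1.0, -0.5])        # made-up projection vector
thetas = np.array([0.0, 2.0])    # boundaries between ranks 1|2 and 2|3
r = predict_rank(np.array([1.0, 0.0]), w, thetas)  # score 1.0 -> rank 2
```

So prediction costs only a dot product plus a binary search over the thresholds.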

========================================================
Comment:
    The authors are very clever: they convert the ordinal problem into a classification problem, so they can directly apply the well-developed methods of classification to solve it. However, I think the most important thing is to explain why the projection derived on the set S' is the same as the one derived on S. They only provide a theorem saying the two are the same, without explaining it. I think that is one weak point.
    Another weak point is the notation of this paper. The paper uses too many equations to define simple concepts, such as l_pref. I think it would be much better to provide some illustrations, or to state their meaning directly in words.