Tilting methods for assessing the influence of components in a classifier
2009
Many contemporary classifiers are constructed to provide good performance for very high dimensional data. However, an issue that is at least as important as good classification is determining which of the many potential variables provide key information for good decisions. Responding to this issue can help us to determine which aspects of the data-generating mechanism (e.g. which genes in a genomic study) are of greatest importance in terms of distinguishing between populations. We introduce tilting methods for addressing this problem. We apply weights to the components of data vectors, rather than to the data vectors themselves (as is commonly the case in related work). In addition we tilt in a way that is governed by L2-distance between weight vectors, rather than by the more commonly used Kullback-Leibler distance. It is shown that this approach, together with the added constraint that the weights should be non-negative, produces an algorithm which eliminates vector components that have little influence on the classification decision. In particular, use of the L2-distance in this problem produces properties that are reminiscent of those that arise when L1-penalties are employed to eliminate explanatory variables in very high dimensional prediction problems, e.g. those involving the lasso. We introduce techniques that can be implemented very rapidly, and we show how to use bootstrap methods to assess the accuracy of our variable ranking and variable elimination procedures. Copyright (c) 2009 Royal Statistical Society.
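To illustrate the mechanism the abstract describes, the following is a minimal, hypothetical sketch (not the authors' algorithm): component weights are tilted away from uniform under an L2 penalty, with a non-negativity constraint that drives weights on uninformative components to exactly zero, in the spirit of lasso-type variable elimination. The separation measure, penalty parameter lam, and optimizer choice are all illustrative assumptions.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy two-class data: only the first 3 of 20 components carry signal.
n, p = 40, 20
X0 = rng.normal(0.0, 1.0, (n, p))
X1 = rng.normal(0.0, 1.0, (n, p))
X1[:, :3] += 1.5

d = np.abs(X0.mean(axis=0) - X1.mean(axis=0))  # per-component class separation
w0 = np.full(p, 1.0 / p)                        # uniform ("untilted") weights
lam = 0.05                                      # tilting penalty (assumed value)

def objective(w):
    # Trade off weighted class separation against L2 distance from uniform.
    return -np.dot(w, d) + lam * p * np.sum((w - w0) ** 2)

res = minimize(
    objective,
    w0,
    bounds=[(0.0, None)] * p,                   # non-negativity constraint
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
    method="SLSQP",
)
w = res.x
print("components kept:", np.flatnonzero(w > 1e-8))

Under these assumptions the KKT conditions give w_j = max(0, w0_j + (d_j - mu)/(2*lam*p)) for a Lagrange multiplier mu, so weakly separating components are thresholded to exactly zero, which mirrors the sparsity property the abstract attributes to combining the L2 tilt with non-negativity.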