
Kernel Fisher discriminant analysis

In statistics, kernel Fisher discriminant analysis (KFD), also known as generalized discriminant analysis and kernel discriminant analysis, is a kernelized version of linear discriminant analysis (LDA). It is named after Ronald Fisher. Using the kernel trick, LDA is implicitly performed in a new feature space, which allows non-linear mappings to be learned.

Intuitively, the idea of LDA is to find a projection where class separation is maximized. Given two sets of labeled data, $\mathbf{C}_1$ and $\mathbf{C}_2$, define the class means $\mathbf{m}_1$ and $\mathbf{m}_2$ to be

$$\mathbf{m}_i = \frac{1}{l_i} \sum_{n=1}^{l_i} \mathbf{x}_n^i,$$

where $l_i$ is the number of examples of class $\mathbf{C}_i$. The goal of linear discriminant analysis is to give a large separation of the class means while also keeping the in-class variance small. This is formulated as maximizing, with respect to $\mathbf{w}$, the following ratio:

$$J(\mathbf{w}) = \frac{\mathbf{w}^{\mathsf{T}} \mathbf{S}_B \mathbf{w}}{\mathbf{w}^{\mathsf{T}} \mathbf{S}_W \mathbf{w}},$$

where $\mathbf{S}_B$ is the between-class covariance matrix and $\mathbf{S}_W$ is the total within-class covariance matrix:

$$\mathbf{S}_B = (\mathbf{m}_2 - \mathbf{m}_1)(\mathbf{m}_2 - \mathbf{m}_1)^{\mathsf{T}}, \qquad \mathbf{S}_W = \sum_{i=1,2} \sum_{n=1}^{l_i} (\mathbf{x}_n^i - \mathbf{m}_i)(\mathbf{x}_n^i - \mathbf{m}_i)^{\mathsf{T}}.$$
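To make the criterion concrete, here is a minimal sketch in Python (NumPy only). It computes the two-class LDA direction via the standard closed form $\mathbf{w} \propto \mathbf{S}_W^{-1}(\mathbf{m}_2 - \mathbf{m}_1)$, then a common two-class kernel formulation in which the projection is expressed through expansion coefficients $\alpha$ over the training points. The synthetic Gaussian data, the RBF kernel, the ridge term `reg`, and all function names are illustrative assumptions, not part of the original text.

```python
# Sketch of the Fisher criterion above; data and kernel choices are assumptions.
import numpy as np

def fisher_direction(X1, X2):
    """Two-class LDA: the maximizer of J(w) is w proportional to S_W^{-1}(m_2 - m_1)."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    # Within-class scatter: summed outer products of centered samples.
    S_W = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)
    return np.linalg.solve(S_W, m2 - m1)

def kfd_coefficients(X, y, kernel, reg=1e-3):
    """Kernelized analogue: solve for coefficients alpha so that the
    learned projection is f(x) = sum_j alpha_j k(x_j, x)."""
    K = kernel(X, X)                      # full Gram matrix, l x l
    l = len(y)
    M, N = [], np.zeros((l, l))
    for c in (0, 1):
        Kc = K[:, y == c]                 # kernel columns for class c
        lc = Kc.shape[1]
        M.append(Kc.mean(axis=1))         # class mean in feature space
        # Center the class columns and accumulate within-class scatter.
        N += Kc @ (np.eye(lc) - np.full((lc, lc), 1.0 / lc)) @ Kc.T
    return np.linalg.solve(N + reg * np.eye(l), M[1] - M[0])

def rbf(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X1 = rng.normal([0, 0], 0.5, (50, 2))     # class C_1
X2 = rng.normal([2, 1], 0.5, (50, 2))     # class C_2
w = fisher_direction(X1, X2)
print("LDA direction:", w / np.linalg.norm(w))

X = np.vstack([X1, X2])
y = np.r_[np.zeros(50, int), np.ones(50, int)]
alpha = kfd_coefficients(X, y, rbf)
print("first KFD coefficients:", alpha[:3])
```

The ridge term `reg` is there because the within-class matrix in the kernel formulation is typically singular, so a small multiple of the identity is added to keep the solve well-posed.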

[ "Kernel method", "Facial recognition system", "discriminant vector", "nonlinear discriminant analysis", "Fisher kernel" ]