Total variation distance of probability measures

In probability theory, the total variation distance is a distance measure for probability distributions. It is an example of a statistical distance metric, and is sometimes called the statistical distance or variational distance.

The total variation distance between two probability measures $P$ and $Q$ on a sigma-algebra $\mathcal{F}$ of subsets of the sample space $\Omega$ is defined via

$$\delta(P, Q) = \sup_{A \in \mathcal{F}} |P(A) - Q(A)|.$$

Informally, this is the largest possible difference between the probabilities that the two probability distributions can assign to the same event.

The total variation distance is related to the Kullback–Leibler divergence by Pinsker's inequality:

$$\delta(P, Q) \le \sqrt{\tfrac{1}{2} D_{\mathrm{KL}}(P \parallel Q)}.$$

When the sample space $\Omega$ is countable, the total variation distance is related to the $L^1$ norm by the identity

$$\delta(P, Q) = \tfrac{1}{2}\, \|P - Q\|_{1} = \tfrac{1}{2} \sum_{x \in \Omega} |P(x) - Q(x)|.$$

The total variation distance (or half the $L^1$ norm) arises as the optimal transportation cost when the cost function is $c(x, y) = \mathbf{1}_{x \ne y}$, that is,

$$\delta(P, Q) = \inf_{\pi}\, \Pr_{(X, Y) \sim \pi}[X \ne Y],$$

where the infimum is taken over all probability distributions $\pi$ with marginals $P$ and $Q$, respectively.
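These discrete-case relations are easy to check numerically. Below is a minimal Python sketch (assuming NumPy; the function names tv_distance, tv_by_events, and kl_divergence are illustrative, not from any library) that computes the total variation distance via the L1 identity, confirms it against a brute-force supremum over events, and verifies Pinsker's inequality on a small example:

    import numpy as np
    from itertools import combinations

    def tv_distance(p, q):
        """Total variation distance via the L1 identity:
        delta(P, Q) = (1/2) * sum_x |P(x) - Q(x)|."""
        p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
        return 0.5 * np.abs(p - q).sum()

    def tv_by_events(p, q):
        """Brute-force form of the definition: the largest |P(A) - Q(A)|
        over every event A, i.e. every subset of the sample space."""
        p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
        best = 0.0
        for r in range(len(p) + 1):
            for a in combinations(range(len(p)), r):
                a = list(a)
                best = max(best, abs(p[a].sum() - q[a].sum()))
        return best

    def kl_divergence(p, q):
        """D_KL(P || Q) in nats; assumes Q(x) > 0 wherever P(x) > 0."""
        p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
        mask = p > 0
        return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

    p = np.array([0.5, 0.3, 0.2])
    q = np.array([0.4, 0.4, 0.2])

    delta = tv_distance(p, q)             # 0.5 * (0.1 + 0.1 + 0.0) = 0.1
    assert np.isclose(delta, tv_by_events(p, q))        # sup over events
    assert delta <= np.sqrt(0.5 * kl_divergence(p, q))  # Pinsker's bound

The brute-force check enumerates all 2^n events and is only feasible for small sample spaces; the L1 identity is what one would use in practice.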
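The optimal-transport characterization can be checked the same way. For discrete distributions, the maximal coupling, which places mass min(P(x), Q(x)) on the diagonal and spreads the remaining mass off it, attains the infimum, so the probability of a mismatch under it equals the total variation distance. A self-contained sketch (maximal_coupling_mismatch is an illustrative name, and NumPy is again assumed):

    import numpy as np

    def maximal_coupling_mismatch(p, q):
        """Pr[X != Y] under the maximal coupling of P and Q, which puts
        mass min(P(x), Q(x)) on the diagonal. This coupling attains the
        optimal-transport infimum for the cost c(x, y) = 1_{x != y}."""
        p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
        return 1.0 - np.minimum(p, q).sum()

    p = np.array([0.5, 0.3, 0.2])
    q = np.array([0.4, 0.4, 0.2])
    # The mismatch probability matches the L1 identity for the TV distance.
    assert np.isclose(maximal_coupling_mismatch(p, q),
                      0.5 * np.abs(p - q).sum())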

[ "Empirical probability", "Regular conditional probability", "Convolution of probability distributions", "Probability mass function" ]
Parent Topic
Child Topic
    No Parent Topic