In information theory, the Rényi entropy generalizes the Hartley entropy, the Shannon entropy, the collision entropy and the min-entropy. Entropies quantify the diversity, uncertainty, or randomness of a system. The Rényi entropy is named after Alfréd Rényi. In the context of fractal dimension estimation, the Rényi entropy forms the basis of the concept of generalized dimensions.

The Rényi entropy is important in ecology and statistics as an index of diversity. It is also important in quantum information, where it can be used as a measure of entanglement. In the Heisenberg XY spin chain model, the Rényi entropy as a function of $\alpha$ can be calculated explicitly because it is an automorphic function with respect to a particular subgroup of the modular group. In theoretical computer science, the min-entropy is used in the context of randomness extractors.

The Rényi entropy of order $\alpha$, where $\alpha \geq 0$ and $\alpha \neq 1$, is defined as

$$\mathrm{H}_\alpha(X) = \frac{1}{1-\alpha} \log \left( \sum_{i=1}^{n} p_i^{\alpha} \right).$$

Here, $X$ is a discrete random variable with possible outcomes $1, 2, \dots, n$ and corresponding probabilities $p_i \doteq \Pr(X = i)$ for $i = 1, \dots, n$. The logarithm is conventionally taken to be base 2, especially in the context of information theory where bits are used. If the probabilities are $p_i = 1/n$ for all $i = 1, \dots, n$, then all the Rényi entropies of the distribution are equal: $\mathrm{H}_\alpha(X) = \log n$. In general, for all discrete random variables $X$, $\mathrm{H}_\alpha(X)$ is a non-increasing function in $\alpha$.

Applications often exploit the following relation between the Rényi entropy and the p-norm of the vector of probabilities, with $p = \alpha$:

$$\mathrm{H}_\alpha(X) = \frac{\alpha}{1-\alpha} \log \left( \lVert P \rVert_\alpha \right).$$

Here, the discrete probability distribution $P = (p_1, \dots, p_n)$ is interpreted as a vector in $\mathbb{R}^n$ with $p_i \geq 0$ and $\sum_{i=1}^{n} p_i = 1$. The Rényi entropy for any $\alpha \geq 0$ is Schur concave.

As $\alpha$ approaches zero, the Rényi entropy weighs all possible events more and more equally, regardless of their probabilities. In the limit $\alpha \to 0$, the Rényi entropy is just the logarithm of the size of the support of $X$. The limit $\alpha \to 1$ is the Shannon entropy. As $\alpha$ approaches infinity, the Rényi entropy is increasingly determined by the events of highest probability.

Provided the probabilities are nonzero, $\mathrm{H}_0$ is the logarithm of the cardinality of $X$, sometimes called the Hartley entropy of $X$: $\mathrm{H}_0(X) = \log n$.
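As a concrete illustration of the definition and its limiting cases, the following Python sketch (not part of the original article; the function name `renyi_entropy` and the example distribution are illustrative assumptions) computes $\mathrm{H}_\alpha(X)$ in bits for several orders, treating $\alpha = 1$ as the Shannon limit and $\alpha \to \infty$ as the min-entropy:

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Rényi entropy of order alpha in bits (base-2 logarithm).

    alpha = 1 is handled as the Shannon limit, alpha = inf as the min-entropy.
    """
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                       # ignore zero-probability outcomes
    if np.isclose(alpha, 1.0):         # limit alpha -> 1: Shannon entropy
        return float(-np.sum(p * np.log2(p)))
    if np.isinf(alpha):                # limit alpha -> infinity: min-entropy
        return float(-np.log2(p.max()))
    return float(np.log2(np.sum(p ** alpha)) / (1.0 - alpha))

p = [0.5, 0.25, 0.125, 0.125]          # example distribution (assumption)
for a in [0, 0.5, 1, 2, np.inf]:
    print(f"H_{a}(X) = {renyi_entropy(p, a):.4f} bits")
# alpha = 0 gives the Hartley entropy log2(4) = 2 bits, alpha = 1 the Shannon
# entropy 1.75 bits, alpha = 2 the collision entropy, and alpha -> inf the
# min-entropy 1 bit; the values are non-increasing in alpha, as stated above.
```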
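The p-norm relation above can likewise be checked numerically. The snippet below is a hedged sketch with an illustrative example distribution, not anything prescribed by the article; it confirms that $\frac{\alpha}{1-\alpha}\log_2 \lVert P \rVert_\alpha$ agrees with the direct formula:

```python
import numpy as np

p = np.array([0.5, 0.25, 0.125, 0.125])   # example distribution (assumption)
for alpha in [0.5, 2.0, 3.0]:
    norm = np.sum(p ** alpha) ** (1.0 / alpha)           # ||P||_alpha
    via_norm = (alpha / (1.0 - alpha)) * np.log2(norm)   # p-norm form
    direct = np.log2(np.sum(p ** alpha)) / (1.0 - alpha) # definition
    assert np.isclose(via_norm, direct)
    print(f"alpha={alpha}: H_alpha = {direct:.4f} bits (matches p-norm form)")
```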