In mathematics, the second moment method is a technique used in probability theory and analysis to show that a random variable has positive probability of being positive. More generally, the 'moment method' consists of bounding the probability that a random variable fluctuates far from its mean by using its moments. The method is often quantitative, in that one can often deduce a lower bound on the probability that the random variable is larger than some constant times its expectation. The method involves comparing the second moment of random variables to the square of the first moment.

The first moment method is a simple application of Markov's inequality for integer-valued variables. For a non-negative, integer-valued random variable X, we may want to prove that X = 0 with high probability. To obtain an upper bound for P(X > 0), and thus a lower bound for P(X = 0), we first note that since X takes only integer values, P(X > 0) = P(X ≥ 1). Since X is non-negative, we can now apply Markov's inequality to obtain P(X ≥ 1) ≤ E[X]. Combining these we have P(X > 0) ≤ E[X]; the first moment method is simply the use of this inequality.

In the other direction, E[X] being 'large' does not directly imply that P(X = 0) is small. However, we can often use the second moment to derive such a conclusion, using the Cauchy–Schwarz inequality.

Theorem: If X ≥ 0 is a random variable with finite variance, then
\[
\operatorname{P}(X > 0) \geq \frac{(\operatorname{E}[X])^{2}}{\operatorname{E}[X^{2}]}.
\]

Proof: Using the Cauchy–Schwarz inequality, we have
\[
\operatorname{E}[X] = \operatorname{E}\bigl[X \mathbf{1}_{\{X > 0\}}\bigr] \leq \operatorname{E}[X^{2}]^{1/2} \, \operatorname{P}(X > 0)^{1/2}.
\]
Solving for P(X > 0), the desired inequality then follows. ∎

The method can also be used on distributional limits of random variables. Furthermore, the estimate of the previous theorem can be refined by means of the so-called Paley–Zygmund inequality. Suppose that Xn is a sequence of non-negative real-valued random variables which converge in law to a random variable X. If there are finite positive constants c1, c2 such that
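
As an illustrative sketch of the two bounds (the binomial distribution here is an assumed example chosen for illustration, not drawn from the discussion above): suppose X has the binomial distribution with parameters n and p = c/n, where 0 < c ≤ n is a fixed constant. Then
\[
\operatorname{E}[X] = c, \qquad \operatorname{E}[X^{2}] = c\Bigl(1 - \tfrac{c}{n}\Bigr) + c^{2} \leq c + c^{2},
\]
so the first moment method gives P(X > 0) ≤ c, which is informative when c is small, while the theorem above gives
\[
\operatorname{P}(X > 0) \geq \frac{c^{2}}{c\bigl(1 - \tfrac{c}{n}\bigr) + c^{2}} \geq \frac{c}{1 + c},
\]
to be compared with the exact value P(X > 0) = 1 − (1 − c/n)^n.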