In statistics, particularly in analysis of variance and linear regression, a contrast is a linear combination of variables (parameters or statistics) whose coefficients add up to zero, allowing comparison of different treatments.

Let $\theta_1, \ldots, \theta_t$ be a set of variables, either parameters or statistics, and let $a_1, \ldots, a_t$ be known constants. The quantity $\sum_{i=1}^{t} a_i \theta_i$ is a linear combination. It is called a contrast if $\sum_{i=1}^{t} a_i = 0$. Furthermore, two contrasts, $\sum_{i=1}^{t} a_i \theta_i$ and $\sum_{i=1}^{t} b_i \theta_i$, are orthogonal if $\sum_{i=1}^{t} a_i b_i = 0$.

Let us imagine that we are comparing four means, $\mu_1, \mu_2, \mu_3, \mu_4$. The following table describes three possible contrasts:

             $\mu_1$   $\mu_2$   $\mu_3$   $\mu_4$
  Contrast 1    1        -1        0         0
  Contrast 2    0         0        1        -1
  Contrast 3   1/2       1/2     -1/2      -1/2

The first contrast allows comparison of the first mean with the second, the second contrast allows comparison of the third mean with the fourth, and the third contrast allows comparison of the average of the first two means with the average of the last two.

In a balanced one-way analysis of variance, using orthogonal contrasts has the advantage of completely partitioning the treatment sum of squares into non-overlapping additive components that represent the variation due to each contrast. Consider the numbers above: each of the rows sums to zero (hence they are contrasts).
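A minimal Python sketch can verify these defining properties numerically. The coefficient rows below are the standard choices for the three comparisons just described (comparing mean 1 with mean 2, mean 3 with mean 4, and the average of the first two with the average of the last two), used here purely as an illustration:

```python
# Three candidate contrasts over four means (coefficients per group).
contrasts = [
    [1, -1, 0, 0],           # mean 1 vs. mean 2
    [0, 0, 1, -1],           # mean 3 vs. mean 4
    [0.5, 0.5, -0.5, -0.5],  # average of first two vs. average of last two
]

# Each row must sum to zero to qualify as a contrast.
for row in contrasts:
    assert sum(row) == 0

# Two contrasts are orthogonal if the sum of elementwise products is zero.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

assert dot(contrasts[0], contrasts[1]) == 0
assert dot(contrasts[0], contrasts[2]) == 0
assert dot(contrasts[1], contrasts[2]) == 0
```

Any pairwise elementwise-product sum that is nonzero would mean the corresponding pair of contrasts is not orthogonal.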
If we multiply the corresponding elements of the first and second rows and add the products, the result is again zero; thus the first and second contrasts are orthogonal, and so on.

A contrast is defined as the sum of each group mean multiplied by a coefficient for each group (i.e., a signed number, $c_j$). In equation form, $L = c_1\bar{X}_1 + c_2\bar{X}_2 + \cdots + c_k\bar{X}_k \equiv \sum_j c_j\bar{X}_j$, where $L$ is the weighted sum of group means, the $c_j$ coefficients represent the assigned weights of the means (these must sum to 0 for the comparison to be a contrast), and $\bar{X}_j$ represents the $j$-th group mean. Coefficients can be positive or negative, and fractions or whole numbers, depending on the comparison of interest. Linear contrasts are very useful and can be used to test complex hypotheses when used in conjunction with ANOVA or multiple regression. In essence, each contrast defines and tests for a particular pattern of differences among the means.

Contrasts should be constructed "to answer specific research questions", and do not necessarily have to be orthogonal. A simple (not necessarily orthogonal) contrast is the difference between two means. A more complex contrast can test differences among several means (e.g., with four means, assigning coefficients of −3, −1, +1, and +3), test the difference between a single mean and the combined mean of several groups (e.g., with four means, assigning coefficients of −3, +1, +1, and +1), or test the difference between the combined mean of several groups and the combined mean of several other groups (e.g., with four means, assigning coefficients of −1, −1, +1, and +1). The coefficients for the means to be combined (or averaged) must be equal in magnitude and direction, that is, equally weighted.
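As a worked illustration, the contrast $L$ can be evaluated directly from a set of group means. The group means below are hypothetical values chosen for the example; the coefficients −1, −1, +1, +1 compare the combined mean of the last two groups with that of the first two:

```python
# Hypothetical group means (X-bar_j) for four groups.
group_means = [10.0, 12.0, 15.0, 17.0]

# Coefficients comparing the combined mean of groups 3 and 4
# with the combined mean of groups 1 and 2; they sum to zero.
coeffs = [-1, -1, 1, 1]
assert sum(coeffs) == 0

# L = sum_j c_j * X-bar_j
L = sum(c * xbar for c, xbar in zip(coeffs, group_means))
# L = -(10 + 12) + (15 + 17) = 10
```

A positive $L$ here indicates that the last two groups have a larger combined mean than the first two; testing whether $L$ differs significantly from zero is then the job of the accompanying ANOVA or regression machinery.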
When means are assigned different coefficients (either in magnitude or direction, or both), the contrast tests for a difference between those means.

A contrast may be any of: the set of coefficients used to specify a comparison; the specific value of the linear combination obtained for a given study or experiment; or the random quantity defined by applying the linear combination to treatment effects when these are themselves considered random variables. In the last context, the term contrast variable is sometimes used.

Contrasts are sometimes used to compare mixed effects. A common example is the difference between two test scores, one at the beginning of the semester and one at its end. Note that we are not interested in either of these scores by itself, but only in the contrast (in this case, the difference). Since this is a linear combination of independent variables, its variance equals the weighted sum of the summands' variances, with weights equal to the squared coefficients; in this case both squared coefficients are one. This "blending" of two variables into one might be useful in many cases, such as ANOVA, regression, or even as a descriptive statistic in its own right.
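A small simulation can illustrate the variance claim: for two independent scores combined with weights +1 and −1, the variance of the difference is close to the sum of the individual variances. The score distributions below are hypothetical and chosen only for the sketch:

```python
import random

random.seed(0)
n = 100_000

# Hypothetical independent test scores: start and end of semester.
begin = [random.gauss(60, 8) for _ in range(n)]
end = [random.gauss(75, 6) for _ in range(n)]

def variance(xs):
    """Sample variance with the usual n - 1 denominator."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# The contrast of interest: D = X_end - X_begin (weights +1 and -1).
diff = [e - b for e, b in zip(end, begin)]

# Because the scores are independent and both squared weights are 1,
# Var(D) should be close to Var(begin) + Var(end), i.e. roughly 8**2 + 6**2.
gap = abs(variance(diff) - (variance(begin) + variance(end)))
```

With independent samples the empirical gap shrinks as n grows; any dependence between the two scores would show up as a covariance term that this simple sum ignores.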