
Forcing (mathematics)

In the mathematical discipline of set theory, forcing is a technique for proving consistency and independence results. It was first used by Paul Cohen in 1963 to prove the independence of the axiom of choice and the continuum hypothesis from Zermelo–Fraenkel set theory. Forcing has been considerably reworked and simplified in the following years, and has since served as a powerful technique, both in set theory and in areas of mathematical logic such as recursion theory. Descriptive set theory uses the notion of forcing from both recursion theory and set theory. Forcing has also been used in model theory, but it is common in model theory to define genericity directly without mention of forcing.

Intuitively, forcing consists of expanding the set-theoretical universe V to a larger universe V*. In this bigger universe, for example, one might have many new subsets of ω = {0, 1, 2, …} that were not there in the old universe, and thereby violate the continuum hypothesis. While impossible when dealing with finite sets, this is just another version of Cantor's paradox about infinity. In principle, one could consider the following: identify each x ∈ V with (x, 0), and then introduce an expanded membership relation involving "new" sets of the form (x, 1). Forcing is a more elaborate version of this idea, reducing the expansion to the existence of one new set, and allowing for fine control over the properties of the expanded universe. Cohen's original technique, now called ramified forcing, is slightly different from the unramified forcing expounded here.
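The tagging idea above can be sketched in a few lines of Python. This is only a toy illustration of expanding a universe with tagged pairs; the names (`V`, `old`, `new`, `member`) are illustrative assumptions, not standard forcing terminology:

```python
# Toy sketch: every "old" set x is identified with the pair (x, 0),
# and a "new" set is introduced as a pair tagged with 1.

# A tiny stand-in for the old universe V: a few subsets of omega = {0, 1, 2, ...}.
V = [frozenset(), frozenset({0}), frozenset({0, 1})]

def old(x):
    """Identify an old set x in V with the tagged pair (x, 0)."""
    return (x, 0)

# A subset of omega absent from V, tagged with 1 to mark it as new.
new = (frozenset({0, 2}), 1)

def member(n, tagged):
    """Expanded membership relation: n belongs to a tagged set iff it lies
    in the underlying frozenset, regardless of the tag."""
    contents, _tag = tagged
    return n in contents

# The expanded universe V* contains every old set plus the new one.
V_star = [old(x) for x in V] + [new]
```

Because the new set carries the tag 1, it cannot collide with any tagged old set, which is the whole point of the construction.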
Forcing is also equivalent to the method of Boolean-valued models, which some feel is conceptually more natural and intuitive, but usually much more difficult to apply.

A forcing poset is an ordered triple (ℙ, ≤, 1), where ≤ is a preorder on ℙ with largest element 1 that is atomless, meaning that it satisfies the following condition: for each p ∈ ℙ, there are q, r ≤ p such that no s ∈ ℙ satisfies both s ≤ q and s ≤ r.

Members of ℙ are called forcing conditions or just conditions. One reads p ≤ q as "p is stronger than q". Intuitively, the "smaller" condition provides "more" information, just as the smaller interval [3.1415926, 3.1415927] provides more information about the number π than the interval [3.1, 3.2] does.

There are various conventions in use. Some authors require ≤ to also be antisymmetric, so that the relation is a partial order. Some use the term partial order anyway, conflicting with standard terminology, while some use the term preorder. The largest element can be dispensed with. The reverse ordering is also used, most notably by Saharon Shelah and his co-authors.
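The "stronger means more information" reading and the atomlessness condition can be made concrete with the standard Cohen-style example: conditions are finite partial functions from ω to {0, 1}, and p ≤ q when p extends q. The sketch below models conditions as Python dicts; the helper names are assumptions of this illustration:

```python
# Conditions: finite partial functions omega -> {0, 1}, modelled as dicts
# mapping natural numbers to 0 or 1.

def stronger(p, q):
    """p <= q ("p is stronger than q"): p carries at least the information
    in q, i.e. p extends q as a partial function."""
    return all(k in p and p[k] == q[k] for k in q)

def compatible(p, q):
    """Two conditions are compatible iff they agree wherever both are
    defined (so some condition extends both)."""
    return all(p[k] == q[k] for k in p.keys() & q.keys())

def incompatible_extensions(p):
    """Atomlessness: below any condition p sit two incompatible conditions,
    obtained by deciding a fresh coordinate in the two possible ways."""
    n = max(p, default=-1) + 1  # a coordinate p has not yet decided
    return {**p, n: 0}, {**p, n: 1}
```

For example, with p = {0: 1}, the two extensions returned by `incompatible_extensions(p)` are both stronger than p, yet no condition is stronger than both, since they disagree on the fresh coordinate.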
