
Climateprediction.net

Climateprediction.net (CPDN) is a distributed computing project to investigate and reduce uncertainties in climate modelling. It aims to do this by running hundreds of thousands of different models (a large climate ensemble) using the donated idle time of ordinary personal computers, thereby leading to a better understanding of how models are affected by small changes in the many parameters known to influence the global climate.

The project relies on the volunteer computing model, using the BOINC framework: participants agree to run project tasks on their own computers, fetching work units from the project's servers and returning the results when the runs complete. CPDN, which is run primarily by Oxford University in England, has harnessed more computing power and generated more data than any other climate modelling project, producing over 100 million model years of data so far. As of June 2016, there were more than 12,000 active participants from 223 countries with a total BOINC credit of more than 27 billion, reporting about 55 teraflops (55 trillion floating-point operations per second) of processing power.

The aim of the project is to investigate the uncertainties in the various parameterizations that have to be made in state-of-the-art climate models. The model is run thousands of times with slight perturbations to various physics parameters (a 'large ensemble'), and the project examines how the model output changes. These parameters are not known exactly, and the perturbations are kept within what is subjectively considered a plausible range. This allows the project to improve understanding of how sensitive the models are to small changes, and also to changes in factors such as carbon dioxide and the sulphur cycle.

In the past, estimates of climate change have had to be made using one or, at best, a very small ensemble (tens rather than thousands) of model runs. By using participants' computers, the project can improve understanding of, and confidence in, climate change predictions far beyond what would be possible using the supercomputers currently available to scientists. The experiment should help to 'improve methods to quantify uncertainties of climate projections and scenarios, including long-term ensemble simulations using complex models', identified by the Intergovernmental Panel on Climate Change (IPCC) in 2001 as a high priority. Hopefully, the experiment will give decision makers a better scientific basis for addressing one of the biggest potential global problems of the 21st century.

As shown in the graph above, the various model versions produce a fairly wide distribution of results over time. For each curve, the bar on the far right shows the final temperature range for the corresponding model version. As would be expected, the further into the future the models are extended, the wider the spread between them becomes. Roughly half of that variation depends on the future climate forcing scenario rather than on uncertainties in the model.
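To make the perturbed-parameter ('large ensemble') approach concrete, the following is a minimal sketch of how such an ensemble could be generated: each uncertain parameter is sampled within a plausible range and the same model is run once per sample. The parameter names, ranges and toy_climate_model function are illustrative assumptions, not CPDN's actual model or parameter set.

```python
import random

# Illustrative uncertain physics parameters and subjectively plausible ranges
# (assumed for this sketch; not CPDN's real parameter list).
PARAMETER_RANGES = {
    "entrainment_coefficient": (0.6, 9.0),
    "ice_fall_speed": (0.5, 2.0),
    "cloud_water_threshold": (1e-4, 2e-3),
}

def sample_parameters(rng):
    """Draw one perturbed parameter set, uniformly within each plausible range."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in PARAMETER_RANGES.items()}

def toy_climate_model(params, forcing):
    """Stand-in for a full climate model run; returns a fake warming value.

    A real CPDN work unit runs a full climate model for decades of simulated
    time; this placeholder just combines its inputs arithmetically.
    """
    sensitivity = 0.3 * params["entrainment_coefficient"] + 0.5 * params["ice_fall_speed"]
    return sensitivity * forcing

def run_ensemble(n_members, forcing, seed=0):
    """Run the same model many times with slightly different parameters."""
    rng = random.Random(seed)
    return [(p, toy_climate_model(p, forcing))
            for p in (sample_parameters(rng) for _ in range(n_members))]

if __name__ == "__main__":
    ensemble = run_ensemble(n_members=1000, forcing=3.7)  # forcing in W/m^2, illustrative
    warmings = sorted(w for _, w in ensemble)
    print(f"spread of projected warming: {warmings[0]:.2f} to {warmings[-1]:.2f}")
```

In the real project each ensemble member is packaged as a BOINC work unit and sent to a volunteer's computer, so the loop above is in effect distributed across thousands of machines rather than executed locally.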
Any reduction in that spread, whether from better scenarios or from improvements in the models, would be valuable; Climateprediction.net works on the model uncertainties, not the scenarios. The crux of the problem is that scientists can run models and see that x% of them warm by y degrees in response to z climate forcings, but how do we know that x% is a good representation of the probability of that happening in the real world? The answer is that scientists are uncertain about this and want to improve the level of confidence that can be achieved. Some models will be good, and some poor, at reproducing past climate when given past climate forcings and initial conditions (a hindcast). It makes sense to trust the models that do well at recreating the past more than those that do poorly, so models that do poorly will be downweighted (a toy illustration of such weighting follows at the end of this section).

The different models that Climateprediction.net has distributed, and will distribute, are detailed below in chronological order; anyone who has joined recently is therefore likely to be running the Transient Coupled Model. Myles Allen first thought about the need for large climate ensembles in 1997, but was only introduced to the success of SETI@home in 1999. The first funding proposal, in April 1999, was rejected as utterly unrealistic.
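The downweighting mentioned above can be illustrated with a small sketch: each ensemble member is scored by how well its hindcast matches an observed past record, and members with larger hindcast errors receive smaller weights when their future projections are combined. The RMSE metric, the exponential weighting and all of the numbers here are assumptions for illustration, not the project's published methodology.

```python
import math

def hindcast_rmse(simulated, observed):
    """Root-mean-square error between one member's hindcast and observations."""
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(simulated, observed)) / len(observed))

def hindcast_weights(hindcasts, observed, scale=0.1):
    """Weight each ensemble member, shrinking the weight as its hindcast error grows.

    The exponential form and the 'scale' value are illustrative choices.
    """
    errors = [hindcast_rmse(sim, observed) for sim in hindcasts]
    raw = [math.exp(-err / scale) for err in errors]
    total = sum(raw)
    return [w / total for w in raw]

def weighted_projection(projections, weights):
    """Combine the members' future projections using the hindcast-based weights."""
    return sum(p * w for p, w in zip(projections, weights))

if __name__ == "__main__":
    observed_past = [0.1, 0.2, 0.35, 0.5]        # e.g. past temperature anomalies
    hindcasts = [
        [0.12, 0.22, 0.33, 0.52],                # member that recreates the past well
        [0.30, 0.55, 0.80, 1.10],                # member that overshoots the past
    ]
    future = [2.4, 4.1]                          # each member's projected future warming
    w = hindcast_weights(hindcasts, observed_past)
    print("weights:", [round(x, 3) for x in w])
    print("weighted projection:", round(weighted_projection(future, w), 2))
```

Members that recreate the past well dominate the weighted estimate, while poorly performing members still contribute, just with much less influence.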

[ "Precipitation", "Climate change" ]