"Prediction is very difficult, especially if it's about the future."
--Niels Bohr, Nobel laureate in Physics

It would be ridiculous if climate science simply stopped at the present. How would that conversation go with the UN?
"Well," the hypothetical scientist would say, "we've noticed a global warming trend over the last 50 years, and think maybe you should help the world prepare for a new and different climate."
"I see" hypothetical Ban Ki-moon, UN secretary general would reply. "What exactly should we expect?"
"Um, I don't know. That's in the future, you see. We perceive time in a wholly linear fashion, like mice running through an endless dark corridor, the future remaining forever intangible, yet hope filled" the extravagantly wistful scientist would reply.
To avoid such a patently ridiculous conversation, climate models came into being. By using an enormous amount of computing power, it's possible to reproduce the complex processes of the Earth's atmosphere using dynamical equations and parameterizations (making something complex simpler by representing only the key parts, kind of like drawing a stick man because faces are hard, except with maths).
The problem with these kinds of predictions is that the only way to know whether you got it right is to wait and see. There are other ways of testing the skill of a model (i.e. how good it is), like hindcasts, where you build the model on, say, 1950-2000 temperatures and then run it over 1900-1950. If the model does a decent job of predicting the first half of the 20th century then bingo, it must be good. This is a little unsatisfactory though, since what we really want to know is how good the model is for the period we will be living through next. To get a good idea of model skill, you need roughly 15 years of data at a minimum.
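The hindcast idea above can be sketched in a few lines of Python. This is a toy illustration only: the "model" is just a linear trend fitted to made-up late-century data, standing in for a real climate model, and every number is invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "observed" annual temperature anomalies for 1900-1999:
# a gentle warming trend plus year-to-year noise (made-up numbers,
# purely for illustration).
years = np.arange(1900, 2000)
obs = 0.006 * (years - 1900) + rng.normal(0, 0.1, years.size)

# "Build" a toy model on the 1950-1999 data: here just a straight-line
# fit, standing in for a real climate model.
late = years >= 1950
coeffs = np.polyfit(years[late], obs[late], deg=1)

# Hindcast: run the fitted model backwards over 1900-1949 and score it
# against observations it never saw.
early = ~late
hindcast = np.polyval(coeffs, years[early])
rmse = np.sqrt(np.mean((hindcast - obs[early]) ** 2))
print(f"Hindcast RMSE over 1900-1949: {rmse:.3f} degrees")
```

A small hindcast error suggests the model captures the underlying behaviour rather than just memorising the period it was built on, which is the whole point of the exercise.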
Well, luckily for this blog, this week a group of scientists have gone back and tested their own model devised 15 years ago to see how it did. The article is here and hopefully, if Research Blogging works, the citation should be at the bottom of the page.
In this brief paper, Allen et al. review a prediction they made in 2000, which was based on climate data up to 1996. They then test how the model performed from 1996 to 2012. The key suggestion of the paper is that if observations lie outside a certain range, the model should be described as falsified. That doesn't imply it was created in an underhand way, simply that it isn't performing very well.
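The falsification rule is simple enough to sketch: the model survives as long as the observed decadal-mean temperature sits inside the forecast's 5-95% confidence range. The numbers below are illustrative placeholders, not the paper's actual forecast values.

```python
def falsified(observed_mean, lower_5, upper_95):
    """True if the observed decadal mean lies outside the
    forecast's 5-95% confidence range."""
    return not (lower_5 <= observed_mean <= upper_95)

# Hypothetical forecast: a decadal-mean anomaly of 0.25 C with a
# 5-95% range of 0.10-0.40 C (made-up values for illustration).
print(falsified(0.28, 0.10, 0.40))  # inside the range -> False
print(falsified(0.05, 0.10, 0.40))  # below the range -> True
```

Note the test is deliberately blunt: it doesn't care *why* the observation escaped the range, only that it did, which is what makes the prediction genuinely falsifiable.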
Below is the original prediction, with the latest data added in:
The solid black line is the original prediction, the grey area marks the 5-95% confidence range of this prediction, and the black dotted line plots the middle of the uncertainty range. The red line shows the observations over the period, with the black diamonds showing the predicted decadal-mean temperature and the red diamonds the observed decadal-mean temperature for the 1990s and 2000s. The yellow diamonds show each individual year's temperature from 1996-2012.
As we can see, this particular model prediction has done remarkably well over the last 15 years. Although the original prediction is a slight overestimate, the observations sit well within the uncertainty range. The paper also points out that if temperatures do not warm by 2017-2026, the model will become falsified under the authors' own definition. So although it's been accurate for the first 15 years, there's no guarantee this model will get temperatures right forever.
To make a prediction myself, I think this type of paper will become more and more common over the next few years as people start to look back at the model ensemble from the IPCC's Third Assessment Report (2001). Hopefully these studies will prove that climate models are all rubbish and we can go back to burning fossil fuels without worrying about it. Based on this early paper, though, it looks like the modeling community did a pretty good job. Unfortunately.
Allen, M., Mitchell, J., & Stott, P. (2013). Test of a decadal climate forecast. Nature Geoscience, 6(4), 243-244. DOI: 10.1038/ngeo1788
