## Monday, August 15, 2011

### Lottery Paradox vs. Carbonic Warming

I just heard about the lottery paradox, and at first I was a little dumbfounded that anyone could take it seriously. However, it offers an exceptionally clear demonstration of why climate modelling is generally a waste of time.

This vein of criticism turned out to be very rich, running into Cauchy-distribution territory - measuring more weather doesn't make your models more accurate.

The 'paradox' runs: each individual ticket almost certainly won't win, so count each ticket as a loser; apply that to every ticket and you conclude no ticket wins, though one must. The solution is simple; the setup violates mathematics. Logic demands that probabilities sum to one, and if you approximate 0.1% as 0%, they don't. Put another way, the rounding error is 100% of the measurement. Why is anyone surprised that when you sum over 100% measurement errors, you get total nonsense?
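
The arithmetic is easy to check in a few lines. A minimal sketch - the one-winner, thousand-ticket lottery is my own illustrative assumption, chosen to match the 0.1% figure above:

```python
# Illustrative lottery: 1,000 tickets, exactly one winner,
# so each ticket wins with probability 0.1%.
N = 1000
p_win = 1 / N

# Exact arithmetic behaves: the win probabilities sum to one.
assert abs(p_win * N - 1.0) < 1e-12

# The 'paradox' rounds each ticket's 0.1% chance down to 0%.
rounded = round(p_win)   # 0.001 rounds to 0
print(rounded * N)       # 0: "no ticket can win" - nonsense

# The per-ticket rounding error is 0.001, i.e. 100% of the
# quantity being measured; summed over all tickets, the error
# is the entire probability mass.
```

The conclusion "no ticket can win" is exactly the summed rounding error, nothing more.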

In physical measurements, rounding errors often cancel out, because the normal rounding rule (0-4 down, 5-9 up) is well chosen for randomly distributed numerals. In chaotic systems, however, a small error in measurement (2%, say) usually produces a large error in prediction, even in excess of 100% after enough computation. The measurement errors amplify each other instead of cancelling out: essentially, the output state is the sum over all previous, increasingly erroneous input states.
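
A sketch of that amplification, using the logistic map as a stand-in for a chaotic system - the map, the 2% figure, and the step counts are illustrative assumptions of mine, not a weather model:

```python
def logistic(x, r=4.0, steps=1):
    """Iterate the logistic map x -> r*x*(1-x), chaotic at r=4."""
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

true_state = 0.400
measured   = 0.408   # a 2% measurement error

# Watch the error grow as the 'forecast' is iterated forward.
for steps in (1, 3, 5, 10, 20):
    err = abs(logistic(true_state, steps=steps)
              - logistic(measured, steps=steps))
    print(steps, round(err, 4))
```

Within a few iterations the discrepancy already dwarfs the original 2%, and it keeps growing until it is comparable to the map's whole range; past that point the 'prediction' carries no information about the true trajectory.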

Say your measurements give you a 2% error in your prediction of the weather next Wednesday. Iterated forward, this gives you a 100% error for the Wednesday three years from now. To predict the Wednesday six years from now, you have no choice but to use your 100%-wrong Wednesday from three years out. And so on.

Apologists would point out that weather and climate are not the same. It's actually kind of difficult to get three years from now that badly wrong, because the range of weather an August can produce is constrained. No snow, for example. If your prediction at three years is as wrong as possible, the six-year prediction can't get any more wrong. Unfortunately, this only holds in a constant climate; the whole point of studying climate is to track how the constraints move - to find out what the constraints are themselves constrained by.

Moreover, weather follows the Cauchy-Lorentz distribution, which has this property: "the sample mean will become increasingly variable as more samples are taken, because of the increased likelihood of encountering sample points with a large absolute value." In human English: the longer you look at thermometers, the more likely you are to see eye-popping temperatures - so likely, in fact, that they cancel out any settling toward an average you saw before. (For example, say you have nine December records of about zero Celsius, and then suddenly you see a new record of -20C. The average is now -2.) Compared to the usual Gaussian, the distribution slices bits away from the middle and layers them onto the tails. This in turn means that as temperature record-keeping continues, we'll see an endless chain of high-temperature records...and a similar parade of low-temperature records, though you normally won't hear about those. Not convinced? They proudly boast of 150 years of predictions, yet almost all the extreme records were in the last forty years, looking exactly as a Cauchy distribution should look.
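
That non-settling sample mean is easy to simulate. A sketch - the seed and sample sizes are arbitrary choices of mine:

```python
import math
import random
import statistics

random.seed(2011)

def cauchy_sample():
    # Standard Cauchy via the inverse CDF: tan(pi * (u - 1/2)).
    return math.tan(math.pi * (random.random() - 0.5))

samples = [cauchy_sample() for _ in range(100_000)]

# The running mean never settles: rare, enormous samples keep
# yanking it around no matter how long you watch the thermometer.
for n in (100, 1_000, 10_000, 100_000):
    print(n, round(sum(samples[:n]) / n, 3))

# The median, by contrast, sits quietly near the true centre.
print(round(statistics.median(samples), 3))
```

In a run of this size, the largest absolute sample will typically be in the thousands - a single such spike can shift the running mean of everything seen so far by a whole unit.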

(Climate almost certainly also follows a Cauchy distribution. One technical point is that it will have a median and a mode, but because we're supposedly measuring its change, the mode will be hidden and the median will have an error proportional to how fast it is changing. A more positive technicality is that a true Cauchy distribution has infinite range, and temperature cannot - one thousand below is physically impossible and getting to one thousand above would require vaporizing the oceans.)

So. We could predict climate in a general sense, though never specifically, if we already knew its meta-climatic constraints. Even then, the odds of extreme climate rise quickly the longer climate is measured - even if the meta-climate isn't itself changing.

It is almost impossible to measure climate or meta-climate, because it is a Cauchy distribution. More and better ice cores and tree rings don't help - you're exactly as likely to find a random spike as to get a reliable baseline.

Any mathematical modelling will only amplify these fundamental errors.
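
One way to see the 'more cores don't help' point in code: the mean of n Cauchy samples has the same distribution as a single sample, so averaging more proxies doesn't tighten the estimate at all. A sketch, with seed and sample sizes again my own arbitrary choices:

```python
import math
import random

random.seed(1815)

def cauchy_sample():
    # Standard Cauchy via the inverse CDF: tan(pi * (u - 1/2)).
    return math.tan(math.pi * (random.random() - 0.5))

def mean_of(n):
    return sum(cauchy_sample() for _ in range(n)) / n

def iqr(values):
    # Interquartile range: spread between the 25th and 75th percentiles.
    s = sorted(values)
    return s[3 * len(s) // 4] - s[len(s) // 4]

# Spread of a 'baseline' estimated from 10 proxies vs 1,000 proxies,
# across 500 independent attempts each:
spread_10   = iqr([mean_of(10)   for _ in range(500)])
spread_1000 = iqr([mean_of(1000) for _ in range(500)])
print(round(spread_10, 2), round(spread_1000, 2))  # roughly equal

# For a Gaussian, the 1,000-proxy spread would be ten times tighter.
```

Both spreads come out near the single-sample interquartile range of 2: a hundred times more data, zero gain in reliability.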

James James said...

"They proudly boast of 150 years of predictions, yet almost all the extreme records were in the last forty years, looking exactly as a Cauchy distribution should look."

Are you equivocating between two meanings of the word "extreme"?

In the context of the Cauchy distribution, extreme just means "far from what we've recorded already", right? Whereas in common parlance, "extreme temperatures" are relative to what humans like (about 10-30 degrees C, a little below our body temperature).

Alrenous said...

I don't think I'm equivocating. I meant, "far from what we recorded already."

I don't think warming should be producing record lows. I also looked at pressure and so forth, and again both high and low records were set recently.

Though admittedly, now that you mention it, record highs are simply inconclusive for warming, as one should expect a real climate to produce a steady stream of record highs regardless.