Wednesday, March 14, 2012


date: Wed Apr 14 12:37:53 1999
from: Tim Osborn <>
subject: comment on your recent Nature paper

Dear Mark, Dave and Chris,

I read your recent Nature paper with interest. I had some earlier discussions with Mark about one of the figures, because I feel that it will be misinterpreted. I have now drafted a note that I may send as 'scientific correspondence' to Nature to make the same point. It is included below, and I would welcome your comments on it (also included are the earlier messages between Mark and myself, to save repeating any previous comments). I in no way intend to be antagonistic about this - I simply felt that it was a point worth making. In view of the time elapsed since your paper appeared, I would appreciate your response as early as possible. Thanks.

Best regards



Rodwell et al.(1) use an atmospheric general circulation model to show that sea surface temperature (SST) anomalies have driven some of the variations in the wintertime North Atlantic Oscillation (NAO) observed over the past 50 years. They reduce internal atmospheric "noise" by taking the mean of an ensemble of simulations (with each individual simulation driven by the observed evolution of SST), to leave mainly the variations driven by SST changes. While this is undoubtedly an extremely important result, its significance is likely to be misinterpreted because of the incorrect scaling applied to the ensemble mean results shown in the top panel of their Figure 1.

If the NAO indices from the six individual ensemble members were overlain onto their Figure 1, then the ensemble mean (as shown) would not lie in the centre of the ensemble range - indeed, for some years, it would lie outside the ensemble range. This is because they have scaled up the ensemble mean curve so that its variance matches the variance of the observed NAO index. This is not the correct way to use an ensemble, since the ensemble mean should have a lower variance than that observed because the noise due to internal atmospheric variations has been partly removed. Results from the six individual members should have been scaled to match the observed variance, and then averaged to produce the ensemble mean.
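The variance argument here - that averaging an ensemble cancels part of the internal noise, so the mean must vary less than any single member - can be illustrated with a minimal synthetic sketch (randomly generated stand-ins, not the HadAM2b output):

```python
import numpy as np

rng = np.random.default_rng(0)
n_years, n_members = 50, 6

# Synthetic set-up: a common SST-forced "signal" plus independent
# atmospheric noise in each ensemble member (both unit variance).
signal = rng.standard_normal(n_years)
noise = rng.standard_normal((n_members, n_years))
members = signal + noise

ens_mean = members.mean(axis=0)

# Averaging leaves the common signal intact but divides the noise
# variance by the ensemble size, so the mean varies less than any
# single member. Rescaling the mean up to the observed variance
# therefore exaggerates the apparent SST-forced component.
print(members.var(axis=1).mean())  # typical single-member variance
print(ens_mean.var())              # noticeably smaller
```

With six members and equal signal and noise variances, the noise contribution to the ensemble-mean variance drops by a factor of six, which is roughly the factor-of-two inflation Osborn describes below.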

Rodwell et al.'s Figure 1 implies that, according to their model, all of the observed trend in the winter NAO index from the 1960s to the 1990s can be explained by the SST variations over that period. If the ensemble mean curve had been correctly scaled, it would show that only about half of this recent observed trend is explained. Their results, as they were presented, might incorrectly discourage further research into the causes and implications of this major variation in the climate system, on the basis that it had been fully explained.

Timothy J. Osborn
Climatic Research Unit, School of Environmental Sciences, University of East Anglia, Norwich NR4 7TJ, UK.

1. Rodwell, M.J., Rowell D.P. & Folland, C.K. Nature 398, 320-323 (1999).

E-MAIL TO MARK (26/2/99):

...The ohp was of the observed NAO index and the simulated NAO index, with the latter coming from a six-member ensemble of runs of HadAM2b forced by GISST3.0; both individual ensemble results and the ensemble mean were shown....The ensemble mean did not lie in the middle of the ensemble members - indeed at times it even lay outside the ensemble range! I have thought about it a bit more now and think that I understand the reason why. You show the NAO index as a normalised or standardised time series...As such, the observed and individual ensemble member series appear fine; but rather than presenting the ensemble mean as the average of the six normalised members, it looks like you have averaged the six members and then re-normalised. Can you confirm that this is what you did?...If that is the case, then it is wrong. Re-normalising the ensemble mean involves dividing by the standard deviation of the ensemble mean (SDens); since SDens is less than the observed SD or the SD from an individual ensemble member (which are single realisations with higher noise levels), you are dividing by a number that is too small. The ensemble mean should have less variability than the other curves, but doesn't appear to. If this is indeed an error, then HadAM2b would imply that only about half of the observed 1960s to 1990s trend is forced by SST, rather than all of it. Similarly for the decadal-scale oscillations...

REPLY FROM MARK (26/2/99):

...The diagram in our Nature paper is scaled correctly. I'll attach it here
for you to see...The caption that goes with it is the following:

Figure 1. Time series of observed (solid) and modelled ensemble mean (dotted) winter North Atlantic Oscillation, December 1947 - February 1997. The NAO is calculated as the normalised difference in December to February mean sea-level pressure between the Azores (26 W 38 N) and Iceland (23 W 65 N). Observed data is taken from Ponta Delgada (Azores) and Stykkisholmur (Iceland). The shading in the upper graph shows +/- 1 standard deviation about the ensemble mean, calculated from the non-normalised 6 model simulations for each individual year and scaled according to the normalisation of the ensemble

E-MAIL TO MARK (1/3/99):

...At first I thought that the new figure was indeed scaled correctly - the ensemble mean now lies nicely in the middle of the ensemble range. But by comparing it with my previous diagram of your results, it seems that you have scaled the ensemble members (+ hence range) to fit the ensemble mean, rather than the other way around. To clarify things, is the ensemble mean series computed by method (1), (2) or (3)?

(a) AX(t)=Azores DJF mean SLP through time from ensemble mean
(b) Ai(t)=Azores DJF mean SLP through time from ensemble member i (i=1,6)
(c) Ditto for IX(t) and Ii(t) for Iceland
(d) The long-term mean and st. dev. of (AX(t)-IX(t)) are MEANX and SDX
(e) The long-term mean and st. dev. of (Ai(t)-Ii(t)) are MEANi and SDi


(1) NAO(t) = [ (AX(t)-IX(t)) - MEANX ] / SDX

(2) NAO(t) = [ (AX(t)-IX(t)) - MEANX ] / (average of the six SDi values)

(3a) NAOi(t) = [ (Ai(t)-Ii(t)) - MEANi ] / SDi
(3b) NAO(t) = average of the six NAOi(t) values at each time t

Methods (2) and (3) are quite similar [in (2) the denominator should really be computed by averaging the individual variances (SDi^2) and then taking the square root of this average, but if all six SDi are similar then it won't make much difference]. Either (2) or (3) would be fine, but I think that method (1) isn't correct. The only difference between the methods is which standard deviation is used in the normalisation. I don't think that method (1) would be comparable with the observed time series (which is effectively computed using method (2) or (3), although with only a single realisation)...
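The candidate normalisations above can be written out explicitly. The sketch below uses synthetic series as hypothetical stand-ins for the Azores-minus-Iceland pressure differences Ai(t)-Ii(t); method (2) is omitted since it differs from (3) only in how the SDi are combined:

```python
import numpy as np

rng = np.random.default_rng(1)
n_years, n_members = 50, 6

# Hypothetical stand-ins for Ai(t)-Ii(t): a shared SST-driven signal
# plus independent noise per ensemble member.
signal = rng.standard_normal(n_years)
diffs = signal + rng.standard_normal((n_members, n_years))

def standardise(x):
    """Remove the long-term mean and divide by the standard deviation."""
    return (x - x.mean()) / x.std()

# Method (1): average the members first, then normalise the ensemble
# mean by its own (too-small) standard deviation.
nao_1 = standardise(diffs.mean(axis=0))

# Method (3): normalise each member by its own SD, then average.
nao_3 = np.array([standardise(d) for d in diffs]).mean(axis=0)

# Method (1) has unit SD by construction; method (3) correctly leaves
# the ensemble mean with SD below 1, since part of the noise cancels.
print(nao_1.std())
print(nao_3.std())
```

The two curves are proportional to each other; the dispute is purely about the amplitude, which is why the rescaling changes how much of the observed trend appears to be explained.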


REPLY FROM MARK:

The short answer is: Method 1. However...As with everything, the problem gets less clear the closer you get to it! There are obviously many ways to analyse the NAO and it depends on what questions you want to address. For example, do you want to find out the predictability of the NAO index in a particular model or do you want to get an idea of the ultimate predictability of the NAO?

A big question is: Can the observed timeseries be considered as simply another realisation of the model? This requires the model and real world to have (eg) the same sensitivity to SSTs, the same amount of internal variability and to have no relative bias. This is not the case.

By counting the number of zero crossings, it was clear that every ensemble member has more high-frequency variability than the real world. The use of the ensemble mean removes this 'erroneous' variability and gives a timeseries with more similar variability to the observations. This is one reason for us using the ensemble mean in the way we do.
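The zero-crossing count used here as a rough measure of high-frequency variability is simple to compute; a minimal sketch (the function name and example values are illustrative, not taken from the paper):

```python
import numpy as np

def zero_crossings(x):
    """Count sign changes in an index series after removing its mean."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    return int(np.sum(np.signbit(x[:-1]) != np.signbit(x[1:])))

# A series that flips sign often crosses zero often.
print(zero_crossings([1, -1, 2, 3, -2]))  # 3
```

A noisier series crosses zero more often, so an ensemble mean (with part of the noise averaged out) should show fewer crossings than any individual member, closer to the observed behaviour.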

In addition, by correlating the Iceland pressure series with that of the Azores, it was clear that the model failed to fully capture the strength of the negative correlation seen in nature. This may be one of the reasons why the observed NAO index has a significantly larger standard deviation than the modelled one...

