date: Mon, 27 Sep 2004 20:14:16 -0400
from: Andy Revkin <anrevkatXYZxyzimes.com>
subject: mann's thoughts
to: email@example.com, t.osbornatXYZxyz.ac.uk
with the understanding that he wouldn't further circulate the embargoed papers, i've sought
input from Mann (only way I can write on studies of this sort is by getting context ahead
of time, while still under embargo).
i'd appreciate your response to the following thoughts emailed by Mann,
particularly the spots I've underlined and highlighted with boldface.
1. This kind of analysis isn't new. The authors have simply performed the same
experiments other groups have in past years (Rutherford et al, 2003, Zorita et al 2003),
but with a much more extreme model scenario. With such an extreme scenario, the
experiment exhibits a well established and previously estimated bias, but to a far
greater degree. There is a good discussion of this in the review paper by Jones and
Mann: Jones, P.D., Mann, M.E., Climate Over Past Millennia, Reviews of Geophysics, 42,
RG2002, doi: 10.1029/2003RG000143, 2004.
The exact same experiment the authors describe has already been done using forced
simulations (Rutherford et al, 2003, J. Climate) and long control simulations (Zorita et
al, 2003, J. Climate). In those studies, the bias the authors argue for was found not to be
significant. This is because the range of low-frequency variability did not
significantly exceed that present in the 20th century calibration period in those
simulations, nor does it in any other simulations that have been done of the forced
response of the climate over the past 1000 years (see Figure 8 of Jones and Mann, 2004).
The sensitivity of the model used in the present study is somewhat higher than that of
other models, but this isn't the main problem with their simulation. More problematic is
the fact that the authors in the present case use a solar forcing that is about twice
that used by other researchers in the field. Using this unusually large past forcing
scenario, the authors obtain variations in previous centuries that are well outside the
range of the modern period used to calibrate the reconstruction. As shown previously by
Rutherford et al using the example of anthropogenic forced climate changes (J. Climate,
2003), statistical reconstructions will indeed underpredict the variations in the actual
climate in such a case. This was already well established. However, there is no evidence
of such an extreme range of variability in any other known climate simulation of the
past 1000 years.
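As a toy illustration of the regression bias described above (my own sketch, not any group's actual code; the series length, calibration window, and noise level are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented illustration: a "true" temperature series with a large
# century-scale swing, recorded by a noisy proxy.
n = 1000
t = np.arange(n)
truth = 0.8 * np.sin(2 * np.pi * t / 500)       # big low-frequency swing
proxy = truth + 0.3 * rng.standard_normal(n)    # noisy local recorder

# Calibrate a simple linear regression over a short "modern" window only.
cal = slice(900, 1000)
slope, intercept = np.polyfit(proxy[cal], truth[cal], 1)
recon = slope * proxy + intercept

# Classic regression attenuation: the reconstruction underpredicts the
# amplitude of variations lying outside the calibration range.
print(f"true std: {truth.std():.2f}, reconstructed std: {recon.std():.2f}")
```

The attenuation (slope below 1) exists for any noisy proxy, but it only matters for the conclusions when past variability pushes well outside the calibration window, which is the crux of the disagreement here.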
Osborn and Briffa do indeed mention that the authors' arguments hinge on the much larger
amount of low-frequency variability present in their simulation (as it
influences the 'redness' of the spectrum of the model data). In this regard, their
conclusions would seem not to apply to the real world.
2. In Mann et al (1999), one of the studies the authors focus on, the method of
uncertainty estimation does in fact take into account the potential loss of low-frequency
variance due to the limited regression period, the very issue raised by the authors. In
their accompanying commentary, Osborn and Briffa seem to be unaware of this, or to
mischaracterize it, when they state that this has not been taken into account in
previous work. Mann et al (1999) examined the spectrum of the "residuals" over an
older ("cross-validation") period that was independent from the calibration period.
Where they found evidence for enhanced regression uncertainty in these residuals at the
lowest (century) timescale resolved, they inflated the estimates of the uncertainties
accordingly (to more than 1 degree C peak-to-peak). This inflated uncertainty, which
accounts for potential low-frequency regression bias, in fact accommodates the range of
potential bias shown by the authors in the present study.
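The logic of that inflation step can be sketched as follows (my own toy example with invented numbers and a simple ratio-based inflation, not the actual Mann et al (1999) analysis, which worked timescale-by-timescale on the residual spectrum):

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented setup: a red-noise "climate" and a reconstruction whose error
# grows at low frequencies further back in time.
n = 600
truth = np.cumsum(0.05 * rng.standard_normal(n))        # red-noise series
drift = 0.002 * np.arange(n)                            # slow low-frequency error
recon = truth + 0.1 * rng.standard_normal(n) + drift    # imperfect reconstruction

cal = slice(500, 600)   # modern calibration window
val = slice(0, 500)     # older, withheld cross-validation window

resid_cal = truth[cal] - recon[cal]
resid_val = truth[val] - recon[val]

# Inflate the calibration-based standard error by the ratio of the
# validation residual spread to the calibration residual spread,
# whenever the withheld-period errors are the larger of the two.
se_cal = resid_cal.std()
inflation = max(1.0, resid_val.std() / se_cal)
se_inflated = inflation * se_cal
print(f"calibration SE: {se_cal:.3f}, inflated SE: {se_inflated:.3f}")
```

The point of the sketch is only that uncertainty estimated from withheld-period residuals, rather than from the calibration fit alone, widens the envelope precisely where low-frequency regression bias shows up.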
The conclusion in previous studies that late 20th century warmth is anomalous in a
long-term context actually takes into account the expanded regression uncertainties at
low-frequencies that are the subject of the present analysis. There is no
inconsistency--it's just a matter of different interpretation/spin.
3. The magnitude of the bias estimated by the authors is questionable because it rests
on assumptions about the signal-to-noise properties of the proxies. The signal-to-noise
ratios used by the authors may be considerably underestimated because they incorrectly
assume that all proxies in the network used by Mann and coworkers are local indicators
of temperature, which they most certainly are not. They don't account for the fact that
proxies in certain regions may be related to the large-scale temperature field, not
through the local relationship with temperature, but through a non-local relationship to
the key underlying climate signals (e.g. the influence of the El Nino phenomenon on
local drought or precipitation influences recorded by a particular proxy). Because of
the global importance of the El Nino phenomenon in the real world, it typically produces
much stronger relationships between certain individual proxies in the network used by
Mann and coworkers and large-scale climate patterns.
4. It is curious that the authors focus on the results of the MBH98/MBH99 method, when
in fact they demonstrate that this method performs better than the simple approaches
generally used by other researchers in the field, which rely on local regressions of
temperature against proxy data (Bradley et al, '93; Briffa et al, '98; Jones et al, '98;
Crowley and Lowery, '00; Esper et al '02; Mann and Jones, '03). Oddly, in this context,
the authors argue for a more favorable comparison of their result to the reconstruction
of Esper et al, even though that paper uses the approach the authors note as being more
prone to the regression bias in question.
In reality, applications of these different (local and pattern-based) approaches
actually yield relatively similar past histories (see Figure 5 in Jones and Mann, '04).
Indeed, a paper in press in "Journal of Climate" by Rutherford et al (on which Osborn
and Briffa are co-authors), shows that both the local regression method and
pattern-based regression method yield essentially indistinguishable results when applied
to the same network of proxy data.
So, really there is nothing new here, and I'm very surprised that Science chose to
publish this article. In order to believe that the results of this study have any
real-world implications at all, the authors would need to reconcile the extreme variability
in their model simulation with the numerous other model simulations that indicate far
less low-frequency variability in past centuries.
Andrew C. Revkin, Environment Reporter, The New York Times
229 West 43d St. NY, NY 10036
Tel: 212-556-7326, Fax: 509-357-0965 (via www.efax.com, received as email)