Sunday, March 25, 2012


date: Mon, 10 Jul 2000 13:34:37 +0200
from: Frank Oldfield <>
subject: the ghost of futures past

Salut mes amis,

I've lost sleep fussing about the figure coupling Mann et al. (or any
alternative climate-history time series) to the IPCC scenarios. It seems to
me to encapsulate the whole past-future philosophical dilemma that bugs me
on and off (Ray - don't stop reading just yet!), to provide potentially the
most powerful peg to hang much of PAGES future on, at least in the eyes of
funding agents, and, by the same token, to offer more hostages to fortune
for the politically motivated and malicious. It also links closely to the
concept of being inside or outside 'the envelope' - which begs all kinds of
notions of definition. Given what I see as its prime importance, I
therefore feel the need to understand the whole thing better. I don't know
how to help move things forward and my ideas, if they have any effect at
all, will probably do the reverse. At least I might get more sleep having
unloaded them, so here goes......

The questions in my mind centre round the following issues. If I've got any
one of them wrong, what follows in each section can be disregarded or (more
kindly) set straight for my benefit.

1. How can we justify bridging proxy-based reconstruction via the last bit
of instrumental time series to future model-based scenarios?

2. How can the incompatibilities and logical inconsistencies inherent in
the past-future comparisons be reduced?

3. More specifically, what forms of translation between what we know about
the past and the scenarios developed for the future deal adequately with
uncertainty and variability on either side of the 'contemporary hinge' in a
way that improves comparability across the hinge?

4. Which, if any, scenarios place our future in or out of 'the envelope'
in terms of experienced climate as distinct from calculated forcing? This
idea of an envelope is an engaging concept, easy to state in a quick and
sexy way (therefore both attractive and dangerous); the future could leave
us hoisted by our own petard unless it is given a lot more thought.

1. I am more or less assuming that this can already be addressed from data
available and calculations completed, by pointing to robust calibration
over the chosen time interval and perhaps looking separately at variability
pre-1970, if the last three decades really do seem to have distorted the
response signatures for whatever reasons. I imagine developing this line of
argument could feed into the 'detection' theme in significant ways.

2 & 3. This is where life gets complicated. For the past we have biases,
error bars that combine sources of uncertainty, and temporal variability.
For the future we have no variability, simply a smooth, mean, monotonic
trend to a target 'equilibrium' date. Bandwidths of uncertainty reflect
model construction and behaviour. So we are comparing apples and oranges
when we make any statement about the significance of the past record for
the future on the basis of the graph. Are there ways of partially
overcoming this by developing different interactions between past data and
future models?

My own thinking runs as follows: Take variability. Do we need to wait for
models to capture this before building it into future scenarios? This seems
unnecessary to me, especially since past variability will be the validation
target for the models. Is there really no way of building past variability
into the future projections? One approach would be to first smooth the
past record on the same time-span as the future scenarios. This would get
us to first base in terms of comparability, but a very dull and pretty
useless first base in and of itself. It would, however, allow all kinds of
calculations of inter-annual variability relative to a mean time line of
the 'right' length. This in turn could be used in several ways, for
example:
- build the total range of past variability into the uncertainty bands
of each future scenario;
- take the 30-, 50- or 100-year period (depending on the scenario for
comparison) during which there was the greatest net variability, or the
greatest net fall in temperature, or the greatest net rise in
temperature, and superimpose/add this data-based variability on the
scenario mean;
- take the n greatest positive anomalies relative to the trend and use
them to define an upper limit of natural variability to compare with
the (to my mind) more realistic future scenarios.
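For concreteness, the three options above can be sketched in a few lines of code. This is a toy illustration only: the "past" record here is an invented stand-in series (a real analysis would use a proxy-based reconstruction such as the Mann et al. series), the 50-year running mean, the 2 deg C/100-year scenario, and n = 10 are all arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a proxy-based annual temperature record
# (anomalies in deg C) -- NOT real data.
years = np.arange(1000, 2000)
past = 0.1 * np.sin(2 * np.pi * years / 60) + rng.normal(0, 0.15, years.size)

# Smooth the past record (here a 50-year running mean) to get a
# comparable 'mean time line' of the right length.
window = 50
trend = np.convolve(past, np.ones(window) / window, mode="same")

# Inter-annual variability relative to that mean line.
residuals = past - trend

# A smooth, monotonic future scenario: 2 deg C warming over 100 years.
future_years = np.arange(2000, 2100)
scenario = np.linspace(0.0, 2.0, future_years.size)

# Option (a): widen the scenario's uncertainty band by the full range
# of past variability.
band_lo = scenario + residuals.min()
band_hi = scenario + residuals.max()

# Option (b): superimpose the 100-year slice of observed variability
# with the greatest net variability on the scenario mean.
span = future_years.size
stds = [residuals[i:i + span].std() for i in range(residuals.size - span)]
i_max = int(np.argmax(stds))
scenario_with_var = scenario + residuals[i_max:i_max + span]

# Option (c): the n greatest positive anomalies define an upper limit
# of natural variability around the scenario trend.
n = 10
upper_limit = np.sort(residuals)[-n:].mean()
```

The same recipe would apply with a different smoothing span per scenario, or with the slice showing the greatest net fall or rise in temperature instead of the greatest variability.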

These, and cleverer variants I cannot begin to think up, seem to me to hold
out the possibility of linking future projections of GHG forcing with what
we know about natural variability in reasonably realistic ways, and perhaps
even of redefining the 'past data-future scenario' relationship in ways
that benefit both the paleo-community and the quality of future projections.

4. I also think the above kinds of exercise might eventually lead us
towards a better definition of 'the envelope' and more confidence in
deciding what is outside and what is not. The same sort of approach can be
taken towards projections of P/E, I imagine, and, more particularly, at the
regional rather than the global or hemispheric level.

Sorry if all this sounds stupid or obvious. I got afflicted with the 'need
to share' bug.


Frank Oldfield

Executive Director
Barenplatz 2
CH-3011 Bern, Switzerland


Phone: +41 31 312 3133; Fax: +41 31 312 3168
