from: Mike Hulme <m.hulmeatXYZxyz.ac.uk>
subject: Fwd: Desai paper to Nature
Geoff is obviously rather pissed off with us, and maybe James also.
I haven't replied to him, but will see him on Monday.
Let me know what your reactions are - I'm here till 6.30 tonight.
Date: Fri, 08 Oct 2004 10:07:30 +0100
From: "Jenkins, Geoff" <geoff.jenkinsatXYZxyzoffice.com>
Subject: Desai paper to Nature
To: Mike Hulme <m.hulmeatXYZxyz.ac.uk>
Cc: "Murphy, James" <james.murphyatXYZxyzoffice.com>
Thread-Topic: Desai paper to Nature
I was hoping to see you at the Defra meeting last Friday to talk about this informally.
James tells me that Suraje did not feel able to wait a few more days until James
returned from the IPCC meeting on Monday for comments, so has already submitted it to
Nature, which we think is a pity.
Suraje's note seemed to us a bit of a rag-bag of comments, many not actually to do with
James et al's paper at all. I have some sympathies with criticising the Kerr science
commentary, as I thought it gave too strong an impression that climate sensitivities
To address some of Suraje's points:
The "expert selections of parameter ranges" were not "closely related to the model's
default parameter values". They extended (as described in the supplementary info) over a
wide, but plausible, range. In many cases, the value in the standard model version was
located at an extreme of the quoted range, not at the centre.
Furthermore, Suraje suggests that weighting ensemble members according to their match
with observations penalises untuned model versions while favouring those that fit the
original tuning. This is a basic misinterpretation of the methodology and results. While it
may be possible to tune the model manually to achieve a simple target such as balance in
the global radiation budget, it is not possible to tune it to optimise skill as measured
by the CPI, a complex multi-variable index based on regional patterns. Therefore it is
wrong to assume that the CPI value of the standard (tuned) model version will be
substantially better than that of the perturbed versions. In fact, a significant number
of the perturbed model versions actually score better than the standard version in terms
of the CPI. Note also that the most likely value of climate sensitivity in the weighted
pdf does not correspond to the sensitivity of the standard version, further proof that
the effect of the detuning is not being negated by the weighting process.
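To make the weighting point concrete, here is a minimal numerical sketch, not the actual Murphy et al. procedure: all values, the ensemble size, and the Gaussian-in-skill weighting function are invented for illustration. The point it shows is that each member's weight depends only on its skill score, so the standard (tuned) version gains no automatic advantage and the mode of the weighted pdf need not sit at its sensitivity.

```python
import numpy as np

# Hypothetical sketch (NOT the actual method): each ensemble member has a
# climate sensitivity and a CPI-like skill index (lower = better fit to
# observations). Weights depend only on the skill index, so a "tuned"
# standard version is treated like any other member.
rng = np.random.default_rng(0)
sensitivity = rng.uniform(1.5, 6.0, size=53)   # illustrative sensitivities (K)
cpi_like = rng.uniform(0.8, 1.6, size=53)      # illustrative skill indices

# Invented weighting: Gaussian in the skill ratio, then normalised.
weights = np.exp(-0.5 * (cpi_like / cpi_like.min()) ** 2)
weights /= weights.sum()

# The weighted histogram approximates the pdf of sensitivity; its mode
# is set by the skilful members, wherever their sensitivities happen to lie.
hist, edges = np.histogram(sensitivity, bins=10, range=(1.5, 6.0), weights=weights)
mode_bin_left_edge = edges[np.argmax(hist)]
```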
The point about structural uncertainty was made in the paper - as Suraje admits. So why
bring it up as a criticism of the paper? Work on other uncertainty types (structural,
other Earth System modules) is in the Defra project plans and in some cases already
underway. We are also comparing our results against existing "multi-model" ensembles to
check the extent to which our perturbed parameter approach captures the full range of
The linear approach to sampling the effects of multiple parameter combinations is of
course a caveat, but the paper does state this and also makes an attempt to account for
it by checking the errors made by the linear approach in predicting the results of 13
actual runs with multiple perturbations. The error is quantified and accounted for when
producing the pdfs shown in the paper. So, whilst not perfect, we think criticising the
experimental design as not being rational is unfair. We have now completed a new
128-member ensemble based on multiple perturbations which will be used to update our
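The linearity check described above can be sketched numerically. This is a hypothetical illustration with synthetic numbers, not the actual model runs: it predicts each multiply-perturbed run as the sum of single-parameter effects, compares against a "true" response (here, the linear prediction plus invented nonlinear noise), and quantifies the RMS error of the linear approach, which could then be folded into the pdfs as an extra uncertainty term.

```python
import numpy as np

# Hypothetical illustration of the linearity check: 13 runs, each perturbing
# some subset of 6 parameters. All effect sizes and noise levels are invented.
rng = np.random.default_rng(1)
n_params, n_runs = 6, 13
single_effects = rng.normal(0.0, 0.5, size=n_params)   # per-parameter effect (K)
combos = rng.integers(0, 2, size=(n_runs, n_params))   # which parameters each run perturbs

# Linear prediction: sum the individual single-parameter effects.
linear_pred = combos @ single_effects

# "True" responses: linear part plus a synthetic nonlinear contribution.
true_resp = linear_pred + rng.normal(0.0, 0.1, size=n_runs)

# RMS error of the linear approach across the multi-perturbation runs.
rms_error = float(np.sqrt(np.mean((true_resp - linear_pred) ** 2)))
```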
The supplementary info contains details of the parameter settings for all to see and
criticise. Experts capable of assessing the values chosen have a traceable account of
our methodology and assumptions which gives them a basis for disagreeing if they wish.
We would argue that this is a major step forward from typical GCM modelling studies,
where readers are asked to assume that one particular model version is plausible with
(typically) little or no account of how the particular combination of chosen parameter
settings was arrived at. So we think it is most unfair to criticise us on the grounds of
Suraje believes there is a conflict between "not making a priori assumptions" and
"identifying key controlling parameters". What James et al did was to avoid making a
priori judgements about which parameters (e.g. those associated with cloud) might have
the largest impact on the climate change response, and instead to spread the parameter
choices throughout all aspects of the model physics. Within each area of physics they did (to
make the project tractable) choose parameters thought likely to have the largest effects
on the basic physical characteristics of the model's simulation, but without pre-judging
which of those characteristics might play the largest role in driving climate change
feedbacks. This seems a fair way of doing things, and we see no contradiction of the
sort Suraje suggests.
In the last para, Suraje criticises the commentators of James et al's paper (albeit
unfairly in the case of the Stocker commentary), so why not make the paper a critique of
Re the last sentence: (a) James et al said clearly in the paper that we need to move on
to structural uncertainties, (b) we believe the experimental design was rational, and
(c) the results of the elicitation procedure were made clear in the supplementary info.
If Suraje's paper is accepted, we will, of course, make these points in our response.
PS: Surprising to see you getting into bed with Kandlikar, one of Illarionov's Moscow
Dr Geoff Jenkins
Head, Climate Prediction Programme
FitzRoy Road, EXETER, EX1 3PB, UK
tel: +44 (0) 1392 88 6653
mobile: 0787 966 1136