Sunday, January 15, 2012

1947.txt

cc: simon.tettatXYZxyzoffice.gov.uk, jonesatXYZxyzs.de, Eduardo.ZoritaatXYZxyzs.de, Keith Briffa <k.briffaatXYZxyz.ac.uk>
date: Tue, 18 Jan 2005 17:35:21 +0000
from: "Brohan, Philip" <philip.brohanatXYZxyzoffice.gov.uk>
subject: Re: SO&P Deliverable 8
to: Tim Osborn <t.osbornatXYZxyz.ac.uk>

Hi Tim.

Many thanks for your comments, they are thorough and useful. My
responses are below.

I have divided your comments into two groups: those which I can resolve
either in this message or by a minor revision of the report, and those
which I need to consider in further work.

So: resolvable comments:

> (1) For consistency, could you use ECHO-G rather than Echo-G
> throughout?

I will do this.

> (2) Why did you use the Columbus run rather than the Erik run?

I need a control run. There is a control run for Columbus but not for Erik.

> (3) What is the "median" slope in Figure 2? Is this different to a
> least-squares fit?

For each year I have divided the instrumental anomaly by the model
anomaly. The line slope is then the median of these values. I chose the
median because I wanted an estimator that had no pretence to being
statistically correct. Producing a statistically sound estimator is what
the rest of the paper is about.
Rather than explain all this in the paper, I will delete the lines (and
all reference to the median) from the figure and revise the associated
paragraph.
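For concreteness, the median slope described above (divide each year's instrumental anomaly by the model anomaly, take the median of the ratios) can be sketched in a few lines of Python. This is purely illustrative — the function name and toy data are mine, not the report's actual code:

```python
import numpy as np

def median_ratio_slope(instrumental, model):
    """Median-of-ratios slope: for each year, divide the instrumental
    anomaly by the model anomaly, then take the median of the ratios.
    The median is robust to the wild ratios produced by years with
    near-zero model anomaly. Illustrative only."""
    inst = np.asarray(instrumental, dtype=float)
    mod = np.asarray(model, dtype=float)
    return np.median(inst / mod)

# Toy example: instrumental anomalies exactly 0.8 x model anomalies.
model = np.array([0.1, -0.2, 0.3, 0.5, -0.4])
inst = 0.8 * model
print(median_ratio_slope(inst, model))  # 0.8
```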

>
> (6) I'm also confused why the urbanisation error is represented by a random
> series, albeit always negative -- isn't it more of a systematic error in
> the trend, though with unknown magnitude? Page 5 implies it is random,
> page 6 says it (plus bucket and exposure errors) are treated as systematic.

The urbanisation error, like the bucket and exposure errors, IS a fixed
series scaled by a random value. I.e. a systematic effect, as you say, a
trend with unknown magnitude. Unlike the bucket and exposure errors, the
trend is constrained to be negative.
I will try to make this clearer in the text.
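As a sketch of that structure — a fixed error shape through time, scaled by a single random value constrained to be negative — the following Python fragment may help. Only the "fixed series times negative random scale" idea comes from the text above; the ramp shape, magnitude, and names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fixed shape of the urbanisation error through time:
# a linear ramp over n_years, normalised to end at 1.
n_years = 100
shape = np.linspace(0.0, 1.0, n_years)

def urbanisation_error_realisation():
    """One realisation of a systematic error: the SAME fixed series
    every time, scaled by one random value per realisation. The scale
    is constrained to be negative, so every realisation is a cooling
    trend of unknown magnitude (magnitude here is made up)."""
    scale = -abs(rng.normal(0.0, 0.05))
    return scale * shape

err = urbanisation_error_realisation()
assert np.all(err <= 0)  # always a negative trend, never positive
```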


> (7) Some of the error distributions may be quite non-Gaussian, I guess, so
> it might be useful to show more than just the quartiles - perhaps the
> quartiles and the 95% range (2.5%-97.5%)?

I will experiment with this. If I can easily make a better figure along
these lines I will change them all.
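With numpy the extended percentiles are a single call, so the experiment is cheap. A sketch with a synthetic skewed ensemble standing in for a non-Gaussian error distribution (not real data):

```python
import numpy as np

rng = np.random.default_rng(1)
# A log-normal ensemble as a stand-in for a skewed error distribution.
ensemble = rng.lognormal(mean=0.0, sigma=0.5, size=10000)

# Quartiles plus the suggested 2.5%-97.5% range, in one call.
q = np.percentile(ensemble, [2.5, 25, 50, 75, 97.5])
print(dict(zip(["2.5%", "25%", "50%", "75%", "97.5%"], q)))
```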


> (8) Is there a strong enough drift in the ECHO-G control run to generate
> the stronger persistence (autocorrelation), or is it genuinely different
> behaviour?

The ECHO-G control has no obvious drift. I think this is genuinely
different behaviour. (Eduardo, Simon: any comment?)


> (11) Note that Erik (and I presume Columbus) simulations do in fact
> consider changes in N2O - hence page 16 is slightly in error. They did not
> include CFCs or ozone etc. I have the N2O series from Julie if you'd like
> it (sorry it's not yet on the SOAP website).

Please send me the forcing and I'll add it in. It won't make any great
difference.

> (14) Section 5.2.6: I think you should say here why the "best
> estimates" of
> effective forcing for HadCM3 and ECHO-G differ. This needs to be clear to
> avoid confusion. As far as I can tell there are two sources of difference:
> first you used a different "best estimate" volcanic forcing for each model
> (see 13), and second the effective forcing has been convolved with each
> model's own autocorrelation structure. Is that it?

You are right about the two sources of difference. I will expand this
section to say this explicitly.

> (15) Page 22, you say about ECHO-G "so it is more likely that the
> model is
> oversensitive". Don't you mean undersensitive, or have I got the sense of
> beta the wrong way round?

You are right, it should be undersensitive.

So thanks for all those. I'll send round a modestly revised version of
the report in a little while.

OK: Now for the difficult points.

> (4) I'm not convinced that using the autocorrelation structure of the
> control run will correctly emulate the response time of the model's global
> temperature to forcing perturbations. Would it give the same result as a
> simple climate model (e.g. MAGICC, Wigley and Raper) that models the
> timescale-dependent effective thermal inertia of the oceans? I wonder if
> you have a stand-alone program to generate random forcing realisations that
> could be used to drive MAGICC in ensemble mode? Might be interesting.
>
This approximation effectively makes two assumptions:
1) Forcing in year x has no direct influence on temperature in year
x+1. It only has an indirect effect by changing the temperature in year
x, and that change of temperature modifies the temperature in the next
year.
2) The model autocorrelation is the same with large temperature changes
as with small ones.
Neither of these assumptions is true (I suspect) but improving on them
is hard. I have tried to estimate the model impulse response directly
from cross-correlation of the forcing and temperature time-series in a
forced run, but the Natural and ALL forcings runs are not ideal for this
purpose and the results were poor.
Your idea of using an EBM or simple model to look at this is very
attractive; Simon has made a similar suggestion. I'd like to talk to you
about this at the meeting in Reading.
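Assumption (1) above amounts to an AR(1)-style emulator: forcing acts on temperature only in its own year, and all persistence comes through the lag-1 autocorrelation. A minimal sketch of that idea — parameter names and values are illustrative, not taken from the report:

```python
import numpy as np

def emulate_response(forcing, lag1_autocorr, sensitivity, rng,
                     noise_sd=0.05):
    """Sketch of assumption (1): temperature in year t depends on
    forcing in year t only; earlier forcing enters solely through the
    persisted temperature (lag-1 autocorrelation). All numbers are
    illustrative."""
    temp = np.zeros(len(forcing))
    for t in range(1, len(forcing)):
        temp[t] = (lag1_autocorr * temp[t - 1]
                   + sensitivity * forcing[t]
                   + rng.normal(0.0, noise_sd))
    return temp

rng = np.random.default_rng(2)
forcing = np.zeros(200)
forcing[50] = 1.0  # a single forcing pulse
temp = emulate_response(forcing, lag1_autocorr=0.7, sensitivity=0.5,
                        rng=rng)
# The pulse decays geometrically (0.7 per year) -- a crude stand-in
# for the model's true, ocean-buffered impulse response.
```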

>
> (5) Would you get the same results if you did everything with decadal
> means? One reason I ask is that I wasn't sure if the Folland et al. errors
> were applicable to individual annual values, or to decadal means (the
> "representativity" error, at least, is *timescale* dependent).

Eduardo also suggested this. But if I have estimated all the
uncertainties correctly, any further averaging of the input series will
make the results worse. We would have to allow for the reduction in
noise in the solution process, and we would lose some of the signal.
This is, I think, a general result in signal processing: if you are
fitting the right model, you get the best results with unsmoothed data.
I am tempted to try this even so, just to see what happens. But it is
technically time-consuming (I would have to modify all the uncertainty
generators to produce decadal series). So I don't plan to do this soon.
I am sure I am correctly using annual representativity errors, but this
error component is negligible in any event.
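The smoothing point can be illustrated with a toy Monte Carlo: fit the same (correct) linear model to annual data and to decadal means, and the annual fit gives the tighter slope estimate. Everything here is synthetic and the numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n_years, beta_true = 100, 0.8
signal = rng.normal(0.0, 1.0, n_years)  # stand-in "model" series

def fit_beta(x, y):
    # Least-squares slope through the origin.
    return np.sum(x * y) / np.sum(x * x)

annual_est, decadal_est = [], []
for _ in range(500):
    obs = beta_true * signal + rng.normal(0.0, 0.5, n_years)
    annual_est.append(fit_beta(signal, obs))
    # Take decadal means of both series before fitting.
    sig_dec = signal.reshape(10, 10).mean(axis=1)
    obs_dec = obs.reshape(10, 10).mean(axis=1)
    decadal_est.append(fit_beta(sig_dec, obs_dec))

# The spread of the annual-data estimates is much smaller.
print(np.std(annual_est), np.std(decadal_est))
```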

>
> (9) For your uncertainty analysis of the solar forcing, you've repeatedly
> taken one of the three solar curves and scaled it to get the randomly
> selected strength of overall change between Late Maunder Minimum and
> present. But this scaling affects all aspects of the curve, including the
> magnitude of the 11-year cycles - which over recent years are known
> relatively precisely from satellite measurements. Could you somehow scale
> the magnitude of the long-term trend without modifying the magnitude of the
> 11-year cycles? Or wouldn't it make much difference to the results?
>
See below.

> (10) Wigley and Raper (2001, Science 293, 451-454) use a log-normal
> distribution for the tropospheric aerosol forcing while you use a
> rectangular distribution. Though the IPCC TAR doesn't specify the likely
> distribution shape, this may be something that results are particularly
> sensitive to, given the importance of this forcing. A log-normal will
> clearly give less likelihood to the weak and strong extremes of aerosol
> forcing, compared with your results, which could probably be argued for
> with statistical and/or physical reasons.
>
See below.

> (12) In Figure 15 the GHG uncertainty is clearly apparent. You've scaled
> the ECHO-G time profile of forcing up to the best estimate from the IPCC
> TAR. But given that the reason why they differ is (once you include N2O --
> see 11) because CFCs are omitted, the "best estimate" will not simply be
> the "run" multiplied by a constant slightly greater than one. The "best
> estimate" will be the same as the "run" up until around 1970 when the CFC
> emissions become large. That is, the time profiles must be reshaped rather
> than just rescaled.
>
Your comments 9, 10 and 12 all come into the general category of 'The
forcing uncertainty estimates could be improved'. There is so much scope
for refining the forcing uncertainties that I am reluctant to start. I
have made many arbitrary decisions in producing these estimates, and I
could easily spend a year or so making better estimates. I doubt,
however, that any reasonable amount of work would produce estimates
(particularly of Solar forcing) that could be widely accepted as
definitive.
I believe that if I spent a year refining the forcing uncertainty
estimates, and then repeated the calculation, I would get essentially
the same answer (beta is very uncertain), because the estimates would
change greatly in detail, but little in absolute magnitude. So I am
reluctant to do much more work on forcing uncertainty. I realise that
this may make it very difficult to publish this analysis.
I am still thinking about what to do about this, and whether and how to
do more work along these lines.

> (16) Conclusions: would the test have more power if you expanded to
> consider (e.g.) NH land, NH ocean, SH land and SH ocean temperatures
> together, rather than just global? I guess that moves closer to the type
> of approach used for optimal detection and attribution - which is closely
> related to what you are doing here and links should be mentioned. In fact,
> possible overlaps with detection/attribution work and also with studies
> (e.g. Jonathan Gregory's) that diagnose climate sensitivity from
> observations may prove difficult in getting this published. We can discuss
> more in Reading.
>
I haven't much to say here except 'yes indeed'. I have not had time to get to
the bottom of optimal D+A, but on first inspection I think merging their optimisation
process with the formal uncertainty approach which I have been trying to develop
would be very difficult.

Thanks again. See you in Reading.

Philip
>
> Dear Philip, Simon, Julie and Eduardo,
>
> I have (at last!) had time to read the report in detail. Thanks very much
> for all the work and writing involved - it looks impressive and certainly
> satisfies the requirements for the deliverable.
>
> I have some comments for you all. Some may be errors which should ideally
> be corrected before I post the report on the website. Others are more
> general comments that needn't be dealt with now, but might be useful (I
> hope) when taking the work further (e.g. papers).
>
<SNIP>

--
Philip Brohan, Climate Scientist
Met Office Hadley Centre for Climate Prediction and Research
Tel: +44 (0)1392 884574 Fax: +44 (0)1392 885681
Email: philip.brohanatXYZxyzoffice.com http://www.metoffice.com
