3804.txt

cc: John.Lanzante@noaa.gov, "Thomas.R.Karl" <Thomas.R.KarlatXYZxyza.gov>, carl mears <mearsatXYZxyzss.com>, "David C. Bader" <bader2atXYZxyzl.gov>, "'Dian J. Seidel'" <dian.seidelatXYZxyza.gov>, "'Francis W. Zwiers'" <francis.zwiersatXYZxyzgc.ca>, Frank Wentz <frank.wentzatXYZxyzss.com>, Karl Taylor <taylor13atXYZxyzl.gov>, Melissa Free <Melissa.FreeatXYZxyza.gov>, "Michael C. MacCracken" <mmaccracatXYZxyzcast.net>, "'Philip D. Jones'" <p.jonesatXYZxyz.ac.uk>, Sherwood Steven <steven.sherwoodatXYZxyze.edu>, Steve Klein <klein21atXYZxyzl.gov>, 'Susan Solomon' <susan.solomonatXYZxyza.gov>, "Thorne, Peter" <peter.thorneatXYZxyzoffice.gov.uk>, Tim Osborn <t.osbornatXYZxyz.ac.uk>, Tom Wigley <wigleyatXYZxyz.ucar.edu>
date: Fri, 28 Dec 2007 16:14:10 -0800
from: Ben Santer <santer1atXYZxyzl.gov>
subject: Re: [Fwd: sorry to take your time up, but really do need a scrub
to: Leopold Haimberger <leopold.haimbergeratXYZxyzvie.ac.at>

<x-flowed>
Dear Leo,

The Figure that you sent is extremely informative, and would be great to
include in a response to Douglass et al. The Figure clearly illustrates
that the "structural uncertainties" inherent in radiosonde-based
estimates of tropospheric temperature change are much larger than
Douglass et al. have claimed. This is an important point to make.

Would it be possible to produce a version of this Figure showing results
for the period 1979 to 1999 (the period that I've used for testing the
significance of model-versus-observed trend differences) instead of 1979
to 2004?

With best regards, and frohes Neues Jahr (Happy New Year)!

Ben
Leopold Haimberger wrote:
> Dear all,
>
> I have attached a plot which summarizes the recent developments
> concerning tropical radiosonde temperature datasets and which could be
> a candidate to be included in a reply to Douglass et al.
> It contains trend profiles from unadjusted radiosondes, HadAT2-adjusted
> radiosondes, RAOBCORE (versions 1.2-1.4) adjusted radiosondes
> and from radiosondes adjusted with a neighbor composite method (RICH)
> that uses the break dates detected with RAOBCORE (v1.4) as metadata.
> RAOBCORE v1.2,v1.3 are documented in Haimberger (2007), RAOBCORE v1.4
> and RICH are discussed in the manuscript I mentioned in my previous email.
> Latitude range is 20S-20N; only time series with less than 24 months of
> missing data are included. Spatial sampling is the same for all curves
> except HadAT, which contains fewer stations that meet the 24-month
> criterion. Sampling uncertainty of the trend curves is approximately
> +/-0.1 K/decade (95% percentile interval, estimated with a bootstrap method).
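The ±0.1 K/decade figure quoted above can be reproduced in outline with a percentile bootstrap over stations. A minimal sketch, with synthetic station trends standing in for the real radiosonde data (station count, mean, and spread are all illustrative assumptions, not Leo's values):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic per-station trends (K/decade); purely illustrative.
station_trends = rng.normal(0.15, 0.25, size=60)

# Resample stations with replacement, recompute the network-mean trend,
# and take the 2.5th and 97.5th percentiles of the resampled means.
boots = np.array([
    rng.choice(station_trends, size=station_trends.size, replace=True).mean()
    for _ in range(5000)
])
lo, hi = np.percentile(boots, [2.5, 97.5])
print(round(lo, 3), round(hi, 3))
```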
>
> RAOBCORE v1.3, v1.4 and RICH are results from ongoing research, and warming
> trends from radiosondes may still be underestimated.
> The upper-tropospheric warming maxima from RICH are even larger (up to
> 0.35 K/decade, not shown) if only radiosondes within the tropics
> (20N-20S) are allowed as references for the adjustment of tropical
> radiosonde temperatures. The pink/blue curves in the attached plot should
> therefore not be regarded as an upper bound on what may be achieved with
> plausible choices of reference series for homogenization.
> Please let me know your comments.
>
> I wish you a merry Christmas.
>
> With best regards
>
> Leo
>
> John Lanzante wrote:
>> Ben,
>>
>> Perhaps a resampling test would be appropriate. The tests you have
>> performed consist of pairing an observed time series (UAH or RSS MSU)
>> with each one of 49 GCM time series from your "ensemble of opportunity".
>> Significance of the difference between each pair of obs/GCM trends yields
>> a certain number of "hits".
>>
>> To determine a baseline for judging how likely it would be to obtain the
>> given number of hits one could perform a set of resampling trials by
>> treating one of the ensemble members as a surrogate observation. For each
>> trial, select at random one of the 49 GCM members to be the
>> "observation".
>> From the remaining 48 members draw a bootstrap sample of 49, and perform
>> 49 tests, yielding a certain number of "hits". Repeat this many times to
>> generate a distribution of "hits".
>>
>> The actual number of hits, based on the real observations, could then be
>> referenced to the Monte Carlo distribution to yield the probability that
>> it could have occurred by chance. The basic idea is to see if the observed
>> trend is inconsistent with the GCM ensemble of trends.
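The resampling test described above can be sketched as follows. All trends and standard errors here are synthetic placeholders, not the real MSU or GCM values, and the two-sided normal test for a trend difference is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def count_hits(obs_trend, model_trends, obs_se, model_se, z_crit=1.96):
    """Count pairings whose trend difference is significant: a 'hit' is a
    pair whose difference exceeds z_crit combined standard errors
    (two-sided 5% test, normal approximation)."""
    z = np.abs(obs_trend - model_trends) / np.sqrt(obs_se**2 + model_se**2)
    return int(np.sum(z > z_crit))

# Synthetic stand-ins: 49 ensemble trends (K/decade) and standard errors.
model_trends = rng.normal(0.20, 0.05, size=49)
model_se = np.full(49, 0.05)
obs_trend, obs_se = 0.12, 0.05

actual_hits = count_hits(obs_trend, model_trends, obs_se, model_se)

# Monte Carlo null: treat one ensemble member as a surrogate observation,
# bootstrap 49 'models' from the remaining 48, count hits, and repeat.
n_trials = 2000
null_hits = np.empty(n_trials, dtype=int)
for t in range(n_trials):
    i = rng.integers(49)                      # surrogate observation
    rest = np.delete(model_trends, i)
    boot = rng.choice(rest, size=49, replace=True)
    null_hits[t] = count_hits(model_trends[i], boot, model_se[0], model_se)

# p-value: chance of seeing at least this many hits under the null.
p_value = np.mean(null_hits >= actual_hits)
print(actual_hits, p_value)
```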
>>
>> There are a couple of additional tweaks that could be applied to your
>> method.
>> You are currently computing trends for each of the two time series in the
>> pair and assessing the significance of their difference. Why not first
>> create a difference time series and assess the significance of its trend?
>> The advantage is that you would somewhat reduce the autocorrelation in
>> the time series, and hence the impact of the "degrees of freedom"
>> adjustment. Since the GCM runs are coupled model runs, differencing would
>> help remove the common externally forced variability, but not the
>> internally generated variability, so the adjustment would still be
>> needed.
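A minimal sketch of this difference-series variant, assuming a simple least-squares trend and the standard lag-1 autocorrelation adjustment to the degrees of freedom; the monthly series themselves are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 252                                       # months, 1979-1999

# Synthetic 'observed' and 'model' monthly anomalies (K); illustrative.
t = np.arange(n)
obs = 0.010 * t / 120 + rng.normal(0, 0.1, n)
model = 0.020 * t / 120 + rng.normal(0, 0.1, n)

diff = obs - model                            # common forced signal cancels

# Least-squares trend of the difference series.
slope, intercept = np.polyfit(t, diff, 1)
resid = diff - (slope * t + intercept)

# Lag-1 autocorrelation of the residuals and the effective sample size.
r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
n_eff = n * (1 - r1) / (1 + r1)

# Standard error of the slope with the reduced degrees of freedom.
se = np.sqrt(np.sum(resid**2) / (n_eff - 2) / np.sum((t - t.mean())**2))
t_stat = slope / se
print(round(t_stat, 2))
```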
>>
>> Another tweak would be to alter the significance level used to assess
>> differences in trends. Currently you are using the 5% level, which yields
>> only a small number of hits. If you made this less stringent you would
>> potentially get more, albeit weaker, hits. But it would all come out in
>> the wash, so to speak, since the number of hits in the Monte Carlo
>> simulations would increase as well. I suspect that increasing the number
>> of expected hits would make the whole procedure more powerful/efficient
>> in a statistical sense, since you would no longer be dealing with a
>> "rare event". In the current scheme, using a 5% level with 49 pairings,
>> you have an expected hit rate of 0.05 x 49 = 2.45. If instead you used a
>> 20% significance level, you would have an expected hit rate of
>> 0.20 x 49 = 9.8.
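The expected-hit arithmetic above, together with the spread one would see if the 49 tests were independent — an idealization, since ensemble members are correlated, which is why the Monte Carlo null is needed:

```python
from math import comb

def expected_hits(alpha, n_tests=49):
    """Expected number of hits if each test fires with probability alpha."""
    return alpha * n_tests

def binom_pmf(k, n, p):
    """Binomial probability of exactly k hits in n independent tests."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

print(expected_hits(0.05))   # 0.05 x 49 = 2.45
print(expected_hits(0.20))   # 0.20 x 49 = 9.8

# Chance of 3 or more hits by chance at the 5% level, if independent.
p_ge_3 = 1 - sum(binom_pmf(k, 49, 0.05) for k in range(3))
print(round(p_ge_3, 3))
```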
>>
>> I hope this helps.
>>
>> On an unrelated matter, I'm wondering a bit about the different versions
>> of Leo's new radiosonde dataset (RAOBCORE). I was surprised to see that
>> the latest version has considerably more tropospheric warming than I
>> recalled from an earlier version that was written up in JCLI in 2007. I
>> have a couple of questions that I'd like to ask Leo. One concern: if we
>> use the latest version of RAOBCORE, is there a paper that we can
>> reference? If it is not in a peer-reviewed journal, is there a paper in
>> submission? The other question: could you briefly comment on the
>> differences in methodology used to generate the latest version of
>> RAOBCORE as compared to the version used in JCLI 2007, and what changes
>> were made (and when and where do they act) to yield the stronger warming
>> trend?
>>
>> Best regards,
>>
>> ______John
>>
>>
>>
>> On Saturday 15 December 2007 12:21 pm, Thomas.R.Karl wrote:
>>
>>> Thanks Ben,
>>>
>>> You have the makings of a nice article.
>>>
>>> I note that we would expect about 10 cases that are significantly
>>> different by chance (based on the 196 tests at the .05 sig level).
>>> You found 3. With appropriately corrected data from Leopold, I suspect
>>> you will find that there are indeed statistically significant similar
>>> trends, including amplification. Setting up the statistical testing
>>> should be interesting with this many combinations.
>>>
>>> Regards, Tom
>>>
>>
>>
>


--
----------------------------------------------------------------------------
Benjamin D. Santer
Program for Climate Model Diagnosis and Intercomparison
Lawrence Livermore National Laboratory
P.O. Box 808, Mail Stop L-103
Livermore, CA 94550, U.S.A.
Tel: (925) 422-2486
FAX: (925) 422-7675
email: santer1atXYZxyzl.gov
----------------------------------------------------------------------------
</x-flowed>
