date: Mon, 02 Aug 2004 14:59:28 +0100
from: Jonathan Gregory <j.m.gregory@reading.ac.uk>
subject: Wigley et al on response to volcanic forcing
to: wigley@ucar.edu, ammann@ucar.edu, bsanter@rainbow.llnl.gov, sraper@awi-bremerhaven.de

Dear Tom, Caspar, Ben and Sarah

Here is my review of your paper. As Andrew Weaver knew, I didn't have time to
do it earlier because of holidays and the IPCC climate sensitivity workshop;
I'm sorry that this has delayed the process a bit.

I hope you find this useful and aren't distressed by the number of my comments.
Please let me know if I can clarify anything I said.

Best wishes

Jonathan


Review of "The Effect of Climate Sensitivity on the Response to Volcanic
Forcing" (JCL#5196) by T.M.L. Wigley, C.M. Ammann, B.D. Santer and
S.C.B. Raper.

Large volcanic eruptions are a natural experiment which can potentially
constrain climate sensitivity. The main aim of this paper is to do that. A
subsidiary aim is to point out the uncertainties in the procedure and to
indicate reasons for disagreement with the conclusions of Lindzen and
Giannitsis (1998). Both of these are useful objectives. I would rate the paper
as Good on
all counts and recommend it be accepted with minor revisions. The points
suggested below for the authors' consideration are requests for clarification,
additional caveats or improved relevance. The length of the paper is
reasonable. If shortening is required some repetition of earlier material
could be removed from the Conclusions, of which paras 2-4 are really a
summary.

(1) p2 start of 2nd para. I think this refers to detection and attribution
studies. However these don't estimate climate sensitivity, do they? I thought
the idea was that they produced scaling factors for the climate response which
results from a combination of climate sensitivity, the magnitude of forcing
and ocean heat uptake. You continue by making a point like that, in fact.
Studies which *do* attempt to extract climate sensitivity are similar in
principle to the method of the present study, but consider all forcings
e.g. Forest et al., Andronova and Schlesinger, Gregory et al. (a bit different
in not running a model for the C20 but subject to similar uncertainties). As
you say, these fail to constrain DT2x very well, especially because of
aerosols.

(2) p3 end of 1st para. A fourth issue is that the climate sensitivity may
depend on the nature of the forcing agent, because of its geographical and
seasonal distribution. Hence DT2x evaluated from volcanoes might not be the
appropriate number for GHG-forced future climate change. E.g. "Comparison of
climate response to different radiative forcings in three general circulation
models: towards an improved metric of climate change", Clim Dyn, Joshi et
al. 2003, who consider various kinds of forcing (though not volcanoes,
unfortunately).

(3) Section 2. This discusses the two limits of sinusoidal forcing: at high
frequency the response depends only on heat uptake (capacity) and not at all
on climate sensitivity, while at low frequency it is vice versa. However, the
case of
volcanoes is intermediate. Instead of saying "we can show" (p6 near bottom) it
would be nice to show it, as it's the most relevant case. Perhaps the solution
could be found for the box model with a non-zero forcing for a short time, as
for a volcano. The relevant points are how the peak cooling and the recovery
timescale depend on climate sensitivity, since these are the two quantities
which are later used. That would make the section more relevant to the rest of
the paper, I feel.
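To sketch what such a solution might look like, here is a minimal one-box energy-balance model, C dT/dt = F(t) - lambda*T, driven by a short negative forcing pulse standing in for a volcano; all parameter values below are illustrative, not those of MAGICC or PCM:

```python
# Minimal one-box energy-balance model, C dT/dt = F(t) - lam*T, forced by a
# short negative pulse mimicking a volcano. Parameter values are illustrative
# only (C corresponds to roughly a 50 m ocean mixed layer).

def volcanic_response(lam, C=2.1e8, F0=-3.0, pulse_yr=1.0, years=40.0, dt_yr=0.01):
    """Integrate the box model; lam in W m-2 K-1, C in J m-2 K-1.
    Returns (peak cooling in K, e-folding recovery time in years)."""
    sec_per_yr = 3.156e7
    n = int(years / dt_yr)
    T, traj = 0.0, []
    for i in range(n):
        t = i * dt_yr
        F = F0 if t < pulse_yr else 0.0
        T += (F - lam * T) * dt_yr * sec_per_yr / C
        traj.append(T)
    peak = min(traj)                      # maximum cooling
    i_peak = traj.index(peak)
    # time for the cooling to decay to peak/e after the peak
    i_rec = next(i for i in range(i_peak, n) if traj[i] > peak / 2.718281828)
    return peak, (i_rec - i_peak) * dt_yr

for lam in (0.8, 1.6):  # small lam = high sensitivity, large lam = low
    peak, tau = volcanic_response(lam)
    print(f"lam={lam}: peak cooling {peak:.2f} K, recovery ~{tau:.1f} yr")
```

In this intermediate regime the peak cooling is set largely by the heat capacity, while the recovery timescale scales as C/lambda, i.e. with climate sensitivity; that is why the recovery is the more diagnostic of the two quantities later used in the paper.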

(4) Section 3 (or maybe Section 1). It would be useful to state more clearly
what is the role of the AOGCM in this study i.e. why not compare the MAGICC
with the real world. If I have understood correctly, the main purpose is to
get a better signal/noise ratio than you can get from the real world. But of
course the drawback is that you require the AOGCM to be entirely realistic,
which is very hard to guarantee. I think you could require (a) that VSOGA must
match the observed temperature record within the control variability and (b)
that it must match the observed ocean heat uptake likewise. A simultaneous
match to both temperature change and heat uptake is needed since there could
be compensating errors in climate sensitivity and ocean heat uptake efficiency
(Raper et al., 2001). But (b) is going to be hard to guarantee, I guess, since
no AOGCM apparently has the same variability as Levitus, which is the only
available dataset of global ocean heat uptake. In any case, I think it is
essential to have a discussion of why we should believe the AOGCM's climate
response to the volcanic forcing.

(5) p7 1st para. Ends with "same interval" but you haven't said precisely what
this is. Is it 1890-1999? Perhaps you could say this at the start.

(6) p8 2nd para. The reduction of 65% is consistent with assuming you are
calculating the mean of 4 realisations with control noise, and 12 others with
sqrt(2)*control noise (because they are differences). That gives you an
average rms variability of sqrt(4+12*2)/16=0.33.
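For the record, the arithmetic behind that figure, with the control noise normalised to sigma = 1:

```python
# Averaging 16 series: 4 carry control noise sigma and 12 are differences,
# hence sqrt(2)*sigma. The rms of the mean comes out near 0.33*sigma,
# i.e. a roughly 67% noise reduction, consistent with the quoted 65%.
import math

sigma = 1.0
n_single, n_diff = 4, 12
n_total = n_single + n_diff
# variance of the mean of independent series
var_mean = (n_single * sigma**2 + n_diff * 2 * sigma**2) / n_total**2
rms = math.sqrt(var_mean)
print(f"residual rms = {rms:.2f}*sigma ({(1 - rms) * 100:.0f}% reduction)")
# -> residual rms = 0.33*sigma (67% reduction)
```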

(7) p8 3rd para. How are drifts removed? Presumably some fit must have been
subtracted from the V runs; a smooth fit shouldn't introduce extra noise, but
subtraction of a parallel control would do, of course. In the non-V cases, the
control is being subtracted, which would remove the drift anyway if it's a
parallel segment.

(8) p10 line 6. The IPCC chapter should be cited as Cubasch et al. (2001) with
all the lead authors listed, as requested in the report.

(9) p11 2nd para. The patterns of forcing are a reason why the climate
sensitivity might differ from the response to GHG (referring to the point
about p3).

(10) p11 footnote 3. If the residual is computed from the whole timeseries, it
is measuring the variation about zero signal as well as the deviation from the
AOGCM volcanic response. Quite a lot of the timeseries of Fig 2 has only a
small signal. A more demanding and perhaps appropriate test would be to
compare the residual only for the 10 years after each of the four major
eruptions, for instance. By eye I would have said that the recovery from the
eruption around month 900 is faster in the MAGICC than in the AOGCM, and
slower for the one around month 1200.

(11) p12 start. For comparison, please could you state the climate sensitivity
used for PCM in the TAR.

(12) p11 end and p12 start. I am a bit concerned about the use of a range of
climate sensitivities with the original tuning of all the other MAGICC
parameters unchanged. If PCM had required a different climate sensitivity,
mightn't the tuning against its CMIP run also have required a different ocean
diffusivity, for example? If you don't retune the ocean to match the various
climate sensitivities, I would have thought you'd be producing versions of
MAGICC which must necessarily do a worse job at reproducing PCM than the TAR
version does. Also the effect of the ocean thermal inertia will be modified,
which could affect the results of your power-law fits.

(13) p12 1st para. Only three points seems rather a small number to do a
power-law fit. Wouldn't it be better to run many more of them? Or is it a
perfect straight line? If so, the reader would appreciate seeing the fit in
order to be satisfied by it. If it is less than a perfect straight line, there
must be uncertainties on the fit parameters, which should be propagated to
later steps in the argument.
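To illustrate the concern, a least-squares power-law fit through three points leaves only a single residual degree of freedom, so the exponent can carry an appreciable standard error even when the fit looks reasonable; the (sensitivity, peak-cooling) values below are hypothetical, not the paper's:

```python
# Power-law fit y = c * x**b via least squares in log-log space, with the
# standard error of the exponent from the (single) residual degree of
# freedom. The (sensitivity, peak-cooling) pairs are hypothetical.
import math

def powerlaw_fit(xs, ys):
    """Fit log y = a + b log x; return (b, standard error of b)."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(xs)
    mx, my = sum(lx) / n, sum(ly) / n
    sxx = sum((u - mx) ** 2 for u in lx)
    b = sum((u - mx) * (v - my) for u, v in zip(lx, ly)) / sxx
    a = my - b * mx
    ssr = sum((v - (a + b * u)) ** 2 for u, v in zip(lx, ly))
    s2 = ssr / (n - 2)            # only n - 2 = 1 degree of freedom here
    return b, math.sqrt(s2 / sxx)

# hypothetical peak coolings (K) for three climate sensitivities (K)
b, se = powerlaw_fit([1.5, 2.6, 3.9], [0.40, 0.50, 0.52])
print(f"exponent b = {b:.2f} +/- {se:.2f}")
```

With three points, a relative uncertainty on the exponent of tens of percent is easily possible, and it would then propagate into the inferred DT2x.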

(14) p12 2nd para. When is the time scale (of relaxation) measured from?

(15) p12 2nd para. Same comments about the number of points and the
uncertainty of the fit.

(16) p12 near bottom. What would be the reason for the later sub-exponential
decay?  Perhaps the effective heat capacity of the ocean is increasing.

(17) p13 bottom. Long decay times not consistent with PCM: Can this be
demonstrated, taking uncertainties into account? The uncertainties are (a)
those on the power-law fit and (b) the residual noise in the temperature
timeseries between MAGICC and the AOGCM. Given this conclusion, can you
suggest a reason why LG98's results are inconsistent with these? This would be
worth saying since it's a main result of this paper.

(18) p14 2nd para. In computing the ranges for the implied uncertainties on
DT2x, account should again be taken of (a) and (b) in the previous point, as
well as the uncertainties of Wigley (2000).

(19) Section 6. As this summarises some foregoing material, some of the above
comments apply again.

(20) All figs. Please could you label the x-axis in years.

(21) Fig 1a. What do the y-coordinates of the triangles represent, if
anything? El Chichon was in 1982, I think.

