From: "Michael E. Mann" <mann@virginia.edu>
To: f055 <T.Osborn@uea.ac.uk>, "p.jones" <p.jones@uea.ac.uk>, "raymond s. bradley" <rbradley@geo.umass.edu>, Keith Briffa <k.briffa@uea.ac.uk>, Tim Osborn <t.osborn@uea.ac.uk>
Subject: RE: CLIMLIST
Date: Fri, 31 Oct 2003 05:37:03 -0500
Cc: mhughes <mhughes@ltrr.arizona.edu>

   Thanks very much Tim,
   I was hoping that the revisions would allay the concerns people had.
   I'll look forward to your comments on this latest draft. I agree w/ Malcolm on the need to
   be careful w/ the wording in the first paragraph. The first paragraph is a bit of a relic of
   a much earlier draft, and maybe we need to rethink it a bit. Taking the high road is
   probably very important here. If *others* want to say that their actions represent
   scientific fraud, intellectual dishonesty, etc. (as I think we all suspect they do), let's
   let *them* make these charges for us!
   Let's let our supporters in higher places use our scientific response to push the broader
   case against MM. So I look forward to people's attempts to revise the first paragraph in
   particular.
   I took the liberty of forwarding the previous draft to a handful of our closest colleagues,
   just so they would have a sense of approximately what we'll be releasing later today--i.e.,
   a heads up as to how MM achieved their result...
   I look forward to us finalizing something a bit later--I still think we need to get this out
   ASAP...
   mike
   At 03:01 AM 10/31/2003 +0000, f055 wrote:

     Dear all,
     I've just finished preparing a detailed response offline, only to log on to
     send it to you all and find new versions from Mike plus more comments
     and information.  Well, I don't have time to change my message now, so
     will paste it below this message.  But bear in mind that the new draft may
     well have allayed many of my concerns - in particular, a quick glance
     shows the figure to be much more convincing than the one Mike circulated
      earlier; indeed it seems to be utterly convincing!  I'll reply again on Friday
      morning once I've had time to read the new draft.  In the meantime, here is
      my message as promised.
     ************************************************************
     Dear MBH (cc to CRU),
     The number of emails has been rather overwhelming on this issue and
     I'm struggling to catch up with them!  But I will attempt to catch up with a
     few things here...
     (1) The single worst thing about the whole M&M saga is not that they did
     their study, not that they did things wrong (deliberately or by accident), but
     that neither they nor the journal took the necessary step of investigating
     whether the difference between their results and yours could be explained
     simply by some error or set of errors in their use of the data or in their
     implementation of your method.  If it turns out, as looks likely from Mike's
     investigation of this, that their results are erroneous, then they and the
     journal will have wasted countless person-hours of time and caused
     much damage in the climate policy arena.
     (2) Given that this is the single worst thing about the saga, we must not go
     and do exactly the same in rushing out a response to their paper.  If some
     claims in the response turned out to be wrong, based on assumptions
     about what M&M did or assumptions about how M&M's assumptions
     affect the results, then it would end up with a number of iterations of claim
     and counter claim.  Ultimately the issue might be settled, but by then the
     waters could be so muddied that it didn't matter.
     (3) Not only do I advise against an overly rushed response, but I'm also
     wondering whether it really ought to be only from MBH, for three reasons.
     (i) It is your paper/results that are being attacked.
     (ii) It is difficult to endorse everything that Mike has put in the draft
     response because I don't know 100% of the details of MBH and the MBH
     data.  Sure, I can endorse some things, but others I wouldn't know.   Sure,
     I accept Mike's explanation because he's looked at this stuff for 4 days
     and I believe he'll have got it right - but that's different to an independent
     check.  That must come from Ray or Malcolm if possible.
     (iii) If it does come to any independent assessment of who's right and
     who's wrong, then it would be difficult for us to be involved if we had
     already signed up to what some might claim to be a knee-jerk reaction to
     the M&M paper.  If that happened, then you would want us to be free to get
     involved to make sure the process was fair and informed.
     This sounds like a cop out, but - like I say - I'm not sure about point (3) so
     feel free to try to convince me otherwise if you wish.  Anyway Keith or Phil
     may be happy to sign up to a (quick or slow) response, despite my
     reservations above.
     I really advise a very careful reading of M&M and their supplementary
     website to ensure that everything in the response is clearly correct -
     precisely to avoid point (2).  I've only just started to do this, but already
     have some questions about the response that Mike has drafted.
     (a) Mike, you say that many of the trees were eliminated in the data they
     used.  Have you concluded this because they entered "NA" for "Not
     available" in their appendix table?  If so, then are you sure that "NA"
     means they did not use any data, rather than simply that they didn't
     replace your data with an alternative (and hence in fact continued to use
     what Scott had supplied to them)?  Or perhaps "NA" means they couldn't
     find the PC time series published (of course!), but in fact could find the
     raw tree-ring chronologies and did their own PCA of those?  How would
      they know which raw chronologies to use?  Or did you come to your
      conclusion by downloading their "corrected and updated" data matrix and
      comparing it with yours?  I've not had time to do that, but even if I had and
      found some differences, I wouldn't know which was right, seeing as I've
      not done any PCA of western US trees myself.  My guess would be that
     they downloaded raw tree-ring chronologies (possibly the same ones you
     used) but then applied PCA only to the period when they all had full data -
     hence the lack of PCs in the early period (which you got round by doing
     PCA on the subset that had earlier data).  But this is only a guess, and
     this is the type of thing that should be checked with them - surely they
     would respond if asked? - to avoid my point (2) above.  And if my guess
     were right, then your wording of "eliminated this entire data set" would
      come in for criticism, even though in practice it might as well have been.
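      To make the guess above concrete, the difference between the two PCA
      strategies might look something like the sketch below (entirely made-up
      data and dimensions, purely to illustrate the idea; this is not the actual
      MBH or M&M code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical proxy matrix: 1000 years x 10 chronologies, where only
# 4 chronologies extend back before year 400 (the rest missing early on).
n_years, n_proxies = 1000, 10
data = rng.standard_normal((n_years, n_proxies))
data[:400, 4:] = np.nan  # late-starting chronologies

def pc1(matrix):
    """First principal component of a complete (no-NaN) matrix."""
    centred = matrix - matrix.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return centred @ vt[0]

# Strategy A (the guessed M&M approach): PCA only over the period of
# full overlap -- no PCs exist at all for the early period.
full_overlap = data[400:]
pc1_short = pc1(full_overlap)   # covers years 400-999 only

# Strategy B (stepwise): for the early period, do a separate PCA on the
# subset of chronologies that actually have early data.
early_subset = data[:400, :4]
pc1_early = pc1(early_subset)   # covers years 0-399

assert len(pc1_short) == 600 and len(pc1_early) == 400
```

      Under strategy A the early centuries simply drop out of the network,
      which would produce exactly the "lack of PCs in the early period"
      guessed at above.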
     (b) The mention of ftp sites and excel files is contradicted by their email
     record on their website, which shows no mention of excel files (they say
     an ASCII file was sent) and also no record that they knew the ftp address.
     This doesn't matter really, since the reason for them using a corrupted
     data file is not relevant - the relevant thing is that it was corrupt and had
     you been involved in reviewing the paper then it could have been found
     prior to publication.  But they will use the email record if the ftp sites and
     excel files are mentioned.
      (c) Not sure if you talk about peer-review in the latest version, but note that
     they acknowledge input from reviewers and Fred Singer's email says he
     refereed it - so any statement implying it wasn't reviewed will be met with
     an easy response from them.
     (d) Your quick-look reconstruction excluding many of the tree-ring data,
      and the verification RE you obtain, is interesting - but again, don't rush into
      using these in any response.  The time series of PC1 you sent is certainly
     different from your standard one - but on the other hand I'd hardly say you
     "get a similar result" to them, the time series look very different (see their
     fig 6d).  So the dismal RE applies only to your calculation, not to their
     reconstruction.  It may turn out that their verification RE is also very
     negative, but again we cannot assume this in case we're wrong and they
     easily counter the criticism.
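      For reference, the verification RE (reduction of error) discussed here is
      conventionally computed against the calibration-period mean of the
      observations; a minimal sketch with made-up series (purely to show the
      formula, not anyone's actual calculation):

```python
import numpy as np

def reduction_of_error(obs_verif, rec_verif, calib_mean):
    """RE = 1 - SSE(reconstruction) / SSE(calibration-mean baseline),
    evaluated over the verification period.  RE > 0 means the
    reconstruction outperforms simply predicting the calibration mean;
    a strongly negative RE means it does much worse."""
    sse_rec = np.sum((obs_verif - rec_verif) ** 2)
    sse_clim = np.sum((obs_verif - calib_mean) ** 2)
    return 1.0 - sse_rec / sse_clim

# Made-up example: a reconstruction tracking the observations closely
obs = np.array([0.1, -0.2, 0.3, 0.0, -0.1])
rec = np.array([0.12, -0.15, 0.25, 0.02, -0.08])
print(reduction_of_error(obs, rec, calib_mean=0.0))  # close to 1: skilful
```

      The point made above is simply that an RE computed for one
      reconstruction says nothing about the RE of a different one.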
     (e) Claims of their motives for selective censoring or changing of data, or
     for the study as a whole, may well be true but are hard to prove.  They
      would claim that theirs is an honest attempt at reproducing a key
     scientific result.  If they made errors in what they did, then maybe they're
     just completely out of their depth on this, rather than making deliberate
     errors for the purposes of achieving preferred results.
     (f) The recent tree-ring decline they refer to seems related to
      tree-ring width, not density.  Regardless of width or density, this issue
     cannot simply be dismissed as a solved problem.  Since they don't make
     much of an issue out of it, best just to ignore it.
     (g) [I'm rambling now into an un-ordered list of things, so I'll stop soon!]
     The various other problems relating to temperature data sets, detrended
     standard deviations, PCs of tree-ring subsets etc. sound likely errors -
     though I've got no way of providing the independent check that you asked
     for.  But it is again a bit of a leap of faith to say that these *explain* the
      different results that they get.  Certainly they throw doubt on the validity of
      their results, but without actually doing the same as them it's not possible
     to say if they would have replicated your results if they hadn't made these
      errors.  After all, could the infilling of missing values - something that they
      made a good deal of fuss about - have made much difference to the
      results obtained?
     (h) To say they "used neither the data nor the procedures of MBH98" will
     also be an easy target for them, since they did use the data that was sent
     to them and seemed to have used approximately the method too (with
     some errors that you've identified).  This reproduced your results to some
     extent (certainly not perfectly, but see Fig 6b and 6c).  Then they went
      further to redo it with the "corrected and updated" data - but only after first
      doing approximately what they claimed they did (i.e. the audit).
     These comments relate to random versions of the draft response, so
     apologies if they don't all seem relevant to the current draft.  I don't have
     these in front of me, here at home, so I'm doing this from memory of what
     I've read over the past few days.  But nevertheless, the point is that a quick
     response would ultimately require making a number of assumptions
     about what they did and assumptions about whether this explains the
     differences or not - assumptions that might be later shot down (in part
     only, at most, but still sufficient to muddy the debate for most outsiders).
     A quick response ought to be limited to something like:
     ---------------------------------------------
     The recent paper by McIntyre and McKitrick (2003; hereafter MM03) claims
     to be an "audit" of the analysis of Mann, Bradley and Hughes (1998;
     hereafter MBH98).  MM03 are unable to reproduce the Northern
     Hemisphere temperature reconstruction of MBH98 when attempting to
     use the same proxy data and methods as MBH98, though they obtain
     something similar with clearly anomalous recent warming (their Figure
     6c).  They then make many modifications to the proxy data set and repeat
     their analysis, and obtain a rather different result to MBH98.
     Unfortunately neither M&M nor the journal in which it was published took
     the necessary step of investigating whether the difference between their
     results and MBH98 could be explained simply by some error or set of
     errors in their use of the data or in their implementation of the MBH98
     method.  This should have been an essential step to take in a case such
     as this where the difference in results is so large and important.  Simple
     errors must first be ruled out prior to publication.  Even if the authors had
     not undertaken this by presenting their results to the authors of MBH98,
     the journal should certainly have included them as referees of the
     manuscript.
     A preliminary investigation into the proxy data and implementation of the
     method has already identified a number of likely errors, which may turn
     out to be the cause of the different results.  Rather than repeating M&M's
      failure to follow good scientific practice, we are withholding further
     comments until we can - by collaboration with M&M if possible - be certain
     of exactly what changes to data and method were made by M&M, whether
     these changes can really explain the differences in the results, and
     eventually which (if any) of these changes can be justified as equally valid
     (given the various uncertainties that exist) and which are simply errors that
     invalidate their results.
     -----------------------------------------
      I hope you find this all helpful and, despite my seemingly critical
      approach, take these comments in the spirit in which they are intended -
      which is to obtain a strong and hard-hitting rebuttal of bad science, but a
      rebuttal that cannot be buried by any minor inaccuracies or
      difficult-to-prove claims.
     Best regards
     Tim

   ______________________________________________________________
                       Professor Michael E. Mann
              Department of Environmental Sciences, Clark Hall
                         University of Virginia
                        Charlottesville, VA 22903
   _______________________________________________________________________
   e-mail: mann@virginia.edu   Phone: (434) 924-7770   FAX: (434) 982-2137
             http://www.evsc.virginia.edu/faculty/people/mann.shtml

