cc: "Malcolm Hughes" <mhughes@ltrr.arizona.edu>, "Keith Briffa" <k.briffa@uea.ac.uk>
date: Mon, 31 Mar 2008 18:00:46 -0400
from: "Wahl, Eugene R" <wahle@alfred.edu>
subject: RE:
to: "Caspar Ammann" <ammann@ucar.edu>, <mann@psu.edu>

   Hello all:


   A clarification: by "truth" in the second paragraph below I don't mean to imply that
   critiques of reconstruction methods based simply on examinations of ensemble distributions
   should stand, per se.  I only meant to say that recognizing the universe of possible
   reconstructions is a worthwhile addition to our knowledge.  Conceivably, doing this could
   help us refine our validation schemes.


   Caspar and I have taken one look into this set of issues in the companion piece to the
   Wahl-Ammann paper in Climatic Change last fall (the Ammann-Wahl article there), to deal
   with critiques of validation methodology raised by MM.  We revisited their ensemble
   approach (reconstructions driven only by the full-AR persistence structure in the proxies),
   but restricted its output with the kinds of calibration and verification criteria we use in
   actual practice (which MM did not do).  The idea was to do exactly the kind of geophysical
   contextualization that Caspar mentions -- thereby incorporating the ensemble method, but
   also embedding the ensemble output into the real-world decision-making structure we use
   with all our reconstructions.  Interestingly, the results are quite similar to verification
   significance results based on small-lag AR structures in the target series itself, which is
   the general way this issue is approached in climatology.
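   [The screen-the-ensemble idea described above can be sketched in a few lines. This is a
   hypothetical toy, not the actual Wahl-Ammann code: the pseudo-proxies, the AR(1) noise
   parameters, and the RE threshold are all invented for illustration. Ensemble members are
   generated from proxies with persistent noise, then kept only if they pass a Reduction of
   Error (RE > 0) verification test on a withheld period.]

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative target "temperature" series, split into calibration and
# verification windows (values and windows are invented for this sketch).
n = 150
target = np.sin(np.linspace(0, 6 * np.pi, n)) + 0.3 * rng.standard_normal(n)
cal = slice(100, 150)   # calibrate on the late period
ver = slice(50, 100)    # verify on a withheld earlier period

def ar1_proxy(phi=0.7, snr=0.5):
    """One pseudo-proxy: target signal plus AR(1) persistent noise."""
    noise = np.zeros(n)
    eps = rng.standard_normal(n)
    for t in range(1, n):
        noise[t] = phi * noise[t - 1] + eps[t]
    return snr * target + noise

def reconstruct(proxy):
    """Scale the proxy to the target over the calibration window only."""
    a, b = np.polyfit(proxy[cal], target[cal], 1)
    return a * proxy + b

def re_score(recon):
    """Reduction of Error vs. the calibration-mean benchmark (RE > 0 passes)."""
    benchmark = target[cal].mean()
    sse = np.sum((target[ver] - recon[ver]) ** 2)
    sse0 = np.sum((target[ver] - benchmark) ** 2)
    return 1.0 - sse / sse0

# Generate the full ensemble, then truncate it with the verification screen.
ensemble = [reconstruct(ar1_proxy()) for _ in range(500)]
kept = [r for r in ensemble if re_score(r) > 0.0]
print(f"kept {len(kept)} of {len(ensemble)} ensemble members")
```

   [The point of the sketch is the last two lines: the statisticians' move is to describe the
   whole `ensemble`; the geophysical move is to work only with `kept`.]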


   Peace, Gene


   From: Wahl, Eugene R
   Sent: Monday, March 31, 2008 11:02 AM
   To: 'Caspar Ammann'; mann@psu.edu
   Cc: Malcolm Hughes; Keith Briffa
   Subject: RE: http://www.cosis.net/abstracts/EGU2007/03128/EGU2007-J-03128.pdf?PHPSESSID=e


   Hi all:


   I think Caspar's on the money here.


   The statisticians have a point in that we are really sampling from noisy proxies that
   themselves are sampling from one of many possible realizations of climate for a given set
   of forcings (thinking of model ensembles, e.g., all with slightly perturbed initial
   conditions).  However, we in the paleoclimate part of geophysics (and other disciplines
   that use similar or identical methods, such as econometrics) have clearly recognized the
   need to separate "wheat from chaff" in forecasting/hindcasting models, and thus the
   calibration and verification exercises we do.


   So, it seems to me (at least on a first pass) that there is "truth" in both perspectives on
   the problem.  It would be interesting to explore what our validity screening procedures are
   in fact doing from a purely theoretical, mathematical standpoint... what is the effect of
   the truncation of possibilities that our validation procedures entail on the underlying
   geometries we are examining?  That could be one way to bridge the difference between the
   statistical and geophysical perspectives Caspar identifies.  [Let me know if you think I've
   got something incorrect in this question.]


   Happy spring to all.  Even in Alfred winter is finally breaking!


   Peace, Gene


   From: Caspar Ammann [mailto:ammann@ucar.edu]
   Sent: Sunday, March 30, 2008 10:19 PM
   To: mann@psu.edu
   Cc: Malcolm Hughes; Keith Briffa; Wahl, Eugene R
   Subject: Re: http://www.cosis.net/abstracts/EGU2007/03128/EGU2007-J-03128.pdf?PHPSESSID=e


   Malcolm and Mike,


   I wouldn't read too much into this. I believe that all we are looking at is the difference
   between a statistician's approach and ours in geophysics. The statisticians like to
   simulate many ensembles; I had the same discussions with our guys at NCAR. Their tendency
   is to include all possible reconstructions and then describe the distributions. Our
   approach has been to throw away reconstructions that don't make sense or that don't pass
   verification. So it's more philosophical than anything else.


   Though Mike might be right in the sense that these choices can lead some of the approaches
   astray. We saw this with regard to the selection of uncertainties, i.e., what actually
   counts as independent uncertainty. There, a good, strong reality check is necessary.


   So we shall see in Vienna ...

   Caspar





   On Mar 30, 2008, at 8:32 PM, Michael Mann wrote:


   Malcolm, in short, this looks like nonsense.  There is nothing magic about 'Bayesian'
   methods.  Many of the methods we use can easily be recast as Bayesian approaches; the
   critical question comes down to what the "prior" is.  For example, in RegEM, the prior is
   the first 'guess' in the iterative expectation-maximization algorithm. Of course, if the
   final result is sensitive to that choice, one becomes a bit worried; that is indeed the
   pitfall of many a Bayesian approach.
   mike
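   [Mann's point that the RegEM "prior" is just the starting guess of the EM iteration can be
   illustrated with a toy imputation loop. This is a hypothetical sketch, not RegEM itself:
   the data, the missingness rate, and the plain least-squares fill are all invented. Running
   the same iteration from two different initial fills and comparing the converged results is
   one direct way to check the sensitivity he describes.]

```python
import numpy as np

rng = np.random.default_rng(1)

# Correlated 3-column data with ~20% of values knocked out at random.
n = 200
z = rng.standard_normal(n)
X = np.column_stack([z + 0.2 * rng.standard_normal(n) for _ in range(3)])
mask = rng.random(X.shape) < 0.2          # True where a value is missing
X_obs = np.where(mask, np.nan, X)

def em_impute(X_obs, mask, init, n_iter=50):
    """Iterative regression imputation; `init` plays the role of the prior."""
    Xf = np.where(mask, init, X_obs)      # start from the initial guess
    for _ in range(n_iter):
        for j in range(Xf.shape[1]):
            miss = mask[:, j]
            if not miss.any():
                continue
            others = np.delete(Xf, j, axis=1)
            # Fit column j on the other columns, using currently filled data.
            A = np.column_stack([np.ones(len(Xf)), others])
            coef, *_ = np.linalg.lstsq(A[~miss], Xf[~miss, j], rcond=None)
            Xf[miss, j] = A[miss] @ coef  # update the imputed entries
    return Xf

col_means = np.nanmean(X_obs, axis=0)
fill_mean = em_impute(X_obs, mask, init=np.broadcast_to(col_means, X_obs.shape))
fill_zero = em_impute(X_obs, mask, init=0.0)

# A small gap between the two converged fills is reassuring; a large one
# would be exactly the worry about prior sensitivity raised above.
gap = np.max(np.abs(fill_mean[mask] - fill_zero[mask]))
print(f"max difference between the two converged fills: {gap:.2e}")
```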
   Caspar Ammann wrote:

   Malcolm,

   are you referring to this?

   http://www.cosis.net/abstracts/EGU2007/03128/EGU2007-J-03128.pdf?PHPSESSID=e

   Caspar



   Caspar M. Ammann

   National Center for Atmospheric Research

   Climate and Global Dynamics Division - Paleoclimatology

   1850 Table Mesa Drive

   Boulder, CO 80307-3000

   email: ammann@ucar.edu    tel: 303-497-1705     fax: 303-497-1348




--
Michael E. Mann
Associate Professor
Director, Earth System Science Center (ESSC)

Department of Meteorology              Phone: (814) 863-4075
503 Walker Building                    FAX:   (814) 865-3663
The Pennsylvania State University      email:  mann@psu.edu
University Park, PA 16802-5013

http://www.met.psu.edu/dept/faculty/mann.htm



