Commentary on the Paper of Battisti

Dr. Battisti's presentation made a broad sweep over many of the issues that we have talked about this week. He has thus given me the opportunity to touch on a few of these in relation to my own paper.

I tend to approach the question of long-time-scale change in an engineering fashion: I wonder whether we can tell from all the different records that we have whether there really is any anthropogenic forcing, and whether we are seeing any response to it. I recognize that there are many other reasons for studying these time scales, but this is certainly an important one. I find it particularly interesting that as we try to make some kind of statistical test of what we have seen in the last century, it is the frequencies just below the decadal range for which we really need answers.

I should like to talk a little about two Hasselmann-type models, which could also be called default models. The first one is noise forcing on a mixed-layer slab model with geography; the other one is a deep-ocean extension of that, but still very crude: it distorts the spectrum somewhat at low frequencies. One of the things I noticed as we went through the week—Ed Sarachik and Bob Dickson have already mentioned it—is the failure of these models to generate intermediate water very well. That has a very important impact right in the frequency band (100 yr)⁻¹ to (10 yr)⁻¹. Also, one of the odd things about our deep-ocean model is that it cannot get information down to the thermocline area very well. The giant models adjust so quickly onto the ramp curve that I suspect they might be getting information down there too fast.
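The first of these default models can be sketched in a few lines. This is a minimal illustration, not the model as actually configured: a slab ocean of heat capacity C, damped at an effective rate lam and driven by white-noise forcing, which yields the familiar red Hasselmann spectrum. All parameter values below are illustrative assumptions.

```python
import numpy as np

# Hasselmann-type "default" model: a mixed-layer slab forced by white noise,
#   C dT/dt = -lam * T + F(t).
# Parameter values are illustrative assumptions, not those of any real run.

rng = np.random.default_rng(0)

dt = 86400.0 * 30      # time step: ~1 month, in seconds
n = 12 * 2000          # 2000 years of monthly steps
C = 4.2e8              # slab heat capacity, J m^-2 K^-1 (~100 m of water)
lam = 1.5              # effective damping, W m^-2 K^-1
sigma = 10.0           # white-noise forcing amplitude, W m^-2

T = np.zeros(n)
F = sigma * rng.standard_normal(n)
for k in range(1, n):
    # explicit Euler step of C dT/dt = -lam*T + F
    T[k] = T[k - 1] + dt * (-lam * T[k - 1] + F[k - 1]) / C

# Periodogram: white forcing integrated by the slab gives a "red" spectrum,
# with variance piling up at frequencies below lam / (2*pi*C).
spec = np.abs(np.fft.rfft(T)) ** 2
freqs = np.fft.rfftfreq(n, d=dt)

low = spec[1:20].mean()     # lowest resolved frequencies (century scale)
high = spec[-200:].mean()   # highest frequencies (near-monthly)
print(low > 100 * high)     # low-frequency power dominates by far
```

The point of the sketch is only the shape of the spectrum: white noise in, red variance out, with the corner set by the damping time C/lam.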

All this makes me wonder whether the kinds of things that Jim McWilliams brought up earlier might be fairly serious—for instance, when we construct our bottom topography out of stair steps. Going out in Reynolds number on that lovely bifurcation diagram that he showed is equivalent to going to higher and higher resolution. In our present models we have gone past only a couple of bifurcations, and it is hard to tell whether we are seeing fictitious oscillations in our model world that might go away if we were to go just a little bit further. I do not mean to criticize the people who are working on these models; I think they are doing what needs to be done. Consider, for instance, some of the things that Dr. Rooth and Dr. Barnett showed. I cannot help suspecting that if we were to look at the Mikolajewicz/Maier-Reimer model, we might see a peak at 300 years (recall the film loop that we viewed earlier this week), and in the Delworth et al. work we might see a peak in the 50-year range. These would, of course, have a large impact on our ability to pick out a signal or to infer how rare such an excursion might be in, say, the last century. Clearly, we have a lot to do before we can unravel these problems, so I again raise Dr. McWilliams's issue.

I should also like to mention once more something that Dick Lindzen pointed out on the first day, since I think it is important but easy to miss. Up at the top of the ocean there is a "valve" that allows the radiation to go out and be absorbed. It amounts to an effective cooling coefficient, with all the feedbacks and everything that must go into it, and thus controls the spectrum and basically the sensitivity of the atmospheric model. If we have an otherwise good atmospheric model that does not do the clouds right, its sensitivity may be wrong by a factor of 2 or even 3, and that error will actually control the spectrum in the low-frequency range. It will have a rather important influence on fluctuations, even at the decadal level.
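To make the arithmetic of that last point concrete: in a noise-forced energy balance the spectrum goes as S(f) = sigma^2 / (lam^2 + (2*pi*f*C)^2), so the effective cooling coefficient lam (sensitivity scales as 1/lam) enters squared at low frequencies but is negligible at high ones. The short sketch below, with purely illustrative parameter values of my own choosing, shows that halving lam (doubling sensitivity) roughly quadruples the century-scale variance while leaving the monthly tail essentially untouched.

```python
import numpy as np

# Noise-forced energy-balance spectrum: S(f) = sigma^2 / (lam^2 + (2*pi*f*C)^2).
# An error in the effective cooling coefficient lam feeds directly into
# low-frequency variance. Parameter values are illustrative assumptions.

C = 4.2e8        # slab heat capacity, J m^-2 K^-1
sigma2 = 1.0     # white-noise forcing variance (arbitrary units)

def spectrum(f, lam):
    return sigma2 / (lam**2 + (2 * np.pi * f * C) ** 2)

year = 3.156e7                  # seconds per year
f_low = 1 / (1000 * year)       # a (1000 yr)^-1 frequency
f_high = 12 / year              # a (1 month)^-1 frequency

# Compare lam = 1.5 against lam = 0.75 (sensitivity off by a factor of 2):
r_low = spectrum(f_low, 0.75) / spectrum(f_low, 1.5)
r_high = spectrum(f_high, 0.75) / spectrum(f_high, 1.5)
print(r_low, r_high)   # low-frequency variance ~4x; high-frequency ~unchanged
```

This is why a cloud error that doubles or triples the sensitivity does not just shift the equilibrium response; it reshapes the whole low-frequency end of the spectrum, decadal fluctuations included.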

I hope that Dr. Battisti will open the discussion with some ideas about where we should be going in modeling, and particularly how fine the scales must be to get at some of the questions that we need to answer.

The National Academies of Sciences, Engineering, and Medicine 500 Fifth St. N.W. | Washington, D.C. 20001