Measuring Climate Sensitivity – Part Two – Mixed Layer Depths

Using a simple model, Spencer & Braswell (2008) demonstrated that even when the value of “climate sensitivity” is constant and known, measuring it can be obscured for a number of reasons.

The simple model was a “slab model” of the ocean with a top of atmosphere imbalance in radiation.

Murphy & Forster (2010) criticized Spencer & Braswell on several grounds, including the value chosen for the depth of this ocean mixed layer. As the mixed layer depth increases, the climate sensitivity measurement problems are greatly reduced.

First, we will consider the mixed layer in the context of that simple model. Then we will consider what it means in real life.

The Simple Model of Climate Sensitivity

The simple model used by Spencer & Braswell has a “mixed ocean layer” of depth 50m.

Figure 1

In the model the mixed layer is where all of the imbalance in top of atmosphere radiation gets absorbed.

The idea in the simple model is that the energy absorbed from the top of atmosphere gets mixed into the top layer of the ocean very quickly. In reality, as we will see, there is no single well-defined layer, but it is a handy approximation.
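As a sketch of what this slab model amounts to – in Python rather than the Matlab of the actual programs, and with illustrative values only – the mixed layer obeys c·dT/dt = F(t) − λT, where c = ρ·cp·d is the heat capacity per unit area of a slab of depth d:

```python
import numpy as np

# Minimal sketch (not the SB08 code) of the slab energy balance:
#   c * dT/dt = F(t) - lambda * T
# where c = rho * cp * d is the heat capacity per unit area of a mixed
# layer of depth d, F(t) is random daily radiative forcing, and lambda
# is the (prescribed, known) climate sensitivity parameter.

rho, cp = 1025.0, 3990.0          # seawater density (kg/m^3), specific heat (J/kg.K)
d = 50.0                          # mixed layer depth (m)
c = rho * cp * d                  # heat capacity per unit area (J/m^2.K)
lam = 3.0                         # sensitivity parameter (W/m^2.K), illustrative
dt = 86400.0                      # one-day time step (s)

n_days = 365 * 10
rng = np.random.default_rng(0)
F = rng.normal(0.0, 1.0, n_days)  # random daily forcing anomalies (W/m^2)

T = np.zeros(n_days)              # mixed layer temperature anomaly (K)
for i in range(1, n_days):
    T[i] = T[i - 1] + dt * (F[i - 1] - lam * T[i - 1]) / c
```

The point of this kind of model is that λ is prescribed, so we can then test whether regression of flux against temperature recovers it.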

Murphy & Forster commented:

For the heat capacity parameter c, SB08 use the heat capacity of a 50-m ocean mixed layer. This is too shallow to be realistic.

Because heat slowly penetrates deeper into the ocean, an appropriate depth for heat capacity depends on the length of the period over which Eq. (1) is being applied (Watterson 2000; Held et al. 2010).

Held et al. (2010) found an initial time constant τ = c/α of about four yr in the Geophysical Fluid Dynamics Laboratory global climate model. Schwartz (2007) used historical data to estimate a globally averaged mixed layer depth of 150 m, or 106 m if the earth were only ocean.

The idea is an attempt to keep the simplicity of one mixed layer for the model, but increase the depth of this mixed layer for longer time periods.
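To put some numbers on the time constant τ = c/α quoted above, here is a quick Python sketch. The feedback value α = 3.0 W/m²K is illustrative, not taken from any of the papers:

```python
# Quick numbers for the time constant tau = c / alpha, for a few candidate
# mixed layer depths: the SB08 value and Schwartz's two estimates.
rho, cp = 1025.0, 3990.0          # seawater density (kg/m^3), specific heat (J/kg.K)
alpha = 3.0                       # illustrative feedback parameter (W/m^2.K)
seconds_per_year = 365.25 * 86400

for d in (50.0, 106.0, 150.0):
    c = rho * cp * d              # heat capacity per unit area (J/m^2.K)
    tau_years = c / alpha / seconds_per_year
    print(f"d = {d:5.0f} m  ->  tau = {tau_years:.1f} yr")
```

So a deeper mixed layer directly stretches the response time of the slab, which is why the choice of depth matters so much for these tests.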

There is always a point where models – simplified versions of the real world – start to break down. This might be the case here.

The initial model was of a mixed layer of ocean, all at the same temperature because the layer is well-mixed – and with some random movement of heat between this mixed layer and the ocean depths. In a more realistic scenario, more heat flows into the deeper ocean as the length of time increases.

What Murphy & Forster are proposing is to keep the simple model and “account” for the ever-increasing heat flow into the deeper ocean by using a mixed layer depth that depends on the time period.

If we do this perhaps the model will work, perhaps it won’t. By “work” we mean provide results that tell us something useful about the real world.

So I thought I would introduce some more realism (complexity) into the model and see what happened. This involves a bit of a journey.

Real Life Ocean Mixed Layer

Water is a very bad conductor of heat – as are plastic and other insulators. Good conductors of heat include metals.

However, in the ocean and the atmosphere conduction is not the primary heat transfer mechanism. It isn’t even significant. Instead, in the ocean it is convection – the bulk movement of fluids – that moves heat. Think of it like this – if you move a “parcel” of water, the heat in that parcel moves with it.

Let’s take a look at the temperature profile at the top of the ocean. The first graph shows temperature:

Soloviev & Lukas (1997)

Figure 2

Note that the successive plots are not at higher and higher temperatures – they are just artificially separated to make the results easier to see. During the afternoon the sun heats the top of the ocean. As a result we get a temperature gradient where the surface is hotter than a few meters down. At night and early morning the temperature gradient disappears. (No temperature gradient means that the water is all at the same temperature.)

Why is this?

Once the sun sets the ocean surface cools rapidly via radiation and convection to the atmosphere. The result is colder, denser water. Denser water sinks, so the ocean gets mixed. The same effect takes place on a larger scale with seasonal changes in temperature.

And the top of the ocean is also well mixed due to being stirred by the wind.

A comment from de Boyer Montegut and his coauthors (2004):

A striking and nearly universal feature of the open ocean is the surface mixed layer within which salinity, temperature, and density are almost vertically uniform. This oceanic mixed layer is the manifestation of the vigorous turbulent mixing processes which are active in the upper ocean.

How Deep is the Ocean Mixed Layer?

This is not a simple question. Partly it is a measurement problem, and partly there isn’t a sharp demarcation between the ocean mixed layer and the deeper ocean. Various researchers have made an effort to map it out.

Here is a global overview, again from Marshall & Plumb:

Figure 4

You can see that the deeper mixed layers occur in the higher latitudes.

Comment from de Boyer Montegut:

The main temporal variabilities of the MLD [mixed layer depth] are directly linked to the many processes occurring in the mixed layer (surface forcing, lateral advection, internal waves, etc), ranging from diurnal [Brainerd and Gregg, 1995] to interannual variability, including seasonal and intraseasonal variability [e.g., Kara et al., 2003a; McCreary et al., 2001]. The spatial variability of the MLD is also very large.

The MLD can be less than 20 m in the summer hemisphere, while reaching more than 500 m in the winter hemisphere in subpolar latitudes [Monterey and Levitus, 1997].

Here is a more complete map by month. Readers probably have many questions about methodology and I recommend reading the free paper:

From de Boyer Montegut et al (2004)

Figure 5 – Click for a larger image

Seeing this map definitely had me wondering about the challenge of measuring climate sensitivity. Spencer & Braswell had used 50m MLD to identify some climate sensitivity measurement problems. Murphy & Forster had reproduced their results with a much deeper MLD to demonstrate that the problems went away.

But what happens if instead we retest the basic model using the actual MLD which varies significantly by month and by latitude?

So instead of one slab of ocean at a single chosen MLD, we break the globe into regions, give each region a different value for each month, and see what happens to the climate sensitivity problems.
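A minimal Python sketch of this idea – the Matlab program is the real implementation, and the depths below are invented placeholders, not the values read from de Boyer Montegut’s maps:

```python
import numpy as np

# Sketch of the "regional slabs" idea: 30-degree latitude bands, each with
# its own mixed layer depth per month, each integrating the same slab
# energy balance. Depths are placeholder values for illustration only.

bands = np.array([-75.0, -45.0, -15.0, 15.0, 45.0, 75.0])  # band centres (deg)
w = np.cos(np.radians(bands))
w /= w.sum()                        # rough area weights for the bands

# mld[band, month]: placeholder depths (m) varying with latitude and month
months = np.arange(12)
mld = 50.0 + 40.0 * np.cos(np.radians(bands))[:, None] * \
      np.cos(2 * np.pi * (months - 6)[None, :] / 12)

rho, cp, lam, dt = 1025.0, 3990.0, 3.0, 86400.0
rng = np.random.default_rng(1)
n_days = 365 * 5
T = np.zeros(len(bands))            # per-band temperature anomaly (K)
T_global = np.zeros(n_days)
for i in range(n_days):
    month = (i // 30) % 12
    c = rho * cp * mld[:, month]    # per-band heat capacity this month
    F = rng.normal(0.0, 1.0, len(bands))  # independent forcing per band
    T += dt * (F - lam * T) / c
    T_global[i] = w @ T             # area-weighted global mean anomaly
```

The regression of flux against T_global can then be compared with the prescribed λ, just as in the single-slab case.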

By the way, I also attempted to calculate, by eye, the global annual (area-weighted) average MLD from the maps above. I emailed the author of the paper for some measurement details but received no response.

My estimate from the data in this paper was a global annual area-weighted average of 62 meters.
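For anyone wanting to reproduce this kind of estimate, here is a sketch of the area weighting in Python. The band values are placeholders for illustration; the key point is that the fraction of the Earth’s surface between two latitudes is proportional to the difference of the sines of those latitudes:

```python
import numpy as np

# Area-weighted global average of MLD from latitude-band values. The band
# values here are placeholders, not the numbers read from the paper's maps.
# The fraction of a sphere's surface between latitudes phi1 and phi2 is
# proportional to sin(phi2) - sin(phi1).

band_edges = np.arange(-90.0, 91.0, 30.0)                        # degrees
mld_by_band = np.array([120.0, 60.0, 40.0, 45.0, 70.0, 110.0])   # metres

weights = np.diff(np.sin(np.radians(band_edges))) / 2.0          # sums to 1
global_avg = weights @ mld_by_band
print(f"area-weighted global average MLD: {global_avg:.0f} m")
```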

Trying Simple Models with Varying MLD

I updated the Matlab program from Measuring Climate Sensitivity – Part One. The globe is now broken up into 30° latitude bands, with the potential for a different value of mixed layer depth for each month of the year.

I created a number of different profiles:

Depth Type 0 – constant with month and latitude, as in the original article

Type 1 – using the values from de Boyer’s paper, as best as can be estimated from looking at the monthly maps.

Here are some results (review the original article for some of the notation), recalling that the actual climate sensitivity, λ = 3.0:

Figure 6

Figure 7 – as figure 6 without 30-day averaging

Figure 8

Figure 9

Figure 10

Figure 11

Figure 12

What’s the message from these results?

In essence, type 0 (the original) and type 1 (using actual MLDs vs latitude and month from de Boyer’s paper) are quite similar – but not exactly the same.

However, if we start varying the MLD by latitude and month in a more extreme way the results come out very differently – even though the global average MLD is the same in each case.

This demonstrates that the temporal and area variation of MLD can have a significant effect and modeling the ocean as one slab – for the purposes of this enterprise – may be risky.

Non-Linearity

We haven’t considered the effect of non-linearity in these simple models. That is, what about interactions between different regions and months? If we created a still more complex model, where heat flowed between regions depending on the relative depths of their mixed layers, what would we find?

Losing the Plot?

Now, in case anyone has lost the plot by this stage – and it’s possible that I have – don’t get confused into thinking that we are evaluating GCMs and gosh aren’t they simplistic. No, GCMs have very sophisticated modeling.

What we have been doing is tracing a path that started with a paper by Spencer & Braswell. This paper used a very simple model to show that with some random daily fluctuations in top of atmosphere radiative flux, perhaps due to clouds, the measurement of climate sensitivity doesn’t match the actual climate sensitivity.

We can do this in a model – prescribe a value and then test whether we can measure it. This is where this simple model came in. It isn’t a GCM.

However, Murphy & Forster came along and said that if you use a deeper mixed ocean layer (which they claim is justified) then the measurement of climate sensitivity does more or less match the actual climate sensitivity (they also commented on the values chosen for radiative flux anomalies, a subject for another day).

What struck me was that the test model needs some significant improvement to be able to assess whether or not climate sensitivity can be measured. And this is with the caveat – if climate sensitivity is a constant.

The Next Phase – More Realistic Ocean Model

As Murphy & Forster have pointed out, the longer the time period, the more heat is “injected” into the deeper ocean from the mixed layer.

So a better model would capture this process directly, rather than just assigning a deeper mixed layer for longer time periods. Of course, modeling true global ocean convection is an impossible task.

For water, the thermal conductivity k = 0.6 W/m·K, and Fourier’s law for steady conduction gives q” = kΔT/d. So, as an example, if we have a 10ºC temperature difference across 1 km depth of water, q” = 0.006 W/m². This is tiny. Heat flow via conduction is insignificant. Convection is what moves heat in the ocean.
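The arithmetic behind that number is just Fourier’s law in one line:

```python
# Fourier's law for steady conduction: q = k * dT / d
k = 0.6        # thermal conductivity of water (W/m.K)
dT = 10.0      # temperature difference (K)
d = 1000.0     # layer thickness (m)
q = k * dT / d
print(f"conducted heat flux: {q:.3f} W/m^2")  # prints 0.006
```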

Many researchers have measured and estimated vertical heat flow in the ocean to come up with a value for vertical eddy diffusivity. This allows us to make some rough estimates of vertical heat flow via convection.

In the next version of the Matlab program (“in press”) the ocean is modeled with different eddy diffusivities below the mixed ocean layer to see what happens to the measurement of climate sensitivity. So far, the model comes up with wildly varying results when the eddy diffusivity is low, i.e., heat cannot easily move into the ocean depths. And it comes up with normal results when the eddy diffusivity is high, i.e., heat moves relatively quickly into the ocean depths.
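As a sketch of what that next version does conceptually – this is not the actual Matlab program – the ocean below the mixed layer can be treated as a 1-D column in which heat spreads according to ∂T/∂t = κ·∂²T/∂z², with κ the vertical eddy diffusivity. The value of κ here is illustrative:

```python
import numpy as np

# 1-D vertical eddy diffusion below the mixed layer:
#   dT/dt = kappa * d2T/dz2
# Explicit scheme; stable when kappa * dt / dz^2 <= 0.5.
# kappa = 1e-4 m^2/s is an illustrative value, not a measurement.

kappa = 1e-4                 # vertical eddy diffusivity (m^2/s)
dz = 10.0                    # grid spacing (m)
nz = 100                     # 1000 m column below the mixed layer
dt = 0.2 * dz**2 / kappa     # time step satisfying the stability limit

T = np.zeros(nz)             # temperature anomaly (K), zero at depth
T[0] = 1.0                   # fixed 1 K anomaly at the mixed layer base
for _ in range(2000):        # roughly 12-13 years of diffusion
    T[1:-1] += dt * kappa * (T[2:] - 2 * T[1:-1] + T[:-2]) / dz**2
```

With a small κ the anomaly stays trapped near the mixed layer base; with a large κ it spreads quickly into the column, which matches the behavior of the measurement results described above.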

Due to shortness of time, this problem has not yet been resolved. More in due course.

This article is already long enough, so the next part will cover the estimated values for eddy diffusivity because it’s an interesting subject.

Conclusion

Regular readers of this blog understand that navigating to any kind of conclusion takes some time on my part. And that’s when the subject is well understood. I’m finding that the signposts on the journey to measuring climate sensitivity are confusing and hard to read.

And that said, this article hasn’t shed any more light on the measurement of climate sensitivity. Instead, we have reviewed more ways in which measurements of it might be wrong. But not conclusively.

Next up we will take a detour into eddy diffusivity, hoping in the meantime that the Matlab model problems can be resolved. Finally a more accurate model incorporating eddy diffusivity to model vertical heat flow in the ocean will show us whether or not climate sensitivity can be accurately measured.

26 Responses

Thanks for doing this – I’d been thinking the Spencer and Braswell analysis ought to be looked at in more depth (ha, pun intended :) but just haven’t had the time. Your approach here is more than I would ever have done anyway, so double thanks!

Thanks for continuing to look into this issue. I must say, the more I read Murphy and Forster (2010), the more curious their choice for this ocean mixed layer depth seems to be, since it is derived for longer time scales. The idea in Spencer and Braswell (2008) is that unknown radiative variations influencing surface temperatures within the short-term (e.g., a few months) will contaminate the estimate. That this heat in the mixed layer may later be transported slowly to the deeper ocean over the course of a decade (or more) is largely irrelevant* — the primary thing that matters with respect to the contamination is the initial damping (which would relate to the effective MLD for sub-annual timescales). The whole 80-year simulation part seems largely a red herring. For a deeper MLD to be relevant there would need to be a much longer time lag for the initial surface temperature response to a forcing, in which case it would likely exceed the decorrelation time of this radiative noise anyway.

I tried to estimate this “effective” global mixed layer depth using sea surface temperature fluctuations and Argo OHC observations, and came up with about 60m for 3-month timescales, close to your area average. Your work examining a varying mixed layer depth is quite interesting, although if you’re moving into the horizontal realm, it seems like equally important might be the location of the flux variations (rather than N being distributed evenly across all latitudes).

*I say “largely irrelevant” because theoretically the heat transport resulting from previous changes in forcings could serve to decorrelate more recent forcing effects on the surface temperatures.

I think this paper might be free if you register with Science, if not I can email you a copy.

Many other papers also on this topic.

I can conceptually see why running the model over a longer time period requires a “calibration” due to heat “leaking out” into the deeper ocean. It would be amazing though if somehow just changing the depth made the model work perfectly.

And at the same time my conceptual thinking might just be plain wrong, in the way that you describe.

One problem is that climate sensitivity itself averages over a ton of detail so using parameters that are averages makes sense. The caveat is that the average may have to be a weighted average and not a simple one.

Interesting article, but have you considered that the 3C of change in global average temperature that occurs every 6 months (twice a year) does not support a time constant of 4+ years? A time constant of 4 years requires a response time of about 20 years. After 5 time constants, over 99% of the final effect has occurred, right?

The +/- 3C change (-3C in January) only occurs from about a +/- 8-10 W/m^2 forcing from perihelion to aphelion, where the maximum post albedo (+8-10 W/m^2) occurs in July rather than at perihelion in January because the albedo is largest in January as a result of the accumulated ice and snow from the northern hemisphere winter.

Spencer and Braswell’s 50m mixed layer would seem to be very consistent with this.

You claim in the post following this one (comments are closed) that I consider quantum mechanics to be a mistake. This is not a correct description of my position. I consider the statistical Copenhagen Interpretation to be non-physical as did Einstein and Schrodinger, but there may be a version of quant mech which is free of statistics and which is not a mistake.

Those who are not shocked when they first come across quantum theory cannot possibly have understood it.

Richard Feynman:

I think I can safely say that nobody understands quantum mechanics.

And yet quantum mechanics provides the ability to make extremely precise calculations of physical phenomena. If there were a deeper level of physics that makes the observed randomness only apparent, it would still have to produce the same results to many significant figures that are produced by quantum mechanical calculation. It would also have to account for:

1. The quantization of certain physical properties like charge and spin.
2. Wave-particle duality as evidenced by electron diffraction and the photoelectric effect.
3. The Uncertainty Principle
4. Quantum Entanglement.

Matlab utilities mostly refer to “the user documentation on netCDF” rather than explaining what is going on. Which is fair enough, except the user documentation is like reverse engineering because there is no human readable “header” for my .nc file. The documentation shows file structure in human readable form, but the files are not readable so it doesn’t help. I can’t see the structure of my file so have no idea whether my file is slightly different from their example or a whole different world.

Is there a converter for the header into human readable form? (Opening in a text editor doesn’t do it).

So rather than grasp the possibilities of the complex self-describing data structure, I just want to extract my data.

I downloaded some data from CERES for a small set of 1°x1° cells for 3 TOA fluxes (net, lw, sw) over 31 days.

When I review the data structure via a few basic matlab commands I get:

R has a package (ncdf) that makes reading NC files pretty easy. I’ve put the daily global averages for lw, sw, and incident solar (starting 3/2000) up at http://dl.dropbox.com/u/9160367/Climate/12-31-12_CERES_Terra_daily_avg_03_2000.txt in a format that’s easy to read, if you want to use that. Basically, I’ve just used a weighted-average of the SSF1 degree cells and ignored any blank data. A bias could creep in depending on if there is a trend in the areas with missing data, since I don’t try any interpolation. Even excluding this blank data, there are a few days that are clear outliers that you may want to exclude. The R script is up here in case the Matlab commands are similar (I had to download the data in 2 chunks since 1 was greater than 2GB).

Thanks very much.
Since writing my comment I have managed to do some extraction of CERES data then area weight to get daily averages.

But there are data quality issues, where -999 seems to be used for missing data. So I have been experimenting a little with just ignoring negative values, also with the “trimmed mean” function in Matlab that calculates the mean after ditching a user defined % of outlier data.
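For what it’s worth, here is a quick Python sketch of both clean-up steps – the flux values are made up, and Matlab’s trimmean is similar in spirit to the function here:

```python
import numpy as np

# Two clean-up steps for the CERES daily data: treat -999 as a missing-data
# flag, then take a trimmed mean that discards a fraction of outliers
# (similar in spirit to Matlab's trimmean). The flux values are made up.

def trimmed_mean(x, percent):
    """Mean of x after discarding percent/2 % from each tail."""
    x = np.sort(np.asarray(x, dtype=float))
    k = int(len(x) * percent / 100.0 / 2.0)
    return x[k:len(x) - k].mean() if k > 0 else x.mean()

flux = np.array([240.1, 239.8, -999.0, 241.0, 240.5, -999.0, 500.0, 239.9])
valid = flux[flux > -900.0]          # drop the -999 fill values
print(trimmed_mean(valid, 40))       # discards the 500.0 outlier
```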

I’m sure there are documents on best practice using CERES data, needs some digging..

I’ll now take a look at your data and see how it matches up with what I have.

Maybe I will start an article on this subject to capture what we have learnt and display some results.

Doug: My interest in climate science began with a single stop at RealClimate – where I saw some ridiculous excuses for the poor science in Al Gore’s movie and scathing comments about some “moron” named McIntyre. I needed to find out what that McIntyre fellow was selling that was so awful and I certainly found out. After reading ClimateAudit for a year or two, McIntyre’s positive comments about SOD sent me here.

I haven’t followed your debates with SOD, but I did follow the link he provided to your site, where I clicked on a few of your references. SOD’s post was free advertising for your website. The more frustrated and injudicious SOD may have been in the now-deleted post – the better the advertising. Now the link to your site is gone. Oh well, I didn’t see anything at your site that others will miss.

Since you don’t allow comments on your website, I’d just like to say here that there are a lot of inaccuracies that don’t take a “genius” to spot. The pool table missing an end is funny because it almost leads people to the right answer, but veers away just shy of correct…

I found the above comment in the spam queue when I was checking through it.

It’s unclear to me what kind of defamation it contains, but not being much of a legal person and not being at all interested in Doug Cotton, I think: why not remove the page?

Its main purpose was to:

1) explain why Doug Cotton’s comments were no longer welcome on Science of Doom – a summary if you will of blog policy already posted and replies to Doug Cotton already posted, and
2) send people to Doug Cotton’s page so they could see his arguments in all their glory

SOD: I read through some of the Montegut paper on the mixed layer and ended up more confused than before. There may be several different types of “mixed layers”. The mixed layer that is relevant to S&B presumably is the amount of water (depth times fixed surface area) that responds to a change in forcing applied over a given period of time. When Mt. Pinatubo erupted and SWR was reduced for 1-2 years, how deep in the ocean did the temperature change? How deep does one have to go in the ocean before there is no seasonal temperature difference associated with seasonal differences in solar forcing?

Montegut and others measure the depth of the mixed layer by homogeneity – the mixed layer is where temperature, salinity, or even oxygenation changes negligibly with depth (leaving the problem of defining “negligibly”). This type of homogeneity may be produced by turbulent mixing. Turbulent mixing will certainly produce energy transfer, but does energy transfer require turbulent mixing? Thermal diffusion and molecular diffusion provide mechanisms for energy transfer and “homogenization” which are DRIVEN by non-homogeneity, but these may be too slow to be relevant. Are there any other mechanisms that lie between these two extremes? (I see you just posted part 3, so perhaps an answer is there.)

There’s no perfect answer to questions about the mixed layer, because there isn’t a boundary where layer turns from mixed into not mixed. So “negligibly” is just a condition that needs a definition.

How deep does one have to go in the ocean before there is no seasonal temperature difference associated with seasonal differences in solar forcing?

Interesting question. Not sure of the answer to this.

I have a nice graph in Marshall & Plumb (2008) of surface temperature and 500m temperature at 50°W and 30°N (Atlantic Ocean). The 500m temperature shows no seasonal change. But that’s just one location.

Turbulent mixing will certainly produce energy transfer, but does energy transfer require turbulent mixing? Thermal diffusion and molecular diffusion provide mechanisms for energy transfer and “homogenization” which are DRIVEN by non-homogeneity, but these may be too slow to be relevant. Are there any other mechanisms that lie between these two extremes?

Moving fluids against their buoyancy gradient requires energy from somewhere.

In the case of daily mixing, the mechanism is night time cooling that changes the buoyancy gradient so that denser fluid is above lighter fluid.

With wind-driven mixing the energy comes from momentum transferred from the wind to the ocean surface (and easily demonstrated via temperature profiles during the day vs wind speed).

In the deeper ocean the mechanism is the breaking of internal waves. Smooth sea floor geometries have lower vertical eddy diffusivity above them compared with rough sea floor geometries.

Diffusion is too slow to explain the temperature profiles seen in the ocean.

SOD: Thanks for taking the time to post some interesting data. Since this data is from 10 degN (see Figure 1, http://drs.nio.org/drs/bitstream/2264/3631/3/Dee-_Sea_Res_Pt_I_57_739a.pdf), the temperature changes are complicated by other factors besides the seasonal change in solar forcing. The location is also near land, so changing currents, winds and upwelling/downwelling can be influencing the temperature profile. However, it does look like the annual peak in water temperature in March extends below 100 m in this location and extends well below their definition of the mixed layer (within 0.5 degC of SST). Outside the tropics, if the mixed layer warms 5 degC during summer, there must be some warming in the steep thermocline underneath or there will be a discontinuity which can be smoothed only by diffusion. The (old) paper below suggests that nighttime convection deepens the mixed layer in the autumn when SST is falling.

Kraus: A one-dimensional model of the seasonal thermocline (1965) apapane.soest.hawaii.edu/users/jay/NIOSummerSchool2010/Lectures/9-MixedLayers-Vinay/MixedLayerReprints/KrausTurner1967(2).pdf

Radiational heat losses during the night or during autumn will be associated with penetrative convection which mixes heat from near the surface into lower layers. The depth of the thermocline can be influenced by the amplitude of daily, synoptic and seasonal fluctuations of cooling and heating at the surface, as well as by variations of wind stirring.

We conclude therefore that the penetrative radiation produces convective mixing at a rate which is of the same order as that produced by mechanical stirring and that during the height of summer the radiation effect is likely to be predominant.

Deep Sea Research and Oceanographic Abstracts
Volume 23, Issue 5, May 1976, Pages 391-401
doi:10.1016/0011-7471(76)90836-6

The aim of this study is to see how well models might predict sea-surface temperature throughout the year, given the inputs of heat and mechanical energy into the ocean. With this in mind, curves relating surface temperature, heat content and potential energy of the top 250 m of ocean are obtained for one of the North Atlantic weather ships. Then comparisons are made with similar curves predicted by a series of simple models. Of those tried, the most satisfactory is a modified version of the model of Kraus and Turner, where a mixed layer is produced by both mechanical and (in the cooling period) convective mixing. In the original model, all the energy made available by surface cooling was used to deepen the mixed layer. Much better agreement with observation is obtained if this convective stirring is non-penetrative or only slightly penetrative. This is consistent with recent atmospheric observations.