Pages

Tuesday, 30 October 2012

Last month our paper on small-scale cloud structure and radiative transfer, using a state-of-the-art three-dimensional Monte Carlo radiative transfer model, was published. It was written together with two radiative transfer specialists: Sebastian Gimeno García and Thomas Trautmann. The paper introduces the new version of this model, called MoCaRT, but the interesting part for this blog on variability is the results on the influence of small-scale variability on radiative transfer. Previously, I have written about cloud structure, whether it is fractal, and the processes involved in creating such complicated and beautiful structures. This post will explain why this structure is important for radiative transfer and thus for remote sensing (for example, for weather satellites) and the radiative balance of the Earth (which determines the surface temperature). I will try to do so also for people not familiar with radiative transfer.

As an aside, the word radiation in this context should not be confused with radioactive radiation. (It is rumoured that the Earth Radiation satellite mission had to be renamed EarthCARE to be funded, as the word radiation sounds negative due to its association with radioactivity.)

Radiative transfer

In theory, radiative transfer is well understood. The radiative transfer equation has long been known and describes how electromagnetic radiation (intensity) propagates through a medium and is scattered and emitted by it. Climatologically important are solar radiation from the sun and infrared (heat) radiation from the Earth's surface and the atmosphere. For remote sensing of the atmosphere, radio waves are also important.
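For readers who like to see the equation itself: in a common textbook form (my notation, not taken from the paper), the monochromatic radiative transfer equation along a path s reads

$$
\frac{dI_\nu}{ds} = -\left(\beta_a + \beta_s\right) I_\nu
+ \beta_a B_\nu(T)
+ \frac{\beta_s}{4\pi} \int_{4\pi} p(\Omega', \Omega)\, I_\nu(\Omega')\, d\Omega' ,
$$

where \(I_\nu\) is the intensity at frequency \(\nu\), \(\beta_a\) and \(\beta_s\) are the absorption and scattering coefficients, \(B_\nu(T)\) is the Planck function describing thermal emission, and \(p(\Omega', \Omega)\) is the phase function giving the probability of scattering from direction \(\Omega'\) into \(\Omega\). The first term is the loss by extinction, the second the gain by emission, and the integral the gain by scattering from all other directions.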

In practice, radiative transfer through the atmosphere is difficult to compute. This starts with the fact that the equation is valid for one frequency of the electromagnetic wave only, while the optical properties of the atmosphere can depend strongly on frequency. To compute the radiative balance of the Earth, a large number of frequencies in the solar and infrared regimes thus needs to be computed (such models are called line-by-line models). Computations in broader frequency bands are more efficient, but then approximations need to be made.
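A small numerical sketch (my own illustration, not code from the paper, using a synthetic spectrum) shows why band computations need approximations: transmission follows Beer's law, T(ν) = exp(-τ(ν)), which is nonlinear in the optical depth τ. Transmitting the band-averaged optical depth therefore does not give the band-averaged transmission that a line-by-line model computes.

```python
import numpy as np

# Synthetic optical depths standing in for a highly variable line spectrum.
rng = np.random.default_rng(0)
tau = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)

# Line-by-line: apply Beer's law at every frequency, then average.
line_by_line = np.exp(-tau).mean()

# Naive band model: average the optical depth first, then apply Beer's law.
band_model = np.exp(-tau.mean())

print(line_by_line, band_model)
```

Because exp(-x) is convex, Jensen's inequality guarantees the line-by-line mean transmission is larger than the naive band value, so practical band models must parameterise this effect rather than simply average the optical properties.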

Thursday, 4 October 2012

Today, a first version of the global temperature dataset of the International Surface Temperature Initiative (ISTI) with 39 thousand stations has been released. The aim of the initiative is to provide an open and transparent temperature dataset for climate research.

The database is designed as a climate "sceptic's" wet dream: the entire processing of the data will be performed with automatic, open software. This includes every processing step, from conversion to standard units, to merging stations into longer series, to quality control, homogenisation, gridding, and the computation of regional and global means. There will thus be no opportunity for evil climate scientists to fudge the data and create an artificially strong temperature trend.
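To give a feeling for the last step in that chain, here is a minimal sketch (my own illustration, not ISTI code) of computing a global mean from a gridded field. The one detail that is easy to get wrong is that grid cells shrink towards the poles, so each cell must be weighted by its area, which is proportional to the cosine of its latitude.

```python
import numpy as np

# Centres of a 5-degree latitude-longitude grid.
lats = np.arange(-87.5, 90.0, 5.0)
lons = np.arange(-177.5, 180.0, 5.0)

# Dummy gridded anomaly field: +1 K everywhere, just to check the weighting.
anomalies = np.ones((lats.size, lons.size))

# Cell area is proportional to cos(latitude); broadcast over longitudes.
weights = np.cos(np.deg2rad(lats))[:, None] * np.ones(lons.size)

global_mean = np.average(anomalies, weights=weights)
print(global_mean)  # a uniform +1 K field must average to exactly 1.0
```

With an unweighted mean the polar rows would count as much as the tropics; the area weighting is what makes the result a true global average.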

It is planned that in many cases you will be able to go back to the digital images of the books or cards on which the observer noted down the temperature measurements. This will not be possible for all data: many records were keyed directly in the past, without making digital images. Sometimes the original data is lost, for instance in the case of Austria, where the original daily observations were lost in the Second World War and only the monthly means are still available from annual reports.

The ISTI also has a group devoted to data rescue, encouraging people to go into the archives, image and key in the observations, and upload this information to the database.