Weather Variability

Monday, 9 December 2013

A more extreme climate is often interpreted in terms of weather variability. In the media, weather variability and extreme weather are typically even used as synonyms. However, extremes may also change due to changes in the mean state of the atmosphere (Rhines and Huybers, 2013) and it is generally difficult to decipher the true cause.

Katz and Brown theorem

Changes in the mean and in the variability are unlike quantities. Thus comparing them is like comparing apples and oranges. Still, Katz and Brown (1992) found one interesting general result: the more extreme the event, the more important a change in the variability is relative to a change in the mean (Figure 1). Thus if there is a change in variability, it matters most for the most extreme events. If the change is small, these events may have to be extremely extreme.

Given this importance of variability they state:

"[Changes in the variability of climate] need to be addressed before impact assessments for greenhouse gas-induced climate change can be expected to gain much credibility."

Figure 1. The relative sensitivity of an extreme to changes in the mean (dashed line) and in the standard deviation (solid line) as a function of the temperature threshold (x-axis). The relative sensitivity to the mean (or standard deviation) is the change in the probability of an extreme event for a change in the mean (or standard deviation), divided by the probability itself. From Katz and Brown (1992).

It is common in the climatological literature to also denote events that happen relatively regularly with the term extreme. For example, the 90th and 99th percentiles are often called extremes, even though such exceedances occur a few times a month or year. Following common parlance, we will denote such distribution descriptions as moderate extremes, to distinguish them from extreme extremes. (The terms soft and hard extremes are also used.) Based on the theory of Katz and Brown, the rest of this section is ordered from moderate to extreme extremes.
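As a minimal sketch of this result, one can compute the two relative sensitivities directly for a normal distribution (an illustrative assumption; Katz and Brown's result is more general). The function names below are mine, not theirs:

```python
# Sketch of the Katz & Brown (1992) sensitivities, assuming normality.
from math import erf, exp, pi, sqrt

def phi(u):
    """Standard normal density."""
    return exp(-u * u / 2) / sqrt(2 * pi)

def tail(u):
    """Exceedance probability P(Z > u) for a standard normal variable."""
    return 0.5 * (1 - erf(u / sqrt(2)))

def relative_sensitivities(threshold, mean=0.0, sd=1.0):
    """Relative sensitivity of P(X > threshold) to the mean and to the
    standard deviation: the derivative divided by the probability."""
    u = (threshold - mean) / sd
    p = tail(u)
    sens_mean = phi(u) / sd / p       # d p / d mean, relative
    sens_sd = phi(u) * u / sd / p     # d p / d sd, relative
    return sens_mean, sens_sd

# For a normal distribution the ratio of the two sensitivities equals the
# threshold in units of the standard deviation: the more extreme the
# event, the more the standard deviation dominates.
for z in (1, 2, 3, 4):
    m, s = relative_sensitivities(z)
    print(f"threshold {z} sd: ratio sd/mean sensitivity = {s / m:.1f}")
```

Beyond one standard deviation, a change in the standard deviation thus changes the exceedance probability more than an equal change in the mean, in line with Figure 1.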

Examples from scientific literature

We start with the variance, which is a direct measure of variability and strongly related to the bulk of the distribution. Della-Marta et al. (2007) studied trends over the last century in station data of the daily summer maximum temperature (DSMT). They found that the increase in DSMT variance over Western Europe and central Western Europe is responsible for approximately 25% and 40%, respectively, of the increase in hot days in these regions.

They also studied trends in the 90th, 95th and 98th percentiles. For these trends, variability was found to be important: if only changes in the mean had been taken into account, the estimates would have been between 14 and 60% lower.

Also in climate projections for Europe, variability is considered to be important. Fischer and Schär (2009) found in the PRUDENCE dataset (a European downscaling project) that for the coming century the strongest increases in the 95th percentile are in regions where variability increases most (France) and not in regions where the mean warming is largest (Iberian Peninsula).

The 2003 heat wave is a clear example of an extreme extreme, where one would thus expect variability to be important. Schär et al. (2004) indeed report that the 2003 heat wave is extremely unlikely given a change in the mean only. They show that a recent increase in variability would be able to explain the heat wave. An alternative explanation could be that temperature does not follow a normal distribution.
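To make this kind of argument concrete, here is a toy calculation with made-up numbers (not the values of Schär et al.): the probability of a summer five old standard deviations above the old mean, under a mean shift alone versus the same shift combined with increased variability:

```python
# Toy numbers, not those of Schär et al. (2004): the old climate has
# mean 0 and standard deviation 1; the new climate is 1 degree warmer.
from math import erf, sqrt

def exceedance(threshold, mean, sd):
    """P(X > threshold) for a normal distribution."""
    u = (threshold - mean) / sd
    return 0.5 * (1 - erf(u / sqrt(2)))

threshold = 5.0                                        # a 2003-like summer
p_shift = exceedance(threshold, mean=1.0, sd=1.0)      # mean shift only
p_shift_var = exceedance(threshold, mean=1.0, sd=1.5)  # plus more variability

# The modest variability increase raises the odds by roughly two orders
# of magnitude relative to the mean shift alone.
print(p_shift_var / p_shift)
```

With these illustrative numbers, the event stays practically impossible under a mean shift alone, while the added variability makes it merely very rare.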

Conclusions

I could not find much literature on this question and have likely not found everything yet. The studies I could find suggest that the intensity of heat waves is very sensitive to changes in temperature variability (Schär et al., 2004; Fischer and Schär, 2009; Clark et al., 2006; Della-Marta et al., 2007). This limited evidence fits the Katz and Brown theorem: in the most extreme example, the European heat wave of 2003, small-scale weather variability seems to be very important.

There are also some studies suggesting that changes in the mean temperature are most important for explaining changes in the frequency and duration of heat waves (Barnett et al., 2006; Ballester et al., 2010). These differences illustrate the importance of the methodological choices and scales considered.

I would argue that the IPCC (2012) special report on extremes rightly concludes that in projecting future changes in extremes we need to consider changes in the variability and the shape of the probability distribution. Variability is especially important for short-duration precipitation and temperatures in the mid- and high-latitudes.

In the next two posts in this series we will review which changes in weather variability have been found in observations and modelling.

Raw climate records contain changes due to non-climatic factors, such as relocations of stations or changes in instrumentation. This post introduces an article that tested how well such non-climatic factors can be removed.

Sunday, 1 December 2013

This is the introduction to a series on changes in the daily weather and extreme weather. The series discusses how much we know about whether, and to what extent, the climate system experiences changes in the variability of the weather. Variability here denotes changes in the shape of the probability distribution around the mean. The most basic measure of variability would be the variance, but many other measures can be used.

Dimensions of variability

Studying weather variability adds more dimensions, and more complexity, to our picture of climate change. This series is mainly aimed at other scientists, but I hope it will be clear enough for everyone interested. If not, just complain and I will try to explain it better, at least where that is possible; we do not have many solid results on changes in weather variability yet.

The quantification of weather variability requires specifying the length of the periods and the size of the regions considered (the extent, or domain, of the data). Unlike the study of averages, the study of variability also adds the dimension of the spatial and temporal averaging scale (the grain, or minimum resolution, of the data); variability thus requires the definition of both an upper and a lower scale. This is important in climate and weather because specific climatic mechanisms may influence variability in certain scale ranges. For instance, observations suggest that near-surface temperature variability is decreasing in the range between one year and decades, while its variability in the range of days to months is likely increasing.

Similar to extremes, which can be studied on a range from moderate (soft) extremes to extreme (hard) extremes, variability can be analysed with measures that range from describing the bulk of the probability distribution to ones that focus on the tails. Considering the complete probability distribution adds another dimension to the study of anthropogenic climate change. A soft measure of variability could be the variance, or the interquartile range. A harder measure could be the kurtosis (the fourth moment) or the distance between the 1st and the 99th percentile. A hard variability measure would be the difference between the 10-year return levels of the maxima and the minima.
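As an illustration, such soft-to-hard measures can be computed on a synthetic daily temperature series; the numbers and variable names below are my own assumptions, not taken from any study:

```python
# Soft-to-hard variability measures on a synthetic daily temperature
# sample (illustrative: normally distributed, mean 15, sd 4, ~100 years).
import numpy as np

rng = np.random.default_rng(0)
daily_temp = rng.normal(loc=15.0, scale=4.0, size=36500)

# Soft measures: describe the bulk of the distribution.
variance = daily_temp.var()
q25, q75 = np.percentile(daily_temp, [25, 75])
interquartile_range = q75 - q25

# Harder measures: weight the tails more strongly.
kurtosis = ((daily_temp - daily_temp.mean()) ** 4).mean() / variance ** 2
p01, p99 = np.percentile(daily_temp, [1, 99])
tail_width = p99 - p01

print(variance, interquartile_range, kurtosis, tail_width)
```

For the same sample, the soft and hard measures can of course trend differently in real data; that is exactly why considering only one of them can be misleading.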

Another complexity to the problem is added by the data: climate models and observations typically have very different averaging scales. Thus any comparisons require upscaling (averaging) or downscaling, which in turn needs a thorough understanding of variability at all involved scales.
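A small sketch of why the averaging scale matters: averaging a correlated daily series over longer windows reduces its variance, so variances of datasets with different grains are not directly comparable. The AR(1) "weather" model and its parameters below are illustrative assumptions:

```python
# Variance shrinks as the temporal averaging scale (the grain) grows.
import numpy as np

def block_mean_variance(series, window):
    """Variance of non-overlapping window-length means of the series."""
    n = len(series) // window * window
    return series[:n].reshape(-1, window).mean(axis=1).var()

# Synthetic weather: an AR(1) process with day-to-day autocorrelation 0.8.
rng = np.random.default_rng(42)
n_days = 365 * 100
x = np.empty(n_days)
x[0] = rng.normal()
for t in range(1, n_days):
    x[t] = 0.8 * x[t - 1] + rng.normal()

for window in (1, 5, 30):
    print(f"{window:2d}-day means: variance {block_mean_variance(x, window):.2f}")
```

Comparing a model with an effective grain of a grid box and a month against point observations of daily values therefore requires bringing both to a common averaging scale first.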

A final complexity is added by the need to distinguish between the variability of the weather itself and the variability added by measurement and modelling uncertainties, sampling effects and errors. This can even affect trend estimates of the observed weather variability, because improvements in climate observations have likely caused apparent, but non-climatic, reductions in the weather variability. As a consequence, data homogenization is central to the analysis of observed changes in weather variability.

Upcoming posts

In this series, I will first discuss the relationship between changes in extremes and changes in the mean and in the variability; see figure below. Especially changes in extreme extremes are connected to changes in the variability; this can be shown using extreme value theory and is reflected in the literature on climatic changes in extremes.

The next two posts will be on changes in variability from modelling studies and observations. These posts will present many results, which are, or seem to be, conflicting. One reason for this is probably the strong dependence on methodological choices, given the complexities mentioned above. There does seem to be a pattern emerging: temperature variability on inter-annual time scales is decreasing, while it is increasing on intra-seasonal time scales (important for, for example, heat waves). The variability of precipitation seems to increase: the increasing trends in median precipitation amounts are weaker than the trends in severe precipitation.

An important reason for conflicting findings is likely the quality of the observations; this will also be the topic of a post. Inhomogeneities caused by changes in climate monitoring practices are already important for studying changes in the mean. Our basic understanding of the changes in observational methods and first empirical studies indicate that taking inhomogeneities into account is likely even more essential for studying changes in the variability. Empirical evidence comes from the results of new statistical homogenization methods for daily data and from parallel measurements with historical and modern measurement set-ups.

Long time series are needed to distinguish natural (multi-decadal) changes in variability from long-term changes, and large international datasets are needed to corroborate the results of regional studies and put them into a larger perspective. Such efforts have, however, to bear in mind that up to now continental and global collections are not homogenized, because of the immense (and usually unappreciated) labour required. Currently, only smaller homogenized daily temperature datasets are available.

There will also be a post on the research needed to understand changes in weather variability better. I see three main topics for future research on weather variability.

Monday, 7 October 2013

This SPP will study changes in the variability of the weather using daily climate data. An important reason to study variability is that changes in extremes can be caused by changes in the mean and in the variability. Variability is more important for extreme extremes, and the mean is more important for moderate extremes. It is thus not clear whether results for moderate extremes, which are most studied, extrapolate to true extremes, which are important for many climate change impacts. Furthermore, the mean state has been studied in much more detail and is thus likely more reliable. Variability is also important for modelling nonlinear processes in climate models.

In the first three-year period the SPP will study weather variability with a focus on the quality of observational data and on the performance of the modelling and analysis tools. This phase will concentrate on Europe, which has the longest and best observations available. In the second phase, the focus will be on understanding the mechanisms that cause these changes. Such studies need to be performed on the global scale.

First phase

1. Weather variability needs to be analysed in a range of different climatic variables and phenomena, at various temporal scales and spatial ranges, as well as for different measures of variability, seasons and regions, and their relations with climate modes. To learn most from these studies, they should be performed in a way that eases intercomparison.

2. We will improve the analysis methods to study changes in the spatial and temporal dependence of variability over a range of spatio-temporal scales. Important for the comparability of studies is that the range of spatio-temporal scales is well defined. These methods will analyse the full probability distribution or multiple variability measures and not just one or a few.

3. Non-climatic changes due to changes in monitoring practices are especially important when it comes to changes in variability. We will thus develop quality control and (stochastic) homogenization methods for the probability distribution of daily data and estimate uncertainties due to remaining data problems.

4. We will investigate the properties of inhomogeneities in the essential climatic variables in various climate regions. Two methods for this are 1) using parallel measurements with historical and modern set-ups and 2) studying the adjustments made by homogenisation methods.

5. An attractive alternative to creating homogenized datasets is the development of analysis methods that use data from homogeneous subperiods (similar to what the Berkeley project (BEST) has done for the mean temperature).

6. We will validate climate models with respect to variability at various temporal and spatial scales. Because of the differing spatial averaging scales, this includes the study of the downscaling or gridding methods.

Second phase

7. The methods developed in the first phase will have to be made robust enough to be applied to large global datasets, so that changes in weather variability can be studied for all climate regions of the Earth.

8. We will validate climate models globally, for various climate regions, with respect to variability at various temporal and spatial scales.

9. The mechanisms that determine natural and man-made changes in variability will be studied in global models and datasets.

Climatologists, statisticians and time series analysts working on extreme weather, quality control, homogenization, model validation or downscaling likely have the skills to participate in this SPP.

The SPP is focused on our understanding of the climate system. While impact studies will strongly benefit from the results, they are not part of this SPP. Studies on changes in extremes are welcome if they analyse the extremes together with other variability measures. Research on long-term (climatic) changes in the mean does not fit in this SPP on weather variability.

Saturday, 5 October 2013

For many, the term homogenization is associated with dusty archives. Surely good metadata on station histories is important for achieving the best results, but homogenization is much more; it is especially a very exciting statistical problem. It provides a number of problems that are of fundamental statistical interest.

Most of the work in homogenization has focussed on improving the monthly and annual means, for example to allow for accurate computations of changes in the global mean temperature. The recent research focus on extreme and severe weather and on weather variability has made the homogenization of daily data and its probability distribution necessary. Much recent work goes in this direction.

As I see it, there are five problems for statisticians to work on. The first problems are of general climatological interest and thus also relevant for the study of weather variability. The latter ones are increasingly specific to the study of weather variability.

Friday, 4 October 2013

We want to build a global database of parallel measurements: observations of the same climatic parameter made independently at the same site

This will help research in many fields

Studies of how inhomogeneities affect the behaviour of daily data (variability and extreme weather)

Improvement of daily homogenisation algorithms

Improvement of robust daily climate data for analysis

Please help us to develop such a dataset

Introduction

One way to study the influence of changes in measurement techniques is by making simultaneous measurements with historical and current instruments, procedures or screens. This picture shows three meteorological shelters next to each other in Murcia (Spain). The rightmost shelter is a replica of the Montsouri screen, in use in Spain and many European countries in the late 19th and early 20th century. In the middle is a Stevenson screen equipped with automatic sensors. Leftmost is a Stevenson screen equipped with conventional meteorological instruments. Picture: Project SCREEN, Center for Climate Change, Universitat Rovira i Virgili, Spain.

We intend to build a database with parallel measurements to study non-climatic changes in the climate record. This is especially important for studies on weather extremes where the distribution of the daily data employed must not be affected by non-climatic changes.

There are many parallel measurements from numerous previous studies analysing the influence of different measurement set-ups on average quantities, especially average annual and monthly temperature. Increasingly, changes in the distribution of daily and sub-daily values are also being investigated (Auchmann and Bönnimann, 2012; Brandsma and Van der Meulen, 2008; Böhm et al., 2010; Brunet et al., 2010; Perry et al., 2006; Trewin, 2012; Van der Meulen and Brandsma, 2008). However, the number of such studies is still limited, while the number of questions that can and need to be answered is much larger for daily data.

Unfortunately, the current common practice is not to share parallel measurements and the analyses have thus been limited to smaller national or regional datasets, in most cases simply to a single station with multiple measurement set-ups. Consequently there is a pressing need for a large global database of parallel measurements on a daily or sub-daily scale.

Datasets from pairs of nearby stations, while strictly speaking not parallel measurements, are also interesting for studying the influence of relocations. In particular, typical types of relocations, such as the relocation of weather stations from urban areas to airports, could be studied this way. In addition, the influence of urbanization can be studied with pairs of nearby stations.

Thursday, 3 October 2013

I am searching for papers on the variability of the weather, its natural fluctuations and possible changes due to climate change. They are hard to find.

The New Climate Dice

This weekend I was reading a potential one: the controversial paper by James Hansen et al. (2012), popularly described as "The New Climate Dice". Its results suggest that variability is increasing. After an op-ed in the Washington Post, this article attracted much attention, with multiple reviews on Open Mind (1, 2, 3), Skeptical Science and Real Climate. A Google search finds more than 60 thousand webpages, including rants by the climate ostriches.

While I was reading this paper, the Berkeley Earth Surface Temperature group sent out a newsletter announcing that they have also written two memos about Hansen et al.: one by Wickenburg and one by Hausfather. At the end of the Hausfather memo there is a personal communication from James Hansen stating that the paper did not intend to study variability. That is a pity, but it at least saves me the time of trying to understand the last figure.