
Project 6 - Decadal changes of flood probabilities

Recently there have been a number of major floods in Europe, and they appear to have increased in number and magnitude [Hall et al., 2014]. In many catchments the recent floods are much larger than the second largest event on record (Fig. 1). An important question is therefore whether the probabilities of occurrence of such extreme floods have increased in recent years, and what the drivers could be.

Fig. 1: Left: August 2005 flood in Western Austria. Right: maximum annual floods for the same catchment.

In SP6 this issue is addressed by analysing flood changes in Germany and Austria in a consistent way. The research in SP6 focuses on: (1) the detection of flood changes from long data series of German and Austrian floods, (2) the extension of the analyses back in time using historical information, and (3) a new approach for attributing flood changes to their drivers by including regional information. The project is therefore structured into three work packages (WPs).

WP1 Detecting flood probability changes

The detection of flood changes, as opposed to traditional trend analysis, focuses on the decadal variability of floods. This means that rather than identifying trends from long data records, flood rich and flood poor periods are identified in space and time, i.e. periods with significantly above-average and significantly below-average flood occurrence (Fig. 2). The advantage of this approach is that it can capture the variability of floods over long time periods, which may involve multiple increases and decreases. The analysis is performed on numerous stations at the same time to increase the power of detection and decrease sampling uncertainty.

Furthermore, instead of the traditional evaluation of flood peaks alone, the flood generating processes are also taken into account. While the flood magnitude is very important from a practical perspective, it contains little information on what has caused the flood. In order to identify flood changes in more detail and to facilitate the attribution of flood changes to their drivers (WP3), flood indicators are additionally used in the analysis.
These indicators are based on the flood typology developed in Subproject 4, distinguishing for instance between snow melt floods, rain-on-snow floods, flash floods associated with thunderstorms, and lowland floods associated with large scale rainfall. In terms of methods, different approaches are used for the detection of flood rich and flood poor periods. As a first approach, the method of Merz et al. [2016] is applied to Austrian data. In this approach the occurrences of floods are modeled as realizations of a point process, using the standard assumption in the literature, a Poisson point process. Stationarity in the occurrence of floods would correspond to a homogeneous Poisson point process, while clustering of floods corresponds to anomalies in the rate of occurrence. A non-parametric kernel estimate of the occurrence rate is used to identify changes in the occurrence rate of floods. Occurrence rates significantly above or below the mean indicate a flood rich or a flood poor period in time. An example of such an analysis is shown in Fig. 3.
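The kernel occurrence rate estimate can be sketched as follows. This is a minimal illustration on synthetic data, assuming a Gaussian kernel and omitting the boundary correction and significance testing (e.g. bootstrap confidence bands) that a full analysis would require:

```python
import numpy as np

def kernel_occurrence_rate(event_years, eval_years, bandwidth=10.0):
    """Gaussian-kernel estimate of the flood occurrence rate (floods/year).
    No boundary correction is applied, so rates near the record ends are biased low."""
    t = np.asarray(eval_years, dtype=float)[:, None]   # (n_eval, 1)
    e = np.asarray(event_years, dtype=float)[None, :]  # (1, n_events)
    w = np.exp(-0.5 * ((t - e) / bandwidth) ** 2)
    return w.sum(axis=1) / (bandwidth * np.sqrt(2.0 * np.pi))

# Synthetic record: background floods plus a flood-rich cluster around 1950-1970
rng = np.random.default_rng(0)
events = np.concatenate([rng.uniform(1900, 2000, 30),
                         rng.uniform(1950, 1970, 20)])
years = np.arange(1900, 2001)
rate = kernel_occurrence_rate(events, years)
mean_rate = len(events) / 100.0   # homogeneous-Poisson benchmark
flood_rich_years = years[rate > mean_rate]
```

In a real application the bandwidth would be chosen from the data and the significance of the rate anomalies assessed against the homogeneous Poisson benchmark.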

As a second approach, in order to extend the analysis in space, the flood indicators of the different flood types are filtered over the project region using an averaging window in space and time. To choose the dimensions of the averaging window so that space and time affect the smoothing of the flood indicators in a balanced way, their spatio-temporal autocorrelation is investigated: the time dimension of the window, relative to the space dimension, is chosen so that the decrease in correlation in space and time is isotropic. As a result of the spatio-temporal filtering, flood indicator functions in space and time are obtained for each flood type. Threshold values are then defined above which a flood rich period occurs and below which a flood poor period occurs. The above approaches mainly focus on how the average flood discharges have changed in space and time.
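The space-time filtering can be illustrated with a simple moving-average window on a hypothetical station-by-year indicator grid; the window half-widths and the threshold below are placeholder values, not those used in the project:

```python
import numpy as np

def spacetime_filter(indicator, half_space, half_time):
    """Moving-average filter of a flood indicator on a (station, year) grid.
    The half-widths stand in for the window dimensions chosen so that the
    correlation decay in space and time is isotropic."""
    ns, nt = indicator.shape
    out = np.empty_like(indicator, dtype=float)
    for i in range(ns):
        for j in range(nt):
            s0, s1 = max(0, i - half_space), min(ns, i + half_space + 1)
            t0, t1 = max(0, j - half_time), min(nt, j + half_time + 1)
            out[i, j] = indicator[s0:s1, t0:t1].mean()  # window truncated at edges
    return out

# Hypothetical indicator: 1 where a flood of a given type occurred, else 0
rng = np.random.default_rng(1)
indicator = (rng.random((20, 100)) < 0.1).astype(float)
smoothed = spacetime_filter(indicator, half_space=2, half_time=5)
flood_rich = smoothed > 0.2   # illustrative threshold defining flood rich cells
```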
In order to understand how these changes extrapolate to smaller probabilities (more extreme floods), the data are also analysed by non-stationary extreme value statistics [e.g. Šraj et al., 2016]. The underlying idea is to allow the parameters of the flood frequency distribution to change in time. Non-stationary models of different complexity (e.g. trend in location parameter only, trends in location and scale parameters) are compared with each other and with the stationary model based on objective criteria to identify the best model given the observation data. The Generalised Extreme Value (GEV) distribution is used here. In line with the analysis of flood rich and flood poor periods, the non-stationarity of the distribution parameters will go beyond trends and allow for more complex time patterns.
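As an illustration of the model comparison, the sketch below fits a stationary GEV and a GEV with a linear trend in the location parameter by maximum likelihood and compares them with AIC, on synthetic data. It is a minimal stand-in for the objective criteria mentioned above, not the project's implementation:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genextreme

def fit_gev(x, t, trend=False):
    """Fit a GEV by maximum likelihood, optionally with a linear trend in the
    location parameter mu(t) = mu0 + mu1 * t. Returns (parameters, AIC)."""
    def nll(p):
        if trend:
            mu0, mu1, logsig, xi = p
            mu = mu0 + mu1 * t
        else:
            mu0, logsig, xi = p
            mu = mu0
        # scipy's shape parameter c equals -xi in the usual hydrological convention
        lp = genextreme.logpdf(x, c=-xi, loc=mu, scale=np.exp(logsig))
        return -np.sum(lp) if np.all(np.isfinite(lp)) else 1e10
    p0 = ([x.mean(), 0.0, np.log(x.std()), 0.1] if trend
          else [x.mean(), np.log(x.std()), 0.1])
    res = minimize(nll, p0, method='Nelder-Mead')
    return res.x, 2 * len(p0) + 2 * res.fun   # AIC = 2k - 2 logL

# Synthetic annual maxima with an upward trend in the location parameter
rng = np.random.default_rng(2)
t = np.arange(100, dtype=float)
x = genextreme.rvs(c=-0.1, loc=100 + 0.5 * t, scale=20, size=100, random_state=rng)
_, aic_stationary = fit_gev(x, t, trend=False)
_, aic_trend = fit_gev(x, t, trend=True)
best = 'non-stationary' if aic_trend < aic_stationary else 'stationary'
```

More complex models (e.g. trends in both location and scale, or non-monotonic time patterns) would be compared in the same way.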

WP2 Historical perspectives on extreme floods

Since flood records from the observational periods are usually not much longer than 200 years, historical flood data sets are used in WP2 to extend the data analyses of WP1 to the 18th century and earlier for one macroscale catchment (the Danube) and one mesoscale catchment
(the Mulde) in the study region Germany/Austria. Historical information is usually less detailed and mainly relies on documentary evidence such as narrative sources and legal-administrative institutional documentation (e.g. chronicles, newspapers, private and official protocols and correspondence, account books and taxation records, see examples in Fig. 4), but it is able to shed light on the extreme floods of the past 500-700 years [see e.g. Glaser et al. 2010].

For the case study areas a small number of new series are developed. These new series target strategic locations such as the Danube at Vienna or Passau and are developed in a three-step procedure:
(1) collection and critical evaluation of flood-related evidence from existing compilations and the scientific literature (tracing and evaluating the original source texts);
(2) collection of epigraphic evidence, such as contemporary and non-contemporary flood marks and pictorial and map representations of great floods;
(3) archival and early print research in selected study sites, including an analysis of narrative evidence (e.g. family and town chronicles), institutional records and damage accounts, newspapers (and other early media products), as well as private and official correspondence and old maps.
In order to obtain a synoptic view of flood occurrence over a region, the historical information on flood magnitude is interpolated in time and space using the thin-plate-spline technique, with parameters chosen to preserve the spatio-temporal correlations of the phenomenon (see WP1). For the analysis the historical data have to be processed to remove observational biases. The quantity and quality of historical information may depend on the historical period it belongs to, and also on the available contemporary source types, and is therefore heterogeneous in time. Moreover, for recent years the dates when floods occurred are known exactly, while further back in time mainly evidence on the greater flood events is available. In other words, historical information is heterogeneous in time and space. In order to reduce these observational biases, a homogenisation methodology for the historical flood time series is developed.
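The thin-plate-spline interpolation in space and time can be sketched with SciPy's radial basis function interpolator. The coordinate rescaling that balances the spatial and temporal correlation decay, and all data values, are placeholder assumptions:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical historical flood-magnitude indices: (river-km, year) -> class 1-3
rng = np.random.default_rng(3)
obs_coords = np.column_stack([rng.uniform(0, 500, 40),       # location along river [km]
                              rng.uniform(1500, 1900, 40)])  # year
obs_magnitude = rng.integers(1, 4, 40).astype(float)

# Rescale so that e.g. 50 km corresponds to 10 years in correlation decay
# (an assumed ratio; in the project it would come from the WP1 analysis)
scale = np.array([1 / 50.0, 1 / 10.0])
tps = RBFInterpolator(obs_coords * scale, obs_magnitude,
                      kernel='thin_plate_spline', smoothing=1.0)

# Interpolate onto a regular space-time grid
km, yr = np.meshgrid(np.linspace(0, 500, 11), np.linspace(1500, 1900, 21))
grid = np.column_stack([km.ravel(), yr.ravel()])
field = tps(grid * scale).reshape(km.shape)
```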
The method to remove the temporal observational bias is based on the assumption that all big floods have been correctly identified in the historical time series and that the frequency of years to be considered as having no floods is proportional to the frequency of years with big floods. The series is homogenised in time, prior to the interpolation, by sampling “no-flood occurrences” in years with no information available, using a Bernoulli distribution with a probability of occurrence that changes in time consistently with the frequency of big flood events. This frequency is evaluated over a time window around the year of interest. An analogous procedure is used to homogenise the spatial information. The homogenised data are then interpolated to identify flood rich and flood poor periods in the last millennium.
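A minimal sketch of the temporal homogenisation is given below, assuming a simple proportionality constant k between the windowed big-flood frequency and the no-flood sampling probability; the constant, the window length and the flood years are illustrative, not the project's values:

```python
import numpy as np

def sample_no_flood_years(flood_years, big_flood_years, all_years,
                          window=50, k=0.1, rng=None):
    """Sample 'no-flood' years among years without information, with a
    Bernoulli probability tied to the windowed frequency of big floods.
    The proportionality constant k is an assumption for illustration."""
    rng = rng or np.random.default_rng(0)
    known = set(flood_years)
    big = np.asarray(big_flood_years, dtype=float)
    no_flood = []
    for y in all_years:
        if y in known:
            continue  # years with recorded floods need no augmentation
        n_big = int(np.sum(np.abs(big - y) <= window / 2))  # big floods near y
        p = min(1.0, k * n_big)          # assumed proportionality
        if rng.random() < p:
            no_flood.append(y)
    return no_flood

years = range(1500, 1900)
floods = [1501, 1572, 1600, 1650, 1700, 1785, 1800, 1845]
big_floods = [1501, 1650, 1785, 1845]
no_flood_years = sample_no_flood_years(floods, big_floods, years)
```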

WP3 Attributing flood probability changes

In WP3 the flood data in the regions where changes in flood occurrence are identified in WP1 and WP2 are confronted with precipitation data, air temperature data, land use change data and data on river works, with the aim of identifying the drivers responsible for the changes. For this purpose, long-term precipitation and air temperature data are used, together with time series relating to land use changes and river works. A new framework for attributing flood changes to their drivers is used, based on a regional fingerprint analysis, i.e. a combination of two techniques: fingerprinting analysis and regional frequency analysis [Viglione et al., 2016]. In fingerprinting [see e.g. Hundecha and Merz, 2012; Merz et al., 2012], the problem of attributing a signal (e.g. a flood change) to its causes is framed as an inverse problem with two main assumptions:
(a) the resulting signal is a mixture of component signals;
(b) the patterns of the component signals (i.e. the fingerprints) are known to some degree. In terms of flood change this can for instance be expressed by the following equation:

ΔQ = ΔQ_A + ΔQ_C + ΔQ_R

where Q is the peak discharge, ΔQ_A the contribution of the atmospheric indicator A, ΔQ_C the contribution of the indicator C of catchment properties or land use, and ΔQ_R the contribution of the indicator R for river works.

The accuracy of the estimated source contributions depends on whether the inverse system is well or ill posed (i.e., how sensitive the resulting signal is to the component signals), the uncertainties involved, the applicability of the mixing assumptions, and how realistic the fingerprints are. Since part of the uncertainty comes from the limited flood record lengths and part from the complexity of the flood generation processes, the information from many sites in a region is combined at the same time. The scaling characteristics (i.e., fingerprints) of the effects of the drivers on flood changes with catchment area are exploited in order to infer the relative contributions of these drivers. One would expect different scaling characteristics for the different drivers, as suggested by Blöschl et al. [2007] and illustrated in Fig. 5. For example, increases in local, convective precipitation are most relevant in small catchments, which may lead to a decrease of flood changes with catchment area (smaller flood changes in larger catchments). Increases in regional precipitation due to changed atmospheric circulation patterns are most relevant in large catchments, which will lead to an increase of flood changes with catchment area. Decreasing infiltration capacity of the soil due to urbanisation and agricultural soil compaction may lead to a decrease of flood changes with catchment area. Decreasing flood retention due to river training and the loss of flood retention volumes in the flood plains of large catchments may lead to an increase of flood changes with catchment area.

Fig. 5: Example of assumed scaling characteristics of different drivers with catchment area and influence of these drivers on peak discharges
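The scaling idea can be illustrated as a linear inverse (mixing) problem: assumed fingerprint shapes describe how each driver's effect varies with catchment area, and driver weights are estimated from regional data. The least-squares solution below is a simplified stand-in for the Bayesian estimation of Viglione et al. [2016], with invented fingerprint shapes and synthetic data:

```python
import numpy as np

# Assumed fingerprint shapes (illustrative, loosely following Fig. 5):
# how strongly each driver affects flood changes as a function of area
def fingerprints(area_km2):
    a = np.log10(area_km2)
    return np.column_stack([
        area_km2 ** -0.3,   # convective precipitation: fades with area
        a,                  # regional precipitation/circulation: grows with area
        area_km2 ** -0.1,   # urbanisation/soil compaction: fades with area
        a ** 2,             # river training/floodplain loss: grows fastest
    ])

# Synthetic region: observed flood changes at 60 catchments of varying size
rng = np.random.default_rng(4)
area = 10 ** rng.uniform(1, 4, 60)          # 10 to 10,000 km^2
w_true = np.array([0.5, 1.0, 0.2, 0.3])     # 'true' driver weights
X = fingerprints(area)
change = X @ w_true + rng.normal(0, 0.5, 60)

# Least-squares solution of the mixing problem
w_hat, *_ = np.linalg.lstsq(X, change, rcond=None)
ss_res = np.sum((X @ w_hat - change) ** 2)
ss_tot = np.sum((change - change.mean()) ** 2)
```

In the project the weights carry uncertainty and are estimated in Bayesian terms, but the inverse-problem structure is the same.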

The estimation is framed in Bayesian terms to obtain distributions of the contributions of the drivers to flood changes in the region [Viglione et al., 2016]. In order to test the method, a number of hotspot catchments are selected where changes are particularly apparent in the flood data and are expected to be mainly attributable to one driver. Examples of such catchments are alpine catchments, where the temperature increase (and hence a change in snowmelt floods) has been particularly apparent, or large catchments with strong changes in river works that have exacerbated flood discharges due to the loss of flood plain retention. In a further step all catchments are analysed to obtain an understanding of the spatial distribution of the relative magnitudes of the drivers of change. Again, the most extreme floods are put into the context of the smaller floods to assess the role of the changes in the drivers on the characteristics of the extreme floods and the associated probabilities. Similar to WP1, the Bayesian attribution of average floods described above is complemented by non-stationary extreme value statistics. However, while in WP1 time is used as a covariate, here other variables are used as covariates and the parameters of the flood frequency distribution are allowed to change in time as a function of these covariates. The covariates examined are precipitation characteristics (such as mean annual precipitation), other atmospheric characteristics obtained from Subproject 2, as well as indicators of land use change and river works similar to those of the Bayesian fingerprinting. The results are compared to the Bayesian fingerprinting to understand whether it is possible to make statements about how the contributions of the drivers to flood changes differ between floods of different magnitudes.