Vegetation Cover Benchmarking

by Lena Boysen, The Land in the Earth System, MPI-M, Hamburg, 4 August 2011.

Benchmarking, that is, evaluating models against observations, is a very useful tool for improving Earth System models. This page describes the benchmarking of vegetation dynamics (land cover) as part of the International Land Model Benchmarking (ILAMB) project (http://www.ilamb.org/), as well as the benchmarking of seasonal albedo values. The focus is on evaluating simulated fractions of woody vegetation and bare ground against observations. Two metrics are used: the spatial correlation between simulated and observed patterns, measured by Pearson's correlation coefficient r (scored as its square, r2), and the magnitude of the difference between simulations and observations, measured by the root mean square error (rmse). These diagnostics feed into a scoring system as suggested by Randerson et al. (2009) and are accompanied by a visualization of the data sets. The software R (http://www.r-project.org/) is used for statistical computing and for plotting figures.

Hereafter, a short introduction to the benchmarking scripts is given. By MODIS data we mean the MODIS VCF (Vegetation Continuous Fields) product (http://glcf.umiacs.umd.edu/data/vcf/); JSBACH is the land surface model of the Max Planck Institute Earth System Model (MPI-ESM). An example is given for the JSBACH vegetation cover simulated in the CMIP5 historical run for the year 2002 at the T63 spatial resolution (ca. 1.9° x 1.9°). The aim is to combine the weighted r2 with the weighted rmse into one score. For more detailed instructions please read the documentation!

Structure of the scripts and usage

CORRELATION_ANALYSIS_main.j

This navigation file coordinates the scripts presented in figure 1. The necessary file and variable declarations are listed below. The data set for the vegetation type "desert" is obtained as 1 - grass - tree, which the scripts compute automatically.

Three vegetation types and albedo, in this order (renaming possible):
– grass cover (including crops and pasture),
– tree cover (including shrubs),
– desert (provided, or generated by the script as 1 - grass - tree cover),
– albedo (provided or left empty).

Data files from observation (Ascii or netCDF):
– resolution in latitude and longitude,
– resolution in degrees.

Data files from model (netCDF):
– resolution in latitude and longitude,
– resolution in degrees,
– year to be computed.

Weighting factor for the scoring.

Fig. 1: Structure of the land surface benchmarking scripts.

By default, calculations are made for the whole globe, the northern and southern hemispheres, the tropics and extra-tropics, and the zonal mean. Additionally, the calculations can be extended to the 14 basis regions of the earth. Since these procedures take some minutes to compute, they are optional:

– calculate regions: yes = 1, no = 0,
– path to the GFED2_regions_????_0.5x0.5.nc masks.

Attention:

If the observation data is also provided as netCDF, then please adjust the grid resolution to the model resolution manually before running the script!

If the observation data is NOT shifted by 180 degrees to the east, then go to the script f95_UPSCALE.j and set shift=0.

If the observation data contains a header, then go to the script f95_UPSCALE.j and set header=1 and hl=line number.

Version of R: Please check which version of R you use and update it if it is older than version 2.10.0, for example by typing 'module switch R R/2.10.1'.

Necessary R packages: Before starting these scripts, please install two packages in R as shown below (typing "R" in the command window opens the program). Please follow the instructions to create a personal library.

R
> install.packages("gplots")
> install.packages("hydroGOF")
> q()

f95_UPSCALE.j

If the observation data, e.g. the MODIS Vegetation Continuous Fields collection (Hansen et al., 2007), is of finer resolution than the model grid, this script upscales the data (if in Ascii format) to the grid resolution of the model.
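The idea of this step can be illustrated with a minimal sketch (in Python rather than the Fortran of f95_UPSCALE.j, and under the simplifying assumption that the fine grid dimensions are exact multiples of the coarsening factor):

```python
import numpy as np

def upscale(fine, factor):
    """Block-average a fine-resolution field onto a coarser grid.

    Assumes both dimensions of `fine` are exact multiples of `factor`;
    the actual f95_UPSCALE.j handles the general Ascii case.
    """
    ny, nx = fine.shape
    blocks = fine.reshape(ny // factor, factor, nx // factor, factor)
    return blocks.mean(axis=(1, 3))  # average within each coarse cell

# Example: a 4x4 field averaged onto a 2x2 grid.
fine = np.arange(16, dtype=float).reshape(4, 4)
coarse = upscale(fine, 2)
```

Each coarse cell simply receives the mean of the fine cells it covers.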

CDO_code.j

Several steps take place here depending on the input data:

conversion of the upscaled Ascii observation files into netCDF files,

creation of a "desert" file (1 - grass - tree) if it is not provided as a data set.
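The generated desert field follows the 1 - grass - tree rule mentioned above. A minimal numerical sketch (in Python with made-up fractions; the actual script operates on the netCDF files with CDO):

```python
import numpy as np

# Hypothetical cover fractions on a small model grid (values illustrative).
grass = np.array([[0.6, 0.2], [0.1, 0.0]])
tree  = np.array([[0.3, 0.1], [0.2, 0.0]])

# Desert (bare ground) as the residual fraction, clipped to [0, 1]
# to guard against small rounding errors in the inputs.
desert = np.clip(1.0 - grass - tree, 0.0, 1.0)
```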

R_code.j

R is used for the statistical computation. This script can also be run on its own if the input data sets already have the same resolution and are in netCDF format (the R code can easily be changed to read Ascii as well; see the manual: http://cran.r-project.org/doc/manuals/R-intro.pdf). The R code must be run in a mode that saves all data of the session, so that it can be accessed again later (unfortunately this lists all steps in the terminal). This is needed to calculate the overall score at the end. Here are the single steps:

First of all, the package "ncdf" needs to be loaded into the library. The observation and model data sets are opened and read into separate .nc objects, which contain all necessary information about the files but not the data itself. The specific variables to be computed are then read into matrices via these .nc objects, where all missing values (-99.00) are replaced by NA (not available, observation data) or NaN (not a number, model data).

Two vectors of the same length must be created which contain the data of the observation and the model. This is done by pairwise deletion of NAs and NaNs, so that the remaining data stays at matching positions within the two vectors.
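These two steps, replacing the missing-value code and pairwise deletion, can be sketched as follows (in Python with illustrative numbers; the scripts do this in R):

```python
import numpy as np

# Illustrative flattened fields: -99.0 marks missing values in the
# observations, NaN marks them in the model output.
obs = np.array([0.4, -99.0, 0.7, 0.2, 0.9])
mod = np.array([0.5, 0.3, np.nan, 0.1, 0.8])

# Replace the observation missing-value code by NaN, then keep only
# positions where BOTH vectors are valid (pairwise deletion).
obs = np.where(obs == -99.0, np.nan, obs)
valid = ~np.isnan(obs) & ~np.isnan(mod)
obs_v, mod_v = obs[valid], mod[valid]
```

The surviving values stay paired by position, which is what the correlation step requires.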

The command "cor" calculates the Pearson correlation coefficient between two data vectors, whereby use="all.obs" raises an error if there are still missing values. The score (the square of r) is calculated afterwards.
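For illustration, the same computation in Python (np.corrcoef plays the role of R's cor; the vectors are made-up cover fractions):

```python
import numpy as np

x = np.array([0.4, 0.2, 0.9, 0.6])   # e.g. observed cover fractions
y = np.array([0.5, 0.1, 0.8, 0.7])   # e.g. modelled cover fractions

# Pearson correlation coefficient between the two vectors; at this
# point no missing values may remain (cor(..., use="all.obs") in R
# would otherwise raise an error).
r = np.corrcoef(x, y)[0, 1]
r2 = r ** 2   # the score used in the benchmarking
```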

Scatterplots demonstrate the correlation between the two data sets with the simple command "plot". A graph of the linear regression serves for orientation: the regression is calculated with "lm" and the corresponding line is drawn with "abline". A "best fit" line is added afterwards, where "lty" changes the line type.
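The regression underlying that line is an ordinary least-squares fit. A sketch of the numerical part (in Python; np.polyfit of degree 1 is the analogue of R's lm(y ~ x), and the data are illustrative):

```python
import numpy as np

x = np.array([0.0, 0.25, 0.5, 0.75, 1.0])   # e.g. observed values
y = np.array([0.1, 0.3, 0.45, 0.8, 0.95])   # e.g. modelled values

# Least-squares fit y = a*x + b; in R this is lm(y ~ x), and
# abline(fit) would draw the resulting line on the scatterplot.
a, b = np.polyfit(x, y, 1)
```

A perfect model would give a = 1 and b = 0, which is why the line serves for orientation.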

Furthermore, the zonal mean is calculated and displayed. First, vectors of the length of all latitudinal points have to be created. "na.rm=TRUE" makes sure that missing values are not taken into account when computing the mean.
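A zonal mean averages over all longitudes for each latitude band. A minimal sketch (in Python; np.nanmean corresponds to mean(..., na.rm=TRUE) in R):

```python
import numpy as np

# Field with dimensions (lat, lon); NaN marks missing values.
field = np.array([[0.2, 0.4, np.nan],
                  [0.6, 0.8, 1.0]])

# Zonal mean: one value per latitude band, averaging over longitude
# and ignoring missing values.
zonal = np.nanmean(field, axis=1)
```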

R_REGIONS_code.j

If the calculation of the 14 basis regions (see documentation) is switched on, this script computes the correlation coefficients and the scores by multiplying the matrices in R with region masks provided by the GFED2_regions_0.5x0.5.nc files (Global Fire Emissions Database, 2011). This loop takes some minutes since the masks have to be upscaled as well (cdo remapnn) using the grid information of the model data. The script checks whether the masks have already been upscaled, so that this procedure is only done the first time the script is run. The results are listed in a table in an Ascii file which is converted to .pdf later on. No visualization is provided.
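The masking itself is a simple element-wise multiplication. A sketch (in Python, with a made-up 0/1 mask standing in for an upscaled GFED2 basis-region mask):

```python
import numpy as np

# Illustrative cover field and a 0/1 region mask (1 inside the region).
cover = np.array([[0.2, 0.4], [0.6, 0.8]])
mask  = np.array([[1.0, 0.0], [1.0, 0.0]])

# Restrict the field to the region; points outside become NaN so that
# they are excluded from the subsequent statistics.
regional = np.where(mask == 1.0, cover, np.nan)
regional_mean = np.nanmean(regional)
```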

GRADS_code.j

To get a better impression of the data sets, the land cover and the differences between observed and modeled data are plotted here with GrADS (http://www.iges.org/grads/). The titles of the figures, the colorbars and the color levels can be changed in the script itself. The output can also be changed to .png (default is .pdf) by uncommenting the command 'printim name.png' in the code.

R_score_code.j

This script accesses the results stored by R. This is necessary to average the correlation scores and the root mean square errors for the vegetation types tree and desert over the tropics and extratropics. These means are then weighted by a maximum factor (e.g. 5) and combined into one single overall score for the model by taking the mean of both (equation (1)). Albedo is not yet included in this calculation.

score = 1/2 (5 · mean(r2) + 5 · [1 - mean(rmse)])     (1)
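Equation (1) can be written out directly; a sketch (in Python, with illustrative r2 and rmse values, and assuming the rmse values are normalised so that 1 - mean(rmse) is meaningful):

```python
import numpy as np

def overall_score(r2_vals, rmse_vals, weight=5.0):
    """Equation (1): score = 1/2 (w * mean(r2) + w * [1 - mean(rmse)]).

    `weight` is the maximum factor (5 in the text), so a perfect model
    (mean r2 = 1, mean rmse = 0) reaches a score of exactly `weight`.
    """
    return 0.5 * (weight * np.mean(r2_vals)
                  + weight * (1.0 - np.mean(rmse_vals)))

# Example with illustrative values for tree and desert cover:
score = overall_score([0.8, 0.6], [0.2, 0.1])
```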

More information on this procedure is given by Abramowitz et al. (2008). These values are again printed into a .pdf file using data.frame and textplot.

CORRELATION_ANALYSIS_main.j

Finally, the resulting files from R and GrADS can be combined into one .pdf file. If the naming is not appropriate, please change it in the last command, in the lines marked: *** here the final OUTPUT is named ***. The results are stored automatically in a directory defined at the beginning of the script. Note that all unused files are deleted at the end of every script; keep this in mind if you want to have a look, for example, at the upscaled data sets.