Sunday, January 25, 2015

I was reading a statistics book the other day; I am still at the beginning. In one section I read '(we) report results as odds ratios, which is more intuitive'. I must have read sentences stating this any number of times, but I do not agree.
It may be my background. My mother tongue is Dutch, which does not have a word for odds. Hence odds is a word I only learned when I got into statistics. It is an abstract mathematical construct. The odds ratio, then, is a ratio of two such abstract numbers, and initially it was just a number to me. Since I never did much with odds and odds ratios, they never gained meaning either.
Now I am just wondering: what proportion of people reading these kinds of books has no intuition for odds?
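For reference, the relation is easily computed; a minimal R illustration with made-up probabilities:

```r
# odds are p/(1-p); the odds ratio compares two groups on that scale
p1 <- 0.2                  # probability of an event in group 1
p2 <- 0.1                  # probability of an event in group 2
odds1 <- p1 / (1 - p1)     # 0.25
odds2 <- p2 / (1 - p2)     # 0.111...
odds1 / odds2              # odds ratio: 2.25
p1 / p2                    # ratio of the probabilities themselves: 2
```

To me the ratio of probabilities speaks directly; the odds ratio of 2.25 only after translating back.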

Sunday, January 18, 2015

In this post I will run the SAS example Logistic Regression Random-Effects Model in four R-based solutions: JAGS, STAN, MCMCpack and LaplacesDemon. To quote the SAS manual: 'The data are taken from Crowder (1978). The Seeds data set is a 2 x 2 factorial layout, with two types of seeds, O. aegyptiaca 75 and O. aegyptiaca 73, and two root extracts, bean and cucumber. You observe r, which is the number of germinated seeds, and n, which is the total number of seeds. The independent variables are seed and extract.'
The point of this exercise is to demonstrate a random effect: each observation has a random effect associated with it, while the other parameters have non-informative priors. As such, the models are not complex.
Previous posts in the series 'PROC MCMC examples programmed in R' were: example 1: sampling, example 2: Box Cox transformation, example 5: Poisson regression and example 6: Non-Linear Poisson Regression.

JAGS

To make things easy on myself I wondered whether these data were already present as R data. That is when I discovered a post at Grimwisdom doing exactly what I wanted as a JAGS program. Hence that code was the basis for this JAGS code.
One thing on the models and distributions: JAGS uses tau (1/variance) as hyperparameter for the delta parameters, and tau gets a gamma distribution. In contrast, all other programs use the standard deviation as hyperparameter for the delta parameters, which hence gets the inverse gamma distribution. This distribution is available in the MCMCpack package and also in STAN. Gelman has a publication on this kind of priors: Prior distributions for variance parameters in hierarchical models.
library(R2jags)
data(orob2, package='aod')
seedData <- list(
    N = 21,
    r = orob2$y,
    n = orob2$n,
    x1 = c(1,0)[orob2$seed],
    x2 = c(0,1)[orob2$root])
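For orientation, a sketch of such a model in R2jags notation, close to the classic BUGS seeds example; the parameter names and the exact hyperprior constants here are my own choices, not necessarily those of the SAS manual:

```r
library(R2jags)
# Random-effects logistic regression: each observation i gets its own
# random effect delta[i]; tau is the precision (1/variance) hyperparameter
# with a gamma prior, as discussed above.
seedModel <- function() {
  for (i in 1:N) {
    r[i] ~ dbin(p[i], n[i])
    logit(p[i]) <- alpha0 + alpha1 * x1[i] + alpha2 * x2[i] +
        alpha12 * x1[i] * x2[i] + delta[i]
    delta[i] ~ dnorm(0, tau)
  }
  alpha0  ~ dnorm(0, 1e-6)   # vague priors on the fixed effects
  alpha1  ~ dnorm(0, 1e-6)
  alpha2  ~ dnorm(0, 1e-6)
  alpha12 ~ dnorm(0, 1e-6)
  tau ~ dgamma(0.001, 0.001)
  sigma <- 1 / sqrt(tau)     # standard deviation, for comparison with the
}                            # other programs
# fit <- jags(data = seedData, parameters.to.save = c('alpha0', 'sigma'),
#             model.file = seedModel)
```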

Sunday, January 11, 2015

Eurostat has information on death rates by cause and NUTS 2 region. I am trying to get this displayed visually on the map. To get there I map all causes to three dimensions via a principal components analysis. These three dimensions are subsequently translated into RGB colors and placed on the map of Europe.
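The idea can be sketched in a few lines of R; `deaths` here is a made-up stand-in for the regions-by-causes matrix, not the actual Eurostat extract:

```r
# toy stand-in for the regions x causes death-rate matrix
set.seed(42)
deaths <- matrix(rexp(30 * 8), nrow = 30,
    dimnames = list(paste0('region', 1:30), paste0('cause', 1:8)))
pc <- prcomp(deaths, scale. = TRUE)   # principal components analysis
scores <- pc$x[, 1:3]                 # keep the first three dimensions
# rescale each dimension to [0, 1] so it can serve as a color channel
chan <- apply(scores, 2, function(x) (x - min(x)) / (max(x) - min(x)))
cols <- rgb(chan[, 1], chan[, 2], chan[, 3])  # one color per region
head(cols)
```

Regions with similar death-rate profiles get similar principal-component scores, hence similar colors.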

Data

Death rates are from the Eurostat table causes of death. I took the crude death rate. On that website, from the default setup I removed gender and added causes. Then I transposed causes to rows and regions to columns, so I would not be bothered by translation of invalid column names. I exported those as .xls, but got only part of the regions. My second attempt was an export as .csv. In general I prefer to take .xls from Eurostat, as they separate thousands via a ','; however, an incomplete export was not acceptable, so the .csv is used.
For the mapping, the script used is based on rpubs: Mapping Data from Eurostat using R. However, some changes were needed as Eurostat reorganized their website. Another adaptation is that I removed all of France's overseas parts. These parts gave too much whitespace for my taste.

Result

The resulting map is nice to see; there is certainly a structure. Regions close to each other have similar colors, hence similar death rates. However, interpretation of the colors themselves is difficult. Most important of all, these are comparisons between regions.

To explain the colors I did something similar to what I did for the map itself. In order to avoid crowding the figure, only forty of the causes are displayed.

Sunday, January 4, 2015

I guess most people have heard of Piketty and his book Capital in the Twenty-First Century. I do not have that book, but the media attention has made me wonder if I could see any trends in Dutch public data. As I progressed with this post, I have concluded that these data cannot tell me much about longer-term trends; however, one can see very well that most people have been getting poorer as the crisis continued.

Data

Data are acquired from Statistics Netherlands. They have Statline, the electronic databank of Statistics Netherlands: it enables users to compile their own tables and graphs, and the information can be accessed, printed and downloaded free of charge. There are both a Dutch and an English version of the website. Above I linked to the English part, but I could only find the vermogensklassen table in the Dutch-language section. As far as I understand, estate is a better word than capital for 'vermogen'. It is intended to reflect the net value of all a person's assets.
Since the contents of the tables are also in Dutch, I will be translating. However, this is not so easy, as not everything has an English-language equivalent.
It should also be noted that these tables are not ideal in the 'Piketty context'. For one thing the time range is too short; for another, the detail in the upper capital range is too low. Finally, there are three sets of variables within the tables: income categories (ten categories, low to high), income source and capital itself. There is one table which crosses income categories with income source; this table provides counts, means and quantiles. The other table has capital categories, but only by either income categories or income source. After suitable selection of data ranges I have chosen the .csv versions to export the data. As a final note, the data for 2011 come in two versions, before and after a method change. When using years in a display, the former got year 2010.9 and the latter 2011.1. The data for 2013 are preliminary.
Regarding income source, I translated as follows:
c('1 Working','1.1 Employees',
  '1.2 Civil Servant','1.3 Other working',
  '2 Own Business','3 Welfare',
  '3.1 Income Insurance','3.1.1 Unemployment',
  '3.1.2 Illness','3.1.3 Retirement',
  '3.2 Social','3.2.1 Sustention','3.2.2 Other Social',
  '3.3 Other welfare')
There is a structure: Working has three sub-categories, Welfare has two levels. It should be explained that 3.1 and its sub-categories are paid for by employees while working. Unemployment benefit has a limited duration; illness is, as far as I understand, a combination of short and long term, though the rules have changed a bit and I am not 100% sure. The category 3.2.1 is 'bijstand', the last resort, for which one can only apply if there are no other sources of income.

Extremes in the data

The presence of both a mean and a 75% quantile makes it possible to find subsections of the data where the mean is larger than the 75% quantile. Below is a selection of these data where the quantile is also larger than 100000 and from the years 2007 and 2013, ordered by the ratio mean/75% quantile.

   income          Source             count   year    Mean     p50
1  2e 10%          2 Own Business      35000   2013   152000   15000
2  3e 10%          1.3 Other working    4000   2013   203000   11000
3  4e 10%          1.3 Other working    5000   2013   239000   23000
4  1e 10% (low)    2 Own Business      92000   2013   281000   25000
5  10e 10% (high)  2 Own Business     221000   2013   816000  283000
6  10e 10% (high)  2 Own Business     205000   2007   891000  380000
7  1e 10% (low)    3.1.3 Retirement    83000   2013   207000   24000
8  2e 10%          1.3 Other working    4000   2013   257000    6000
9  1e 10% (low)    1.3 Other working   11000   2013   474000    6000

We can observe that these extremes mostly occur in 2013. There is no data with a mean lower than the 25% quantile; however, we can select all data with the 25% quantile lower than 0. These are mostly working people from the 6th to 8th income categories in 2013. I would guess these people have a house which lost value while the mortgage did not. It should be noted that mid 2014 house prices did increase again, but since the data reflect the situation on the 1st of January, it will be some years before that is reflected in these tables. On the radio today I heard that currently one third of the houses are worth less than the mortgage on them.
    income   Source             count   year    Mean     p25     p50
1   3e 10%   1.2 Civil Servant   22000  2013.0   20000  -13000    2000
2   7e 10%   1.1 Employees      403000  2013.0   55000  -18000    9000
3   7e 10%   1 Working          484000  2013.0   63000  -18000   10000
4   7e 10%   1.2 Civil Servant   67000  2013.0   67000  -19000   18000
5   6e 10%   1.1 Employees      361000  2013.0   46000  -11000    5000
6   8e 10%   1.2 Civil Servant   88000  2013.0   82000  -16000   32000
7   6e 10%   1 Working          425000  2013.0   54000   -9000    6000
8   8e 10%   1.1 Employees      413000  2013.0   76000  -12000   24000
9   8e 10%   1 Working          520000  2013.0   85000  -13000   27000
10  7e 10%   1.2 Civil Servant   64000  2010.9   82000   -3000   30000
11  7e 10%   1.3 Other working   13000  2013.0  289000   -6000   89000
12  8e 10%   1.3 Other working   19000  2013.0  294000   -2000  114000
13  4e 10%   1.3 Other working    5000  2013.0  239000   -1000   23000
From these data at least, it seems that 2013 has quite a few more extremes than 2007, both at the high and at the low end.
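The two selections can be sketched with dplyr; `d` stands for the assembled data frame, with my own column names, and the rows below are a made-up subset of the real data:

```r
library(dplyr)
# toy stand-in: one row per income category x source x year
d <- data.frame(
  income = c('2e 10%', '7e 10%', '10e 10% (high)'),
  source = c('2 Own Business', '1.1 Employees', '2 Own Business'),
  year   = c(2013, 2013, 2007),
  Mean   = c(152000, 55000, 891000),
  p25    = c(2000, -18000, 150000),
  p75    = c(15000, 90000, 380000))
# first selection: mean above the 75% quantile, quantile above 100000,
# ordered by the ratio mean / 75% quantile
high <- d %>%
  filter(Mean > p75, p75 > 100000, year %in% c(2007, 2013)) %>%
  arrange(desc(Mean / p75))
# second selection: first quartile below zero
negq <- d %>% filter(p25 < 0)
```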

Plots of trends

Using these same data it is also possible to make plots. I have chosen only the lowest and highest income categories; otherwise the sub-plots became too small.
In the plot the dots are the means, the lines the medians and the bars the 25% and 75% quantiles.
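Such a plot can be sketched in ggplot2; the numbers below are invented just to show the mapping of dots, lines and bars:

```r
library(ggplot2)
# invented trend data: per year a mean, a median and the two quartiles
pd <- data.frame(year = 2007:2013)
pd$p50  <- seq(30000, 24000, length.out = 7)   # median, slowly declining
pd$Mean <- pd$p50 * 1.8                        # mean above the median
pd$p25  <- pd$p50 - 15000
pd$p75  <- pd$p50 + 25000
p <- ggplot(pd, aes(x = year)) +
  geom_errorbar(aes(ymin = p25, ymax = p75)) +  # bars: 25% and 75% quantiles
  geom_line(aes(y = p50)) +                     # line: median
  geom_point(aes(y = Mean)) +                   # dots: mean
  ylab('estate (Euro)')
```

In the real plot this would be faceted by income category and source.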

In case anybody wonders why this welfare category is so rich: that is mostly the retired. I suppose these people have worked hard all their lives, got a good retirement fund and bought a house of which the mortgage is paid off. I have no explanation for the low-income business owners who still seem to be worth 250000 Euros on average; their mean estate is more than that of the business owners in the next income segment. They did not lose much value either, in contrast to the 9th income group of working people, who have a lower mean value and are losing value.

Plots of estate categories

There are some difficulties in making these displays. The categories have wildly different widths, the lowest and highest categories are unbounded, and the 0 to 5000 Euro category is by far the most populous. The net effect is a figure with a high peak and long tails where the density is unclear and all detail is cramped together. Hence in the end I dropped histograms. The figure below shows trends for income categories while selecting the lowest estate categories. It is clear that especially the number of people with a clearly negative estate is growing. People are not entering the small negative category either; debtors in general owe more than 10000 Euro.

In the highest estate categories again most categories are emptying. The one exception is the 1 million Euro plus category: the number of Euro millionaires increased in 2013. In this latter category the lowest income group is somehow relatively often present.

Categories by income source

Again in the lowest category we see a big increase in people with debt. The trend seems to be that the number of working people with debt increases quickly.

For the highest categories we see the corresponding effect. All categories go down, except the richest people. I have chosen to add the retirement category to this plot, so it is visible that in these categories it is mostly the retired who make up the welfare category.

Status in 2013

The final plot shows all estate categories for the three income sources. In this plot we see that for business owners the largest category is the debtors. For the working people the most frequent categories are 0 to 5000 Euro and more than 10 000 Euro of debt. We can also see a split: beside the debtors there are the haves, with more than 100 000 Euros, and the have-a-little-bits, with less than 30 000 Euro.

Part 2

This is the code using the estate categories. It starts with a lot of data reorganisation, because the data sit in columns for income categories and income sources, both of these having amounts and counts. The code starts with a section for the income categories (low to high), after which a similar section for the income sources follows.
library(ggplot2)
library(dplyr)
cvk <- readLines('Vermogensklassen__hu_241214141933.csv')
cvk.head <- as.matrix(read.csv2(text=cvk[2:5],header=FALSE))
cvk.body <- read.csv2(text=cvk[5:(length(cvk)-1)])
keepcol <- c(colnames(cvk.head)[cvk.head[3,]=='Aantal' & grepl('groep',cvk.head[1,])])
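The reorganisation pattern, stacking blocks of count columns into long format, can be sketched with made-up column names following the Dutch headers ('Aantal' is count):

```r
# toy stand-in: a wide table with one count column per income category
wide <- data.frame(year = c(2011, 2012, 2013),
    laag.Aantal = c(100, 120, 150),   # counts, lowest income category
    hoog.Aantal = c(80, 70, 60))      # counts, highest income category
keep <- grep('Aantal', names(wide), value = TRUE)
# stack the count columns into long format: one row per year x category
long <- reshape(wide, direction = 'long', varying = keep,
    v.names = 'Aantal', times = keep, timevar = 'categorie')
```

The real code does this twice, once for the income categories and once for the income sources.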

Wiekvoet

Wiekvoet is about R, JAGS, STAN, and any data I have an interest in. Topics include sensometrics, statistics, chemometrics and biostatistics. For comments or suggestions please email me at wiekvoet at xs4all dot nl.