All Things Data Science

Thursday, December 1, 2016

Those of you who have already attended a meetup of the Brussels Data Science Community know that, besides the excellent talks, those meetups are fun because of the traditional drinks afterwards. So after the last meetup we were on our way to a bar on the campus of the University of Brussels and I had this chat with @KrisPeeters from Dataminded. Now if you are expecting wild stories about beer and loose women (or loose men, for that matter), I'm afraid I'll have to disappoint you. Instead we discussed ... sampling. Kris was questioning whether the typical sample sizes market research companies work with (say in the hundreds, or a few thousand at most) still matter these days, given that we have other sources that give us much larger quantities of data. I told him everything depends on the (business) question the client has.

To start with, we can look at history to answer this question. In 1936 the Literary Digest poll had a sample size in the millions. But, obviously, that sample wasn't representative because it consisted only of its own readers. The poll predicted that Republican Alf Landon would beat Democrat Franklin D. Roosevelt. Roosevelt won in one of the largest landslides ever.

A more recent example is a study that claimed that the Dutch are the best non-native English speakers. This was debunked in http://peilingpraktijken.nl/weblog/2016/11/beheersen-nederlanders-de-engelse-taal-echt-het-best/ (in Dutch). Even though the sample size was 950,000 (in 72 countries), statistician Jelke Bethlehem, a Dutch national himself, concluded that the sample was not representative and did not allow the researchers to draw the conclusions they had claimed.

Of course samples can be, and often are, biased as well. But there is a difference: samples are constructed specifically with a research question in mind and are often designed to be unbiased. Big data and other such sources are often created for reasons other than research questions. As a consequence, big data might have some disadvantages that are not offset by its bigger size.

Take this hypothetical example. Say you have a population of N=10,000,000 individuals and you want to estimate the proportion of people that watched a certain TV show. Say that you have an unbiased sample of size $n=1,000$ and that you find that 100 of them watched the show. So, with 95% confidence, you would estimate p=0.10 with a margin of error of $z_{\alpha / 2} \times \sqrt{{pq\over n}}= 1.96 \times \sqrt{{0.1 \times 0.9 \over 1,000}}= 0.01859$, which amounts to a confidence interval in absolute figures from 814,058 to 1,185,942.

Suppose your friend has an alternative data source covering $N'=6,000,000$ individuals, so for those you know exactly whether they watched or not, with no sampling error at all and hence no confidence interval (unless you are a Bayesian, but that's another story). Now you know the exact number of people who watched among the 6,000,000; for simplicity's sake assume this is 600,000. To be fair, you know nothing about the remaining $N''=4,000,000$, but you could assume that, since your subpopulation is so big, the remainder will be close to what you already have. This effectively means that you treat the alternative data source as a very large sample of size $n'=6,000,000$. In that case the sample fraction is ${n' \over N}={6,000,000\over 10,000,000}=0.6$, which is pretty high, so you get an additional bonus from the finite population correction, yielding a confidence interval between $p_-=p-z_{\alpha / 2} \times \sqrt{{pq\over n'}} \times \sqrt{{N-n'\over N-1}}=0.09985$ and $p_+=p+z_{\alpha / 2} \times \sqrt{{pq\over n'}} \times \sqrt{{N-n'\over N-1}}=0.10015$. In absolute figures we end up with a confidence interval from 998,482 to 1,001,518, which is considerably more precise than the 814,058 to 1,185,942 we had in the case of $n=1,000$.

Of course, the crucial assumption is that we have considered the $n'=6,000,000$ to be representative for the whole population, which will seldom be the case. Indeed, it is very difficult to set up an unbiased sample, so it is not realistic to hope that an unbiased sample would pop up accidentally. As argued above, big data sources are often created for reasons other than research questions, and hence we cannot simply assume they are unbiased.
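For those who want to check the arithmetic, here is a minimal R sketch of the two intervals above (nothing more than the usual normal approximation, with z fixed at 1.96 as in the formulas):

```r
N <- 10e6   # population size
p <- 0.10   # estimated proportion of watchers
z <- 1.96   # 95% confidence

# Classic unbiased sample of n = 1,000
n  <- 1000
me <- z * sqrt(p * (1 - p) / n)
round(N * c(p - me, p + me))                     # 814,058 to 1,185,942

# Alternative data source treated as a sample of n' = 6,000,000
n_alt  <- 6e6
fpc    <- sqrt((N - n_alt) / (N - 1))            # finite population correction
me_alt <- z * sqrt(p * (1 - p) / n_alt) * fpc
round(N * c(p - me_alt, p + me_alt))             # 998,482 to 1,001,518
```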

The question now becomes: at what point is the bias offset by the increased precision? In this case bias would mean that individuals in our alternative data source are more or less likely to watch the television show of interest than is the case in the overall population. Let's call the proportion of people from the alternative data source who watched the television show $p'$. Likewise, we will call the proportion of the remaining individuals in the population, those not in the alternative data source, who watched the television show $p''$. We can then define the level of bias in our alternative data source as $p'-p$. Since the number of remaining individuals in the population that are not in the alternative data source is $N''=N-N'$, we know that
$$Np=N'p'+N''p'', $$
which is a rather convoluted way of saying that if your alternative data source has a bias, the remaining part will be biased as well (but in the other direction).
Let's consider different values of $p'$ going from 0.05 to 0.15, which, with $N'=6,000,000$ and $N''=4,000,000$, corresponds with $p''$ going from 0.175 to 0.025 and with levels of bias going from -0.05 to 0.05. We can then calculate confidence bounds like we did above. In figure 1 the confidence bounds for the alternative data source (in black) are hardly noticeable. We've also plotted the confidence bounds for the sample case of $n=1,000$, assuming no bias (in blue). That confidence interval is obviously much larger. But we also see that as soon as the absolute value of the bias in the alternative data source is larger than about 0.02, the unbiased sample is actually better. (Note that I'm aware that I have interpreted the notions of samples, confidence intervals and bias rather loosely, but I'm just trying to make the point that more is not always better.)
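The figure itself is not reproduced here, but the calculation behind it can be sketched in a few lines of R (a reconstruction of the idea, not the original plotting code):

```r
N <- 10e6; N_alt <- 6e6; N_rest <- N - N_alt
n <- 1000; p <- 0.10; z <- 1.96

p_alt  <- seq(0.05, 0.15, by = 0.005)       # proportion of watchers in the alternative source
p_rest <- (N * p - N_alt * p_alt) / N_rest  # implied proportion in the remainder: 0.175 down to 0.025
bias   <- p_alt - p

# Confidence bounds around the (possibly biased) estimate from the alternative source
fpc    <- sqrt((N - N_alt) / (N - 1))
me_alt <- z * sqrt(p_alt * (1 - p_alt) / N_alt) * fpc
# Confidence bounds for the small unbiased sample
me_smp <- z * sqrt(p * (1 - p) / n)

plot(bias, p_alt + me_alt, type = "l", ylim = c(0.04, 0.16),
     xlab = "bias p' - p", ylab = "estimated proportion")
lines(bias, p_alt - me_alt)
abline(h = p + c(-me_smp, me_smp), col = "blue")  # unbiased sample, n = 1,000
abline(h = p, lty = 2)                            # true proportion
```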

As said before, samples can be and are biased as well, but they are generally designed to be unbiased, while this is seldom the case for other (big) data sources. The crucial thing to realize here is that bias is (to a very large extent) not a function of (sample) size. Indeed, by virtue of the equation above, as the fraction covered by the alternative data source gets close to 1, bias is less likely to occur, even if the source was not designed for unbiasedness. This is further illustrated in figure 2. For a few possible values of $p$ (0.10, 0.25, 0.50 and 0.75) we have calculated what bias the complement of the alternative data source would have to show, as a function of the fraction that the alternative data source represents in the total population (i.e. the sample fraction $N'/N$) and of the bias $p'-p$. The point here is that the range of possible bias is very wide; only for sample fractions above 0.80 does the sheer relative size of the subpopulation start to limit the possible biases one can encounter, and even then biases can range from -0.1 to 0.1 in the best of cases. Notice that this is even wider than in the example we looked at in figure 1.
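The underlying calculation is again simple: for a given true proportion p and sample fraction N'/N, the equation above, together with the fact that p'' must lie between 0 and 1, pins down which biases p'-p are possible at all. A short R sketch (a reconstruction of the idea behind figure 2, not the original code):

```r
# Range of biases p' - p that are possible for a given p and sample fraction N'/N,
# given that the complement's proportion p'' must lie between 0 and 1
bias_range <- function(p, frac) {
  lo <- max(0, (p - (1 - frac)) / frac)  # smallest possible p'
  hi <- min(1, p / frac)                 # largest possible p'
  c(lower = lo - p, upper = hi - p)
}

fracs <- seq(0.1, 0.9, by = 0.1)
round(sapply(fracs, bias_range, p = 0.10), 3)
# Only for very high sample fractions does the possible bias get squeezed towards zero
```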

For most practical cases in market research the fraction covered by the alternative data source(s) can be high, but it will seldom be as high as 0.80. In other words, for all practical purposes (in market research) we can safely say that the potential bias $p'-p$ of alternative data sources is not a function of size, but rather of design and execution. I believe it is fair to assume that well designed samples combined with good execution will generally lead to lower biases than alternative data sources where unbiasedness is not something anyone cared about.

Some concluding remarks.

I focused on bias, but with regard to precision the situation is reversed: alternative (big) data sources will generally be much larger than the usual survey sample sizes, leading to much smaller confidence intervals such as those in figure 1. The point of course remains that it does not help you much to have a very tight (i.e. precise) confidence interval around a biased estimate. And sampling error is just one part of the story; measurement error is very often much more of an issue than sampling error.

Notice, by the way, that enriching the part of the population that is not covered by the alternative data source with a sample does not work in practice because, in all likelihood, the cost of enriching is about the same as the cost of covering the whole population with a sample. This has to do with the fact that, except for very high sample fractions, precision is not a function of the population size $N$ (or, in this case, $N''$).

Does that mean that there is no value in those alternative (big) data sources? No. The biggest advantage I see is in granularity and in measurement error. Big Data datasets are typically generated by devices, and thus have less measurement error, and because of their size they allow for a much more granular analysis. My conclusion is that if your client cares less about representativity and is more interested in granularity, then, very often, larger data sources can be more meaningful than classical (small) samples. But even then you need to be careful when you generalize your findings to the broader population.

Sunday, February 22, 2015

This is a write-up of the talk I gave at the 'Insight Innovation eXchange Europe 2015' conference on 18-02-2015 in Amsterdam. IIeX is a conference focused on innovation in Market Research.

My talk was a rather general one in which I tried to sketch the relationship between market research and big data. After a brief introduction, I started by explaining how computing played an important role in Market Research right after the Second World War. Then I gave an overview of the current state, and finally I looked at what the future might bring us when it comes to Big Data applications in Market Research.

When I talk to people in market research and tell them that I work in Big Data, I have the impression that I'm greeted with less enthusiasm than was the case a few years ago. Indeed, it appears that the initial enthusiasm for Big Data in the Market Research community has dwindled a bit.

I like to describe the relationship between market research and big data with the three phases of a narcissistic relationship (see The Three Phases of A Narcissistic Relationship Cycle: Over-Evaluation, Devaluation, Discard by Savannah Grey). A narcissist will choose a victim who is attractive, popular, rich or gifted. They will then place the target on a pedestal and worship them. The target is seen as the greatest thing ever. Here the narcissist is ecstatic, full of hopes and dreams. They will talk and think about the target constantly; they are euphoric. Now I'm not going to say that market research people were ecstatic and full of dreams when it came to big data, but you will have to admit that the initial enthusiasm for big data was especially high amongst market researchers.

But the narcissist is easily bored. The attention they gave to their target is gone and is replaced by indifference. This is the devaluation phase. The narcissist becomes moody, easily agitated, and starts to blame and criticize the target. In the market research world, after a while, we saw a growing number of papers that were quite critical of Big Data. Big Data was often blamed for things we are not so good at ourselves (bad sampling, self-selection, dodgy causality).

Finally, in the discard phase, the narcissist pulls away and starts to devote attention to the next victim, such as neuromarketing, the Internet of Things, and what have you.

Now of course I realize that this story is purely anecdotal and has no scientific value. All I want to do here is to illustrate the tendency of Market Research to cherry-pick innovations from other domains, apply them as novelties in Market Research, and then move on to the next darling.

The 'old' days

Now let me show you an example of true innovation in Market Research, albeit from a long time ago. For that, I need to take you to the streets of Chicago in the 1940's, where a young man was thinking about how he could help his father's business become more efficient. His father, Arthur Nielsen Senior, had devised a methodology where he would sample stores in the U.S. and send out people to those stores to measure the stock levels of the products and look at the purchase invoices. With a simple subtraction rule and a projection to the population he could reasonably estimate sales figures. Back in the forties there were no computers in private companies yet; they were just starting to emerge in the army and in some government administrations. In those days it was not unusual to see a team of human calculators do the number crunching.

I can't read the mind of the son, Arthur Nielsen Junior, but I can imagine that he must have said to himself while looking at his dad's calculation team:

Hmm, Volume seems to be high here. And so is the Velocity.

Indeed, in those days they were doing this every two months. This is slow by today's standards, but it was fast in the 1940's. I can only speculate, but I like to think that he also added:

Hmm, luckily we're doing OK on Variety and Veracity. Otherwise we would have to talk about the 4 V's of Human Calculators.

Back on a more serious note: Arthur Junior was in the army during the war, and there he had seen that the army deployed computers to crack the encrypted messages of the Germans. He convinced his dad's company to invest a large amount of money in these new machines. Not many people outside of market research know this, but it was a market research company that was the first private company to ever order a computer. OK, I must admit the first order was in fact a tie with Prudential, and that first order might not have led to the first deployment of a computer in a private company (I believe the order got postponed at some point), but the important point here is the vision that these new machines would be useful in market research.

Let me give you a second example. PL/1 stands for Programming Language 1 and is, as the name indicates, one of the first programming languages. It was introduced in the sixties. The first versions of SAS were written in PL/1 and its DATA step has a bit of a PL/1 flavour to it. One of my current clients in the financial sector still runs PL/1 in production, so it's still around today. Well, Nielsen UK was the sixth company in that country to adopt this new language. Market researchers in those days were true pioneers. We tend to forget that a little bit.

Big Data Analytics in Market Research Today

According to the GreenBook GRIT report, Market Research is doing quite well in Big Data.

More than 35%, of both clients and suppliers, have used Big Data Analytics in the past. But notice that this includes those that have done a one-off experiment. Secondly, the ambiguous definition of Big Data might have played a role as well. If we look at those that are considering it, we see that the percentage for clients is a bit higher than for suppliers.

What about evolution?

Let's compare the second half of 2013 with the first half of 2014. In terms of using Big Data Analytics we see a very small increase, and in terms of considering it there is no increase at all. We seem to have plateaued here, albeit at a high level.

In terms of papers and articles, this list is more anecdotal than representative, but titles such as 'The promise and the peril of Big Data' illustrate the mixed feelings we seem to have.

In other words, market research seems to be bipolar when it comes to Big Data. We want to be part of the game, but we're not really sure.

My advice to suppliers of market research

Don’t look at Big Data as just a fad or hype. By treating it as a fad we will miss an opportunity (and revenue) to answer questions our clients have. The hype will go, but the data will not go away!

Don’t look at Big Data as a threat to Market Research. It's not. Very often we already have a foot in the door; very often we are seen as the folks who know how to deal with data. If we decline, other players will move in. Yes, in some sectors we might have lost some ground, especially to consultancy firms, Business Intelligence folks and companies with a strong IT background.

But embrace it as a new (business) reality and learn how to process large amounts of structured and unstructured data.

The latter, learning how to process large amounts of data, is not difficult, and it doesn't have to be expensive. You can already do a lot with R on a reasonably priced system, and parallelize if need be, if you want to stay away from the typical Big Data platforms such as Hadoop.
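To make that concrete, here is a minimal sketch using nothing but base R's parallel package to spread a bootstrap over the available cores; the toy data frame and its column are made up for the example:

```r
library(parallel)

set.seed(42)
survey <- data.frame(watched = rbinom(1e5, 1, 0.10))  # toy survey data

# One bootstrap replicate: resample the respondents and recompute the proportion
boot_once <- function(i, d) mean(d$watched[sample(nrow(d), replace = TRUE)])

cl <- makeCluster(max(1, detectCores() - 1))
boots <- parSapply(cl, 1:5000, boot_once, d = survey)
stopCluster(cl)

quantile(boots, c(0.025, 0.975))   # bootstrap confidence interval for the proportion
```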

Distributed storage and processing

But in fact we should not shy away from those new platforms. Again, it's relatively easy and it's, in principle, cheap. Any reasonably sized market research company with a few quants should at least consider it.

Hadoop takes care of distributed storage and distributed processing on clusters of commodity hardware. The storage part is called HDFS; the processing part is based on MapReduce. I'm sure a lot of you have heard about MapReduce, but for those of you who have not, let me give a quick recap. MapReduce is a strategy to run algorithms in parallel on a cluster of commodity hardware.

Let's take the example of making hamburgers. I got the following figure from Karim Douïeb.

Imagine you have plenty of ingredients and a lot of kitchen personnel. What you don't have is a lot of time. People are waiting for their burgers. Furthermore, you have a rather small kitchen; say you only have one place to pan-fry the burgers. One approach would be to assign one person in the kitchen per order and let them individually slice the ingredients, fry the meat for their burger, assemble it and serve it. This would work, but you would quickly get a queue at the frying pan, and your throughput of burgers would suffer.

An alternative approach is to assign one person in the kitchen per ingredient and have them slice or fry it. One or more other people then pick up the required number of slices per ingredient, assemble the burgers and serve them. This approach substantially increases the throughput of hamburgers, at the cost of a bit more coordination. The folks who do the slicing and frying are called the Mappers; the people who assemble and serve the burgers are called the Reducers. In Hadoop, think about data rather than ingredients, and about processors (CPUs) rather than kitchen personnel.
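The idea is easy to mimic in a few lines of plain R (the orders and ingredients are obviously made up): the map step emits (ingredient, count) pairs per order, the shuffle step groups them by ingredient, and the reduce step sums them.

```r
orders <- list(
  c(bun = 2, patty = 1, tomato = 2),
  c(bun = 2, patty = 2, pickle = 3),
  c(bun = 2, patty = 1, pickle = 1, tomato = 1)
)

# Map: each order emits (ingredient, count) pairs
mapped <- unlist(lapply(orders, function(order) order))

# Shuffle: group the emitted values by key (ingredient)
grouped <- split(unname(mapped), names(mapped))

# Reduce: aggregate the values per key
sapply(grouped, sum)   # what each 'mapper' in the kitchen has to prepare in total
```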

The trick is thus to try and express your algorithm in this map and reduce framework. This may require programming skills and detailed knowledge of the algorithms that might not be available in your quant shop. Luckily there are tools that shield the Map and Reduce steps from you. For instance, you can easily access data on HDFS with SQL (Impala, Hive, ...). If you have folks in your team who can program in, say, SAS, where they might already use PROC SQL today, they will have no problem with Impala or Hive.

Another approach is to use R to let your quants access the cluster. It works relatively well, although it needs some tweaking.
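As an illustration, this is roughly what the canonical first example looks like with the RHadoop rmr2 package (I'm assuming rmr2 here; the exact calls depend on the versions installed on your cluster):

```r
library(rmr2)   # RHadoop's MapReduce interface for R

# Write a toy vector to HDFS, then count how many values fall into each of ten buckets
small_ints <- to.dfs(1:1000)
result <- mapreduce(
  input  = small_ints,
  map    = function(k, v) keyval(v %% 10, 1),   # emit (bucket, 1) pairs
  reduce = function(k, vv) keyval(k, sum(vv))   # sum the ones per bucket
)
from.dfs(result)
```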

The new kid on the block is Spark. Spark does not require you to write barebones MapReduce jobs anymore. It is sometimes hailed as the successor to Hadoop, although they often co-exist in the same environment. Central to Spark is the Resilient Distributed Dataset (RDD), which allows you to work in memory much more than before; it abstracts away some of the Map/Reduce steps and it generally fits better with traditional programming styles. Spark allows you to write in Java, Scala or Python (and soon R as well). With Spark SQL it has an SQL-like interface, Spark Streaming allows you to work in (near) real time rather than in batch, there is a machine learning library, and there are lots of other goodies.
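To give a flavour of where this is heading for R users, here is a minimal sketch with the sparklyr package; the tv_panel data frame and its columns are hypothetical, and you obviously need a Spark installation to connect to:

```r
library(sparklyr)
library(dplyr)

sc <- spark_connect(master = "local")          # or the URL of your cluster
tv_tbl <- copy_to(sc, tv_panel, "tv_panel")    # tv_panel is a hypothetical R data frame

tv_tbl %>%
  group_by(region) %>%                         # 'region' and 'watched' are made-up columns
  summarise(share_watched = mean(watched)) %>%
  collect()

spark_disconnect(sc)
```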

Tools such as Hive, R and Spark put distributed processing within reach of market researchers.

Trends

There are a few trends in the Big Data and Data Science world that can be of interest to market researchers:

Visualization. There is a lot of interest in the Big Data and Data Science world in everything that has to do with visualization. I'll admit that sometimes it is Visualize to Impress rather than Visualize to Inform, but when it comes to informing clearly, communicating in a simple and understandable way, storytelling, and so on, we market researchers have a head start.

Natural Language Processing. One of the 4 V's of Big Data stands for Variety. Very often this refers to unstructured data, which often means free text. Big Data and Data Science folks, for instance, are starting to analyze the text that is entered in the free fields of production systems. This problem is not dissimilar to what we do when we analyse open-ended questions, so again market research has an opportunity to play a role here. By the way, it goes beyond sentiment analysis: techniques that I've seen successfully used in the Big Data / Data Science world are topic generation and document classification. Think about analysing customer complaints, for instance.
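To make the topic generation point a bit more tangible, here is a minimal sketch with the tm and topicmodels packages; the complaint texts are made up, and with real data you would want far more documents:

```r
library(tm)
library(topicmodels)

complaints <- c("delivery was late and the box was damaged",
                "late delivery and no response from the helpdesk",
                "the helpdesk never answered my mails",
                "the product was damaged on arrival")

corpus <- VCorpus(VectorSource(complaints))
corpus <- tm_map(corpus, content_transformer(tolower))
corpus <- tm_map(corpus, removeWords, stopwords("english"))
dtm    <- DocumentTermMatrix(corpus)

lda <- LDA(dtm, k = 2, control = list(seed = 1))  # two topics, purely for illustration
terms(lda, 3)                                     # top three terms per topic
```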

Deep Learning. Deep learning risks becoming the next fad, largely because of the word 'deep'. But deep here does not refer to profound; it refers to the fact that you have multiple hidden layers in a neural network. And a neural network is basically a logistic regression (OK, I simplify a bit here). So absolutely no magic here, but absolutely great results. Deep learning is a machine learning technique that tries to model high-level abstractions by learning representations of the data: the data is transformed into a representation that is easier to use with other machine learning techniques. A typical example is a picture that consists of pixels. These pixels can be represented by more abstract elements such as edges and shapes. These edges and shapes can in turn be represented by simple objects, and so on. In the end this leads to systems that can describe pictures reasonably well in broad terms, which is useful for practical purposes, especially when processing by humans is not an option. How can this be applied in Market Research? Already today (shallow) neural networks are used in Market Research. One research company I know uses neural networks to classify products sold in stores into broad buckets such as petfood, clothing, and so on, based on the free-field descriptions that come with the barcode data that the stores deliver.
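To illustrate that last example, here is a toy sketch of the idea; the product descriptions and buckets are made up, and the nnet package's single-hidden-layer network merely stands in for whatever that company actually uses:

```r
library(tm)
library(nnet)

descr  <- c("dog food 2kg", "cat food tins", "puppy snacks",
            "mens t-shirt blue", "ladies jeans", "kids socks 3 pack")
bucket <- factor(c("petfood", "petfood", "petfood",
                   "clothing", "clothing", "clothing"))

# Bag-of-words representation of the free-field descriptions
dtm <- as.matrix(DocumentTermMatrix(VCorpus(VectorSource(descr))))

# A shallow neural network mapping word counts to buckets
set.seed(1)
fit <- nnet(x = dtm, y = class.ind(bucket), size = 2, softmax = TRUE, trace = FALSE)
predict(fit, dtm, type = "class")   # predicted buckets for the (training) items
```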

Conclusion

My advice to the market research world is to stop conceptualizing so much when it comes to Big Data and Data Science and simply apply the new techniques where appropriate.

Sunday, June 22, 2014

As a data scientist I'm always happy when a newspaper spends time explaining something from the field of statistics. The Guardian is one of those newspapers that does a very good job at that. @alexbellos often contributes to the Guardian and I must say I often like the stuff he writes. Just recently he wrote a piece entitled "World Cup birthday paradox: footballers born on the same day", which was picked up by the Belgian quality newspaper De Standaard. The headline there was "Verbazend veel WK-voetballers zijn samen jarig", which roughly translates to "Surprisingly many World Cup players share birthdays". Notice already that the headline in De Standaard is less subtle than the one in The Guardian.

Alex Bellos starts by explaining what the birthday paradox is:

The birthday paradox is the surprising mathematical result that you only need 23 people in order for it to be more likely than not that two of them share the same birthday.

He then refers to the internet for explanations of why this is in fact the case (see, for instance, here). He then, rightfully, remarks that the football World Cup offers an interesting dataset to verify the birthday paradox. Indeed, the 32 nations that participate have 23 players each. We would therefore expect about half of the teams to have shared birthdays. It turns out that 19 of the teams have shared birthdays. So far so good.
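The 50.7% for a single team of 23 is easy to verify in R (ignoring 29 February and assuming uniform birthdays):

```r
# Probability that at least two out of 23 players share a birthday
1 - prod((365:343) / 365)   # 0.5073
```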

The problem I have with the article is in the subsequent part. But before we come to that, let's have a look at the summary at the beginning of the article:

An analysis of the birth dates of all 736 footballers at the World Cup reveals that a surprisingly large number of teammates share the same birthday, and that seven were born on Valentine's Day

The observation about Valentine's Day is an interesting one because it plays on the same distinction, between sharing some birthday and sharing one particular, named day, that makes the birthday paradox surprising for some. From that perspective it would have been interesting to mention what the probability is that in a group of 736 we would see 7 or more people who share the same birthday. In defence of the author, I must admit that it is surprisingly hard to find references to this extension of the birthday problem (but see here, here and here). I understand a closed-form solution for triplets was published by Anirban DasGupta in the Journal of Statistical Planning and Inference in 2005. On the web I only found one solution for the general problem, but I could only get it to work for the trivial case of 2 and the more complicated case of 3. For 7 it gave very strange results, so either the formula was wrong or, more likely, my implementation of it was wrong. I then used the poor man's mathematics: simulation.

In a first simulation I randomly selected 736 birthdays from a uniform distribution. I then counted how many players didn't share a birthday with any of the other players, how many pairs of players shared a birthday, how many triplets, and so on. This is a barplot of the results I got:
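In R, one such run boils down to something like this:

```r
# One simulation run: 736 uniform birthdays, tallied by how many players share each day
set.seed(736)
birthdays <- sample(1:365, 736, replace = TRUE)
table(table(birthdays))   # how many birthdays are shared by exactly 1, 2, 3, ... players
```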

As you can see, a birthday shared by 7 players was present here as well. Granted, it was not Valentine's Day, but it is nonetheless a birthday shared by 7 players. Notice, by the way, that there are far more players who share a birthday with exactly one other player than players who don't share a birthday at all (2 times about 110 versus about 100).

I then repeated that process 10,000 times and each time verified whether there were birthdays that occurred 7 or more times. This allowed me to estimate the probability that, in a selection of 736 players, at least one birthday is shared by 7 or more players at around 83%. It is therefore not remarkable at all that we found such a birthday at the World Cup in Brazil as well.
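In R, the whole exercise takes only a couple of lines (a sketch of the approach rather than the exact code):

```r
# Probability that at least one birthday is shared by 7 or more of the 736 players
hits <- replicate(10000, max(tabulate(sample(1:365, 736, replace = TRUE), nbins = 365)) >= 7)
mean(hits)   # should come out around the 83% quoted above
```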

The second issue I have with this article is the part where the question is asked why we observed 59.4% (19 out of 32) instead of the expected 50.7% (the theoretical probability for a group of 23). Although the author mentions the possibility that this is due to chance, he doubts it and instead offers an alternative explanation based on the observation that football players are more likely to have their birthdays at the beginning of the year than at the end. The reason for this skewed distribution has to do with the school cut-off date (very often the first of January), the height of children in school, and dominance in sports.

I don't question this theory; it's not my area of expertise. Furthermore, I believe that the skewed distribution amongst sportsmen has been observed before. What surprises me, though, is that an article in which the birthday paradox plays an important role does not use probability theory and statistics more to put these observations in perspective. In this case the natural question to ask is: if, in a team of 23 players, the probability of having a shared birthday is 0.507 and we have 32 teams, what is the probability of finding 19 or more teams with a shared birthday? This can easily be calculated with the binomial distribution and results in 0.21, again not unlikely at all. That said, Alex Bellos does not exclude that it's all down to chance, he simply doubts it, which is fair.
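In R that calculation is a one-liner:

```r
# Probability of 19 or more teams (out of 32) with a shared birthday,
# when each team independently has probability 0.507 of containing one
1 - pbinom(18, size = 32, prob = 0.507)   # roughly 0.21
```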

As said earlier, I don't question the theory of the skewed distribution for sportsmen, so I will not calculate the probability of observing this World Cup-specific distribution under the hypothesis of a uniform distribution. But I do think that the author should also have looked at the probability of having players with shared birthdays under a "footballer"-specific distribution rather than the uniform distribution. I don't have such a distribution, or a more general "sportsman"-specific distribution, available (although I'm sure it must exist, because the skewed distribution of birthdays of sportsmen is well documented), so here I will simply use the monthly counts that Alex mentioned in his article, i.e. January 72, February 79, March 64, April 63, May 73, June 61, July 54, August 57, September 65, October 52, November 46 and December 47. I transformed those into daily probabilities and then assumed they are generally valid for the population of "World Cup attending football players". The plot below shows the two distributions considered.

Furthermore, if we can't rely on the uniform distribution, the calculations for the birthday paradox become complex (at least to me), so I again resort to simulations.
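Here is a sketch of that simulation, assuming, as one plausible reading of "transformed into daily probabilities", that each month's count is spread evenly over its days:

```r
months <- c(Jan = 72, Feb = 79, Mar = 64, Apr = 63, May = 73, Jun = 61,
            Jul = 54, Aug = 57, Sep = 65, Oct = 52, Nov = 46, Dec = 47)
days_in_month <- c(31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31)

# Daily probabilities: each day gets an equal share of its month's total
day_prob <- rep(months / days_in_month, times = days_in_month)
day_prob <- day_prob / sum(day_prob)

# Probability that a team of 23 has at least one shared birthday under this distribution
shared <- function(n, prob) any(duplicated(sample(365, n, replace = TRUE, prob = prob)))
mean(replicate(10000, shared(23, day_prob)))   # around 0.52, versus 0.507 under uniformity
```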

After 10,000 replications, the result of the simulation is 0.518, which means that under the skewed footballer distribution we would expect to see shared birthdays in 51.8% of the teams of 23 players. This is only 1.1 percentage points higher than in the uniform distribution case. If you don't accept 19 out of 32 (i.e. 59.4%) because that's too far from 50.7%, it's hard to see why you would find 51.8% so much more convincing. In other words, the birthday paradox is not a good measure for indicating whether football players really have a different (skewed) birthday pattern compared to the rest of the population. It would have been clearer if the two topics were separated:

Do football players, like other sportsmen, have a different birthday pattern than the rest of the population?

The worldcup is an excellent opportunity to illustrate the Birthday paradox.

As an interesting side note, it has in the meantime turned out that the data Alex used was not completely correct, and with the new data the number of teams with shared birthdays becomes 16. This is exactly the number we would expect under the uniform distribution. Notice, though, that under the skewed distribution and using the usual rounding conventions, we would expect to see 17 teams with shared birthdays instead of 16. So, using their own reasoning, the headline in the De Standaard newspaper should now read: "Surprisingly few World Cup players share a birthday". Unless, of course, you follow the reasoning with the binomial distribution mentioned above and conclude that with only 32 teams this is likely to be coincidental.

About Me

Istvan Hajnal is a veteran of more than 20 years in the fields of data analysis, survey methodology and market research, first at the University of Leuven, Belgium, and then for about 10 years with The Nielsen Company, the world's largest market research company. Istvan is currently Insights Director, Marketing & Data Sciences for GfK, Belgium. He received a master's degree in computer science (Leuven), a master's degree in quantitative applications in the social sciences (Brussels) and a PhD in social sciences from the University of Leuven. He blogs about Data Science, but occasionally also on management and leadership in general and the Market Research industry in particular.