There has been a lot written about the skills needed to be a Data Scientist. Not only should you be able to do these standard things:

Wrangle data (get, transform, persist)

Model (explore, explain and predict)

Take action (visualize, summarize, prototype)

…but I would argue that you should also be able to start with a bare machine (or cluster) and bootstrap a scalable infrastructure for analysis in short order. This does not mean you need to be able to administer a 1000-node Hadoop cluster, but you should be able to set up a small cluster that can process terabytes of log data into something that has business value.

For people who work for a big company it is easy to fall into the habit of using whatever infrastructure is available. Your IT department may have set up a Hadoop cluster, there may be pre-configured databases, and there are probably a lot of nice productivity tools that make it easier to analyze data at work. It makes perfect sense for companies to provide these conveniences, and they probably make your job easier. But it is also easy to get too cozy with this tool chain and come to rely on it.

In this series of posts I am going to talk about the analysis stack on my personal computers that helps me do those things:

R (and RStudio)

MySQL

Hadoop (Scala, Cascading, scalding, scoobi)

…

It took me a while to get this set up but I have a goal of being able to start from scratch and install a complete working data science setup in 6 hours or less.

This is an illustration of using transparency to represent point count in a graphic. This is easy to do in ggplot2 if you use one of the bar-chart-style geoms, but I think there are other situations where it would be useful to apply aesthetics based on point count.

Since Hadley did a lot of his canonical examples using the diamonds data set, I thought it would be helpful for comparing and contrasting.

This chart shows the distribution of the price/carat of diamonds, segmented by carat quartile and clarity. The transparency shows how many diamonds each bar represents, which makes it easy to see where the action is.

library(ggplot2)
# work on a copy of the diamonds data set
df <- diamonds
# bin carat by its quartiles
df$carat.qtiles <- cut(df$carat, quantile(df$carat), include.lowest = TRUE)
# plot the probability distribution of price/carat, faceted by clarity and carat quartile.
# key point: using the count per bar to set the alpha level. This lets you see how much
# data is represented by each bar (it would be nice to be able to do this
# anytime an aggregate is done...boxplots, bins, etc.)
p <- ggplot(data = df, aes(x = price / carat, y = ..count.. / sum(..count..)))
p <- p + geom_histogram(aes(alpha = ..count..), binwidth = 1000) +
  facet_grid(clarity ~ carat.qtiles)
p

Currently in ggplot2 this method only works where the ..count..-style output variables are available. There are a number of areas that could benefit from this capability, and it should be easy to add more output variables to the elements of ggplot for which this behavior would be natural.

geom_boxplot: geoms that aggregate multiple points are good candidates for this

facet_*: It would be interesting to be able to add a visual cue to each facet to show how many points are in each.

The most appealing idea on this so far is to enable scaling of the facet area by point count (or other things).

Ordering of the facets by point count would also be extremely useful.

Thresholding by count. This would be great to easily chop low-signal facets and keep the visualization clean.
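Neither the ordering nor the thresholding is built in, but both can be approximated today with a little data preparation before plotting. Here is a minimal sketch using the diamonds data; the 1,000-point cutoff is an arbitrary threshold chosen just for illustration:

```r
library(ggplot2)
df <- diamonds
# order the clarity facets by how many diamonds each level contains
df$clarity <- reorder(df$clarity, df$clarity, FUN = length)
# threshold: drop clarity levels with fewer than 1000 diamonds
counts <- table(df$clarity)
df <- df[df$clarity %in% names(counts[counts >= 1000]), ]
df$clarity <- droplevels(df$clarity)
p <- ggplot(df, aes(x = price / carat)) +
  geom_histogram(binwidth = 1000) +
  facet_wrap(~ clarity)
p
```

The same trick works for any facetting variable: compute the per-level counts first, then reorder or subset the factor before handing the data to ggplot.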

How do I know? It is simple: almost everyone evaluates situations in the world using metrics that do not represent their goals with high fidelity.

For example, Peter Thiel is a great businessman and a smart guy, but like everyone else he uses broken metrics. His thesis is that “innovation is dead”.

“If you look outside the computer and the internet, there has been 40 years of stagnation,” said Thiel, who pointed to one of his favorite examples: the dearth of innovation in transportation. “We are no longer moving faster,” Thiel noted. Transportation speeds, which accelerated across history, peaked with the debut of the Concorde in 1976. One decade after 9/11, Thiel says, we are back to the travel speeds of the 1960s.

Is going faster and faster a good measure of progress? Is there a point where transportation is fast enough? It is clear that there is no technological barrier to faster planes; society has simply decided it does not care to invest in that area to gain the extra speed. Maybe another metric, like miles per passenger per joule of energy used, is more relevant. Or maybe it is bad too.

Progress != Growth:

Most people associate progress with growth, but GDP growth by itself is not a good long-term goal because it cannot go on forever. If growth is not sustainable then we should not go after it past a certain point. I do not know the right metric to tell how sustainable a unit of GDP growth is, but I do know that a sustainability component is required to fix the metric.

Why this matters a lot

Creating metrics that reflect your goals (as a person, company, country, …) is important because people and organizations optimize their activity to metrics. If you are a politician who is judged by whether GDP goes up, you will pursue policies that try to increase GDP. If you are a public company that is judged by short-term earnings growth, then you will put a lot of energy into optimizing that.

Fixing metrics is simple but hard

Fixing metrics is very hard in practice but it is conceptually simple because the reason for broken metrics is usually easy to identify.

Top three reasons why most metrics are broken:

The metric is venerable. It used to make sense but the world changed and it is not hi-fi anymore.

The metric is too simple. The world is complicated and goals are similarly complex. Simple metrics usually leave out important factors. People like simple metrics so they get popular and gain momentum.

The metric looks for keys under the lamp post…rather than down the street in the dark where you dropped them. This is related to being too simple, but complex metrics can also have this failing. Some goals are hard to represent with high-fidelity metrics, but that does not stop people from creating metrics to measure them. Those metrics are usually chosen for convenience rather than fidelity. An imperfect metric is fine as long as people are aware of its problems and use it accordingly.

Even after you figure out that your metrics are broken it is really hard to fix them. A hi-fi metric provides real insight into the world, and that is always a challenge. You may even conclude in some cases that there is no simple collection of metrics for a given goal. But fixing your metrics (or your understanding of your metrics) is crucial, because failure follows a bad metric around diligently.

Map-reduce is great. It has made it possible to process insane amounts of data on commodity hardware. However, it is a very low-level programming abstraction, too low for most problems that analysts and “data scientists” encounter.

M-R is the assembly language of big data. It is vital as the base level of the stack, but just as assembly is unproductive for general programming compared to Python, Ruby or <your-favorite-high-level-language>, M-R is too low level for significant analysis work.

Pig and Cascading (and other languages that build on top of M-R) provide language constructs that match what analysts need to do:

load complex data

join multiple data sets

filter rows

project out columns

aggregate based on columns

apply functions to aggregates

Very few non-trivial analysis problems map effortlessly onto the map-reduce model; most require many M-R stages, which can make for brittle code that is hard to maintain. It might seem like you are saving effort by keeping the stack simple and using raw M-R or streaming through Python, but productivity will usually suffer.

Core Ideas:

multivariate modeling is challenging

pair plots make it easy to get a quick understanding of each variable and the relationships between them

Multivariate analysis and modeling can be really challenging, and getting the job done well requires you to know your data really well. People often use the metaphor that you know something well if you “know it like the back of your hand”. Yet we look at our hands every day and probably could not recall where each freckle or wrinkle is. You want to know your data in a much more detailed way.

One very valuable first step when working with a new multivariate data set is to look at the relationships between each pair of variables. There are a number of ways to do this in R and I often prefer to use two different scatter plot matrix methods to get a feel for the relationships between the variables.

Here is an example using the mtcars dataset in R.

df <- mtcars[, 1:7]  # mpg, cyl, disp, hp, drat, wt, qsec

Scenario(s):

getting to know your numerical data

predictive modeling (feature selection, technique choice,…)

psych::pairs.panels

why use it?

you can see the scatter plots, with a correlation ellipse superimposed, in the lower region

you can see the data distribution on the diagonal for each variable

you can see the correlation values in the upper region

works with categorical data

library(psych)
pairs.panels(df)

corrgram::corrgram

why use it?

pie chart in the lower region gives a quick visual view of correlations
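The call that produces this view is short. A minimal sketch (the panel choices here are my assumption, picked to match the pie-in-the-lower-region description above):

```r
library(corrgram)
# pie wedges in the lower region encode the sign and strength of each
# correlation; the upper region shows the raw points
df <- mtcars[, 1:7]
corrgram(df, lower.panel = panel.pie, upper.panel = panel.pts)
```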

Based on these plots it is easy to see some important high-level relationships between the variables.

mpg is strongly negatively correlated with:

cyl : number of cylinders

disp: engine displacement

hp: horsepower

wt: vehicle weight

mpg is positively correlated with:

drat: rear axle ratio

qsec: time to drive a quarter mile

rear axle ratio and weight do not have a strong relationship with the quarter-mile time. This means that if you want to predict quarter-mile time, you would not want to use these as unconditional predictors. In fact, it might cause you to start looking for interactions between the variables so you can do conditional modeling.

rear axle ratio is negatively correlated with wt, hp, disp and cyl. I know nothing about cars, but now I know that heavier, more powerful cars tend to have a smaller rear axle ratio.
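These readings are easy to sanity-check against the raw correlation matrix (the rounded values below come from mtcars itself):

```r
df <- mtcars[, 1:7]
# mpg against everything else: four strong negatives, two positives
round(cor(df)["mpg", ], 2)
#   mpg   cyl  disp    hp  drat    wt  qsec
#  1.00 -0.85 -0.85 -0.78  0.68 -0.87  0.42
# the quarter-mile time is only weakly related to axle ratio and weight
round(cor(df)["qsec", c("drat", "wt")], 2)
# drat    wt
# 0.09 -0.17
```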

There is also a lot of great basic summary info here:

A distribution plot for each variable

The min and max of each variable

This still only provides a very superficial understanding of the data, but this is a good start. There are lots of different options and ways to use both packages, so you can adapt how you use these functions for your own style and preferences.

I’ve been a big fan of ggplot2 for a long time but plyr has been in my toolkit for less than a year and it is now one of my most-used R packages. It is how aggregate/*apply would have been if they were awesome.

In five lines this code computes the cumulative distribution functions of all of the variables in the iris data set and creates a colored, faceted plot to visualize the data.
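A sketch of that code, assuming reshape2's melt for the reshape step and an empirical CDF computed per species within each variable (both choices are my assumptions):

```r
library(plyr)
library(ggplot2)
library(reshape2)
# long form: one (Species, variable, value) row per measurement
dfm <- melt(iris, id.vars = "Species")
# empirical CDF of each variable, computed within each species
cdf <- ddply(dfm, .(Species, variable), transform, ecd = ecdf(value)(value))
p <- ggplot(cdf, aes(x = value, y = ecd, colour = Species)) +
  geom_line() + facet_wrap(~ variable, scales = "free_x")
p
```

The ddply call is where plyr shines: one line replaces the split/lapply/rbind dance you would otherwise do to compute a CDF within each group.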