There is no “Too Big” Data, is there?

A few years ago, a former classmate came back to me with a simple problem. He was working for some insurance company (and still is, don't worry, chatting with me is not yet a reason for dismissal), and his problem was that their dataset was too large to run (standard) code to get a regression, and some predictions. My answer was to use sub-sampling techniques, and I still believe that this might be a good idea (actually, I wrote a long post on that issue, entitled Too large datasets for regression? What about subsampling). But I wanted to go further, since I did not discuss predictions obtained with sub-sampling techniques.

So, consider here a logistic regression, based on some covariates. We have $k$ explanatory variables ($k$ will be large, but not too large) and $n$ observations (with $n \gg k$). Here we have a big (potentially) matrix product $X\beta$, i.e. with a large $n\times k$ matrix $X$, containing the $n$ individual observations and the $k$ possible variables (plus the intercept). Actually, in my model, only a few of those $k$ variables were actually used in the real data-generating model. Assume further that the explanatory variables are – potentially – correlated.
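To fix ideas, here is a minimal sketch of that kind of data-generating process, in Python (the post itself does not show code). The dimensions $n$ and $k$, the number of truly useful covariates and the correlation level are placeholders of my own, chosen only to reproduce the structure described above: a large $n\times k$ design with correlated columns, and a binary outcome driven by only a few of them.

```python
# Minimal sketch of the setup described above; n, k, k_used and rho are
# illustrative placeholders, not the values used in the post.
import numpy as np

rng = np.random.default_rng(1)

n, k = 100_000, 50        # n observations, k candidate covariates
k_used = 5                # only a few covariates actually drive the outcome
rho = 0.4                 # correlation between explanatory variables

# correlated explanatory variables: equicorrelated Gaussian design
cov = rho * np.ones((k, k)) + (1 - rho) * np.eye(k)
X = rng.multivariate_normal(np.zeros(k), cov, size=n)

# the "real" model only uses the first k_used covariates (plus an intercept)
beta = np.zeros(k)
beta[:k_used] = rng.normal(0, 1, size=k_used)
intercept = -0.5

p = 1 / (1 + np.exp(-(intercept + X @ beta)))   # logistic link
y = rng.binomial(1, p)                          # binary response
```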

In some sense, it is not too big, since we can run a regression on that dataset with a simple laptop (even if it can still be seen as a large dataset, in the sense discussed in http://businessweek.com/…). But let us consider an alternative strategy, to be able to get some predictions – or some model – in case we cannot run a regression. Two strategies will be compared, based on two different sub-sample sizes $n_1$ and $n_2$,

generate datasets with $n_1$ observations each, by sub-sampling,

generate datasets with $n_2$ observations each, by sub-sampling.

On each dataset, we can now run a regression, and compare the estimates of the coefficients with the "true" regression (on the whole dataset, since here, we can still run it). Then, since out of the $k$ explanatory variables, only a few were actually used to generate the output, we should probably remove the unnecessary variables from our model. So, some stepwise procedures were used.
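Continuing the sketch above, here is one possible way to do that comparison in Python: fit the regression on the whole dataset, then refit it on sub-samples and collect the estimates of one coefficient. The number of sub-samples and the sub-sample size are again placeholders of mine, and the stepwise step is left out.

```python
# Compare coefficient estimates from sub-samples with the full-data fit
# (m and n_sub are illustrative placeholders).
import numpy as np
import statsmodels.api as sm

def fit_logit(X_, y_):
    """Unpenalised logistic regression with an intercept."""
    return sm.Logit(y_, sm.add_constant(X_)).fit(disp=0)

# the "true" regression, on the whole dataset (still feasible here)
full_fit = fit_logit(X, y)

m, n_sub = 100, 1_000     # m sub-samples of n_sub observations each
coef_1 = []               # estimates of the first (truly used) coefficient
for _ in range(m):
    idx = rng.choice(n, size=n_sub, replace=False)
    sub_fit = fit_logit(X[idx], y[idx])
    coef_1.append(sub_fit.params[1])   # params[0] is the intercept

print("full-data estimate:", full_fit.params[1])
print("sub-sample mean   :", np.mean(coef_1), " sd:", np.std(coef_1))
```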

For instance, if we consider the very first coefficient, which should appear in the regression (let us forget about the intercept), or the second coefficient (which was not used to generate the dataset), we get

where the density in grey is the Gaussian density of the estimator obtained from the large (and complete) dataset, and the boxplots are the estimates obtained on sub-samples (without the stepwise procedure, just to make sure I will keep that variable).
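One possible way to draw that kind of picture, with the placeholder objects from the snippets above (the grey area is the Gaussian density of the full-data estimator, the boxplot summarises the sub-sample estimates):

```python
# Purely illustrative plot: full-data density vs sub-sample boxplot,
# for the first (non-intercept) coefficient.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

est, se = full_fit.params[1], full_fit.bse[1]
grid = np.linspace(est - 4 * se, est + 4 * se, 200)
dens = norm.pdf(grid, est, se)

fig, ax = plt.subplots()
ax.fill_between(grid, dens, color="grey", alpha=.4)                 # full-data density
ax.boxplot([coef_1], positions=[dens.max() / 2],
           widths=dens.max() / 4, vert=False, manage_ticks=False)   # sub-sample estimates
ax.set_xlabel("estimate of the first coefficient")
plt.show()
```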

For coefficients associated with variables not used to generate the dataset, we get graphs like the following

So, clearly, the smaller the dataset, the larger the dispersion of the estimates. But so far, nothing new. In my previous post – Too large datasets for regression? What about subsampling – my point was to discuss computational times, and a possible optimal size of sub-datasets. Now, what about the impact of sub-sampling on predictions? Here, we fit a model on a small sample, but we can get a prediction on the whole dataset. In order to describe the goodness of fit of our regression model, let us plot ROC curves. More specifically, three kinds of lines will be plotted (a short sketch of the computation is given after the list),

the ROC curve for the $\widehat{p}_i$'s obtained with the model fitted on the complete dataset [red]

the ROC curves for the $\widehat{p}_i$'s obtained with the models fitted on each sub-sample [light blue]

the ROC curve for the $\widehat{p}_i$'s obtained by averaging the previous estimates [blue]
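These three objects can be computed along the following lines, still with the placeholder simulation above: each sub-sample model is used to predict on the whole dataset, and the averaged predictions are simply the mean of those scores (the AUC is printed as a one-number summary of each ROC curve).

```python
# Predictions on the whole dataset: full model, individual sub-sample models,
# and the average of the sub-sample predictions.
import numpy as np
from sklearn.metrics import roc_auc_score

Xc = sm.add_constant(X)

# [red] scores from the model fitted on the complete dataset
score_full = full_fit.predict(Xc)

# [light blue] scores from each model fitted on a sub-sample,
# evaluated on the whole dataset
sub_scores = []
for _ in range(m):
    idx = rng.choice(n, size=n_sub, replace=False)
    sub_scores.append(fit_logit(X[idx], y[idx]).predict(Xc))

# [blue] scores obtained by averaging the sub-sample predictions
score_avg = np.mean(sub_scores, axis=0)

print("AUC, full model     :", roc_auc_score(y, score_full))
print("AUC, one sub-sample :", roc_auc_score(y, sub_scores[0]))
print("AUC, averaged scores:", roc_auc_score(y, score_avg))
```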

If we consider sub-samples of size $n_1$, we get the following, and when we consider sub-samples of size $n_2$, without the stepwise procedure (most variables have a small coefficient, and are not significant) and after the stepwise procedure. Clearly – and that should not be a surprise – looking at predictions when the model was fitted on a (very) small fraction of the dataset is not great (ROC curves are substantially below the red ROC curve). But the interesting point is that averaging yields great results. In terms of ROC curves, we get the same whether we are

running one regression on our full $n\times k$ matrix,

or averaging the predictions after running regressions on several (much smaller) sub-sampled matrices.

Except that the first one might not be possible to run if the dataset were larger. And I have to admit that with the stepwise procedure, with $k$ variables (most of which should – theoretically – be removed), it took some time! But still. I have the feeling that sub-sampling is extremely promising in the context of too large datasets.
