How To Prepare Your Data For Machine Learning in Python with Scikit-Learn

It is often a very good idea to prepare your data in such a way as to best expose the structure of the problem to the machine learning algorithms that you intend to use.

In this post you will discover how to prepare your data for machine learning in Python using scikit-learn.

Let’s get started.

Update March/2018: Added alternate link to download the dataset as the original appears to have been taken down.

Photo by Vinoth Chandar, some rights reserved.

Need For Data Preprocessing

You almost always need to preprocess your data. It is a required step.

A difficulty is that different algorithms make different assumptions about your data and may require different transforms. Further, even when you follow all of the rules and carefully prepare your data, sometimes algorithms deliver better results without the preprocessing.

Generally, I would recommend creating many different views and transforms of your data, then exercising a handful of algorithms on each view of your dataset. This will help you flush out which data transforms might be better at exposing the structure of your problem.

Preprocessing Machine Learning Recipes

This section lists four data preprocessing recipes for machine learning. You can copy and paste them directly into your project and start working.

The Pima Indian diabetes dataset is used in each recipe. This is a binary classification problem where all of the attributes are numeric and have different scales. It is a great example of a dataset that can benefit from pre-processing.

1. Rescale Data

When your data comprises attributes with varying scales, many machine learning algorithms can benefit from rescaling the attributes to all have the same scale.

Often this is referred to as normalization, and attributes are often rescaled into the range between 0 and 1. This is useful for optimization algorithms used at the core of machine learning algorithms, like gradient descent. It is also useful for algorithms that weight inputs, like regression and neural networks, and algorithms that use distance measures, like K-Nearest Neighbors.

You can rescale your data with scikit-learn using the MinMaxScaler class.
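A minimal recipe is sketched below. The dataset URL points to a commonly used mirror of the Pima Indians dataset and may need updating if the file moves:

# Rescale data (between 0 and 1)
from pandas import read_csv
from numpy import set_printoptions
from sklearn.preprocessing import MinMaxScaler

# load the Pima Indians diabetes dataset (assumed mirror URL, may change)
url = "https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv"
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'class']
dataframe = read_csv(url, names=names)
array = dataframe.values
# separate the array into input (X) and output (Y) components
X = array[:, 0:8]
Y = array[:, 8]
scaler = MinMaxScaler(feature_range=(0, 1))
rescaledX = scaler.fit_transform(X)
# summarize the first 5 transformed rows
set_printoptions(precision=3)
print(rescaledX[0:5, :])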

After rescaling you can see that all of the values are in the range between 0 and 1.

[[ 0.353  0.744  0.59   0.354  0.     0.501  0.234  0.483]
 [ 0.059  0.427  0.541  0.293  0.     0.396  0.117  0.167]
 [ 0.471  0.92   0.525  0.     0.     0.347  0.254  0.183]
 [ 0.059  0.447  0.541  0.232  0.111  0.419  0.038  0.   ]
 [ 0.     0.688  0.328  0.354  0.199  0.642  0.944  0.2  ]]

2. Standardize Data

Standardization is a useful technique to transform attributes with a Gaussian distribution and differing means and standard deviations to a standard Gaussian distribution with a mean of 0 and a standard deviation of 1.

It is most suitable for techniques that assume a Gaussian distribution in the input variables and work better with rescaled data, such as linear regression, logistic regression and linear discriminant analysis.

You can standardize data using scikit-learn with the StandardScaler class.
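A minimal recipe, reusing the same dataset loading as above (the URL is an assumed mirror):

# Standardize data (0 mean, 1 stdev)
from pandas import read_csv
from numpy import set_printoptions
from sklearn.preprocessing import StandardScaler

url = "https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv"
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'class']
X = read_csv(url, names=names).values[:, 0:8]
scaler = StandardScaler().fit(X)
rescaledX = scaler.transform(X)
# summarize the first 5 transformed rows
set_printoptions(precision=3)
print(rescaledX[0:5, :])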

The values for each attribute now have a mean value of 0 and a standard deviation of 1.

[[ 0.64   0.848  0.15   0.907 -0.693  0.204  0.468  1.426]
 [-0.845 -1.123 -0.161  0.531 -0.693 -0.684 -0.365 -0.191]
 [ 1.234  1.944 -0.264 -1.288 -0.693 -1.103  0.604 -0.106]
 [-0.845 -0.998 -0.161  0.155  0.123 -0.494 -0.921 -1.042]
 [-1.142  0.504 -1.505  0.907  0.766  1.41   5.485 -0.02 ]]

3. Normalize Data

Normalizing in scikit-learn refers to rescaling each observation (row) to have a length of 1 (called a unit norm in linear algebra).

This preprocessing can be useful for sparse datasets (lots of zeros) with attributes of varying scales when using algorithms that weight input values such as neural networks and algorithms that use distance measures such as K-Nearest Neighbors.

You can normalize data in Python with scikit-learn using the Normalizer class.
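A minimal recipe along the same lines (same assumed dataset mirror):

# Normalize data (rescale each row to unit length)
from pandas import read_csv
from numpy import set_printoptions
from sklearn.preprocessing import Normalizer

url = "https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv"
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'class']
X = read_csv(url, names=names).values[:, 0:8]
scaler = Normalizer().fit(X)
normalizedX = scaler.transform(X)
# each row of normalizedX now has an L2 norm of 1
set_printoptions(precision=3)
print(normalizedX[0:5, :])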

4. Binarize Data (Make Binary)

You can transform your data using a binary threshold. All values above the threshold are marked 1 and all equal to or below are marked as 0.

This is called binarizing your data or thresholding your data. It can be useful when you have probabilities that you want to make crisp values. It is also useful in feature engineering when you want to add new features that indicate something meaningful.

You can create new binary attributes in Python using scikit-learn with the Binarizer class.
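A minimal recipe; with a threshold of 0.0, every positive value in the dataset becomes 1:

# Binarize data (values above the threshold become 1, the rest 0)
from pandas import read_csv
from numpy import set_printoptions
from sklearn.preprocessing import Binarizer

url = "https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv"
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'class']
X = read_csv(url, names=names).values[:, 0:8]
binarizer = Binarizer(threshold=0.0).fit(X)
binaryX = binarizer.transform(X)
set_printoptions(precision=3)
print(binaryX[0:5, :])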

Hi Jason,
Thanks for the post and the website overall. It really explains a lot.
I have a question regarding preparing the data: if I am to normalize my input data, does the precision of the values have an effect? Will higher precision make the weight matrix more sparse during training if the amount of training data is not very large?

In that case should I be limiting the precision depending on the amount of training data?

I am interested in sequence classification for EEG. In my case I intend to try out an RNN. I was planning on normalizing the data, since I want the scaling to be performed on each individual input sequence.

I intend to build an RNN from scratch for an application similar to sentiment analysis (many to one). I am a bit confused about the final stage. While training, when I feed a single sequence (belonging to one of the classes) from the training set, do I apply softmax to the last output of the network alone, compute the loss, and leave the rest unattended?
Where exactly is the many to “ONE” represented?

You can normalize the output variable in regression too, but you will need to reverse the scaling of predictions in order to make use of them or quote error scores in a meaningful way (e.g. meaningful to your problem).
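For example, a minimal sketch with hypothetical values, using the scaler's inverse_transform to get predictions back into the original units:

# Scale a regression target, then invert the scaling on predictions
import numpy as np
from sklearn.preprocessing import MinMaxScaler

y_train = np.array([[105.0], [220.0], [150.0], [95.0]])  # hypothetical target values
scaler = MinMaxScaler().fit(y_train)
y_scaled = scaler.transform(y_train)  # train the model on this
# hypothetical predictions produced by the model in the scaled space
y_pred_scaled = np.array([[0.5], [0.1]])
y_pred = scaler.inverse_transform(y_pred_scaled)  # back to original units
print(y_pred)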

@Roy,
– if you don’t normalize and the features are not of similar scale, then gradient descent will take a very long time to converge [1]
– if the root MSE is much, much smaller than the mean/median value of the predicted vector, I think your model is good enough

I have converted rescaledX to a dataframe and plotted histograms for rescaling, standardization and normalization. They all seem to scale down the magnitude of an attribute to a small range (0 to 1 in the case of rescaling and normalization).
– are they doing similar transformation i.e. scaling down attributes so they become comparable?
– do you only apply one method in any given situation?
– which would be appropriate in which situation?

Hi Jason, I really like your posts. I was looking for some explanation of using power transformations on data for scaling, like using logarithms, exponents and so on. I would really like to understand what they do to the data and how we as data scientists can be power users of such scaling techniques.
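scikit-learn does provide power transforms; a brief sketch on hypothetical skewed data (PowerTransformer requires scikit-learn 0.20 or later):

# Map a skewed feature toward a Gaussian shape with a power transform
import numpy as np
from sklearn.preprocessing import PowerTransformer

X = np.random.exponential(scale=2.0, size=(100, 1))  # hypothetical skewed feature
pt = PowerTransformer(method='yeo-johnson')  # 'box-cox' is an option for strictly positive data
X_gaussian = pt.fit_transform(X)
# the classic manual alternative is a simple log transform
X_log = np.log1p(X)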

Hello Jason, great post.
However, I have a question (maybe it is almost the same as Dimos’s).
What is the most common approach to preprocessing (I mean, which 1 of the 4 explained)?
And which values do you normalize:
all features (X), or
fit_transform on the train features (X_train_std = model.fit_transform(X_train)) and from them transform X_test (X_test_std = model.transform(X_test))?

and then:
if we have to predict a new sample that I get today (for example: 0,95,80,45,92,36.5,0.330,26,0 in the diabetes model),
do we have to preprocess that sample, or is it not relevant and we can predict it without preprocessing?
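The usual pattern is to fit the transform on the training data only and reuse it for the test set and for any new sample at prediction time; a minimal sketch with hypothetical data:

# Fit the scaler on the training data only, then reuse it everywhere else
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X = np.random.rand(100, 8)  # hypothetical stand-in for the diabetes inputs
X_train, X_test = train_test_split(X, test_size=0.2, random_state=7)

scaler = StandardScaler()
X_train_std = scaler.fit_transform(X_train)  # statistics learned from training data only
X_test_std = scaler.transform(X_test)        # same statistics reused, no re-fitting

# a new sample collected later goes through the same fitted scaler before predict()
new_sample = np.array([[0, 95, 80, 45, 92, 36.5, 0.330, 26]])
new_sample_std = scaler.transform(new_sample)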

I am applying normalization to network attack data. I used min/max normalization, but in the real data some features have large values. If I want to apply standard deviation normalization (standardization), should I apply only one normalization type? Or can I apply min/max to all the data and then apply standardization to all the data? What is the sequence, and is it wrong if I apply standardization only to the large-valued features?

Dear Dr. Jason Brownlee, I have prepared my own dataset of handwriting from different people, with the images prepared at 28x28 pixels. The problem is how I should prepare the training and testing datasets so that I can then write the code to recognize the data.

That is a great link showing how to use the existing CIFAR-10, thank you for that. But as I tried to mention above, I have handwritten images prepared at 28×28 pixels, so how do I prepare the training set (how do I label my dataset)? It can be a .csv or .txt file; I need to know how to prepare the training set and access it in TensorFlow like MNIST.

Here’s something I don’t understand though. What’s the difference between rescaling data and normalizing data? It seems like they’re both making sure that all values are between 0 and 1?
So what’s the difference?

Thanks.
Please email me the answers as well, since I do not check this blog often.

Hello Sir!! I am planning a research work on music genre classification. My work includes preparing the dataset for the type of music I want to use, as there are no public datasets for that music. My problem is that I don’t know how to prepare a music dataset. I have read a lot about spectrograms. But what are the best mechanisms to prepare a music dataset? Is the spectrogram the only representation I can use, or do I have alternative choices?

Hi Jason, thanks for your posts.
I have a question about data preprocessing. Can we have multiple inputs with different shapes? For example, two different files, one containing bit vectors and one containing matrices?
If so, how can we use them for ML algorithms?

Thanks for your response. Yes, I understand that. This extra information is like metadata that describes the structure that generates the data. Therefore, it is a separate type that gives more information about the system. Is there any way to apply it in ML algorithms?

What is Y used for? I realize the comment and description say it’s the output column, but after slicing the ‘class’ column to it, I’m not seeing Y used for anything in the four examples. Commenting it out does not seem to have any effect. Is it just a placeholder for later? If so, why did we assign ‘class’ data to it instead of creating an empty array?

Thanks for a great article. I would like to ask a question about using a simple nearest neighbors algorithm from the scikit-learn library with standard settings. I have a list of data columns from a Salesforce leads table giving a few metrics, such as total time spent on page and total emails opened, as well as alphabetical values such as the source of the lead (with values signup, contact us, etc.) and country of origin information.

So far I have transformed all non-numerical data to numerical form in the simple way: 0, 1, 2, 3, 4 for each unique value. With this approach scoring accuracy seems to reach 70% at its best. Now I want to go one step further and either normalize or standardize the dataset, but can’t really decide which route to take. So far I have decided to go with the safest advice and standardize all data. But then I have worries about some scenarios: for example, certain fields will have long ranges of values, i.e. those representing each country, or those that show the number of emails sent. On the other hand, other fields like source will have numerical values 0, 1, 2, 3 and no more, but the field itself does have a very high correlation with the outcome of winning or losing the lead.

I would be very grateful if you could point me to the right direction and perhaps without too much diving into small details, what would be the common sense approach.

Also, is it possible to use both methods on a dataset, i.e. standardize the data first and then normalize it?

The data preparation methods must scale with the data. Perhaps for counts you can estimate the largest possible/reasonable count that you can use to normalize the count by, or perhaps invert the count, e.g. 1/n.

Hi @jason, can you please tell me why the Normalizer result and the rescaling (0-1) result are different? Isn’t there a standard way of doing this that should give the same result irrespective of the class used (i.e. MinMaxScaler or Normalizer)?
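They answer different questions: MinMaxScaler maps each column into a range, while Normalizer rescales each row to unit length, so different results are expected. A small sketch makes the difference visible:

# MinMaxScaler operates column-wise; Normalizer operates row-wise
import numpy as np
from sklearn.preprocessing import MinMaxScaler, Normalizer

X = np.array([[1.0, 200.0],
              [2.0, 100.0]])
print(MinMaxScaler().fit_transform(X))  # each COLUMN mapped into [0, 1]
print(Normalizer().fit_transform(X))    # each ROW rescaled to unit length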

Hi Sir. I have a housing dataset whose target variable has a positively skewed distribution. So far that’s the only variable I have seen to be skewed, although I think there will be more. Now I have read that there is a need to make this distribution approximately normal using a log transformation. But the challenge I’m facing right now is how to perform the log transformation on the price feature in the housing dataset. I’d like to know if there is a scikit-learn facility for this and, if not, how I should go about it. Moreover, I plan on using linear regression to predict housing prices for this dataset.
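One option, sketched below with hypothetical data: NumPy’s log1p/expm1 pair does the transform and its inverse, and scikit-learn’s TransformedTargetRegressor (0.20 or later) can manage both automatically:

# Log-transform a skewed target; scikit-learn inverts the predictions for you
import numpy as np
from sklearn.compose import TransformedTargetRegressor
from sklearn.linear_model import LinearRegression

X = np.random.rand(100, 3)  # hypothetical housing features
price = np.random.lognormal(mean=12, sigma=0.5, size=100)  # hypothetical skewed prices

# the regressor is fit on log1p(price); predictions are passed through expm1
model = TransformedTargetRegressor(regressor=LinearRegression(),
                                   func=np.log1p, inverse_func=np.expm1)
model.fit(X, price)
predictions = model.predict(X)  # already back in the original price units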

Hi Jason
I am using the MinMaxScaler preprocessing technique to normalize my data. I have data for 200 patients, where each patient’s data for a single electrode is 20 seconds, i.e. 10240 samples. The dimensions of my data are therefore 200*10240. I want to rescale my data row-wise, but MinMaxScaler scales the data column-wise, which may not be correct for my data, as I want to rescale each 1*10240 row on its own.
What changes are required in order to operate row-wise, independently of the other electrodes?
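One option is the functional form minmax_scale, which accepts an axis argument; a sketch with hypothetical data of the shape described:

# Scale each row to [0, 1] independently of the other rows
import numpy as np
from sklearn.preprocessing import MinMaxScaler, minmax_scale

X = np.random.rand(200, 10240)       # hypothetical 200 patients x 10240 samples
X_rowwise = minmax_scale(X, axis=1)  # axis=1 scales each sample (row) on its own
# the equivalent transpose trick using the class API
X_rowwise_2 = MinMaxScaler().fit_transform(X.T).T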

Hello sir,
I have collected 1000 tweets on demonetization. I am extracting different features, such as POS-based, lexicon-based, morphological and n-gram features, so different feature vectors are created for each type and then stacked column-wise. I have divided the dataset of 1000 tweets into 80% for training and 20% for testing. I have trained an SVM classifier, but accuracy is not more than 60%.
How should I improve accuracy, and which feature selection method should I use?

Neural nets use random initial values for the weights. This is by design. It allows the learning algorithm (batch/mini-batch/stochastic gradient descent) to explore the weight space from a different starting point each time the model is evaluated, removing bias in the training process.

Thank you so much for all of your help – I have learned a ton from all of your posts!

I have a project where I have 54 input variables and 8 output variables. I have decent results from what I have learned from you. However, I have standardized all my input variables, and I think I could achieve better performance if I only standardized some of them. Specifically, 5 of the input columns are the same variable type as the outputs, and I think it would be better not to scale these. Additionally, one of the inputs is the month of the year; I do not think that needs to be standardized either.

Does my thought process to do selective preprocessing make any sense? Is it possible to do this?

Hello Jason, I follow your posts very closely as I am studying machine learning on my own. With respect to scaling/normalizing data, I always have a dilemma. When do I use what? Is there any way to know beforehand which regression/classification models will benefit from scaling or normalizing data? For which models its not required to scale or normalize data?

Thank you for this post. It was very helpful. I have a question on the normalization/standardization approach when the dataset contains both numeric and categorical features. I am converting the categorical features into dummy variables (containing 0 or 1). Should 1) the numeric features be standardized along with the dummy variables,
or 2) should only the numeric features be standardized?
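One common approach is to standardize only the numeric columns and pass the dummy variables through untouched; a sketch with hypothetical columns (ColumnTransformer requires scikit-learn 0.20 or later):

# Standardize numeric columns only; leave 0/1 dummy columns unchanged
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import StandardScaler

df = pd.DataFrame({'age': [25, 40, 31],             # hypothetical numeric features
                   'income': [30000, 90000, 52000],
                   'is_urban': [1, 0, 1]})          # hypothetical dummy variable
ct = ColumnTransformer([('num', StandardScaler(), ['age', 'income'])],
                       remainder='passthrough')     # dummies pass through as-is
X_prepared = ct.fit_transform(df)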

Thank you for sharing your expertise! I am a complete newbie to Python but have programmed before in stats software like EViews. Are the datasets in sklearn.datasets formatted differently? I tried to run the following code:

It’s fine with the dataset from sklearn. But once I use pandas.read_csv to load the iris dataset from a URL and then run the code, it just gives me tons of angry text. If I were to use pandas.read_csv, how should I format and store the data so that the aforementioned scipy code would work? Thank you so much!

Hi Jason,
Your articles are awesome, thank you very much; I am subscribed forever.
I have a question: after scaling the input for my regression model and creating the model, I need to scale my input data again when I use the ckpt file. How can I pass this scaling to the place where I will use the model? Via the TF session?

Maybe more clearly, here is what I want to do.

When I train, I do (my data is multiple samples in a CSV file):
scaler = StandardScaler().fit(X_train)
X_standard = scaler.transform(X_train)

When I validate, I do (my data is multiple samples in a CSV file):
scaler = StandardScaler().fit(X_validate)
X_standard = scaler.transform(X_validate)

But here comes the problem: using the saved model, I want to restore it with a single sample as the input.

I am making a model using TensorFlow and before batching the data for training I am scaling it using StandardScaler.
After my model is created I want to restore the checkpoint and make a prediction, but the data that I am inputting is not scaled.
So my question is: how do I scale the data in the same way when restoring a model?
Because when training I am scaling across all of the data, the whole data.csv file, but later, when restoring the model, my input is a single sample.
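One way to handle this is to fit the scaler once on the training data and persist it alongside the checkpoint, for example with joblib; a minimal sketch with hypothetical data and file names:

# Fit the scaler once on the training data, save it, reload it at prediction time
import numpy as np
from joblib import dump, load
from sklearn.preprocessing import StandardScaler

X_train = np.random.rand(100, 8)        # hypothetical training inputs
scaler = StandardScaler().fit(X_train)  # statistics come from the training data only
dump(scaler, 'scaler.joblib')           # persist next to the model checkpoint

# later, in the process that restores the checkpoint
scaler = load('scaler.joblib')
single_sample = np.random.rand(1, 8)    # hypothetical single new sample
single_sample_std = scaler.transform(single_sample)

Note that validation data should also be transformed with the scaler fitted on the training data, rather than re-fitting a new scaler on the validation set.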

I am preprocessing CIFAR-10 data with sklearn’s StandardScaler(copy=False, with_mean=True, with_std=True). Then I am doing dimensionality reduction by PCA followed by LDA on the principal components. The problem is that if I do dimensionality reduction without preprocessing everything works fine; however, if I do it after preprocessing I get a “Memory Error”. I am using the svd solver for PCA and the eigen solver with auto shrinkage for LDA (linear discriminants). Do you have any idea what might be the cause of this problem?
** I tried min/max scaling without calling the library function. Even then I get the same error.
Thank you