Regression in Python using R-style formula – it’s easy!

August 20, 2016

I remember experimenting with doing regressions in Python using R-style formulae a long time ago, and I remember it being a bit complicated. Luckily it’s become really easy now – and I’ll show you just how easy.

Before running this you will need to install the pandas, statsmodels and patsy packages. If you’re using conda you should be able to do this by running the following from the terminal:

conda install statsmodels patsy

(and then say yes when it asks you to confirm it)

import pandas as pd
from statsmodels.formula.api import ols

Before we can do any regression, we need some data – so let's read some data on cars:

df = pd.read_csv("http://web.pdx.edu/~gerbing/data/cars.csv")

You may have noticed from the code above that you can just give a URL to the read_csv function and it will download it and open it – handy!

Anyway, here is the data:

df.head()

   Model                     MPG  Cylinders  Engine Disp  Horsepower  Weight  Accelerate  Year    Origin
0  amc ambassador dpl       15.0          8        390.0         190    3850         8.5    70  American
1  amc gremlin              21.0          6        199.0          90    2648        15.0    70  American
2  amc hornet               18.0          6        199.0          97    2774        15.5    70  American
3  amc rebel sst            16.0          8        304.0         150    3433        12.0    70  American
4  buick estate wagon (sw)  14.0          8        455.0         225    3086        10.0    70  American

Before we do our regression, it might be a good idea to look at simple correlations first. We can get the correlation between each pair of columns using the corr() method:

df.corr()

                  MPG  Cylinders  Engine Disp  Horsepower    Weight  Accelerate      Year
MPG          1.000000  -0.777618    -0.805127   -0.778427 -0.832244    0.423329  0.580541
Cylinders   -0.777618   1.000000     0.950823    0.842983  0.897527   -0.504683 -0.345647
Engine Disp -0.805127   0.950823     1.000000    0.897257  0.932994   -0.543800 -0.369855
Horsepower  -0.778427   0.842983     0.897257    1.000000  0.864538   -0.689196 -0.416361
Weight      -0.832244   0.897527     0.932994    0.864538  1.000000   -0.416839 -0.309120
Accelerate   0.423329  -0.504683    -0.543800   -0.689196 -0.416839    1.000000  0.290316
Year         0.580541  -0.345647    -0.369855   -0.416361 -0.309120    0.290316  1.000000

Now we can do some regression using R-style formulae. In this case we’re trying to predict MPG based on the year that the car was released:

model = ols("MPG ~ Year", data=df)
results = model.fit()

The ‘formula’ that we used above has the same syntax as in R: the dependent variable goes on the left of the ~, and the independent variable(s) on the right. The ols function is nice and easy to use: we just give it the formula and the DataFrame to get the data from (in this case, it’s called df). We then call fit() to actually run the regression.

We can easily get a summary of the results – including all sorts of crazy statistical measures!

results.summary()

                            OLS Regression Results
==============================================================================
Dep. Variable:                    MPG   R-squared:                       0.337
Model:                            OLS   Adj. R-squared:                  0.335
Method:                 Least Squares   F-statistic:                     198.3
Date:                Sat, 20 Aug 2016   Prob (F-statistic):           1.08e-36
Time:                        10:42:17   Log-Likelihood:                -1280.6
No. Observations:                 392   AIC:                             2565.
Df Residuals:                     390   BIC:                             2573.
Df Model:                           1
Covariance Type:            nonrobust
==============================================================================
                 coef    std err          t      P>|t|    [95.0% Conf. Int.]
------------------------------------------------------------------------------
Intercept    -70.0117      6.645    -10.536      0.000       -83.076  -56.947
Year           1.2300      0.087     14.080      0.000         1.058    1.402
==============================================================================
Omnibus:                       21.407   Durbin-Watson:                   1.121
Prob(Omnibus):                  0.000   Jarque-Bera (JB):               15.843
Skew:                           0.387   Prob(JB):                     0.000363
Kurtosis:                       2.391   Cond. No.                     1.57e+03
==============================================================================

We can fit a more complex model easily too. First, let's list the columns of the data to remind us what variables we have:
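df.columns

This gives us something like:

Index(['Model', 'MPG', 'Cylinders', 'Engine Disp', 'Horsepower', 'Weight',
       'Accelerate', 'Year', 'Origin'],
      dtype='object')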

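As a sketch, let's bring Weight and Horsepower into the model alongside Year (this particular combination is just one reasonable choice for illustration):

# Extra predictors are simply added to the right-hand side of the formula
model = ols("MPG ~ Year + Weight + Horsepower", data=df)
results = model.fit()
results.summary()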
We can see that bringing in some extra variables has increased the $R^2$ value from ~0.3 to ~0.8 – although the p-value for Horsepower is very high. If we remove Horsepower from the regression, it barely changes the results:
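For example (assuming the model sketched above):

# The same model as before, minus Horsepower
model = ols("MPG ~ Year + Weight", data=df)
results = model.fit()
results.summary()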

We can also see if introducing categorical variables helps with the regression. In this case, we only have one categorical variable, called Origin. Patsy automatically treats strings as categorical variables, so we don’t have to do anything special – but if needed we could wrap the variable name in C() to force it to be a categorical variable.
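For example, adding Origin to the formula (again, a sketch – any formula including the string column works the same way):

# Patsy dummy-codes the string column Origin automatically
model = ols("MPG ~ Year + Weight + Origin", data=df)
results = model.fit()
results.summary()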

You can see here that Patsy has automatically created extra variables for Origin: in this case, European and Japanese, with the ‘default’ (reference) level being American. You can configure how this is done very easily – see the Patsy documentation on categorical coding.
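For example, a sketch using Patsy's Treatment contrast to make Japanese the reference level instead of American:

# C() with a Treatment contrast lets us pick the reference level explicitly
model = ols("MPG ~ C(Origin, Treatment(reference='Japanese')) + Year + Weight", data=df)
results = model.fit()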

Just for reference, you can easily get any of the statistical outputs as attributes on the results object:
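For example:

results.rsquared   # the R-squared value
results.params     # the fitted coefficients, as a pandas Series
results.pvalues    # the p-value for each coefficient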