Importing CSV Data in Zipline for Backtesting

In our previous article, Introduction to Zipline package in Python, we created an algorithm for a moving average crossover strategy.

Recall that Zipline is a Python library for trading applications, used to create an event-driven system that supports both backtesting and live trading.

In the previous article, we learnt how to implement a Moving Average Crossover strategy in Zipline. The strategy code reads data directly from Yahoo, performs the backtest and plots the results.

We recommend that you brush up a few essential concepts, covered in the previous post, before going further:

Installation (how to install Zipline locally)

Structure (format to write a code in Zipline)

In this article, we will take a step further and learn to backtest on Zipline using data from different sources. We will learn to:

Import and backtest on OHLC data in CSV format

Import and use data from Google Finance for research/analysis

Calculate and print backtesting results such as PnL, number of trades, etc.

The post serves as a guide for serious quants and DIY algo traders who want to use Python and the Zipline package independently for backtesting and hypothesis testing of their trading ideas. In this post, we will assume that the data is from the US markets. It is possible to use other markets’ data sets for analysis with some edits and additions in the code. We will share the same in a later post.

The parts of the code in Zipline – what we have learnt already

Problem with the existing method

Zipline provides an inbuilt function, “load_bars_from_yahoo()”, that fetches data from Yahoo over a given range and uses that data for all the calculations. Though very easy to use, this function works only with Yahoo data. Using this function, we cannot backtest on different data sets such as:

Commodities data, which Yahoo does not provide

Simulated data sets created and saved in CSV format

We have been using this in-built function so far to load stock data into the Python IDE and work with it. To be able to read CSV or any other data type in Zipline, we need to understand how Zipline works and why the usual methods to import data do not work here!

Zipline accepts data in Panel form. To understand how Zipline treats and understands data, we must learn a little about data structures in Python.

Data Structures in pandas

Series:

It is a one-dimensional labelled array capable of holding any data type (integers, strings, floating-point numbers, Python objects, etc.). The axis labels are collectively referred to as the index.

The basic method to create a Series is to call:

s = pd.Series(data, index=index)

A Series accepts different kinds of objects, such as a Python dict, an ndarray, or a scalar value (like 5).
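As a quick sketch (the labels and values here are made up for illustration), a Series can be built from each of these inputs:

```python
import numpy as np
import pandas as pd

# From a Python dict: keys become the index labels
s_dict = pd.Series({"open": 100.0, "close": 101.5})

# From an ndarray, with an explicit index
s_arr = pd.Series(np.array([1.0, 2.0, 3.0]), index=["a", "b", "c"])

# From a scalar: the value is repeated for every index label
s_scalar = pd.Series(5, index=["a", "b", "c"])
```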

Data Frame:

It is a two-dimensional labelled data structure with rows and columns. Columns can be of the same or different types.

It is one of the most commonly used pandas objects and accepts different types of inputs, such as a dict of 1D ndarrays, lists, dicts or Series; a 2-D numpy.ndarray; a structured or record ndarray; or a Series.
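Two of those input types, as a minimal sketch with made-up values:

```python
import pandas as pd

# From a dict of lists: the keys become column names
df = pd.DataFrame(
    {"Open": [100.0, 101.0], "Close": [101.5, 100.5]},
    index=pd.to_datetime(["2017-01-03", "2017-01-04"]),
)

# From a dict of Series: the Series indexes are aligned automatically
df2 = pd.DataFrame({"a": pd.Series([1, 2]), "b": pd.Series([3, 4])})
```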

Panel:

Panel is a lesser-used data structure, but it can be used efficiently for three-dimensional data.

The three axes are named as below:

items: axis 0, each item corresponds to a DataFrame contained inside

major_axis: axis 1, it is the index (rows) of each of the DataFrames

minor_axis: axis 2, it is the columns of each of the DataFrame
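The three axes map directly onto a dict of DataFrames, one per item. As a sketch (the tickers and values are made up), note that pd.Panel exists only in older pandas versions (it was removed in pandas 0.25, which is why the check below is guarded):

```python
import pandas as pd

# One DataFrame per item (axis 0), e.g. one per ticker
frames = {
    "AAPL": pd.DataFrame({"Open": [100.0], "Close": [101.0]},
                         index=pd.to_datetime(["2017-01-03"])),
    "MSFT": pd.DataFrame({"Open": [60.0], "Close": [61.0]},
                         index=pd.to_datetime(["2017-01-03"])),
}

# pd.Panel is only available in the older pandas versions Zipline used
if hasattr(pd, "Panel"):
    panel = pd.Panel(frames)
    # items = tickers, major_axis = dates, minor_axis = columns
    print(panel.items, panel.major_axis, panel.minor_axis)
```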

Zipline only understands data structure in Panel format.

While it is easy to import .csv data into pandas as a DataFrame, it is not possible to do the same in Zipline directly. However, we have found a workaround for this problem:

This is a powerful technique which will help you in importing data from different sources such as:

Import OHLC data in a CSV format in Zipline (we will show how)

Read data from online sources other than Yahoo that connect with pandas (we will show how)

Read data from Quandl in Zipline (this is left as an exercise for you!)

Let us get started with the three steps!

Import the data in Python

We can use any method to import the data as a DataFrame, or just import the data and convert it into a DataFrame. Here, we will use two methods to fetch data: the DataReader and read_csv functions.

Use DataReader to read data from Google

Pandas provides a function, DataReader, that allows the user to specify the date range and the source. You can use Yahoo, Google or any other data source.
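A sketch of the call, with two caveats: DataReader now lives in the separate pandas-datareader package rather than in pandas itself, and the 'google' source that worked when this article was written has since been discontinued, so the fetch is wrapped to fail gracefully:

```python
from datetime import datetime

start = datetime(2016, 1, 1)
end = datetime(2016, 12, 31)

try:
    # DataReader moved out of pandas core into the pandas-datareader package
    from pandas_datareader.data import DataReader

    # 'google' was a supported source at the time of writing;
    # 'yahoo' and other sources follow the same call pattern
    data = DataReader("AAPL", "google", start, end)
    print(data.head())
except Exception:
    data = None  # package missing, or the data source is no longer available
```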

Use read_csv function to import a CSV file:

Pandas provides another function, read_csv, that fetches a CSV file from a specified location. Please note that the CSV should be in a proper format so that it reads correctly when called by a strategy algorithm in Zipline.

Format of CSV file:

The first column is “Date”, the second “Open”, the third “High”, the fourth “Low”, the fifth “Close”, the sixth “Volume” and the seventh “Adj Close”. None of the columns should be blank or have missing values.

Zipline accepts [‘Open’, ‘High’, ‘Low’, ‘Close’, ‘Volume’, ‘Price’] data as the minor axis and ‘Date’ as the major axis, in UTC time format. If the Date is not in UTC format, we convert it using “tz_localize(pytz.utc)”.
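A minimal sketch of the read-and-convert step, using an inline two-row CSV with made-up values (a file path would work the same way with read_csv); the tz string 'UTC' is equivalent to pytz.utc:

```python
import io
import pandas as pd

# A tiny CSV in the required column order (values illustrative only)
csv_text = """Date,Open,High,Low,Close,Volume,Adj Close
2017-01-03,115.8,116.3,114.8,116.2,28781900,114.3
2017-01-04,115.9,116.5,115.7,116.0,21118100,114.1
"""

df = pd.read_csv(io.StringIO(csv_text), index_col="Date", parse_dates=True)

# Zipline expects the Date index in UTC
df.index = df.index.tz_localize("UTC")

# Rename 'Adj Close' to 'Price', the column name Zipline expects
df = df.rename(columns={"Adj Close": "Price"})

# On the older pandas versions Zipline supports, wrapping a dict such as
# {'AAPL': df} in pd.Panel produces the Panel that Zipline consumes
```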

Now ‘panel’ is the dataset ‘data’ saved in Panel format.

Use this new data structure Panel to run your strategy

We use this new data structure, ‘Panel’, to run our strategy with no changes in the “initialize” or “handle_data” sections. The strategy logic and code remain the same; we just plug the new data structure in while running the strategy.

That’s it! Now you can easily run the previously explained Moving Average Crossover strategy on a CSV data file! Go on, give it a try!

You can also fetch Quandl (US) data and try generating signals on it.

Backtesting on Zipline

In the previous post, we backtested a simple Moving Average Crossover strategy and plotted cash and PnL for each trading day. Now, we will calculate the PnL and the total number of trades for the entire trading period.

Recall that the results are automatically saved in ‘perf_manual’. Using the same, we can calculate any performance ratios or numbers that we need.
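As a sketch, the two numbers can be pulled straight out of the performance DataFrame. The mock ‘perf_manual’ below stands in for the real one returned by the backtest (its values are made up); the real frame has one row per trading day, with a ‘portfolio_value’ column and a ‘transactions’ column holding that day’s list of fills:

```python
import pandas as pd

# Mock stand-in for perf_manual (the real one comes from running the algo)
perf_manual = pd.DataFrame({
    "portfolio_value": [100000.0, 100250.0, 99900.0, 100400.0],
    "transactions": [[], [{"amount": 10}], [], [{"amount": -10}]],
})

# Total PnL = final portfolio value minus initial portfolio value
pnl = (perf_manual["portfolio_value"].iloc[-1]
       - perf_manual["portfolio_value"].iloc[0])

# Total number of trades = number of fills across all days
num_trades = perf_manual["transactions"].apply(len).sum()

print("PnL:", pnl)            # -> 400.0
print("Trades:", num_trades)  # -> 2
```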

To change the initial capital and other parameters to optimize your backtesting results, you need to initialize the TradingAlgorithm() accordingly. ‘capital_base’ is used to define the initial cash, ‘data_frequency’ is used to define the data frequency. For example:
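A sketch against the older Zipline TradingAlgorithm API that this article uses; ‘initialize’, ‘handle_data’ and ‘panel’ are assumed to be defined as in the earlier sections and the previous post:

```python
from zipline.algorithm import TradingAlgorithm

# initialize and handle_data are the same functions used by the
# Moving Average Crossover strategy in the previous post
algo_obj = TradingAlgorithm(
    initialize=initialize,
    handle_data=handle_data,
    capital_base=100000.0,   # initial cash
    data_frequency="daily",  # frequency of the input data
)

# Run the backtest on the Panel built from the CSV
perf_manual = algo_obj.run(panel)
```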

If you are serious about writing advanced trading strategies and executing them through Python, read more about our Executive Programme in Algorithmic Trading. Over 250 hours of intensive training, with customized learning solutions, interactions with industry experts, traders, quants and two months of practical project work under Algo & HFT traders is what you get at throw-away prices! The new batch is starting from 27th August! Start immediately!

Conclusion

If you are a coder or a tech professional looking to start your own automated trading desk, learn automated trading from live interactive lectures by daily practitioners. The Executive Programme in Algorithmic Trading covers training modules like Statistics & Econometrics, Financial Computing & Technology, and Algorithmic & Quantitative Trading. Enroll now!