Quantitative research, trading strategy ideas, and backtesting for the FX and equity markets


In this post, I will demonstrate how to quickly visualize correlations using the PerformanceAnalytics package. Thanks to the package creators, it is really easy to compute correlations and many other performance metrics.

The first chart looks at the rolling 252-day correlation of nine sector ETFs using SPY as the benchmark. As expected, the correlation is rather high because the sector ETFs are part of the S&P 500 index, but it has been even more pronounced over the last few years.

Chart 2 shows the correlation of five ETFs. Note that there is no single instrument used as a benchmark; all five ETFs are benchmarked against one another. (Note that I removed the legend because it literally took up the entire plot.)

Chart 3 shows the same 4 ETFs, this time using SPY as a benchmark.

In my opinion, the beauty of the chart.RollingCorrelation function is that the inputs are time series returns. This means we can compute correlations between any return streams: instruments (ETFs, stocks, mutual funds, etc.), hedge fund managers, portfolios, and even strategies we test in quantstrat.

Here is the R code used to generate the first chart. To do your own correlation analysis, just change the symbols or add in new data sets of different returns.
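
Under the hood, chart.RollingCorrelation is just a Pearson correlation computed over a sliding window of returns. Here is a minimal sketch of that calculation (illustrated in Python with toy return series and a 3-period window; the chart above uses a 252-day window):

```python
def rolling_correlation(x, y, window):
    """Pearson correlation of two return series over a sliding window."""
    out = []
    for i in range(window, len(x) + 1):
        xs, ys = x[i - window:i], y[i - window:i]
        mx, my = sum(xs) / window, sum(ys) / window
        cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
        var_x = sum((a - mx) ** 2 for a in xs)
        var_y = sum((b - my) ** 2 for b in ys)
        out.append(cov / (var_x * var_y) ** 0.5)
    return out

# Toy ETF returns that are exactly proportional to the benchmark,
# so every window reports a correlation of (approximately) 1.0
etf = [0.01, -0.02, 0.015, 0.005, -0.01]
spy = [0.02, -0.04, 0.03, 0.01, -0.02]
print(rolling_correlation(etf, spy, window=3))
```

Swapping the benchmark column for another return series gives the pairwise version used in chart 2.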

In part 2, we saw that adding a volatility filter to a single instrument test did little to improve performance or risk-adjusted returns. How will the volatility filter impact a multiple instrument portfolio?

In part 3 of the follow up, I will evaluate the impact of the volatility filter on a multiple instrument test.

*Note the difference in start dates. The volatility filter requires an extra 52 periods to compute the RBrev1 indicator, so the test dates are offset by 52 weeks (one year).

Both tests will risk 1% of account equity and the stop size is 1 standard deviation.
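
That sizing rule can be sketched directly: the dollar risk is 1% of equity, and the number of units is that risk divided by the 1-standard-deviation stop distance. A minimal illustration (the equity and stop values are hypothetical; the actual tests use quantstrat's order sizing):

```python
import math

def position_size(equity, risk_pct, stop_stdev):
    """Units sized so that a 1-stop move (one stdev) loses risk_pct of equity."""
    risk_dollars = equity * risk_pct
    return math.floor(risk_dollars / stop_stdev)

# Hypothetical: risking 1% of $100,000 with a 1-stdev stop of $8.92/unit
print(position_size(100_000, 0.01, 8.92))   # 112 units
# A larger stdev at entry mechanically shrinks the position
print(position_size(100_000, 0.01, 21.16))  # 47 units
```

Note how the measured volatility at entry feeds straight into trade size; this matters again later when comparing individual trades.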

Test #1 is a simple moving average strategy without a volatility filter on a portfolio of the nine sector ETFs mentioned previously. This will be the baseline for comparison of the strategy with the volatility filter.

Test #1 Buy and Exit Rules

Buy Rule: Go long if close crosses above the 52 period SMA

Exit Rule: Exit if close crosses below the 52 period SMA
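
The two rules above describe a long-only SMA cross system. A compact sketch of the signal logic (Python for illustration, with a 3-period SMA and toy prices rather than the 52 period weekly SMA used in the test):

```python
def sma(prices, n):
    """Simple moving average; None until n observations are available."""
    return [None if i + 1 < n else sum(prices[i + 1 - n:i + 1]) / n
            for i in range(len(prices))]

def sma_signals(prices, n):
    """1 = long (close above SMA), 0 = flat (close below SMA or SMA not ready)."""
    avg = sma(prices, n)
    return [0 if a is None else int(p > a) for p, a in zip(prices, avg)]

prices = [10, 11, 12, 11, 9, 8, 10, 12]
print(sma_signals(prices, n=3))  # -> [0, 0, 1, 0, 0, 0, 1, 1]
```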

Test #1 Performance Statistics

| Test    | CAGR (%) | MaxDD (%) | MAR      |
|---------|----------|-----------|----------|
| Test #1 | 7.976377 | -14.92415 | 0.534461 |

Test #2 will be a simple moving average strategy with a volatility filter on the same nine ETFs. The volatility filter is the same measure used in Follow-Up Part 2: simply the 52 period standard deviation of close prices.

Test #2 Buy and Exit Rules

With this filter in place, the buy rule can be interpreted as follows:

Buy Rule: Go long if close is greater than the 52 period SMA and the 52 period standard deviation of close prices is less than its median over the last 52 periods.

Exit Rule: Exit if long and close is less than the 52 period SMA
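
The filtered entry simply gates the SMA signal on the volatility condition. A minimal sketch of the buy decision (hypothetical numbers; `filtered_buy` is an illustrative helper, not a quantstrat function):

```python
import statistics

def filtered_buy(close, sma_value, vol, vol_history):
    """Test #2 buy rule: close above the SMA AND current volatility
    below the median of its recent readings."""
    return close > sma_value and vol < statistics.median(vol_history)

# Close is above the SMA, but volatility is elevated vs. its median -> no entry
print(filtered_buy(105, 100, vol=2.5, vol_history=[1.0, 1.2, 1.1, 2.5, 0.9]))  # False
# Same price setup in a quiet regime -> entry allowed
print(filtered_buy(105, 100, vol=0.8, vol_history=[1.0, 1.2, 1.1, 2.5, 0.9]))  # True
```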

Test #2 Performance Statistics

| Test    | CAGR (%)  | MaxDD (%)   | MAR      |
|---------|-----------|-------------|----------|
| Test #2 | 7.6694587 | -14.6590123 | 0.523191 |

Both strategies perform fairly well. I would give a slight edge to Test #1, the strategy without a volatility filter. It has a slightly higher maximum drawdown (MaxDD), but also a higher CAGR and MAR.

| Test    | CAGR (%)  | MaxDD (%) | MAR      |
|---------|-----------|-----------|----------|
| Test #1 | 7.976377  | -14.92415 | 0.534461 |
| Test #2 | 7.6694587 | -14.65901 | 0.523191 |

Below I will include the R code for Test #2; shoot me an email if you want the code for Test #1.

In the Follow-Up Part 1, I explored some of the functions in the quantstrat package that allowed us to drill down trade by trade to explain the difference in performance of the two strategies. By doing this, I found that my choice of a volatility measure may not have been the best choice. Although the volatility filter kept me out of trades during periods of higher volatility, it also had a negative impact on position sizing and overall return.

The volatility measure presented in the original post was the 52 period standard deviation of the 1 period change of close prices, implemented as a custom indicator (the RB function) so the filter could be incorporated into the buy rule.
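
That measure is straightforward to sketch: difference the closes, then take a rolling standard deviation of those changes. A minimal Python illustration (toy prices and a 3-period window instead of the 52 week parameters):

```python
import statistics

def change_volatility(close, n):
    """n-period standard deviation of the 1-period change in close prices.
    Returns None until n changes are available."""
    changes = [b - a for a, b in zip(close, close[1:])]
    return [None if i + 1 < n else statistics.stdev(changes[i + 1 - n:i + 1])
            for i in range(len(changes))]

close = [100, 102, 101, 105, 104, 108]
print(change_volatility(close, n=3))
```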

I will test the strategy on the adjusted close of the S&P 500 using weekly prices from 1/1/1990 to 1/1/2000, just as in the previous post.

And the winner is… both! There is no difference in performance on this single instrument in this specific window of time I used for the test.

Always do your own testing to decide whether or not a filter of any kind will add value to your system. This single instrument test in the series of posts showed that choosing the “wrong” volatility filter can hinder performance, while another choice of volatility filter may have little to no impact at all.

How do you think the volatility filter will affect a multiple instrument test?

Analyzing transactions in quantstrat

This post will be part 1 of a follow up to the original post, Simple Moving Average Strategy with a Volatility Filter. In this follow up, I will take a closer look at the individual trades of each strategy. This may provide valuable information to explain the difference in performance of the SMA Strategy with a volatility filter and without a volatility filter.

Thankfully, the creators of the quantstrat package have made it very easy to view the transactions with a simple function and a single line of code.

For ease of comparison, I exported the transactions for each strategy to Excel and aligned the trades as closely as I could by date.

First, let's look at the trades highlighted by the red rectangle. Strategy 2 executed a trade for 548 units on 1/13/1995 and closed on 9/4/1998 for a total profit of $278,340.16. By comparison, Strategy 1 executed a trade for 247 units on 5/19/1995 (about 4 months later) and closed on 9/4/1998 for a total profit of $112,310.90. This is a significant difference of $166,029.26. It is clear that this single trade is critical to the performance of the strategy.

Now, let's look at the trade highlighted by the yellow rectangle. Both trades were closed on 10/22/1999. Strategy 1 resulted in a loss of $2,250.45 and Strategy 2 resulted in a gain of $15,706.64… a difference of $17,957.09.

The equity curve of Strategy 1 compared with Strategy 2 shows a clearer picture of the outperformance.

Why such a big difference?

For an even closer look, we will need to take a look at the measure of volatility we use as a filter. I will make a few modifications to the RB function so we can see the volatility measure and median.

The sd for 1995-01-13 is 0.0135 while the SDEV is 8.924. The sd for 1995-05-19 is 0.0124 while the SDEV is 21.16… the SDEV is well over twice as large even though our volatility measure is indicating a period of low volatility! (Note: SDEV has a direct impact on position sizing.)
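
The mismatch arises because the two numbers measure different things: sd here is the volatility of period-to-period changes, while SDEV is computed on price levels, which keeps growing as a trend extends even when the market is perfectly quiet. A toy illustration of the divergence:

```python
import statistics

# A steady, quiet uptrend: every weekly change is identical
close = [100, 102, 104, 106, 108, 110]
changes = [b - a for a, b in zip(close, close[1:])]

sd_of_changes = statistics.stdev(changes)  # 0.0 -- the filter sees no volatility
sd_of_prices = statistics.stdev(close)     # ~3.74 -- grows with the trend itself
print(sd_of_changes, sd_of_prices)
```

A stop keyed off the price-level standard deviation therefore widens as a trend extends, and with a fixed 1% equity risk that directly cuts the number of units bought, which is the 548 vs. 247 unit gap seen in the transactions.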

Perhaps we should take a second look at our choice of volatility measure.

If you want to incorporate a volatility filter into your system, choose the volatility measure wisely…

I would describe my trading approach as systematic long term trend following. A trend following strategy can be difficult mentally to trade after experiencing multiple consecutive losses when a trade reverses due to a volatility spike or the trend reverses. Volatility tends to increase when prices fall. This is not good for a long only trend following strategy, especially when initially entering trades.

Can adding a volatility filter to a simple system improve performance?

SMA System with Volatility Filter Rules

Buy Rule: Go long if close is greater than the N period SMA and a volatility measure is less than its median over the last N periods.

Exit Rule: Exit if long and close is less than the N period SMA

SMA System without Volatility Filter Rules

Buy Rule: Go long if close is greater than the N period SMA

Exit Rule: Exit if close is less than the N period SMA

For this test, my volatility measure is the 52 period standard deviation of the 1 period change of close prices and I will use a 52 period SMA.

I will test the strategy on the total return series of the S&P500 using weekly prices from 1/1/1990 to 4/17/2012.

yuck… the equity curves look pretty good up until 1999, then not so good after that.

| Test                       | CAGR     | maxDD    | MAR      | # Trades | Ending Equity | Percent Winning Trades |
|----------------------------|----------|----------|----------|----------|---------------|------------------------|
| SMA with Volatility Filter | 4.369174 | -22.3993 | 0.195059 | 34       | $239,104.70   | 58.82                  |
| SMA System                 | 7.442673 | -22.2756 | 0.334119 | 57       | $464,198.80   | 53.57                  |
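
For reference, the summary statistics are standard: CAGR annualizes the total return, maxDD is the worst peak-to-trough equity decline, and MAR is CAGR divided by the magnitude of maxDD. A sketch of the formulas (Python, with hypothetical equity values, not the figures from the table):

```python
def cagr(start_equity, end_equity, years):
    """Compound annual growth rate, in percent."""
    return ((end_equity / start_equity) ** (1 / years) - 1) * 100

def max_drawdown(equity_curve):
    """Worst peak-to-trough decline, in percent (a negative number)."""
    peak, worst = equity_curve[0], 0.0
    for value in equity_curve:
        peak = max(peak, value)
        worst = min(worst, (value / peak - 1) * 100)
    return worst

def mar(cagr_pct, maxdd_pct):
    """Return/risk ratio: CAGR over the magnitude of max drawdown."""
    return cagr_pct / abs(maxdd_pct)

# Hypothetical equity doubling over 10 years with one 25% drawdown
g = cagr(100_000, 200_000, 10)
dd = max_drawdown([100, 120, 90, 130, 110])
print(round(g, 2), round(dd, 2), round(mar(g, dd), 3))  # 7.18 -25.0 0.287
```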

This test shows that adding a volatility filter to our entries can actually hinder performance. Keep in mind this is by no means an exhaustive test, and it covers only a single instrument. I also chose the 52 period SMA and SDEV somewhat arbitrarily, because 52 weeks represents one year.

Reading through trading forums, it is clear that people are in search of the “holy grail” trading system. Some people claim to have found the “holy grail” system, but that system is usually a combination of 10+ indicators and rules that say “use indicator A, B, and C when the market is doing X, or use indicators D, E, and F when the market is doing Y.” Beware of these “filters” and always test them yourself.

Stay tuned for future posts that will look at adding a similar filter on a multiple instrument test.

Low volatility and minimum variance strategies have been getting a lot of attention lately due to their outperformance in recent years. Let’s take a look at how we can incorporate this low volatility effect into a monthly rotational strategy with a basket of ETFs.

Not the greatest performance stats in the world. There are some things we can do to improve this strategy. I will save that for later. The purpose of this post was an exercise using quantstrat to implement a low volatility ranking system.

We can see from the chart that the low volatility strategy does what it is supposed to do… the drawdown is reduced compared to a buy and hold strategy on SPY. This is by no means a conclusive test. Ideally, the test would cover 20, 40, 60+ years of data to show the “longer” term performance of both strategies.

The measure of volatility that I will use is a rolling 12 period standard deviation of the 1 period ROC. The 1 period ROC is taken on the Adjusted Close prices. My approach for the ranking system is to first apply the standard deviation to the market data and then assign a rank of 1, 2, …9 for the instruments. There may be a more elegant way to do this in R, so if you have an alternative way to implement this I am all ears.

Now that the market data is “prepared”, we can easily implement the strategy using quantstrat. Note that the signal is when the “RANK” column is less than 3. This means that the strategy buys the 3 instruments with the lowest volatility. See end of post for quantstrat code.
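
The ranking step itself can be sketched simply: compute each instrument's trailing volatility, then rank ascending so the quietest instruments get the lowest numbers and the strategy holds the bottom three. A Python illustration (the tickers and volatility values are hypothetical):

```python
def rank_by_volatility(vols):
    """Map each symbol to a rank, where 1 = lowest trailing volatility."""
    ordered = sorted(vols, key=vols.get)
    return {symbol: i + 1 for i, symbol in enumerate(ordered)}

# Hypothetical 12-period stdevs of 1-period ROC for five sector ETFs
vols = {"XLU": 0.011, "XLK": 0.029, "XLP": 0.012, "XLF": 0.034, "XLE": 0.025}
ranks = rank_by_volatility(vols)
longs = [s for s, r in ranks.items() if r <= 3]  # hold the 3 least volatile
print(ranks)
print(sorted(longs))  # ['XLE', 'XLP', 'XLU']
```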

Welcome to the first post of the RB Research blog. In this blog, I will focus on quantitative research, trading strategy ideas, and backtesting, primarily in the Foreign Exchange (FX) and equity markets. In the past, I did nearly all of my testing and analysis in Microsoft Excel, but over the past 6 months I have been “bitten” by the programming bug. My language of choice is the R language because of the vast amount of contributed packages and the tremendous support community. It has been frustrating, insightful, and rewarding all at the same time. My initial inspiration for moving my testing from Excel to R was a series of posts over at FOSS Trading. If you haven’t checked out his blog, I highly recommend it! Other blogs that have been influential are:

As stated earlier, my posts will be research driven, using the R programming language and maybe even some VB.NET. I consider myself a beginner programmer and hope that my programming skills will continue to develop through sharing my work on this blog.