The critical ingredient is a maverick mind. Focus on trading vehicles, strategies and time horizons that suit your personality. In a nutshell, it all comes down to: Do your own thing (independence); and do the right thing (discipline). -- Gil Blake

Thursday, November 25, 2010

Running Variance

Variance - kinda the bread and butter of analysis work on a time series. It doesn't get much respect, though. But take the square root of the variance and you get the almighty standard deviation. Today, though, let's give variance its due...
For an intro to variance...check out these posts:

The problem with variance is calculating it in the traditional sense. It's costly to compute across a time series and can be quite a drag on your simulation engine's performance. The way to reduce the cost is to calculate a running variance. And that's when you get into quite a briar patch - loss-of-precision and overflow issues. See John D. Cook's post covering the variance briar patch:
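To make the cost concrete, here is a minimal sketch (my own illustration, not from any of the posts above) of the traditional recompute-from-scratch approach - O(period) work on every bar, which adds up fast inside a simulation loop:

```python
import statistics

def naive_windowed_var(bar, series, period):
    """Recompute the window's population variance from scratch on every bar.
    O(period) per call, so O(n * period) across the whole series."""
    start = max(0, bar - period + 1)
    return statistics.pvariance(series[start:bar + 1])
```

A running formula replaces that per-bar rescan with a constant-time update, which is the whole point of what follows.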

John does great work and I learn a lot from his posts. But, I was still having problems finding a variance formula that fit my needs:

Reduces the precision loss issue as much as possible;

Allows an easy way to window the running variance;

Allows an easy way to memoize the call.

Thankfully, I found a post by Subluminal Messages covering his very cool Running Standard Deviations formula. The code doesn't work as is - it needs correcting on a few items - but you can get the gist of the formula just fine. The formula uses the power sum of the squared values, versus Welford's approach of using the sum of the squared differences from the mean. Which makes it a bit easier to memoize. Not sure if it's as good at solving the precision loss and overflow issues as Welford's is...but so far I haven't found any issues with it.
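For comparison, Welford's approach keeps a running mean and a running sum of squared differences from that mean (usually called m2). A minimal sketch of one update step:

```python
def welford_update(count, mean, m2, newval):
    """One step of Welford's online variance algorithm.

    Tracks the running mean and the running sum of squared
    differences from the mean (m2). Population variance after
    n updates is m2 / count."""
    count += 1
    delta = newval - mean
    mean += delta / count
    m2 += delta * (newval - mean)  # uses mean both before and after the update
    return count, mean, m2
```

Note that both mean and m2 depend on each other at every step, which is part of why Welford's form is harder to window and memoize than the power-sum form used below.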

Now, one problem with all these formulas - they don't cover how to window the running variance. Windowing the variance gives you the ability to view, say, the 20 period running variance at bar 150. All the formulas I've mentioned above only give you the running cumulative variance. Deriving the running windowed variance is just a matter of using the same SMA I've posted about before and adjusting the Power Sum Average to the following:

def powersumavg(bar, series, period, pval=None):
    """
    Returns the power sum average based on the blog post from
    Subluminal Messages. Use the power sum average to help derive the
    running variance.
    sources: http://subluminal.wordpress.com/2008/07/31/running-standard-deviations/
    Keyword arguments:
    bar    -- current index or location of the value in the series
    series -- list or tuple of data to average
    period -- number of values to include in average
    pval   -- previous powersumavg (bar - 1) of the series
    """
    if period < 1:
        raise ValueError("period must be 1 or greater")
    if bar < 0:
        bar = 0
    if pval is None:
        if bar > 0:
            raise ValueError("pval of None invalid when bar > 0")
        pval = 0.0
    newamt = float(series[bar])
    if bar < period:
        # growing window: running mean of the squared values
        result = pval + (newamt * newamt - pval) / (bar + 1.0)
    else:
        # full window: swap the oldest squared value for the newest
        oldamt = float(series[bar - period])
        result = pval + ((newamt * newamt) - (oldamt * oldamt)) / period
    return result

Code for the Running Windowed Variance:

def running_var(bar, series, period, asma, apowsumavg):
    """
    Returns the running variance based on a given time period.
    sources: http://subluminal.wordpress.com/2008/07/31/running-standard-deviations/
    Keyword arguments:
    bar        -- current index or location of the value in the series
    series     -- list or tuple of data to average
    period     -- number of values to include in average
    asma       -- current average of the given period
    apowsumavg -- current powersumavg of the given period
    """
    if period < 1:
        raise ValueError("period must be 1 or greater")
    if bar <= 0:
        return 0.0
    if asma is None:
        raise ValueError("asma of None invalid when bar > 0")
    if apowsumavg is None:
        raise ValueError("powsumavg of None invalid when bar > 0")
    windowsize = bar + 1.0
    if windowsize >= period:
        windowsize = period
    # variance = mean of the squares minus the square of the mean
    return (apowsumavg * windowsize - windowsize * asma * asma) / windowsize

No way I know of to avoid storing the previous data points in order to obtain the running windowed variance - you need the value that drops out of the window. Just have to decide where you'd like to store them. You could store them yourself and pass them in the call to your function. Or you could create a class for your variance logic and memoize the previous data points (cache the values) in the class. Then the object will keep track of them for you through the window.

Peter, sorry about the bad links. I have updated the links in the SMA post. Also, you might be interested in the code I have over on github that contains both the running SMA and EMA: statio. Take care. MT