TAA Backtest and Expectations

I want to put this discussion about TAA into perspective. Just how much juice can we really expect from this TAA thingamajig?

Note that when I talk about TAA, I’m using Mebane Faber’s model as a jumping off point, so I’m trading a diversified basket of asset classes infrequently (once per month or less) with a focus on trend-following and momentum (read more).

In this post I’ll show a backtest of my take on a TAA model, and in a follow up post I’ll take a more technical look at what I considered when building the model.

[logarithmically-scaled, growth of $10,000, monthly-interval]

The graph above shows backtested results of the TAA model (red) versus the S&P 500 (grey) since 1971. The model does not employ leverage. See end of post for assumptions about return on cash and trade frictions.
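For readers who want a concrete anchor, the flavor of timing rule this family of models is built on (Faber's 10-month SMA baseline; my actual rules differ and aren't shown here) can be sketched as:

```python
# Faber-style 10-month SMA timing rule (illustrative baseline only;
# the model backtested in this post uses its own, different rules).
def sma_signal(monthly_closes, window=10):
    """Return 1 (hold the asset) if the latest monthly close is above
    its `window`-month simple moving average, else 0 (hold cash)."""
    if len(monthly_closes) < window:
        return 0  # not enough history: stay in cash
    sma = sum(monthly_closes[-window:]) / window
    return 1 if monthly_closes[-1] > sma else 0
```

The signal is evaluated once per month at the close, per asset class.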

The real benefit of this flavor of TAA is NOT generating returns, it’s managing losses. To illustrate, below I’ve included a chart showing drawdowns for the model (red) vs the S&P 500 (grey) since 1971. I like this very much…

[based on month-end values]

And lastly, numbers for the number-lovers…

[based on month-end values]

Key points…

1. IGNORE RETURNS. The fact that the model outperformed the market (or any other asset class) is meaningless. Returns are an illusion; they’re just a function of risk. Much more important are returns relative to volatility/drawdown and the smoothness of the equity curve over different market regimes. On both, the model shines.

2. These results are made even better by the fact that this model is very much not curve-fitted. My rules are similar to Faber’s: select asset classes that are in an uptrend and showing positive momentum. These are tried-and-true rules that have worked since the dawn of ticker tape.

3. There are limitations to this flavor of TAA because of how infrequently it trades and how broad the asset classes are that it’s holding. I don’t claim that my take on TAA is the best one, but I do think we’re pretty close to the ceiling of what one could reasonably expect from this type of approach. Without more active trading there’s only so much blood to squeeze from this turnip.

4. There are two other variations on the model I’m also tinkering with: (a) trading with leverage, and (b) holding positions for at least a year to take advantage of reduced long-term CG rates in taxable accounts.

In a follow up post I’ll take a more technical look at what I considered when building the model. As always, more to follow.

Test assumptions: (a) when an appropriate ETF existed, I ran this test using actual ETF data. For periods prior, I used index or futures data but did not adjust for an ETF expense ratio, which would add some drag to the results presented here; (b) results do not account for transaction costs or slippage, but given how infrequently the model trades, an investor with a reasonably sized portfolio should be able to closely reproduce these results; (c) taxes have been ignored; and (d) I've assumed a return on cash of HALF the nearest 13-week Treasury.

. . . . .


MS,
Looking at that equity curve, it looks very similar to Madoff's… kidding. Not considering returns, the results look awesome so far; the drawdown numbers especially are very low. Very much looking forward to more analysis.

Maybe I missed the details somewhere (I have not read Mebane’s Ivy Portfolio book yet) but are you just going long the assets?

Hello Jens – just long (for the moment). This model (and Faber’s original) still performed well during bear markets as a result of either moving to cash or rotating into assets less correlated with equities.

I know that it is fashionable nowadays to ignore the raw return and emphasize the volatility instead (or vol-adjusted return),
but could you (or others) please convince me a little bit more?
:)

I said volatility with intention. I don't like defining risk as equal to pure volatility.
For me, risk is the loss of real money.
For example, suppose I invested 100 USD in a strategy, and it went up to 200 USD smoothly and quite quickly (an emerging-market fund).
After that it wiggles like crazy with a -50% drawdown. Do I really care?
No. Even if it has huge volatility, for me the (subjective) risk (= losing real money) is zero.

There are other methods to define risk as well (not only my previous, very subjective, primitive example, which obviously depends on entry-point timing).
For example, some fund managers define risk as the number of months losing money. Unorthodox.
Since ‘risk’ is not a properly defined term, I like that you write
‘volatility/drawdown adjusted return’ instead of the popularly used ‘risk adjusted return’.
Actually, I would ban the word ‘risk’ from textbooks because of its ambiguity.

But back to my question. Why should I ‘ignore returns’ as suggested?
That is all I care about. (OK, that is not true.) But suppose you have money you don't need. I mean literally. You invest it for your children, who are not even born yet. They may or may not need it 30 years from now. In that scenario, do you really care about volatility during those 30 years?
OK, this was really an extreme example for the debate on return vs. volatility.
George

Hello George – good question – because return in and of itself doesn’t tell you anything about the true nature of a strategy.

First example…trading asset A, a strategy is correct 55% of the time with a W/L ratio of 1.2. Trading asset B, the strategy is correct 55% of the time with a W/L ratio of 1.5. Which asset is the strategy most effective trading? The clear answer is asset B, but if asset A is much more volatile than asset B, then in terms of return, it could appear to be the more effective. Here we’ve been fooled by volatility.
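The first example is easy to make concrete. A rough expectancy calculation (the win rates and W/L ratios are straight from the example above; the 3x volatility multiplier is my own illustrative assumption):

```python
def expectancy(win_rate, win_loss_ratio):
    """Expected profit per trade, in units of the average loss."""
    return win_rate * win_loss_ratio - (1 - win_rate)

edge_a = expectancy(0.55, 1.2)   # asset A: 0.21 per unit risked
edge_b = expectancy(0.55, 1.5)   # asset B: 0.375 per unit risked

# B is the better trade per unit risked, but if A's typical move is,
# say, 3x larger, A's raw P&L looks better despite the weaker edge.
raw_a = edge_a * 3.0
raw_b = edge_b * 1.0
```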

Second example…trading a given strategy without leverage results in a 10% annualized return and 20% annualized volatility. At 2:1 margin (and no margin cost), the strategy returns 20% with 40% annualized volatility. Which variation was more effective? The answer is neither – they’re equal. But variation #2 looks much more effective in terms of return. Here we’ve been fooled by leverage.
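And the second example in code: a simple return-over-volatility ratio is unchanged by cost-free leverage, which is exactly why raw return can't rank the two variations:

```python
def return_over_vol(ann_return, ann_vol):
    """Crude efficiency measure: annualized return per unit of
    annualized volatility (risk-free rate ignored for simplicity)."""
    return ann_return / ann_vol

unlevered = return_over_vol(0.10, 0.20)  # 10% return, 20% vol
levered = return_over_vol(0.20, 0.40)    # same strategy at 2:1, no margin cost
```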

Again, return doesn’t tell us anything about how effective a strategy is. Rather, we should find the most effective strategy and then add or subtract leverage, or use more or less volatile assets, to reach the desired return.

We had experience with weekly TAA.
I reckon you will find better return with weekly than monthly rebalancing.
Note however that for weekly rebalancing, the day of the week on which the portfolio starts/rebalances does matter. Probably it is not a surprise to you now. We found examples in which the TR for the Monday version was +160% while the Wednesday version gave +230%. (Some randomness prevails.)
Extrapolate this to monthly. Run your monthly backtests with different rebalancing days (1st, 2nd, etc. day of the month) to reveal the true nature of your TAA.
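George's robustness check is cheap to wire up. A toy harness (synthetic prices and a deliberately crude trend filter, purely to show the shape of the test, not the model in this post):

```python
import random

random.seed(7)
# Synthetic daily prices standing in for real asset data (assumption).
prices = [100.0]
for _ in range(2000):
    prices.append(prices[-1] * (1 + random.gauss(0.0003, 0.01)))

def total_return(start_day, period=21):
    """Run the same crude trend-following rule, but only evaluate it
    every `period` trading days starting at `start_day`. A stand-in
    for a real TAA backtest; the point is the harness, not the rule."""
    equity = 1.0
    for t in range(start_day, len(prices) - period, period):
        if prices[t] > prices[max(0, t - 200)]:  # crude trend filter
            equity *= prices[t + period] / prices[t]
    return equity - 1.0

# Same rule, different "day of the month" -> a spread of outcomes.
results = {day: total_return(day) for day in range(0, 21, 5)}
spread = max(results.values()) - min(results.values())
```

A large `spread` relative to the average result is a warning sign that the backtest's edge is partly a rebalance-date artifact.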

Hello (again) George – I’ve tested other start dates (rather than month-end). No significant difference in results. Weekly trading could potentially improve results, but I don’t want to increase trading frequency. One of the benefits of the strategy is how simple it is to maintain. michael

A quick point on leverage: a strategy with a small drawdown is very important if you plan to use leverage. A strategy with large drawdowns can obviously wipe out your account quite quickly if you use leverage. So seeing the above TAA model with a drawdown as low as 11% is awesome, compared to the market's 50%, which would have wiped you out.

Hello Jens – I agree – one small point to add: note the note I made under the stats. These stats are based on month-end values, so true intra-month peak-to-trough drawdown would have been higher (but still far, far below U.S. stocks). michael
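The caveat is worth internalizing: a drawdown measured on sampled (e.g. month-end) values can never exceed the drawdown measured on the full series. A quick sketch with a toy equity curve:

```python
def max_drawdown(equity_curve):
    """Largest peak-to-trough decline, as a fraction of the peak."""
    peak, worst = equity_curve[0], 0.0
    for value in equity_curve:
        peak = max(peak, value)
        worst = max(worst, (peak - value) / peak)
    return worst

daily = [100, 120, 80, 110, 130, 90, 140]  # toy intra-period values
sampled = daily[::3]                       # coarser "month-end" sampling
```

Here the full series shows a 33% drawdown (120 down to 80), while the sampled series shows none at all.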

you’re really making GTAA very attractive, and I agree, with such low trading frequency, the performance is great and probably close to its best potential.

Not sure if I mentioned the ETF that Faber is launching (ticker GTAA), info: http://advisorshares.com/fund/gtaa – it trades between 50 and 100 ETFs, as opposed to the model from his paper.

From a TF point of view I was initially surprised to see the “relatively low” vol-adjusted performance of the track record since 2007… but of course this is because it does not go short (TF funds had their year of the decade in 2008 during the big global decline).
Wonder how your model compares with it since 2007?
The Faber/GTAA perf values are:
CAGR on last 1yr: 11.24%
CAGR on last 2yrs: 3.88%
CAGR on last 3yrs: 4.39%

Hello Jez – yeah, I think I replied to you re: the new ETF in a previous post. Exciting stuff – I’m very keen on seeing it in action. One very small critique: I think there is limited advantage to including so many ETFs given how highly correlated many asset classes have become. I don’t know what the optimal number is. It’s definitely more than the original 5 Meb included in his academic paper, but probably somewhere south of 50-100.

Those CAGRs you show: are those using the 5-asset portfolio (20% invested in each asset) with monthly rebalancing? Where did you find them? If that is where they’re from, what I’m talking about here is a very different beast: more of the portfolio allocated and higher volatility.

Agree that you can probably not have 50-100 different ETFs as individual components in the model, but looking at how Faber describes it on his website:

“Global Diversification – The GTAA strategy targets 50-100 ETFs in all of the major asset classes including stocks, bonds, real estate, commodities, and currencies. This approach allows for each asset class to be examined in more granularity than the published models (think spreading the MSCI EAFE into Japan, UK, Germany etc and Commodities into Agriculture, Energy, etc and the S&P500 in Tech, Energy, etc.)”

I would think that he considers 5 or so major asset classes, but all broken down in smaller parts – and allocate accordingly to each major asset class.

If correlation between all the individual ETFs within an asset class goes to 1, it’s just the same as having only 5 components. If correlation is less than 1, you should benefit from some diversification within the asset class (possibly allowing for slightly more risk reduction)…
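Jez's intuition matches the standard formula: an equal-weight basket of n assets with identical volatility and pairwise correlation rho has volatility sigma * sqrt((1 + (n - 1) * rho) / n). A quick check (the 20% vol and the rho values are illustrative numbers, not estimates for any real asset class):

```python
import math

def basket_vol(n, asset_vol, rho):
    """Volatility of an equal-weight basket of n assets, each with the
    same volatility and the same pairwise correlation rho."""
    return asset_vol * math.sqrt((1 + (n - 1) * rho) / n)

sigma = 0.20
# At rho = 1, splitting a class into 10 pieces changes nothing...
same = basket_vol(10, sigma, 1.0)
# ...while at rho = 0.8 the basket is modestly calmer.
calmer = basket_vol(10, sigma, 0.8)
```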

This does seem to make sense to me on paper, but it surely needs to be tested to get a better idea of the figures.
I am actually working on a ETF-based TF system, and the idea is to have a large diversified ETF portfolio (similarly to a large diversified futures portfolio traded by large long-term TF CTAs such as BlueTrend, Chesapeake Capital, TransTrend, etc.)

In terms of leverage increase for ETFs, there are ways around it without requiring funding (and costs), but this is down to money management (i.e. not allocating a fixed figure per asset class, but allocating as a function of the available equity to boost allocation when other asset classes are in cash). This is a bit more complex and is explained by Garner in his book about ETF trading systems. It’s been a while since I read it (I reviewed it here: http://www.automated-trading-system.com/practical-guide-to-etf-trading-systems-garner/ ) but it was a very decent read. I can’t remember the extra costs from the method in terms of increased volatility, though.

Jez – sounds like the ETF TF system you’re building is more or less what I’m calling a TAA model here. At its core it’s just a TF/momentum system in the same vein as Faber’s. The only bit of added slickness is the way that I’m allocating between the individual asset classes (i.e. not simply dividing the portfolio into equal chunks).

That also sounds like the bit you wrote about Garner’s approach (i.e. boosting allocation when other asset classes are in cash, or in my case, boosting allocation to assets that contribute less to volatility).

The GTAA stats you mentioned are his benchmark (equal investment in 5 asset classes with no timing). That shouldn’t be too hard to beat over the last 3 years. That’s why I asked…Faber’s approach (even the simple one he talks about in the academic paper) would have trounced those numbers.

Not too surprising about putting 90% of his net worth in. Heck, 100% of my long-term net worth is in my own strategies. I think Faber is like me…we like to eat our own cooking =)

Mike,
Yes, I guess TF and TAA do overlap when going at longer timeframes.

The system I am building is trying to mirror what is done with TF in futures but for smaller account sizes (benefits are the potentially wider diversity, lower minimum trading lot, but downside is the lack of embedded leverage as with futures).

Sorry about the error about these stats, I obviously scanned the doc too quickly and assumed it was hypothetical returns of the GTAA ETF model…

I’m just surprised he puts 90% in that ONE strat… (ie I suppose TAA is going to be one of your personal investment strats). On one hand, it does seem pretty robust, but what if we go into a big 2-3 decade deflationary spiral never experienced before? That strat – being long-only – will not be that great…

Of all the things I have seen on MktSci, and I am an avid reader, this is the most insane (as in good). Can you tell us which securities were in the portfolio? How did you differentiate from trend and momentum?

Hello Rod – asset classes included are US, Japan, and China stocks, 10-year US Treasuries, Real Estate (NAREIT), gold, oil, and commodities. More on the trend/momentum question in a follow up post (too geeky to explain in a comment =). michael

Michael, your ETF choices are kind of odd. You have two foreign stock ETFs that are Pac Rim but avoid the rest of the world completely (except the US). You have one bond ETF that is really a “safe haven” choice but avoid riskier bonds. You have exposure to three commodities markets when oil and commodities are highly correlated. Feedback?

STOCKS: most indices that folks traditionally include in a TAA model (ex. emerging market and international), are consistently better correlated to US stocks than China or Japan. I could go down the food chain and pick smaller markets (and I might still do that) but I thought going with the 3 biggest economies was a reasonable approach. Japan and China are less correlated than one would think geographically (at least less than I thought they would be). Picking China is sort of a hindsight 20/20 pick, but because of some choices I made in designing the model, that asset class actually contributed very little to the model’s performance (plus I’m only using data for China going back to 1993).

BONDS: I don’t like the return vs volatility characteristics of longer (or shorter) bonds. Longer bonds are too volatile for the potential return and shorter bonds aren’t volatile enough. The 10-year is a nice middle ground.

COMMODITIES: I include oil and the commodities index but I don’t allow the model to hold both of those in the same month.

MS – I’ve run similar tests before. However, I am not using the results.

The key thing to remember is that interest rates (and thus the discount rate for asset prices) at the start of your sample were at multi-decade highs. As a result, assets were priced at extremely cheap levels. It didn’t really matter what financial asset you bought, you would’ve gotten good returns. And by mixing it, you would’ve performed better.

But now we’re in a different environment. Getting risk/reward ratios at these levels when 10yr Treasuries are yielding 2.5% is a mathematical impossibility. In other words, for a long-only strategy, these types of returns are unachievable over the intermediate future using this type of TAA, IMO.

RE to GMT: I wholeheartedly disagree. Example – run a rolling 5-year volatility-adjusted annualized return for the model versus the S&P 500 (or your asset class of choice). You’ll see that (a) the efficacy of the model has been for all intents and purposes stable over the ENTIRE sample, and (b) relative to equities (and by extension, many other asset classes) the model has been more effective over say the last decade than at any other point. I like your narrative from an anecdotal perspective, but the numbers don’t reflect that. michael

Michael – You are absolutely right that the numbers are good and stable over the entire sample, but that is also my point. Your sample only includes a secular period of falling interest rates and increasing leverage. That’s changing. In other words, I think we have had a regime shift.

Take the S&P 500 for example (only because I can produce the most historical data). Long-term moving average crossovers coupled with momentum have consistently worked (in terms of vol-adjusted returns relative to the market) for the last CENTURY (excluding some brief periods like the late 1990’s).

What you’re saying is that we’re about to enter a period where that’s not going to be the case b/c of your boogey man scenario. That’s very anecdotal evidence.

If you could show me a period that lined up fundamentally with where we’re at today that LT MA crossovers/momentum didn’t work I would be concerned. In the absence of empirical data, I am most definitely not.

Michael – I can understand your point of view. You are saying that over the very long haul, the data supports this strategy. And you are right.

You are also saying that because you don’t have the data to calculate the returns for this strategy in a different secular environment, you are not going to be concerned. I think that is a bit riskier. Housing prices also didn’t fall until 2005.

Look, I hope that you are right and that we can get reasonable returns across asset classes over the coming years. I’m just not positioning my portfolio that way.

Michael, is this based on the basket of ETF/indexes that you mentioned in your earlier post – U.S. Stocks, China Stocks, Japan Stocks, Gold, Oil, Commodities, Real Estate, and U.S. 10-Year Treasuries? Or is it some other universe?

Hello Jerry – same basket (at the moment) plus a return on cash when less than fully invested of half the nearest 13-week Treasury. I mention that because I think Faber uses the full 13-week Treasury, so that will be a point of discrepancy between us. michael

Hello Carl – after further testing I’m actually leaning away from a leveraged model. The cost of margin (not so much now, but historically and someday again in the future) makes it difficult to justify relative to the increase in volatility. Still fleshing out my thoughts on this.

If you revisit the leverage question, then I think you’ll find that risk parity reduces the volatility, since it allocates less to the assets with higher risk (name your volatility measure here, such as max DD, Ulcer Index, standard deviation, historical vol, etc.). Or conversely, if you look at RP allocation results, you may find that you want to revisit the leverage question. ;-)

RE to Carl: the model already attempts to reach risk parity to determine position sizes (the kind of results shown in this post couldn’t be achieved with straight flat allocations). The problem isn’t with the model, the problem is in how high margin rates tend to run (present market excluded). michael
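The model's actual risk-parity method isn't disclosed here, but the most common simple approximation, inverse-volatility weighting, looks like this (correlations ignored; the vol figures are illustrative):

```python
def inverse_vol_weights(vols):
    """Naive risk-parity approximation: weight each asset in
    proportion to 1 / volatility (ignores correlations)."""
    inv = [1.0 / v for v in vols]
    total = sum(inv)
    return [x / total for x in inv]

# Bonds at 7% vol get 3x the weight of stocks at 21% vol.
weights = inverse_vol_weights([0.21, 0.07])
```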

I am not sure margin is very expensive. Of course it is expensive if you use retail brokers, but if you use futures as your leverage unit, the cost of margin is pretty much just a little more than the market risk-free interest rate for the period to maturity of the future. I am not suggesting using futures with full leverage of course, but you could leverage only as much as you like, without BORROWING from the broker.

Would you consider comparing this to a similar model: buying only the 1 sector ETF of your basket which has the best 6-month relative strength compared to the others, and perhaps only if some benchmark is above its x-month moving average? (Again, tested once each month.)
Great series, thanks.

Hello Jon – my code has gotten too complex to go back and do something more simple like that. Perhaps one of the smart folks reading this can assist.

Off the cuff, based on the numbers I’ve been crunching, a one-asset portfolio might be able to keep pace in terms of return (I dunno), but it’s going to be way more volatile, and I don’t think it’s a worthwhile approach.
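For any of the "smart folks" tempted to test Jon's variant, a minimal sketch of the rule as he describes it (the lookback, MA window, data shapes, and asset names are all assumptions):

```python
def pick_top_asset(history, benchmark, lookback=6, ma_window=10):
    """Jon's variant (sketch): each month, hold the single asset with
    the best `lookback`-month return, but only while the benchmark is
    above its `ma_window`-month moving average; otherwise hold cash.
    `history` maps asset name to a list of monthly closes (latest last)."""
    ma = sum(benchmark[-ma_window:]) / ma_window
    if benchmark[-1] <= ma:
        return "CASH"  # benchmark below trend: stand aside
    def momentum(closes):
        return closes[-1] / closes[-1 - lookback] - 1.0
    return max(history, key=lambda name: momentum(history[name]))
```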

I think that tests like these are great in the rear-view mirror. 10 years ago, however, would you really have said, “sure, I will let oil be 20% of my portfolio”? Or, is it more like I suspect, that you have only included oil and commodities because you know they went on an absolute tear in the last 7 years?

I think we can mentally masturbate to this idea all day, trying to ‘justify’ our use of the different assets as a means to diversification … but the actual choice of the assets themselves is often tainted with our biases on previous performance.

For example — I have seen many of these TAA models that include gold. 10 years ago, would anyone realistically have allowed a precious metal holding to be more than 5% of their portfolio? I would say that 99% of people would say no. And yet almost every one of the TAA models I see allows gold to get up to 20% of the portfolio.

I like the concept … but I am always wary of the choices of assets. Without a fundamental reason why the assets chosen should represent highly uncorrelated set, I remain skeptical.

Corey – putting your snarkiness aside for a moment, the underlying point you are making is a good one…

First, I’ve designed the model to allocate to individual assets based on (attempting to achieve) risk parity. That actually hurt historical performance over some more curve-fit solutions, but it seemed the logical thing to do for the reasons you bring up above. So take allocation off the table.

Second, the only asset class I chose that you couldn’t reasonably make the case was a “major” over the life of the test was China, and as I’ve mentioned multiple times in these comments, because of how I’m defining momentum (and limiting highly correlated assets from simultaneously trading), China was rarely traded and had little impact on results. I include it because it makes sense today.

Japan has been a horrific long-only bet, yet I include it too, because it makes sense in the context of the model and demonstrates the ability to withstand prolonged bear markets.

I see from your own blog that you are (like me) a geek. In the future it might make sense to treat a conversation with a fellow geek as such and ask “why?” (sans snarkiness). Your fellow geeks would appreciate it.

First, I want to offer my apologies! Snarkiness was truly not my intention. I have great respect for your work and am an avid reader of your blog. On second read of my comment, I can definitely see how it read as ‘snarky.’

What worries me about the long-term stability of models like this is being able to fundamentally argue for their design. I think the TAA model makes sense, but I have a hard time justifying the inclusion of certain asset classes. While there was plenty of hard evidence for diversifying across asset classes using liquid instruments, I don’t think it really took off until 2007 (at least, from my anecdotal evidence), which led to a lot of people including arbitrary choices as ‘assets’ simply because they had performed well in the previous few years.

Perhaps if we developed a quantitative method for including assets over time, we could be confident that, in the future, we could ‘rotate’ into new asset classes. Perhaps some sort of liquidity, accessibility, and correlation constraints? Plus, if we were to bin each asset class and provide ‘reasonable’ limits to them, we could feel more comfortable in the stability of the model in the future. As I said in my previous post: how many people would realistically have allowed precious metals to be more than 5% of their assets in 2000? Did anyone want anything but equity? How many people would have allowed 50% emerging markets? Can we put realistic limits on new asset classes that allow them to grow as part of our portfolio over time? Perhaps in the first year of inclusion, they can be 5%, in the second 10%, et cetera. This sort of restriction would probably also model retail investors’ acceptance of new ideas.

Again, I wasn’t trying to be ‘snarky’ in any manner. I would be very interested to see if we could develop a model that realistically identified how we could put new assets in our model over time with realistic allocations.

Hello Corey – I appreciate the thoughts and especially like the idea of creating a “blind” metric for the inclusion of new asset classes based on liquidity/availability. Will roll that around in me noggin’ for a bit.

I don’t think the amount investors would have invested in a given asset is the issue. I’m dividing up the portfolio based on (a) risk parity between the assets (at that moment in time), and then (b) sizing the entire portfolio to stay under a dynamic benchmark for volatility. In other words, it’s a blind, non-optimized approach (meaning my feelings about any asset class today have no impact on allocation in the past).

The bit about the inclusion of assets is spot on though. More to follow on this.
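Step (b) above, sizing the whole book against a volatility benchmark, reduces to a simple scalar for an unlevered model (the 10% target below is a placeholder, not the model's actual benchmark, which is dynamic and undisclosed):

```python
def exposure_scalar(est_portfolio_vol, vol_target=0.10):
    """Fraction of the portfolio to deploy so expected volatility
    stays at or under `vol_target`; capped at 1.0 since the model
    does not use leverage. (The 10% target is a placeholder.)"""
    return min(1.0, vol_target / est_portfolio_vol)
```

When estimated portfolio volatility runs hot (say 20%), exposure is cut in half; when it is already under target, the model stays fully invested rather than levering up.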

Michael: we really need to do some (very hard) soul-searching and ask ourselves how we choose the ETFs to trade. In fact, the real question would be: how would we have chosen the ETFs (or indexes) at the start of the test?
In other words, seriously, would we have chosen China back in 1993?
It is really a very personal question, I don’t know the answer for my personal case.
eber

RE to Eber: good question…I answered this one in a previous comment asking why I chose those particular markets…

“Most indices that folks traditionally include in a TAA model (ex. emerging market and international), are consistently better correlated to US stocks than China or Japan. I could go down the food chain and pick smaller markets (and I might still do that) but I thought going with the 3 biggest economies was a reasonable approach. Japan and China are less correlated than one would think geographically (at least less than I thought they would be). Picking China is a hindsight 20/20 pick, but because of some choices I made in designing the model, that asset class actually contributed very little to the model’s performance (plus I’m only using data for China going back to 1993).”

The point of the above is that yes, China is the hindsight pick, but it doesn’t significantly alter backtested results (because I don’t measure momentum in the traditional ROC sense; I measure it relative to expected volatility, so China has historically rarely been chosen).
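One plausible reading of "momentum relative to expected volatility" (a sketch only; the post doesn't disclose the exact definition) is mean return scaled by its dispersion, which naturally penalizes noisy movers like the early China data:

```python
import statistics

def vol_adjusted_momentum(monthly_returns):
    """Momentum measured against volatility rather than raw
    rate-of-change: mean monthly return over its standard deviation.
    (A guess at the spirit of the rule, not the model's formula.)"""
    return statistics.mean(monthly_returns) / statistics.pstdev(monthly_returns)

steady = [0.010, 0.012, 0.008, 0.011, 0.009]  # modest but consistent
wild = [0.10, -0.08, 0.12, -0.06, 0.07]       # bigger raw gains, far noisier
```

By this measure the steady series ranks far ahead of the wild one despite its much smaller raw rate of change.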

I have to say the returns here look impressive. A simple TAA strategy I looked at circa March 2008 that I like to think influenced Faber’s ETF broke the asset classes into separate sectors and then evaluated them based on momentum. The Sharpe ratios there were around 1, so I view the 1.5 with envy. I like to think that if you’re catching the upside on Japan and China while avoiding the downside, you’re probably going to be making some bank. Perhaps substituting the MSCI EAFE and MSCI EM indices might be more appropriate.

Recently I’ve been moving toward a more Bayesian approach. If you start from a prior that is the market-capitalization weights and back out returns (a la Black-Litterman), then you avoid some of the issues mentioned above where you pick some assets in advance. Then you can use your “dimmers” to scale the expected returns by something like u + c*Z*std, where u is the prior expected return, c is a multiplier, Z is a Z-score, and std is the prior standard deviation.
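To make the Bayesian suggestion concrete: the Black-Litterman prior backs implied returns out of market-cap weights via mu = lambda * Sigma * w, and the "dimmer" then tilts them. A toy two-asset version (the covariance, weights, risk aversion, and multiplier c are all made-up numbers):

```python
def implied_returns(risk_aversion, cov, market_weights):
    """Reverse-optimize equilibrium expected returns from market-cap
    weights (the Black-Litterman prior): mu = lambda * Sigma * w."""
    n = len(market_weights)
    return [risk_aversion * sum(cov[i][j] * market_weights[j] for j in range(n))
            for i in range(n)]

def tilted_return(u, z, std, c=0.5):
    """The commenter's 'dimmer': tilt the prior return by a Z-score
    scaled by the prior standard deviation (c is a free multiplier)."""
    return u + c * z * std

# Two assets, toy covariance matrix and a 60/40 market (assumptions).
cov = [[0.04, 0.006],
       [0.006, 0.01]]
mu = implied_returns(2.5, cov, [0.6, 0.4])
```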

Two questions:
1) I was curious if you use any adaptive strategies in this or if you only use “dimmers” (as opposed to Faber’s light switch) in a more simplistic fashion.
2) When you say you are “allocating the portfolio between them in a “smart” way (i.e. in a way that maximizes expected return versus volatility)”, I take that to mean optimization. Is that the case or am I mistaken?

RE: Sharpe – considering all of the vagaries in calculating the Sharpe, maybe their 1.0 equals my 1.5…who knows =)

RE: MSCI EAFE/MSCI EM – I like the larger coverage of those indices, but I dislike the consistently high correlation to the US market. Will consider a way to inject these in place of the early Japan/China data in a “blind” way. I like where you’re going with your suggested solution. Doesn’t work with the model as I’ve defined it, but gets me thinking.

RE: Questions – it’s a “dimmer” in terms of allocation, but the choice to buy/sell is a “light switch” (because I’m trying to reduce the number of transactions).

By “smart” way I definitely did NOT mean optimization. I meant in a way that attempts to manage expected volatility and keep it below a dynamic benchmark I’ve set. This part of the model was very much from the gut, and backtested performance could have been improved (read “curve fit”) with a more optimized approach.

Your TAA posts seem to have triggered an earthquake in the quant world.

In the past, I believe you’ve mentioned having a self-imposed rule that you don’t write about (at least in detail) any strategy that you are thinking about commercializing. However, have you considered making an exception and taking this strategy live for subscription once its polished?

Hello Sven – you’re right, I don’t share details about anything I want to trade as managed accts/subscriptions (and I don’t show backtested results). But this strategy is pretty far outside of our core business, so it’s going to be a blog freebie. Each month I’ll share the allocation I’ll be taking on the final trading day of the month prior to the close, and I’ll be tracking performance.

specifically, “When real GDP growth rates and inflation rates were low or negative, market timing strategies were favorable. These results were robust to country level of development, negative market return years, and other control variables. The conditions for pursuing market timing strategies were time variant and are detectable with macro-economic and finance variables.”

First off, I think that these result are amazing (especially the low drawdown numbers).

I’m not much of a quant guy (I’m a CPA, so I have generally followed a fundamental, individual-stock-selection approach to my investing), but I have followed your blog for about a year now and have learned a lot about what you do and how you do it. So thank you for explaining everything so that someone without a quantitative background like me can understand!

Anyway, my question is this: for most of your other strategies you trade no-transaction-fee mutual funds, while this strategy seems to use ETFs. Is there a way to use mutual funds to reduce the costs? I also remember another post of yours where you explored investing in commission-free ETFs (as long as they were held for a month). Could that be used with this strategy? Basically, what I’m asking is: is there a way to trade this strategy for a very low cost? Possibly for no cost?

Couldn’t agree with you more regarding the statement “The real benefit of this flavor of TAA is NOT generating returns, it’s managing losses.” I’m seeing this same behavior with my own TAA-like portfolios. Thanks for sharing your great work!

Have you ever considered backtesting a short approach to the asset classes when they are below the specific moving average in question? Perhaps instead of allocating the complete asset class to a short exposure, one could short only, say, 30% of that specific asset class and leave the remaining 70% in cash? Thanks.

RE to Tom: my personal opinion is that it’s very difficult to accurately short most assets in anything but the very, very short-term. In other words, I think it’s much easier to say that asset X will fall in the next day or so than the next month or so. michael

Hello Tom – I have personal exposure to all of the active MarketSci strategies, this new TAA strategy, and some oddball stuff I trade which I sometimes talk about on the blog (like Luby’s SOTW portfolio).

Very interesting read – I’ve been lurking here for a long time now, and really enjoy the “freebie”. I’m fairly new to managing my own longer-term portfolio, and have mostly used an ETF, 6-month-lookback TF system. While this has worked fairly well, it would perform better if I folded in volatility-weighted allocation.

I’m looking forward to the next installment to learn more about your thoughts on weighting each asset for a new month (e.g. volatility-weighted returns based on annual/historical volatility, or volatility weighting where closer months have a larger weighting than earlier months, etc.).