I recently found a 30-page paper introducing the ideas and principles of Ralph Vince's Leverage Space Model. I thought reading it might be a good way to get back into Vince's material.

Below is a summary of the paper (PDF download link), which is maths-free (only concepts and principles are discussed). It is a good introduction to Ralph Vince's theories.

Vince’s Optimal f

This is what Vince is famous for. It is basically a way to determine trading quantity (aka leverage) using the probability distributions of the trade outcomes.

If f represents the fraction of capital to wager (risk) on each bet (trade), the optimal value is the one which optimises the geometric growth of the bankroll (account balance).

In his previous books, Vince defined the formula to determine the optimal f. The first part of the paper discusses the optimal f concept and is a good introduction for the non-initiated (showing how over-betting on a game with positive expectancy can and will result in a loss).
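To make the over-betting point concrete, here is a small sketch of my own (not code from the paper): for a simple game with win probability p and payoff b:1, the optimal f coincides with the Kelly fraction f* = p − (1 − p)/b, the value that maximises expected log-growth per bet. Betting much more than f* turns a winning game into a losing one.

```python
import math

# Hypothetical illustration (mine, not Vince's): optimal f for a game
# with win probability p and payoff b:1 is f* = p - (1 - p) / b.
p, b = 0.5, 2.0                      # fair coin, 2:1 payoff
f_star = p - (1 - p) / b             # = 0.25

def growth_per_bet(f):
    """Expected log-growth per bet when risking fraction f of capital."""
    return p * math.log(1 + b * f) + (1 - p) * math.log(1 - f)

print(f_star)                        # 0.25
print(growth_per_bet(f_star) > 0)    # True: betting f* grows the bankroll
print(growth_per_bet(0.60) < 0)      # True: over-betting the same winning game loses
```

Note that the game itself has positive expectancy at any fixed bet size; it is the compounding of fractional bets that punishes over-betting.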

Leverage Space Model promises

It is presented as an improvement on Modern Portfolio Theory (MPT), which is briefly discussed. The claim rests on the following advantages:

Risk is defined as drawdown (instead of variance in the MPT)

The fallacy and danger of correlation is eliminated

Valid for any distributional form – Fat-tails are addressed

The model is all about Leverage, which is not addressed in the MPT model.

Return aspect

The model starts by building a multi-dimensional terrain, drawing the overall expected return, based on multiple combinations of components in the portfolio and their respective f-values.

In this example the model builds the terrain for two simultaneous coin-toss games with a 2:1 payoff. The x and y axes represent the respective f-values (leverage) for each of the bets/trades, while the z-axis (vertical) represents the expected return.

The maximum portfolio growth is located at the peak of the terrain, resulting from the specific corresponding f-values combination. The terrain construction does not take into account correlation between the instruments – instead, the model uses the joint probability of two scenarios occurring simultaneously, dictated by the price data history.
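A minimal sketch of this terrain construction (my own, not Vince's software): enumerate the joint scenarios of the two games with their joint probabilities, and compute the expected log-growth for each (f1, f2) pair. No correlation coefficient appears anywhere; the co-movement is carried entirely by the joint scenario probabilities.

```python
import math

# Joint scenarios for two independent fair coins, each paying 2:1.
# Each entry: (joint probability, return multiple of game 1, of game 2).
scenarios = [(0.25,  2.0,  2.0), (0.25,  2.0, -1.0),
             (0.25, -1.0,  2.0), (0.25, -1.0, -1.0)]

def terrain(f1, f2):
    """Expected log-growth (the z-axis) for the leverage pair (f1, f2)."""
    return sum(p * math.log(1 + f1 * r1 + f2 * r2)
               for p, r1, r2 in scenarios)

# The surface peaks strictly inside the square: pushing both f-values
# well past the optimum lowers growth, because simultaneous losses compound.
print(terrain(0.23, 0.23) > terrain(0.40, 0.40))   # True
print(terrain(0.23, 0.23) > terrain(0.02, 0.02))   # True
```

With correlated games you would simply change the four joint probabilities (e.g. make the win/win and loss/loss scenarios more likely) and the terrain, and its peak, would shift accordingly.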

The Risk Aspect

So far, the model has only looked at returns. To introduce the risk component, you must determine your maximum allowed drawdown. This is a hard and fast rule: no combination should breach that limit.

Using a derivation of the risk of ruin, the model computes the risk of maximum drawdown for each set of f-values (for a specific timeline – as, in the long run, the risk of drawdown tends to 100%). If the risk of drawdown is too high, the specific f-values combination is ignored.

In practice, the initial terrain is truncated by removing all points breaching the maximum drawdown threshold: every area deemed too risky, from a drawdown perspective, disappears from the terrain.
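The truncation step can be sketched with a toy Monte Carlo estimate. This is my own construction (the paper uses a derivation of the risk of ruin rather than simulation, and `max_dd`, `horizon` and the 10% tolerance are made-up parameters), but it shows the mechanic: estimate the probability of breaching the drawdown limit over a finite horizon, then discard any f-value where that probability is too high.

```python
import random

random.seed(42)

def prob_breach(f, max_dd=0.30, horizon=50, n_paths=2000, p=0.5, b=2.0):
    """Estimated probability of exceeding max_dd within `horizon` bets."""
    breaches = 0
    for _ in range(n_paths):
        equity = peak = 1.0
        for _ in range(horizon):
            equity *= (1 + b * f) if random.random() < p else (1 - f)
            peak = max(peak, equity)
            if 1 - equity / peak > max_dd:   # drawdown from the running peak
                breaches += 1
                break
    return breaches / n_paths

# Truncation: keep only f-values unlikely to breach the drawdown limit.
candidates = [0.05, 0.10, 0.15, 0.20, 0.25]
allowed = [f for f in candidates if prob_breach(f) < 0.10]
print(prob_breach(0.05) < prob_breach(0.25))   # True: breach risk grows with f
```

The finite `horizon` matters: as the paper notes, over an unbounded horizon the probability of hitting any given drawdown tends to 100%, so the constraint is only meaningful for a specific timeline.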

The Algorithm

Vince implements a genetic algorithm to build the terrain: first calculating the expected return for each set of f-values, then running the maximum drawdown test on that same set. Once every combination has been run through, the terrain is built (including truncations). The aim is then to find the optimal set (highest return with the lowest f-values).

The ideas in the paper are an interesting take on position sizing. Vince uses a simple objective/bliss function (CAGR with a binary risk/drawdown filter) to evaluate all possible scenarios of portfolio allocation/leverage. It might be interesting to use the concepts of the model with your own bliss function recipe.

One of Vince's claims – that MPT does not address leverage – sounds a bit simplistic: surely the percentage of cash as an asset in the portfolio is an implicit measure of leverage. On the other hand, the approach of replacing correlation with the joint probability of scenarios sounds interesting and seems to go in the right direction. As Vince says:

Counting on correlation fails you when you need it the most.

Another point that seems to be missing is how the model handles the non-stationarity of markets. Vince mentions the chronomorphism of market price distributions (i.e. they change with time) and even draws a betting comparison with blackjack, in which the optimal f curve changes with each card dealt. However, there is no mention of how the model takes an adaptive approach to these chronomorphic distributions.

Vince's homepage contains a link to the Java software that implements his model (you need to register/leave an email to download) and another with a spreadsheet example. I have not had time to take a serious look at either. Please let me know your feedback if you do.

Joshua Ulrich – blogger and reader of this blog (hello there: finally got round to adding you to the blogroll!) – is collaborating with Ralph Vince to port the Leverage Space Model to the R platform. His FOSS Trading blog is definitely worth a read too.

I read that Leverage Space Model article a while ago and found it interesting. I particularly like the presentation of the concept that "over-betting on a game with positive expectancy can and will result in a loss". This is the Jesse Livermore syndrome: "if I am really certain I have an edge then I should go all in with leverage."

I am sure that LSM has the capability to outperform MVO (as traditionally conceived) because LSM uses geometric returns and draw-downs instead of arithmetic returns and variance.

The assertion that MVO doesn’t deal with leverage is of course false, as the other link points out. One major problem that LSM shares with MVO is the optimization goal of maximizing return for a fixed level of “risk”. I have found this approach to always flounder in walk-forward testing. This is because the optimizer will over-weight the portfolio into the latest red-hot investment because “risk-adjusted” returns look so great right before a crash ;->

It would be nice if the article answered two empirical questions: 1) does the method have predictive power for future returns? 2) does it have predictive power for future losses?

With the RiskCog portfolio optimizer I chose the opposite approach of minimizing risk for a specified level of return. (Some people will immediately presume that there is no difference, and that's true at a single point in time; however, walk-forward testing is about performance across time. You have to choose a single portfolio at each point in time, not an efficient frontier.)

A related concept is that it makes economic sense to dollar-cost average when saving for retirement, and then it makes sense to withdraw savings in terms of a fixed number of shares. As far as I know, this works because financial markets trend and mean-revert, rather than distributing returns probabilistically. No matter what fat-tailed, skewed distribution you conceive, it is wrong if the process you are describing is not probabilistic!

@Nick: I don’t think that draw-down based optimizers would work reliably with only 1 or 2 years of data. I think they need more data to draw conclusions about worst case scenarios. A strength of MVO is that it can come up with “okay” portfolios using shorter periods of data.

Hi RiskCog,
Thanks for the comments. It would be interesting to have a comparison between your model and the LSM (it's mentioned in your FAQ that you'd like to do such an article). They seem to share similarities (use of CAGR + drawdown as the measure of risk).
I am assuming your model is proprietary and you will not be revealing the precious details of how it works?

Hi Jez, it would be interesting to do a comparison. What I need is a LSM optimizer or an article which specifies an example LSM optimal allocation for a given set of assets at a given point in time.

The RiskCog methodology is simple and public: "RiskCog optimal portfolios have the lowest risk for a given CAGR", where risk is a time-domain measure such as worst year or max draw-down. CAGR can be either real or nominal and is measured over the full period, median sub-period or worst sub-period, depending on which type of return is most important to you.

Ok – thanks
I guess Vince's material (the Java software or the spreadsheet on his homepage) could be a way of doing this, although having only taken a brief look at it, it does not look like a five-minute job (what is?).
I’ll let you know if I get anywhere further with this.

Your objective function does not have to be the geometric mean in order to use LSM. While most of the discussion has focused on maximizing return subject to some “risk” constraint, you can also maximize the probability of profit or minimize the probability of drawdown subject to some constraint(s)… or define whatever objective function and constraint(s) you deem most appropriate.

We (Ralph, Soren, and I) are working to generalize the LSPM-R interface to ease the specification of custom objective functions and constraints. Any/all input is welcomed.

Having a “framework” which implements the LSPM model by allowing the definition of a custom objective function sounds great. Probably a difficult question, but what are your timelines for implementing the LSPM in R? (I briefly checked the R-forge page and could see that you are currently in “very alpha” phase). Anyway – another good reason for me to start looking at R… And good luck with the project!

The “very alpha” status is mostly because it’s still being tested and the user interface may change. All the code behind the scenes doing the heavy lifting is pretty stable and mature.

We’re going to discuss the general interface / framework at this year’s R/Finance conference in Chicago. Assuming we get a good plan in place, it should take a month or two to code. So, it could be “beta” by June / July.

The LSPM R package is open source, so you could glean the equations from it. I don’t know of any free online links showing the equations; nor do I know of any white papers, or articles that provide an example of using the LSPM to create a portfolio of securities.

Clearly, we need more precision. I am still contemplating the topic, but when Vince uses the term ‘leverage’ he is not referring to a margin account versus a cash account. So when he says that MPT doesn’t use leverage, he’s not saying what you think he’s saying.

I actually was really tempted and even checked the flights… The conference agenda looks really interesting and it would have also been nice to meet some readers.
But I remembered I have commitments (big family reunion…). Will keep an eye for the next one.

@Milktrader, Joshua Ulrich, et al: as noted, “when Vince uses the term ‘leverage’ he is not referring to a margin account versus a cash account. So when he says that MPT doesn’t use leverage, he’s not saying what you think he’s saying.” This is very true. Not to be critical, but I don’t think we need to be more precise; I think Vince needs to be more precise. One of the shortcomings of MPT is that it’s ALL ABOUT LEVERAGE – in fact, in the most extreme case of MPT, Modigliani & Miller’s capital structure irrelevance principle, firms in the U.S. should all be all debt all the time because after-tax debt is a cheaper source of capital than equity (which of course is totally insane, but that’s another story).

@RiskCog, I liked this comment “[using] the optimization goal of maximizing return for a fixed level of ‘risk’. I have found this approach to always flounder in walk-forward testing. This is because the optimizer will over-weight the portfolio into the latest red-hot investment because ‘risk-adjusted’ returns look so great right before a crash”. Part of the problem, of course, is that the real world does trend but it also mean-reverts. The person who figures out the correct balance between those two challenges will have a very good model.

I don’t know a lot about walk-forward testing yet, but with regard to “walk forward testing is about performance across time” and creating difficulty in either maximizing return with a risk constraint, or in minimizing risk with a return constraint, I’d think that trying to minimize the forecasting error of the walk forward model in the out-of-sample time periods would be an interesting avenue to explore.

If you’re familiar with the work of Dr. Andrew Lo in “A Non-Random Walk Down Wall Street”, he proposes several alternatives to MVO including maximizing the R-squared of the forecast (in sample). As a former half-quant & half-fundamental professional investor, I eventually tried to maximize the consistency of alpha with respect to the investment benchmark and had reasonable success doing that. In traditional finance terms, that’s most like trying to maximize the information ratio . . . . . .

The biggest problem with constructing models of any kind in finance is that the data isn't stationary over time. Walk-forward historical testing over multiple periods is an adaptation to help with the problem, but might not be the best approach.

Alternatively, if you can come up with a fairly accurate definition of “Regimes” that’s based on observable facts, so that there’s little room for argument over which “Regime” that you’re in, I think you’d have a killer model. One fairly successful quant money manager I know used to have three observable regimes for his model: (a) Fed is tightening, (b) Fed is easing or (c) Fed is neutral. If you look at stock market returns over time, they’re much more positive when the yield curve has a positive slope and the Fed is easing and much more negative when the Fed is tight or tightening and the yield curve has a flattish or even negative slope.

Looking further at levels of the VIX, or recent price volatility, or at credit spreads (high yield versus Treasuries) or liquidity spreads (T-bills versus CDs or euro-dollars) might also be fruitful in exploring definitions of Regimes.

In traditional quant finance, early on there were the fixed-weight model builders and the variable-weight model builders. The fixed-weight modelers kind of considered themselves purists and the variable-weight modelers were, well, less-pure. Eventually a lot of fixed-weight modelers went to “regimes”, where you had one set of fixed-weights for one regime and another set of fixed-weights for another regime.

Myself, I’m trying to look into a modeling routine where individual factor weights vary across time as a function of an observable variable. If anyone has URLs for interesting trading or investment studies on this topic, I’d be interested.

I personally don't do regimes (maybe I will some day); I just look at price and volume to decide what to invest in. I am getting more used to my models putting my account into the correct position before a move; then after the move I can read in the blogosphere why the political/fiscal/seasonal regime was right for this move. For example my currency model said "short Yen" a couple of days ago and then we got a move in that direction today. The commentators are buzzing now – but not two days ago…

RiskCog: Thanks very much. I have CXO Advisory on my browser list of favorites but haven’t plumbed the depths of their research archives. Interesting stuff they have there.

A lot of the “Fed Model” stuff involves looking at the nominal yield on the 10 year treasury versus the earnings yield on the S&P 500. That’s more of a stocks and bonds are substitute goods switching model, and in my experience it hasn’t worked very well at all. Also, I was at a large group dinner with Jeremy Siegel a few years ago, and he was pretty critical of the model for mixing apples and oranges – while the explicit treasury coupon is 100% nominal, the implicit earnings coupon on the S&P 500 is (very crudely) 50% nominal and 50% real.

The Fed models that work well in my experience are the ones that either use the current direction of Fed policy in one of three states (easy, on hold or tightening) or else rely on the shape of the yield curve. And of course, the Fed tried to ease policy in 2008 and early 2009 without any positive effect on the stock market – though you can make a "hindsight is 20-20" case that the stock market eventually bottomed in March 2009 just as the Fed's aggressive Quantitative Easing program was starting to kick in . . . .

Every time you trade you have an f value: it's simply the cost of the trade divided by account value. But you don't always borrow, ramp up and lever your equity to put on a trade (i.e., if you trade a cash account only).

Milktrader, thanks for addressing the weird assertion that MVO doesn’t do leverage. I still don’t understand what Vince means though. With any type of portfolio optimizer I can think of, you can add T-bills as one of the asset classes to be optimized. If the optimum portfolio has a positive allocation to T-bills then that means the portfolio is de-levered to less than 1.0 of your stake. If the T-bills receive a negative allocation then that means that the best portfolio involves borrowing and levering up to greater than 1.0.
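The commenter's mechanism can be shown with toy numbers (a hypothetical optimizer output, not from any real model): include T-bills as an asset, and the sign of the cash weight tells you the portfolio's implied leverage relative to a 1.0 stake.

```python
# Hypothetical optimizer output: weights sum to 1.0 of the stake.
weights = {"stocks": 0.8, "bonds": 0.5, "t_bills": -0.3}

# Exposure to risky assets; the negative T-bill weight is the borrowing
# that finances the extra 0.3 of exposure, i.e. leverage above 1.0.
leverage = sum(w for k, w in weights.items() if k != "t_bills")
print(round(leverage, 1))   # 1.3
```

Conversely, a positive T-bill weight (say 0.2) would mean the portfolio is de-levered to 0.8 of the stake.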

My main criticism of the LSM book was the St. Petersburg betting strategy he suggested. His most recent paper addresses my complaint, by showing how to bet following the St P. allocation /with/ a drawdown constraint. The paper is also interesting because it treats position sizing in a psychological context, rather than a purely mathematical one.

I think I’ll have a project where I test Ulrich’s LSPM R package soon, and I’m really looking forward to the results.

That paper is essentially the last chapter in his book, The Leverage Space Trading Model. My take on his small Martingale model is that he is basically enabling scaredy cats. Of course, if I had $1B I probably would be more concerned with satisfying the irrational urges of my place along the Prospect Theory curve too.

What I believe he is getting at is that you can approach the n-dimensional portfolio space in any way you see fit. But he insists that you recognize that this space exists. With his small Martingale example, he is essentially providing the math for a unique approach to that space.

@Max – thanks for that link to the paper. I am not sure I could bring myself to take that approach of consciously aiming for "sub-optimal" performance (as an investor or as a fund manager), although I know we over-estimate our ability to withstand "bad performance" (i.e. large drawdowns, etc.). I haven't read about Prospect Theory, though.

@Milk: good point about your objective being a function of your conditions – age could be a factor to consider too (i.e. maximise geometric growth when young and probability of profit when older).

Guys, just found this thread. A little clarification on where I say LSP addresses leverage directly, whereas MPT does not.

I say this because LSP has variance embedded (negatively, properly) in its return aspect, rather than juxtaposed to it, as is the case with MPT.

If you recall the Pythagorean relationship between the three parameters of geometric mean, arithmetic mean, and standard deviation of holding period returns, the effect of this – the dispersion parameter, used correctly in LSP – becomes evident.
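The relationship referred to here (as I read it, using Vince's estimated geometric mean, with G the geometric mean HPR, A the arithmetic mean HPR and σ the standard deviation of HPRs) can be written as:

```latex
% "Pythagorean" relation between holding-period-return statistics:
% G = geometric mean HPR, A = arithmetic mean HPR, \sigma = std. dev. of HPRs
G \approx \sqrt{A^{2} - \sigma^{2}}
\quad\Longleftrightarrow\quad
G^{2} + \sigma^{2} \approx A^{2}
```

In words: for a given arithmetic mean, dispersion directly reduces geometric growth, which is how variance ends up "embedded, negatively" in the LSP return aspect rather than sitting beside it as a separate risk term.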

Ralph Vince has put his LSP model into practice.
He has created a series of funds. The Dow Jones LSP Position Sizing Equal Sector U.S. Large-Cap 50 Index is one of them. Ralph Vince explains his methodology: "The proprietary algorithm is a rules-based application of an investment strategy known as Leverage Space Portfolio, which was created by LSP Partners, LLC. The strategy aims to maximize the probability of positive performance, rather than seeking to maximize performance, by employing a risk-control process focused on draw-down management."
Ralph then explains how the index is created.

Have you tried to replicate that methodology? What do you think of his application of the LSPM concept?
Ralph insists on reducing draw-down, but looking at the equity curve in his white paper, the system doesn't seem to have done well during the 2008 crisis. Compared to the S&P 500, the draw-down looks similar to me. A simple timing approach based on the 200-day moving average would have fared better, imo.

Raphael,
If you look around the blog, you should find some posts where I go into more detail and test the LSPM framework. I have not been successful (yet) in testing aspects of it like optimizing the probability of positive performance (over profit optimization) or the probability of MaxDD, but I definitely think it's a promising avenue to pursue – it has just been on the "back-burner" since then. Josh (Ulrich) has hinted at building a web-based implementation of the LSPM. I'm looking forward to checking that out – as well as the live performance of the ETFs based on Vince's LSPM indices.
By the way, which white paper are you referring to (mind posting a link here?)
Thanks,
Jez

I have indeed read your work on LSPM and follow Josh's blog as well. I find Ralph Vince's application of his LSPM very interesting (see the rules for allocation) and was wondering if we could reproduce this strategy in Trading Blox or R?

Have you tried using the LSPM R package? It implements a lot of the LSPM features.

As far as Trading Blox is concerned, I do not think it would be trivial to re-implement LSPM in it: there is a lot of optimization computation required by the LSPM algorithm, and R seems much better adapted for this (plus you can leverage other R packages).

Au.Tra.Sy blog, Systematic Trading research and development, with a flavour of Trend Following.
