
I happened to learn about walk-forward optimization (WFO, for short), but I can't clearly grasp it. From what I know: divide the data into several segments, optimize on the first segment, measure the system with the "optimal value" on the second segment, and so on... I get the idea that it's far more reasonable than plain back-test optimization and could tell us more about the properties of the system. But then what? Does it drop the former parameters, re-optimize the parameter range on the third segment, then perform on the fourth segment with the new optimal value?
After all the data are tested, does it show the performance segment by segment?
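The stepping procedure described above can be sketched in a few lines of Python. This is only an illustration of the segment-by-segment mechanics, not a real backtester: the `backtest` scoring function and the parameter grid are toy stand-ins I made up for the example.

```python
# Minimal sketch of walk-forward optimization (WFO): optimize on one
# segment, evaluate the chosen parameter on the next unseen segment,
# then discard it and repeat. `backtest` is a toy placeholder.

def backtest(data, param):
    # Toy stand-in for a real backtest: the "best" parameter is the
    # one closest to the segment's average value (purely illustrative).
    avg = sum(data) / len(data)
    return -abs(avg - param)

def walk_forward(data, param_grid, n_segments):
    seg_len = len(data) // n_segments
    segments = [data[i * seg_len:(i + 1) * seg_len] for i in range(n_segments)]
    results = []
    for i in range(n_segments - 1):
        in_sample, out_sample = segments[i], segments[i + 1]
        # Optimize on the in-sample segment...
        best = max(param_grid, key=lambda p: backtest(in_sample, p))
        # ...then measure that single "optimal" value on the next,
        # unseen segment. The old parameter is dropped each step.
        results.append((best, backtest(out_sample, best)))
    return results  # out-of-sample performance, segment by segment

prices = list(range(20))  # stand-in price series
for best_param, oos_score in walk_forward(prices, [2, 5, 8, 11, 14, 17], 4):
    print(best_param, oos_score)
```

The final report is exactly the segment-by-segment out-of-sample performance asked about above: one (parameter, score) pair per step, never a single parameter fitted to all the data at once.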

I heard that the book "The Encyclopedia Of Technical Market Indicators" by Robert W. Colby describes nine steps for it, but unfortunately it's hard for me to buy the book outside the US. Could anyone help paste the steps or tell me how WFO works?

You might like to follow the discussion and the thoughts of c.f. on back-testing:

"So while I believe the concept of out-of-sample testing is valid, I think there is a better approach, to test the entire population available. So while I understand the rationale behind it, I'm not a big fan of "walk-forward" testing."

A lot of emphasis is given to out-of-sample (OOS) testing because it provides a reality check for the curve-fitting tendencies that lurk inside all of us. Applying your optimised parameters to unseen data is the best guide you have to the robustness of your strategy.

However, an easy trap to fall into is when you find that your OOS equity curve does not look as good as your in-sample results and you then go back to your in-sample data and start over.

This is effectively optimising by proxy on your OOS data. You can only walk-forward on your OOS data once. After that it becomes in-sample data.

Another problem with optimising using a split data set (for in-sample testing & out-of-sample walk-forward evaluation) is that you are using less data and hence generate fewer trades. As a general rule fewer trades makes for a less robust strategy.

I prefer c.f.'s approach of using all data from all available markets. This methodology still allows walk-forward evaluation, but in real time instead of on historical data.

TC wrote:This is effectively optimising by proxy on your OOS data. You can only walk-forward on your OOS data once. After that it becomes in-sample data.

What an insightful remark! I knew this forum had great contributors.

To me, this single observation just kills OOS forever. It also calls for a simpler universe of attributes/data to test, which is a good thing. This also answers the question of why VeriTrader wasn't offering direct support for OOS (though you can always trick it by playing with the start and end dates).

This is also good, because now I know that I can't rely on OOS as a safeguard: it's far better to swim knowing you've got no life belt than believing you've got one.

"The step-forward process will usually select an inconsistent, fast-trading method over a better long-term system simply because the test window forces this result. Instead, use all the data in one long test to get continuous performance over as many changing patterns as possible."

Quoted from the book "Smarter Trading" by Kaufman.

If you use Walk-Forward Optimization, optimization becomes a part of your system, and you have to keep re-optimizing continuously, just as you did during development. This could be thought of as adapting to market changes. It can also be thought of as an advanced form of curve fitting.

yoyo2000, for an in-depth discussion of WFO by someone who actually promotes its use, there's a lot of information in Robert Pardo's book "Design, Testing, and Optimization of Trading Systems". You can get it from many online bookstores, not necessarily located in the US.

Of course, this raises a paradox, because you are unlikely to know that you don't know what you're doing unless you know what you're doing.

There is far more truth to what you say than what you said!

Maybe another way of stating this is: Ignorance is bliss, until you start looking at the consequences.

One point here, which is more of a question than a statement. I'm not sure that one pass through OOS data forever nullifies it as OOS. I think (without any direct evidence or proof) that the quality of the data may degrade with each pass. So you may be able to use OOS data a second or possibly a third time, knowing that each pass through the data will degrade its value.

I'm not basing that statement on any formal theory, but I recall a statement by Doyne Farmer discussing the use of OOS data to verify a model. He implied that the data would be used up quickly, but not necessarily after just one pass.

This is a topic worth researching. There are probably some good papers discussing this topic in general. (Not necessarily with regard to testing trading systems.)