Todd - first, thanks for the questions. They've caused me to find an error which is very helpful.

Todd asked: What exactly do the Bull Win % and Bear Win % mean?

It means the percentage of the selected screens that produced a winning result. Keep in mind that my indicator produces more trades than an EMA crossover, so there are many more opportunities to fail. Anything over 79% is quite impressive.
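To make the definition concrete, here is a minimal sketch (my own illustration, not the author's actual backtest code) of how a Bull Win % and Bear Win % could be tallied: take every screen pick made while the signal was bullish (or bearish) and count the share that finished with a positive return. All names and sample numbers are hypothetical.

```python
def win_pct(picks):
    """picks: list of (signal, return) tuples, one per screen selected.
    Returns the percentage of picks with a positive return."""
    wins = sum(1 for _sig, r in picks if r > 0)
    return 100 * wins / len(picks)

# Hypothetical picks: three made on a bullish signal, two on a bearish one.
picks = [("bull", 0.12), ("bull", -0.02), ("bull", 0.05),
         ("bear", 0.01), ("bear", -0.04)]

bull_win = win_pct([p for p in picks if p[0] == "bull"])  # 2 of 3 won
bear_win = win_pct([p for p in picks if p[0] == "bear"])  # 1 of 2 won
print(bull_win, bear_win)
```

The point of splitting the tally by signal state is exactly the one made above: a strategy can have a higher CAGR in one regime while winning less often there, because win rate ignores the size of each win or loss.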

A couple of cases seemed odd with these values... Just one example, "Beta DESC" had a Bullish CAGR of 40.10% and a Bearish CAGR of 4.57%. However, its Bear Win % was higher than its Bull Win % (69% vs. 61%, respectively). That was confusing to me.

A higher CAGR says nothing about the success rate; the two measure different things. At this level of detail, the differences are more noise than anything else.

Also, what does Reb Win % mean, and how does it differ from Screen Win %?

The Reb Win % refers to the number of rebalance points that resulted in a win for the combined five screens held. For example, on a particular date the signal went bullish and five new screens were selected. Of these five, three returned a positive result and two failed, but the two losers lost more than the three winners gained, so the combined return was negative. This would count against the Reb Win %, since the rebalance did not come out with a positive return. The Screen Win % refers to the win rate of the individual screens: all of the screens held are added up, and the number with a positive return determines the percent positive.
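The distinction above can be sketched in a few lines. This is my own illustration of the described tallying, not the author's actual code; the function name and the equal-weight averaging of the five screens are assumptions.

```python
def win_rates(rebalances):
    """rebalances: list of lists; each inner list holds the returns of
    the screens held over one rebalance period.
    Returns (reb_win_pct, screen_win_pct)."""
    reb_wins = 0        # rebalance points where the combined return was positive
    screen_wins = 0     # individual screens with a positive return
    total_screens = 0
    for screen_returns in rebalances:
        # Combined return: assumed equal-weight average of the screens held.
        combined = sum(screen_returns) / len(screen_returns)
        if combined > 0:
            reb_wins += 1
        screen_wins += sum(1 for r in screen_returns if r > 0)
        total_screens += len(screen_returns)
    reb_win_pct = 100 * reb_wins / len(rebalances)
    screen_win_pct = 100 * screen_wins / total_screens
    return reb_win_pct, screen_win_pct

# The example from the text: three screens win, two lose badly, so the
# rebalance counts as a loss even though most of its screens won.
print(win_rates([[0.05, 0.03, 0.02, -0.10, -0.08]]))
```

With that single rebalance, Reb Win % is 0% (the combined return is negative) while Screen Win % is 60% (three of five screens were positive), which is how the two measures can diverge.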

I used datahelper.com and Google to try to find a definition of "Normalized Trough Count", but I couldn't seem to find one. What does the term mean, and how is it calculated?

It is one of my own measures, not something I have ever read about elsewhere; I've run many a test using it. Here is one case where I employed it and mentioned its usefulness at the bottom of the post: http://boards.fool.com/Message.asp?mid=26231522

I apologize, I'm sure you've answered this before, but how did you choose blends for the very earliest years of the backtest (1989, 1990, 1991, etc.), given that before 1989, many of the screens don't have any historical data to examine?

Good question -- it's what made me realize my error. I reran all screen tests before running these tests on measures, grouping them into their Olympic medal standard. Unfortunately, I made the mistake of having these tests start in 1989 rather than 1986 as they should have. That is why the returns are so low for 1989 and 1990: you would need at least two years of data for the sorting measure to work. If I limit the test to just 1991 to present, the results are all above 50% CAGR, and the Sharpe is 2.24 for Normalized Trough Count (NTC) and 2.18 for Sharpe/GSD. It is interesting that my home-grown NTC beats out everything else at predicting the best screens to hold going forward in time. I actually like it because I think it is an excellent pain metric that accurately reflects the typical investor's psyche.

Anyway, I'm rerunning all the tests and will share the results with the extended data once they are complete. Again, thanks for the question that led me to track down this foolish mistake.