Did you see the source code provided by one of the authors of the Ledoit and Wolf paper?
– Bob Jansen ♦, Mar 13 '12 at 19:18

The point of White is to adjust for multiplicity. Why use it to compare only two Sharpe ratios? Or maybe you want to use it for $n \gg 2$ Sharpe ratios?
– James, Sep 12 '14 at 14:28

@ColinTBowers apparently it was under patent until Jan 2017, as a simple google search would have revealed: patents.google.com/patent/US5893069A/en?oq=5893069 . And yes, Hansen's loglog trick is certainly an improvement on WRC. Can I ask what the "tests proposed in some recent publications" are?
– steveo'america, Feb 18 at 19:15

ah, thanks. I only recalled White's patent because I remember feeling subversive when implementing it. I too reflexively doubt patentability of 'obvious' uses of known methods, but I also worked for a company that lost 500M+ in a patent case for using Kalman Filters (!). A recent survey of several methods for MHT correction on Sharpe is Pav, 2019. Again, none of these are required for $n=2$ strategies.
– steveo'america, Feb 19 at 1:09

@ColinTBowers Yes, the CLT/Delta method analysis was proposed by Jobson & Korkie, later turned into an F-test by Leung & Wong, and a $\chi^2$ test by Wright et al.
– steveo'america, Feb 20 at 18:30

1 Answer
As James has pointed out in the comments, White's Reality Check is specifically designed to control the family-wise error rate given $k > 2$ statistics. The theory does not depend on $k$ asymptotics, so there is nothing invalid about using White's Reality Check for $k = 2$ statistics, but in practice there would be little point in doing so. Further, as steveo'america points out above, the Reality Check was under patent until January 2017 - whether that patent would have been enforced in a court case is another question entirely...

In particular, for $k=2$, it is fairly straightforward to construct a simple statistical test for the difference of two Sharpe ratios. Presumably there is some insight in Ledoit and Wolf's paper that makes their statistic superior to what I am about to suggest. Also, see steveo'america's comments on the question for references to some other, more sophisticated testing procedures. But if what you're after is simplicity, then the following is still perfectly valid:

So we have a CLT for our statistic. For the purposes of testing a difference in two statistics, it is easier if our statistic can be phrased as a single sample mean. This is straightforward. Let:
\begin{equation}
Y_{1,t} = (\bar{\sigma}_1)^{-1} R_{1,t} ,
\end{equation}
where $\bar{\sigma}_1$ denotes the sample standard deviation of $R_{1,t}$. It is worth emphasizing that it immediately follows that:
\begin{equation}
\hat{S}_1 = \bar{Y}_1 .
\end{equation}
Incorporating the second asset, we now define:
\begin{equation}
d_t = Y_{1,t} - Y_{2,t} .
\end{equation}
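As a quick numerical sanity check on these definitions, the following sketch (assuming numpy; the return series are simulated, not real data) constructs $Y_{i,t}$ and $d_t$ and verifies that the mean of the rescaled returns recovers the Sharpe estimate:

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated daily return series for two hypothetical assets
R1 = rng.normal(0.001, 0.010, size=1000)
R2 = rng.normal(0.0005, 0.012, size=1000)

# Y_{i,t} = R_{i,t} / sigma_bar_i, so that mean(Y_i) is the Sharpe estimate
Y1 = R1 / R1.std(ddof=1)
Y2 = R2 / R2.std(ddof=1)

# Check: S_hat_1 = Y_bar_1
sharpe1 = R1.mean() / R1.std(ddof=1)
assert np.isclose(Y1.mean(), sharpe1)

# The difference series d_t = Y_{1,t} - Y_{2,t}
d = Y1 - Y2
```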
The theory thus far is sufficient to show that under:
\begin{equation}
H_0 : S_1 = S_2 ,
\end{equation}
we have, for a sample of size $T$:
\begin{equation}
\sqrt{T} \, \bar{d} \overset{d}{\rightarrow} \mathcal{N}(0, \alpha^2) .
\end{equation}
So we've transformed the problem into testing whether a sample mean is equal to zero, with a CLT available for that sample mean. If you think $d_t$ exhibits time-series dependence, then you will need to estimate $\alpha^2$ using a HAC estimator, or else you could just bootstrap the statistic. Both are likely to give you similar outcomes. If you aren't worried about time-series dependence, then just estimate $\alpha$ using the sample standard deviation of $d_t$, and compare $\sqrt{T} \, \bar{d} / \hat{\alpha}$ against a standard normal.
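Put together, the whole procedure is a few lines of code. The sketch below (assuming numpy and scipy; the function name `sharpe_diff_test` is my own, and it uses the plain sample standard deviation of $d_t$, i.e. it ignores time-series dependence - swap in a HAC estimate of the variance if that matters for your data):

```python
import numpy as np
from scipy import stats


def sharpe_diff_test(R1, R2):
    """Two-sided test of H0: S1 == S2 for two return series of equal length.

    A minimal sketch under the assumption of no time-series dependence
    in d_t; replace d.std() with a HAC estimate if d_t is autocorrelated.
    Returns the z-statistic and its two-sided p-value.
    """
    Y1 = R1 / R1.std(ddof=1)
    Y2 = R2 / R2.std(ddof=1)
    d = Y1 - Y2
    T = len(d)
    z = np.sqrt(T) * d.mean() / d.std(ddof=1)  # approx N(0, 1) under H0
    p = 2.0 * stats.norm.sf(abs(z))            # two-sided p-value
    return z, p


# Example on simulated returns drawn from the same distribution,
# so H0 is true and we expect no rejection on average
rng = np.random.default_rng(42)
R1 = rng.normal(0.001, 0.01, size=250)
R2 = rng.normal(0.001, 0.01, size=250)
z, p = sharpe_diff_test(R1, R2)
```

Bootstrapping the statistic instead is equally simple: resample $d_t$ (in blocks, if you suspect dependence), recompute $\bar{d}$, and read the p-value off the resampled distribution.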