In this paper, we estimate the out-of-sample predictive ability of a set of trading rules. This ability is usually estimated with a rolling-window sample-splitting scheme, since true out-of-sample data are rarely available. We argue that this method makes poor use of the available information and creates data-mining opportunities. Instead, we introduce an alternative bootstrap approach based on the .632 bootstrap principle. This method makes it possible to build in-sample and out-of-sample bootstrap data sets that do not overlap and exhibit the same time dependencies. We illustrate our methodology on IBM and Microsoft daily stock prices, comparing 11 trading rule specifications. For the data sets considered, two different filter rule specifications have the highest out-of-sample mean excess returns. However, none of the tested rules beats a simple buy-and-hold strategy when trading at a daily frequency.
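
The paper's construction is not spelled out in the abstract; the minimal Python sketch below only illustrates the underlying .632 bootstrap principle: indices drawn with replacement form the in-sample set, the never-drawn ("out-of-bag") indices form a disjoint out-of-sample set, and the two error estimates are combined with weights 0.368/0.632. Function names are hypothetical, and this i.i.d. version ignores the block resampling needed to preserve the time dependencies the paper emphasizes.

```python
import numpy as np

def dot632_split(n_obs, rng):
    """One .632 bootstrap split: resampled indices are the in-sample set,
    never-drawn indices the out-of-sample set. Each observation is left
    out with probability (1 - 1/n)^n, which tends to e^{-1} ~ 0.368."""
    in_idx = rng.integers(0, n_obs, size=n_obs)       # draw with replacement
    out_idx = np.setdiff1d(np.arange(n_obs), in_idx)  # out-of-bag indices
    return in_idx, out_idx

def dot632_error(err_in, err_out):
    """Classical .632 estimator: mix the optimistic in-sample error with
    the pessimistic out-of-bag error."""
    return 0.368 * err_in + 0.632 * err_out

rng = np.random.default_rng(42)
in_idx, out_idx = dot632_split(1000, rng)
print(len(out_idx) / 1000)  # close to 0.368 on average
```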

In this article, we propose a robust methodology to select the most appropriate error distribution candidate in a classical multiplicative heteroscedastic model. In a first step, unlike the traditional approach, we do not use any GARCH-type estimation of the conditional variance. Instead, we propose to use a recently developed nonparametric procedure (Mercurio and Spokoiny, 2004): the Local Adaptive Volatility Estimation (LAVE). The motivation for using this method is to avoid possible model misspecification of the conditional variance. In a second step, we suggest a set of estimation and model selection procedures (Berk-Jones tests, kernel density-based selection, censored likelihood score, coverage probability) based on the resulting residuals. These methods make it possible to assess the global fit of a given distribution as well as to focus on its behavior in the tails. Finally, we illustrate our methodology on three time series (UBS stock returns, BOVESPA returns and EUR/USD exchange rates).
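
For intuition only, here is a deliberately crude Python sketch of the adaptive idea behind LAVE: grow the estimation window backwards in time and accept a longer window only while its variance estimate stays compatible with the most local one. The window grid, the homogeneity check and the critical value are illustrative assumptions; Mercurio and Spokoiny (2004) use a formal test of homogeneity with calibrated critical values.

```python
import numpy as np

def lave_sigma(returns, t, windows=(5, 10, 20, 40, 80), z_crit=2.0):
    """Crude local adaptive volatility estimate at time t: start from the
    shortest window and extend it while longer windows remain statistically
    compatible with the local variance estimate."""
    r2 = returns ** 2
    sigma2 = r2[max(0, t - windows[0]):t].mean()    # most local estimate
    for m in windows[1:]:
        chunk = r2[max(0, t - m):t]
        cand = chunk.mean()
        se = chunk.std() / np.sqrt(len(chunk))      # rough standard error
        if abs(cand - sigma2) > z_crit * se:        # homogeneity broken:
            break                                   # keep the shorter window
        sigma2 = cand                               # accept the longer window
    return np.sqrt(sigma2)

rng = np.random.default_rng(0)
r = rng.standard_normal(500) * 0.01                 # toy return series
print(lave_sigma(r, t=400))
```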

Since the 2008 financial crisis, increasing attention has been devoted to the selection of an adequate error distribution in risk models, in particular for Value-at-Risk (VaR) predictions. We propose a robust methodology to select the most appropriate error distribution candidate in a classical multiplicative heteroscedastic model. In a first step, unlike the traditional approach, we do not use any GARCH-type estimation of the conditional variance. Instead, we propose to use a recently developed nonparametric procedure: the Local Adaptive Volatility Estimation (LAVE). The motivation for using this method is to avoid possible model misspecification of the conditional variance. In a second step, we suggest a set of estimation and model selection tests based on the resulting residuals. These methods make it possible to assess the global fit of a given distribution as well as to focus on its behaviour in the tails. Finally, we illustrate our methodology on three time series (UBS stock returns, BOVESPA returns and EUR/USD exchange rates).
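
Of the criteria listed in the companion abstract above, the coverage probability is the simplest to illustrate: under a correctly specified error distribution, standardized residuals should fall below the candidate's alpha-quantile with frequency alpha. The hypothetical helper below checks this with a standard Kupiec-style likelihood-ratio test; the paper's exact procedure may differ.

```python
import numpy as np
from scipy import stats

def coverage_check(residuals, dist, alpha=0.01):
    """Unconditional coverage check: compare the empirical violation rate
    of the candidate distribution's alpha-quantile with alpha, via a
    Kupiec-style likelihood-ratio test. Returns (hit rate, p-value)."""
    q = dist.ppf(alpha)                       # candidate alpha-quantile
    x = int((residuals < q).sum())            # number of violations
    n = len(residuals)
    p_hat = x / n
    ll0 = x * np.log(alpha) + (n - x) * np.log(1 - alpha)
    # degenerate cases (no violations, or only violations) give LR = 0 here
    ll1 = x * np.log(p_hat) + (n - x) * np.log(1 - p_hat) if 0 < x < n else ll0
    lr = -2.0 * (ll0 - ll1)
    return p_hat, 1 - stats.chi2.cdf(lr, df=1)

rng = np.random.default_rng(1)
z = rng.standard_normal(2000)                 # residuals truly N(0,1)
print(coverage_check(z, stats.norm, alpha=0.01))
```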

In this article, we consider a multiplicative heteroskedastic structure for financial returns and propose a methodology to study the goodness-of-fit of the error distribution. We use non-conventional estimation and model selection procedures (Berk-Jones (1978) tests, Sarno and Valente (2004) hypothesis testing, Diks et al. (2011) weighting method), based on the local volatility estimator of Mercurio and Spokoiny (2004) and the bootstrap methodology, to compare the fit of candidate density functions. In particular, we introduce the sinh-arcsinh distributions (Jones and Pewsey, 2009) and show that this family of density functions achieves lower bootstrap IMSE and smaller weighted Kullback-Leibler distances.
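
For reference, the sinh-arcsinh density of Jones and Pewsey (2009) has a simple closed form: with S(x) = sinh(delta * arcsinh(x) - eps), f(x) = delta * cosh(delta * arcsinh(x) - eps) / sqrt(2*pi*(1 + x^2)) * exp(-S(x)^2 / 2), where eps controls skewness and delta > 0 controls tailweight. A self-contained Python sketch (function names are ours):

```python
import numpy as np

def sinh_arcsinh_pdf(x, eps=0.0, delta=1.0):
    """Jones-Pewsey (2009) sinh-arcsinh density; eps = 0, delta = 1
    recovers the standard normal, delta < 1 gives heavier tails."""
    s = np.sinh(delta * np.arcsinh(x) - eps)
    c = np.sqrt(1.0 + s ** 2)                 # cosh of the same argument
    return delta * c / np.sqrt(2 * np.pi * (1 + x ** 2)) * np.exp(-0.5 * s ** 2)

def sinh_arcsinh_rvs(n, eps=0.0, delta=1.0, rng=None):
    """Sample by inverting the defining transform: if Z ~ N(0,1), then
    X = sinh((arcsinh(Z) + eps) / delta) is sinh-arcsinh distributed."""
    rng = rng or np.random.default_rng()
    z = rng.standard_normal(n)
    return np.sinh((np.arcsinh(z) + eps) / delta)

x = sinh_arcsinh_rvs(100_000, delta=0.8, rng=np.random.default_rng(2))
print(x.std())                                # > 1: heavier tails than N(0,1)
```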

In 2008, the financial crisis exposed the relative inaccuracy of market risk forecasting models in the financial industry. In particular, extreme events were shown to be regularly underestimated. This problem, first raised in the seminal work of Mandelbrot (1963), stems mainly from financial models relying on the normal distribution, whereas empirical evidence shows strong leptokurtosis in financial time series. This stylized fact is particularly damaging for the forecasting of indicators such as Value-at-Risk (VaR). In this study, we tackle this problem by testing a newly developed probability distribution not previously used in finance: the sinh-arcsinh distribution. Creating different datasets from nonparametric and GARCH models, we fit common distributions (normal, t location-scale, GED, generalized hyperbolic) and the sinh-arcsinh distribution to the data. We show that, on the leptokurtic datasets extracted from the DJA and the NIKKEI 225, the sinh-arcsinh distribution provides a better fit than any other distribution tested. We also test simple VaR models based on the normal, Student's t and sinh-arcsinh distributions, to assess the operational value of the sinh-arcsinh family. We show that models using the sinh-arcsinh distribution provide more accurate in-sample and out-of-sample VaR forecasts than models using the normal distribution.
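
The abstract does not detail the VaR construction; a minimal sketch, assuming the usual location-scale form r = mu + sigma * e with sinh-arcsinh innovations e, computes the alpha-quantile by transforming the normal quantile. The parameter values below are illustrative, and sigma may come from any volatility model (GARCH, nonparametric, etc.).

```python
import numpy as np
from scipy import stats

def sinh_arcsinh_ppf(alpha, eps=0.0, delta=1.0):
    """Quantile function: monotone transform of the normal quantile."""
    z = stats.norm.ppf(alpha)
    return np.sinh((np.arcsinh(z) + eps) / delta)

def var_forecast(sigma_next, alpha=0.01, eps=0.0, delta=1.0, mu=0.0):
    """One-step-ahead VaR under r = mu + sigma * e with sinh-arcsinh e."""
    return mu + sigma_next * sinh_arcsinh_ppf(alpha, eps, delta)

# With delta < 1 the innovation tails are heavier than the normal's,
# so the 1% VaR is more conservative than its Gaussian counterpart:
print(var_forecast(0.02, alpha=0.01, delta=0.8))   # ~ -0.071
print(0.02 * stats.norm.ppf(0.01))                 # ~ -0.047
```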