Recent stress tests performed on some of the largest banks appear to support the observation that they are better capitalized. While it is too early to assess the impact of the financial regulations just passed in the US, it is clear that trading risk remains under the purview of financial institutions. As the past two years have clearly illustrated, accurate evaluation of correlated tail outcomes is paramount in risk management. This issue contains papers that highlight the variety of perspectives one may take in pursuit of this goal: identifying the correct dynamic model, selecting the appropriate parameter estimator and setting the right downside target for a given risk profile.

The first paper, by Eberlein and Madan, presents a return distribution modeling
view. In this context, Lévy processes have gained significant acceptance in capturing
dynamics involving jumps and fat tails. They better fit financial time series data,
which commonly exhibit high skewness and kurtosis, and better model volatility
smile effects observed in option markets. Although most of the applications are
univariate, their paper develops a simple framework for modeling multi-asset dependence.
By exploiting the property that a Lévy process is a time-changed Brownian
motion, they build dependence by correlating Brownian motions. They observe that
for such processes the actual correlation between the returns lies below the
Brownian correlation, but they show how to recover the latter from knowledge of
the marginal laws.
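
The correlation gap they describe can be seen in a small simulation. The sketch below assumes a variance-gamma-style construction with independent gamma time changes and correlated driving Brownian motions; it illustrates the general phenomenon and is not necessarily the paper's exact setup, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000   # number of simulated unit-time increments
rho = 0.8     # correlation of the driving Brownian motions
nu = 0.5      # variance rate of the gamma time change (mean 1 per unit time)

# Independent gamma subordinators ("business time") for each asset,
# each with unit mean so business time matches calendar time on average.
g1 = rng.gamma(shape=1.0 / nu, scale=nu, size=n)
g2 = rng.gamma(shape=1.0 / nu, scale=nu, size=n)

# Correlated standard normals driving the two Brownian motions.
z1 = rng.standard_normal(n)
z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n)

# Time-changed Brownian increments: each W_i evaluated at its gamma clock.
x1 = np.sqrt(g1) * z1
x2 = np.sqrt(g2) * z2

sample_corr = np.corrcoef(x1, x2)[0, 1]
# By Jensen's inequality, corr(x1, x2) = rho * E[sqrt(G)]**2 / E[G] < rho,
# so the return correlation sits strictly below the Brownian correlation.
print(f"Brownian correlation: {rho}, return correlation: {sample_corr:.3f}")
```

For these parameters the return correlation comes out near 0.71, visibly below the Brownian correlation of 0.8, which is the effect the marginal laws must be used to undo when calibrating.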

The second paper, by Weiß, is focused on estimation issues arising in the copula
approach. Weiß presents a comprehensive simulation study of the finite-sample
properties of several copula parameter estimators. His results show that, in most
settings, the canonical maximum likelihood method yields smaller estimation biases
with less computational effort than any of the minimum distance estimators based
on copula goodness-of-fit tests. There exist, however, cases (especially when the
sample size increases) where minimum distance estimators based on the empirical
copula process are superior to the maximum likelihood estimator. Minimum distance
estimators based on Kendall’s transform, on the other hand, yield only suboptimal
results in all configurations of the simulation study. Estimates of risk measures
based on the fitted copulas likewise differ considerably depending on the choice of
parameter estimator. Weiß
stresses, therefore, the need for carefully choosing the parameter estimators in
contrast to focusing all attention on choosing the parametric copula model. For
practical applications, these results are of high relevance as copula models can vastly
improve correlation-based models if calibrated correctly. At the same time, model
misspecification and biased estimation can lead to woefully wrong estimates of risk measures, partly justifying the criticism that the Gaussian copula model in particular attracted during the subprime crisis.
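
As a concrete illustration of the canonical maximum likelihood method, the sketch below fits a Clayton copula to simulated data: the unknown margins are replaced by rescaled ranks (pseudo-observations) and the copula log-likelihood is then maximized in the dependence parameter. The Clayton family and all parameter values are illustrative choices, not taken from Weiß's study.

```python
import numpy as np

rng = np.random.default_rng(1)
theta_true, n = 2.0, 2000

# Simulate a sample from the Clayton copula via the conditional inverse method.
u = rng.uniform(size=n)
w = rng.uniform(size=n)
v = (u ** (-theta_true) * (w ** (-theta_true / (1 + theta_true)) - 1.0)
     + 1.0) ** (-1.0 / theta_true)

# Canonical ML step 1: replace the unknown margins by their empirical CDFs,
# mapping the data to pseudo-observations (rescaled ranks). With real returns
# this is the step that removes the margins from the estimation problem.
def pseudo_obs(x):
    return (np.argsort(np.argsort(x)) + 1) / (len(x) + 1)

pu, pv = pseudo_obs(u), pseudo_obs(v)

# Canonical ML step 2: maximize the Clayton copula log-likelihood in theta
# (a plain grid search keeps this sketch dependency-free).
def loglik(theta):
    s = pu ** (-theta) + pv ** (-theta) - 1.0
    return np.sum(np.log(1.0 + theta)
                  - (1.0 + theta) * (np.log(pu) + np.log(pv))
                  - (2.0 + 1.0 / theta) * np.log(s))

grid = np.linspace(0.1, 8.0, 800)
theta_hat = grid[np.argmax([loglik(t) for t in grid])]
print(f"true theta: {theta_true}, CML estimate: {theta_hat:.2f}")
```

At this sample size the estimate lands close to the true parameter; Weiß's point is that replacing step 2 by a minimum distance criterion, or shrinking the sample, can change both the bias and the resulting risk measures appreciably.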

In the third paper of this issue, Mansouri et al conduct an empirical study comparing
a number of value-at-risk forecasts derived from various long-memory
GARCH-based volatility models. Using daily data from different
countries covering more than 11 years, they find that the long-memory, asymmetric
FIAPARCH model generally performs best. Their results are in line with similar
studies and illustrate the generality of FIAPARCH, which nests several other
long-memory GARCH-based models as special cases.
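
The long-memory ingredient shared by FIAPARCH and its special cases is the fractional difference operator (1 - L)^d. The sketch below, with an illustrative value of d, computes the weights of its lag expansion via the standard recursion and shows their slow hyperbolic decay, in contrast to the geometric decay of a short-memory GARCH model.

```python
import numpy as np

def frac_diff_weights(d, n_lags):
    """Coefficients pi_k in the expansion (1 - L)^d = sum_k pi_k L^k,
    via the standard recursion pi_k = pi_{k-1} * (k - 1 - d) / k."""
    w = np.empty(n_lags + 1)
    w[0] = 1.0
    for k in range(1, n_lags + 1):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

d = 0.4  # illustrative long-memory parameter, 0 < d < 1
w = frac_diff_weights(d, 1000)

# First few weights for d = 0.4: 1, -0.4, -0.12, -0.064, -0.0416, ...
# Hyperbolic decay |pi_k| ~ k**(-(1 + d)) means shocks to volatility fade
# far more slowly than under the geometric decay of a short-memory GARCH.
ratio = abs(w[1000]) / abs(w[100])
print(f"|pi_1000| / |pi_100| = {ratio:.4f}")  # roughly 10**(-1.4) ~ 0.04
```

Setting d = 0 collapses all weights beyond lag zero, which is the formal sense in which short-memory models sit inside FIAPARCH as special cases.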

While the previous three papers do not make explicit reference to an investor’s
risk attitude, the fourth, by Olmo, directly addresses risk preferences through a utility
function that captures downside risk as an alternative to disappointment-aversion
and loss-aversion formulations. In this model investors are averse not only to downside movements of
stocks but also to the variance of their investments. In this framework, Olmo develops
an asset pricing equilibrium formula in which the risk premium on a risky asset is
given by a weighted sum of the regular capital asset pricing model beta and a
market-portfolio downside-risk beta. An appealing feature of this model is its statistical
tractability. In particular, Olmo develops a simple econometric model to test for the
significance of the correlation between market and stock returns under distress and,
more importantly, he develops a statistical device to test for the numerical value
and statistical significance of the target return. This threshold value is considered
endogenous to the agent or market and not exogenously given, as is done in most
of the related literature so far. This new model is competitive against the benchmark
three-factor model proposed in the literature when tested with real data.
