2008-2009

The talk presents a class of structural adaptive smoothing methods developed at WIAS. The main focus will be on the Propagation-Separation (PS) approach proposed by Polzehl and Spokoiny (2006). The method makes it possible to simultaneously identify regions of homogeneity with respect to a prescribed model (structural assumption) and to use this information to improve local estimates. This is achieved by an iterative procedure. The name Propagation-Separation refers to the two main properties of the algorithms. In case of homogeneity, that is, if the prescribed model holds with the same parameters within a large region, the algorithm essentially delivers a series of nonadaptive estimates with decreasing variance and propagates to the best estimate from this series. Separation means that, as soon as significant differences are detected between the estimates at two design points X_i and X_j, observations at X_j will no longer be used to estimate the parameter at X_i: the two points are separated. The power of the approach will be demonstrated using examples from imaging. Current applications range from nonstationary time series to the analysis of functional Magnetic Resonance and Diffusion Weighted Magnetic Resonance experiments in neuroscience.
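
A rough one-dimensional illustration of the PS iteration (not the WIAS implementation; the kernels, bandwidth schedule, and penalty constant below are simplifying assumptions of this sketch):

```python
import numpy as np

def ps_smooth(y, steps=12, lam=6.0, sigma2=1.0):
    """Toy 1D propagation-separation iteration: bandwidths grow step by step,
    and a statistical penalty removes weights across detected discontinuities,
    so averaging propagates only within homogeneous regions."""
    n = len(y)
    x = np.arange(n, dtype=float)
    theta = y.astype(float).copy()          # initial estimates: the raw data
    ni = np.ones(n)                         # effective local sample sizes
    for k in range(steps):
        h = 1.25 ** (k + 1)                 # growing bandwidth
        loc = np.clip(1.0 - np.abs(x[:, None] - x[None, :]) / h, 0.0, 1.0)
        s = ni[:, None] * (theta[:, None] - theta[None, :]) ** 2 / (lam * sigma2)
        w = loc * np.clip(1.0 - s, 0.0, 1.0)   # zero weight where estimates differ
        ni = w.sum(axis=1)
        theta = w @ y / ni
    return theta
```

On a noisy step function this reduces the variance inside each constant piece (propagation) while leaving the jump intact (separation).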

9/12/2008

Estimating Operational Risk

Bayesian Subset Selection in Regression Models

The selection of predictors to include is an important problem in building a multiple regression model. The Bayesian approach is attractive because it converts the problem into the elementary one of evaluating conditional (posterior) distributions. This approach often assumes a normal error, which is restrictive. The Bayesian mixture method can be used to relax this restriction and allow for seemingly more realistic errors that are unimodal and/or symmetric. The main thrust of this method is to reduce an infinite-dimensional stochastic analysis problem of averaging random distributions to a finite-dimensional one based on averaging random partitions: the posterior distribution of the parameters is an average over random partitions. Nesting a Metropolis-Hastings algorithm within a weighted Chinese restaurant process for sampling partitions yields an MCMC scheme that provides a stochastic approximation to the posterior mode of the parameters. Numerical examples are given. (Joint with Baoqian Pao.)

We propose a simple rank-based test for the unit root hypothesis. Our test is semiparametrically efficient if the model contains a non-zero drift, as is the case in many applications. The test always enjoys the advantages of a rank-based test, such as distribution-freeness and exact finite-sample size. Being semiparametrically efficient, it outperforms the appropriate Dickey-Fuller test, in particular when the errors have infinite variance. (Joint with Marc Hallin and Ramon van den Akker.)

10/10/2008

The financial crisis, the bailout, and alternative plans

Wiener chaos, Malliavin calculus and central limit theorems

Mark Podolskij University of Aarhus

Abstract

We present some recent results on central limit theorems for functionals of Gaussian processes. New necessary and sufficient conditions on the contraction operator and the Malliavin derivative are demonstrated. Finally, we show some illustrative examples.

10/27/2008

Jump Activity in High Frequency Financial Data

We propose statistical tests to discriminate between the finite and infinite activity of jumps in a semimartingale discretely observed at high frequency. The two statistics allow for a symmetric treatment of the problem: we can either take the null hypothesis to be finite activity, or infinite activity. When implemented on high frequency stock returns, both tests point towards the presence of infinite activity jumps in the data. We then define a degree of activity for infinitely active jump processes, and propose estimators of that degree of activity. (Joint work with Jean Jacod).
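
A building block behind tests of this type is a count of increments exceeding a truncation level u_n = alpha * dt^varpi; the sketch below is illustrative (the constants are assumptions of this sketch, not the authors' calibration):

```python
import numpy as np

def big_increment_count(x, dt, alpha=3.0, varpi=0.49):
    """Count increments of a discretely observed path that exceed the
    truncation level u_n = alpha * dt**varpi, which separates 'jump-like'
    increments from Brownian ones as dt -> 0."""
    u = alpha * dt ** varpi
    return int(np.sum(np.abs(np.diff(x)) > u))
```

For a finite-activity path this count settles at the number of jumps as dt shrinks, while an infinite-activity process keeps generating new exceedances; the statistics in the talk contrast such quantities across sampling frequencies.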

Carr's randomization and new FFT techniques for fast and accurate pricing of barrier options

I will explain how Carr's randomization approximation can be applied to the problem of pricing a knock-out option with one or two barriers in a wide class of models of stock prices used in mathematical finance. The approximation yields a backward induction procedure, each of whose steps can be implemented numerically with very high speed and precision. The resulting algorithms are significantly more efficient than the algorithms based on other approaches to the pricing of barrier options.
In the first part of my talk I will focus on the classical Black-Scholes model and Kou's double-exponential jump-diffusion model, as well as a class of models that contains those two as special cases, namely, the hyper-exponential jump-diffusion (HEJD) models. For HEJD models, each step in the backward induction procedure for pricing a single or double barrier option can be made very explicit, so that the calculation of an option price using our method takes a small fraction of a second.
In the second part of my talk I will discuss other prominent examples of models used in empirical studies of financial markets, including the Variance Gamma model and the CGMY model. In these examples, the aforementioned backward induction procedure can be reduced to computing a sequence of Fourier transforms and inverse Fourier transforms. However, the numerical calculation of Fourier transforms via FFT may lead to significant errors, which are often hard or impossible to control when standard FFT techniques are used. I will describe a new approach to implementing FFT techniques that allows one to control these errors without sacrificing the computational speed.
The material I will present is based on joint works with Svetlana Boyarchenko (University of Texas at Austin) and Sergei Levendorskii (University of Leicester).

11/14/2008

Why is Financial Market Volatility so High?

Taking risk to achieve return is the central feature of finance. Volatility is a way to measure risk, and when it changes over time the task is especially challenging. Measures of volatility are presented using up-to-date information on US equity markets, bond markets, credit markets and exchange rates. Similar measures are shown for international equities. The economic causes of volatility are discussed in the light of new research in a cross-country study and applied to the current economic scene. Two long-run risks are then discussed from the same perspective: climate change and unfunded pension funds. Some policy suggestions are made to reduce these risks and benefit society today as well as in the future.

We consider what sorts of stochastic processes can explain asset prices using tools of complex analysis and system theory, and a small amount of empirical evidence. The resulting processes are not the usual suspects of financial mathematics.

11/21/2008

Maximization by Parts in Extremum Estimation

In this paper, we present iterative algorithms for extremum estimation in cases where direct computation of the extremum estimator, or computation via the Newton-Raphson algorithm, is difficult if not impossible. While the Newton-Raphson algorithm makes use of the full Hessian matrix, which may be difficult to evaluate, our algorithms use only parts of the Hessian matrix, the parts that are easier to compute. We establish convergence and asymptotic properties of our algorithms under regularity conditions, including information dominance conditions. We apply our algorithms to the estimation of Merton's structural credit risk model. (Joint work with Yanqin Fan and Sergio Pastorello.)

11/24/2008

Arbitrage bounds on the prices of vanilla options and variance swaps

In earlier work with David Hobson (Mathematical Finance 2007) we established necessary and sufficient conditions under which a given set of traded vanilla option prices is consistent with an arbitrage-free model. Here we ask: given that these conditions are satisfied, what are the bounds on the price of a variance swap written on the same underlying asset, assuming its price is a continuous function of time? It turns out that there is a non-trivial lower bound, computed by a dynamic programming algorithm, but there is no upper bound unless we impose further conditions on the price process. In view of the well-known connection between variance swaps and the log option, appropriate conditions relate to left-tail information on the price S(T) at the option exercise time T, such as the existence of a conditional inverse power moment. One can also reverse the question and ask what information a variance swap provides about the underlying distribution.
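
The well-known connection between variance swaps and the log option can be made concrete with the standard static replication of the log contract (a textbook sketch assuming zero rates and a continuous price path, not the paper's dynamic-programming bound; the Black-Scholes prices serve only to generate a consistent test surface):

```python
import math

def bs_price(S, K, T, sigma, call=True):
    """Black-Scholes price under zero interest rates (an assumption of this sketch)."""
    d1 = (math.log(S / K) + 0.5 * sigma ** 2 * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    c = S * N(d1) - K * N(d2)
    return c if call else c - S + K      # put via put-call parity (zero rates)

def variance_swap_strike(S, T, sigma, k_lo=1.0, k_hi=2000.0, n=50000):
    """Fair variance strike from the log-contract strip:
    (2/T) * integral over all strikes of the OTM option price / K^2."""
    total, dk = 0.0, (k_hi - k_lo) / n
    for i in range(n):
        K = k_lo + (i + 0.5) * dk                  # midpoint rule
        total += bs_price(S, K, T, sigma, call=(K >= S)) / K ** 2 * dk
    return 2.0 * total / T
```

Under continuous paths the strip value equals the expected realized variance, so in the Black-Scholes case it recovers sigma^2.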

12/12/2008

Skewness and the Bubble

Eric Ghysels University of North Carolina at Chapel Hill
Federal Reserve Bank, New York

Abstract

We use a sample of option prices, and the method of Bakshi, Kapadia and Madan (2003), to estimate the ex ante higher moments of the underlying individual securities' risk-neutral returns distribution. We find that individual securities' volatility, skewness and kurtosis are strongly related to subsequent returns. Specifically, we find a negative relation between volatility and returns in the cross-section. We also find a significant relation between skewness and returns, with more negatively (positively) skewed returns associated with subsequent higher (lower) returns, while kurtosis is positively related to subsequent returns. To analyze the extent to which these returns relations represent compensation for risk, we use data on index options and the underlying index to estimate the stochastic discount factor over the 1996-2005 sample period, and allow the stochastic discount factor to include higher moments. We find evidence that, even after controlling for differences in co-moments, individual securities' skewness matters. However, when we combine information in the risk-neutral distribution and the stochastic discount factor to estimate the implied physical distribution of industry returns, we find little evidence that the distribution of technology stocks was positively skewed during the bubble period--in fact, these stocks have the lowest skew, and the highest estimated Sharpe ratio, of all stocks in our sample. (Joint with Jennifer Conrad and Robert Dittmar.)

1/16/2009

A New Approach For Modelling and Pricing Equity Correlation Swaps

A correlation swap on N underlying stocks pays the average pairwise correlation coefficient of daily returns observed over a given time period. The pricing and hedging of this derivative instrument is non-trivial. We show how the payoff can be approximated as the ratio of two types of tradable variances in the special case where the underlying stocks are the constituents of an equity index, and proceed to derive pricing and hedging formulas within a two-factor 'toy model'.
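
The idea behind the ratio-of-variances approximation can be illustrated with the standard index-variance decomposition (a sketch assuming a single common correlation and known constituent volatilities; the numbers in the test are hypothetical):

```python
import numpy as np

def implied_average_correlation(w, sig, sig_index):
    """Back out the average pairwise correlation from index and constituent vols,
    using sigma_I^2 = sum_i w_i^2 s_i^2 + rho * sum_{i != j} w_i w_j s_i s_j."""
    w, sig = np.asarray(w, float), np.asarray(sig, float)
    ws = w * sig
    cross = ws.sum() ** 2 - (ws ** 2).sum()   # sum over i != j of w_i w_j s_i s_j
    return (sig_index ** 2 - (ws ** 2).sum()) / cross
```

Because both the index variance and the constituent variances are tradable (via variance swaps), this ratio is the natural hedgeable proxy for the correlation swap payoff.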

1/30/2009

The Mathematics of Liquidity and Other Matters Concerning Portfolio and Risk Management

Ranjan Bhaduri Alphametrix

Abstract

This talk will cover liquidity matters. A game-theoretic example will demonstrate how it is easy for humans to underestimate the value of liquidity. Some problems in the hedge fund space concerning popular analytics will be explored, and new potential solutions such as liquidity buckets, liquidity derivatives, liquidity duration, and liquidity indices will be introduced. Applications of the Omega function, and some important insights on proper due diligence will be examined. The AlternativeEdge Short-Term Traders Index will also be discussed.

2/6/2009

Approximations of Risk Neutral Measures and Derivatives Pricing

Fangfang Wang University of North Carolina at Chapel Hill

Abstract

Risk neutral measures are a key ingredient of financial derivative pricing. Much effort has been devoted to characterizing the risk neutral distribution of the underlying asset. In this talk, we revisit the class of Generalized Hyperbolic (GH) distributions and study their applications in option pricing. Specifically, we narrow down to three subclasses: the Normal Inverse Gaussian distribution, the Variance Gamma distribution and the Generalized Skewed T distribution, chosen for their appealing tail behavior and their analytical tractability for moment estimation. In contrast to the existing literature on applying GH distributions to option pricing, we adopt a simple moment-based estimation approach to the specification of the risk neutral measure, which has an intuitive appeal in terms of how the volatility, skewness and kurtosis of the risk neutral distribution explain the behavior of derivative prices. We provide numerical and empirical evidence showing the superior performance of the Normal Inverse Gaussian approximation compared to the existing methods and the other two distributions.

2/20/2009

Quasi-Maximum Likelihood Estimation of Volatility with High Frequency Data

Dacheng Xiu Princeton University

Abstract

This paper investigates the properties of the well-known maximum likelihood estimator in the presence of stochastic volatility and market microstructure noise, by extending the classic asymptotic results on quasi-maximum likelihood estimation. When used to estimate the integrated volatility and the variance of the noise, this parametric estimator remains consistent, efficient and robust as a quasi-estimator under misspecified assumptions. A variety of Monte Carlo simulations show its advantage over the nonparametric Two Scales Realized Volatility estimator in terms of efficiency and small-sample accuracy.
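
The nonparametric benchmark mentioned above, the Two Scales Realized Volatility estimator of Zhang, Mykland and Aït-Sahalia, can be sketched in a few lines (the number of grids and the noise level in the test are illustrative assumptions):

```python
import numpy as np

def tsrv(p, K=100):
    """Two Scales Realized Volatility: average the realized variance over K
    offset sparse grids, then subtract a bias correction built from the
    noise-dominated full-grid realized variance."""
    n = len(p) - 1
    rv_all = np.sum(np.diff(p) ** 2)
    rv_sparse = np.mean([np.sum(np.diff(p[k::K]) ** 2) for k in range(K)])
    n_bar = (n - K + 1) / K
    return rv_sparse - (n_bar / n) * rv_all
```

On simulated noisy prices the full-grid realized variance is badly inflated by the noise, while the two-scales combination recovers the integrated variance.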

2/23/2009

Testing for finite activity for jumps and for the presence of a Brownian motion

Affine models are very popular in modeling financial time series because they allow for analytical calculation of the prices of financial derivatives such as treasury bonds and options. The main property of affine models is that the conditional cumulant function, defined as the logarithm of the conditional characteristic function, is affine in the state variable. Consequently, an affine model is Markovian, like an autoregressive process, which is an empirical limitation. The paper generalizes affine models by adding the past conditional cumulant function to the current one. Hence, generalized affine models are non-Markovian, like ARMA and GARCH processes, allowing one to disentangle the short-run and long-run dynamics of the process. Importantly, the new model keeps the tractability of derivative prices. This paper studies the statistical properties of the new model and derives its conditional and unconditional moments, as well as the conditional cumulant function of future aggregated values of the state variable, which is critical for pricing financial derivatives. It derives analytical formulas for the term structure of interest rates and for option prices. Different estimation methods are discussed (MLE, QML, GMM, and characteristic-function-based estimation methods). Three empirical applications developed in companion papers are presented. The first, based on Feunou (2007), presents a no-arbitrage VARMA term structure model with macroeconomic variables and shows the empirical importance of including the MA component. The second, based on Feunou and Meddahi (2007a), jointly models the high-frequency realized variance and the daily asset return and provides the term structure of risk measures such as the Value-at-Risk, which highlights the power of generalized affine models.
The third application, based on Feunou, Christoffersen, Jacobs and Meddahi (2007), uses the model developed in Feunou and Meddahi (2007a) to price options theoretically and empirically. (Joint with Bruno Feunou.)

4/17/2009

The extremogram: a correlogram for extreme events

We consider a strictly stationary sequence of random vectors whose finite-dimensional distributions are jointly regularly varying (regvar) with a positive index. This class of processes includes, among others, ARMA processes with regvar noise, GARCH processes with normal or Student-t noise, and stochastic volatility models with regvar multiplicative noise. We define an analog of the autocorrelation function, the extremogram, which depends only on the extreme values in the sequence. We also propose a natural estimator for the extremogram and study its asymptotic properties under strong mixing. We show asymptotic normality, calculate the extremogram for various examples and consider spectral analysis related to the extremogram.
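
A natural sample version of the extremogram replaces probabilities by counts above a high empirical quantile; a minimal sketch (the moving-maximum process in the test is a standard toy example with extremal clustering, not one of the models above):

```python
import numpy as np

def sample_extremogram(x, lags, q=0.95):
    """Empirical extremogram rho(h) = P(X_{t+h} > u | X_t > u), with the
    threshold u taken as a high empirical quantile of the sequence."""
    x = np.asarray(x, dtype=float)
    u = np.quantile(x, q)
    e = (x > u).astype(float)
    n = len(x)
    return {h: float(np.sum(e[: n - h] * e[h:]) / np.sum(e[: n - h])) for h in lags}
```

For serially dependent extremes the estimate stays well above the marginal exceedance probability at short lags, which is exactly the clustering the extremogram is designed to measure.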

4/17/2009

What are the Risks of Treasury Bonds?

A representative consumer uses Bayes' law to learn about parameters and to construct probabilities with which to perform ongoing model averaging. The arrival of signals induces the consumer to alter his posterior distribution over parameters and models. The consumer copes with specification doubts by slanting probabilities pessimistically. One of his models puts long-run risks in consumption growth. The pessimistic probabilities slant toward this model and contribute a counter-cyclical and signal-history-dependent component to the prices of risk.

5/15/2009

Efficient estimation for discretely sampled ergodic SDE models

Simple and easily checked conditions are given that ensure rate optimality and efficiency of estimators for ergodic SDE models in a high frequency asymptotic scenario, where the time between observations goes to zero while the observation horizon goes to infinity. For diffusion models rate optimality is important because parameters in the diffusion coefficient can be estimated at a higher rate than parameters in the drift. The focus is on approximate martingale estimating functions, which provide simple estimators for many SDE models observed at discrete time points. In particular, optimal martingale estimating functions are shown to give rate optimal and efficient estimators. Explicit optimal martingale estimating functions are obtained for models based on Pearson diffusions, where the drift is linear and the squared diffusion coefficient is quadratic, and for transformations of these processes. This class of models is surprisingly versatile. It will be demonstrated that explicit estimating functions can also be found for integrated Pearson diffusions and stochastic Pearson volatility models.
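
For the simplest Pearson-type diffusion, a mean-zero Ornstein-Uhlenbeck process, the linear martingale estimating function yields an explicit drift estimator; a sketch (the exact-transition simulation and parameter values in the test are illustrative assumptions of this example):

```python
import numpy as np

def ou_drift_estimate(x, delta):
    """Drift estimate for dX = -theta * X dt + sigma dW from the linear
    martingale estimating function
        sum_i x_{i-1} * (x_i - exp(-theta * delta) * x_{i-1}) = 0,
    which solves explicitly for exp(-theta * delta)."""
    num = np.sum(x[1:] * x[:-1])
    den = np.sum(x[:-1] ** 2)
    return -np.log(num / den) / delta
```

The estimating function is a martingale because x_i minus its conditional mean exp(-theta*delta)*x_{i-1} has zero conditional expectation, which is the optimality structure the abstract refers to.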

5/29/2009

Cancelled

Kjell G. Nyborg Norwegian School of Business Administration

6/5/2009

Volatility and Covariation of Financial Assets: A High-Frequency Analysis

Using high frequency data on the price dynamics of equities, we measure the impact that market microstructure noise has on estimates of (i) the volatility of returns and (ii) the variance-covariance matrix of n assets. We propose a Kalman-filter-based methodology that allows us to deconstruct price series into the true efficient price and the microstructure noise. This approach allows us to employ volatility estimators that achieve very low Root Mean Squared Errors (RMSEs) compared to other estimators that have been proposed to deal with market microstructure noise at high frequencies. Furthermore, this price series decomposition allows us to estimate the variance-covariance matrix of n assets more efficiently than the methods proposed so far in the literature. We illustrate our results by calculating how microstructure noise affects portfolio decisions and calculations of the equity beta in a CAPM setting.
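
A minimal version of the filtering idea, sketched for a scalar local-level model (random-walk efficient price plus i.i.d. microstructure noise, with both variances assumed known, which is far simpler than the paper's setting):

```python
import numpy as np

def kalman_efficient_price(y, q, r):
    """Scalar Kalman filter for the local-level model:
    observed log-price y_t = m_t + noise (variance r), efficient log-price
    m_t a random walk (increment variance q). Returns the filtered m_t."""
    m, P = y[0], r
    out = np.empty(len(y))
    out[0] = m
    for t in range(1, len(y)):
        P = P + q                    # predict: propagate state uncertainty
        K = P / (P + r)              # Kalman gain
        m = m + K * (y[t] - m)       # update with the new observation
        P = (1 - K) * P
        out[t] = m
    return out
```

The filtered series tracks the efficient price far more closely than the raw observations, which is what makes noise-robust volatility and covariance estimates possible downstream.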

Multivariate extensions of two continuous-time stochastic volatility models driven by Lévy processes, the Ornstein-Uhlenbeck type and the COGARCH model, are introduced. First, Ornstein-Uhlenbeck type processes taking values in the positive semi-definite matrices are defined using matrix subordinators (special matrix-valued Lévy processes) and a special class of linear operators. Naturally, these processes can be used to describe the random evolution of a covariance matrix over time, and we therefore use them to define a multivariate stochastic volatility model for financial data which generalises the popular univariate model introduced by Barndorff-Nielsen and Shephard. For this model we derive results on the second order structure, especially for the returns and squared returns, which leads to a GMM estimation scheme. Finally, we discuss the tail behaviour and extensions allowing one to model long-memory phenomena. Thereafter, an alternative stochastic volatility model driven by a single d-dimensional Lévy process, the multivariate COGARCH process, is introduced and analysed.