2006-2007

We consider a dynamical model for the loss distribution of a pool of names. The model is based on the notion of a generalized Poisson process, allowing for the possibility of more than one jump in small time intervals. We introduce extensions of the basic model based on piecewise-gamma, scenario-based and CIR random intensity in the constituent Poisson processes. The models are tractable: pricing, and in particular simulation, is easy, and consistent calibration to quoted index CDO tranches and tranchelets for several maturities is feasible, as we illustrate with detailed numerical examples.
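
The constituent processes lend themselves to direct Monte Carlo simulation. Below is a minimal sketch, not the authors' implementation: it assumes a generalized Poisson process built from a few independent Poisson processes with jump amplitudes 1, 5 and 20 and a common gamma-distributed intensity multiplier; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not the paper's calibration):
# jump amplitudes and base intensities for each constituent Poisson process.
amplitudes = np.array([1, 5, 20])         # names defaulting per jump
base_rates = np.array([2.0, 0.1, 0.01])   # expected jumps per year
pool_size = 125                           # e.g. an index pool
horizon = 5.0                             # years
n_paths = 100_000

# Gamma random intensity: scale the base rates by a common gamma-distributed
# multiplier, one draw per path.
mult = rng.gamma(shape=2.0, scale=0.5, size=(n_paths, 1))
jumps = rng.poisson(base_rates * mult * horizon)    # jumps per process

# Defaults counted by the generalized Poisson process, capped at the pool size.
defaults = np.minimum(jumps @ amplitudes, pool_size)
print("mean defaults:", defaults.mean())
print("P(defaults >= 25):", (defaults >= 25).mean())
```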

Ultra-high-frequency (UHF) data are naturally modeled as a marked point process (MPP). In this talk, we propose a general filtering model for UHF data. The statistical foundations of the proposed model (likelihoods, posteriors, likelihood ratios and Bayes factors) are studied; they are characterized by stochastic differential equations such as filtering equations. Convergence theorems for consistent, efficient algorithms are established. Two general approaches to constructing such algorithms are discussed: one is Kushner's Markov chain approximation method, and the other is the sequential Monte Carlo (particle filtering) method. Simulation and real-data examples are provided.
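
As a concrete illustration of the second approach, here is a minimal bootstrap particle filter. The state dynamics (an AR(1) log-intensity) and observation model (Poisson counts of events per interval) are our own illustrative assumptions, not the talk's model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal bootstrap particle filter: the hidden log-intensity x_t follows an
# AR(1), and the number of events observed per interval is Poisson(exp(x_t)).
def particle_filter(counts, n_particles=1000, phi=0.95, sigma=0.2):
    x = rng.normal(0.0, 1.0, n_particles)            # initial particle cloud
    estimates = []
    for y in counts:
        x = phi * x + sigma * rng.normal(size=n_particles)   # propagate
        lam = np.exp(x)
        logw = y * np.log(lam) - lam                 # Poisson log-likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        estimates.append(np.sum(w * lam))            # posterior mean intensity
        x = x[rng.choice(n_particles, n_particles, p=w)]     # resample
    return np.array(estimates)

# Synthetic example: counts generated from a slowly varying intensity.
true_lam = np.exp(np.sin(np.linspace(0, 6, 200)))
counts = rng.poisson(true_lam)
print(particle_filter(counts)[:5])
```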

We create an analytical structure that reveals the long-run risk-return relationship for nonlinear continuous-time Markov environments. We do so by studying an eigenvalue problem associated with a positive eigenfunction for a conveniently chosen family of valuation operators. This family forms a semigroup whose members are indexed by the elapsed time between the payoff and valuation dates. We represent the semigroup using a positive process with three components: an exponential term constructed from the eigenvalue, a martingale, and a transient eigenfunction term. The eigenvalue encodes the risk adjustment, the martingale alters the probability measure to capture the long-run approximation, and the eigenfunction gives the long-run dependence on the Markov state. We establish existence and uniqueness of the relevant eigenvalue and eigenfunction. By showing how changes in the stochastic growth components of cash flows induce changes in the corresponding eigenvalues and eigenfunctions, we reveal a long-run risk-return tradeoff.
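
Schematically, and in notation we introduce here only to mirror the abstract, the eigenvalue problem and the three-component representation read:

```latex
% Notation (ours): S_t is the valuation operator for horizon t, (rho, e) the
% principal eigenvalue-eigenfunction pair, M_t the positive process being
% represented, and \widehat{M}_t the martingale used to change measure.
\[
  \mathbb{S}_t\, e(x) = e^{-\rho t}\, e(x),
  \qquad
  M_t = e^{-\rho t}\,\widehat{M}_t\,\frac{e(X_0)}{e(X_t)} .
\]
```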

January 19, 2007

Accounting for Nonstationarity and Heavy Tails in Financial Time Series With Applications to Robust Risk Management

In the ideal Black-Scholes world, financial time series are assumed to be 1) stationary (time homogeneous) and 2) conditionally normally distributed given the past. These two assumptions are widely used in methods such as RiskMetrics, a risk-management method considered an industry standard. However, both assumptions are unrealistic. The primary aim of the paper is to account for nonstationarity and heavy tails in time series by presenting a local exponential smoothing approach, in which the smoothing parameter is adaptively selected at every time point and the heavy-tailedness of the process is taken into account. A complete theory addresses both issues. We demonstrate the implementation of the proposed method for volatility estimation and risk management on simulated and real data. Numerical results show that the proposed method delivers accurate and sensitive estimates.
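
A minimal sketch of the adaptive idea, under assumptions of our own choosing (a small grid of candidate smoothing parameters and selection by rolling one-step prediction error, a simplification of the paper's local selection rule):

```python
import numpy as np

rng = np.random.default_rng(2)

# Locally adaptive exponential smoothing for volatility: run one EWMA per
# candidate smoothing parameter and, at each time point, report the estimate
# from the candidate with the smallest recent one-step prediction error.
def adaptive_ewma_var(returns, lambdas=(0.90, 0.94, 0.99), window=50):
    T, K = len(returns), len(lambdas)
    var = np.full(K, returns[:window].var())    # one EWMA state per candidate
    err = np.zeros((K, T))
    out = np.empty(T)
    for t in range(T):
        r2 = returns[t] ** 2
        for k, lam in enumerate(lambdas):
            err[k, t] = (var[k] - r2) ** 2       # error before updating
            var[k] = lam * var[k] + (1 - lam) * r2
        lo = max(0, t - window)
        best = err[:, lo:t + 1].mean(axis=1).argmin()
        out[t] = var[best]                       # locally selected estimate
    return out

returns = rng.standard_t(df=4, size=1000) * 0.01   # heavy-tailed returns
print(adaptive_ewma_var(returns)[-5:])
```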

February 2, 2007

Hedging and portfolio optimization in a Lévy market model

We consider a market model where the stock price process is a geometric Lévy process. In general, this model is not complete and there are multiple martingale measures. We will present the completion of this market with a series of assets related to the power-jump processes of the underlying Lévy process. We will also discuss the maximization of the expected utility of portfolios based on these new assets. In some particular cases we obtain optimal portfolios based only on stocks and bonds, showing that the new assets are superfluous for certain martingale measures that depend on the utility function used.
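
For reference, the power-jump processes in question are standardly defined as follows; the compensated versions (sometimes called Teugels martingales) are the completing assets. This is a sketch of the standard definitions, with integrability conditions omitted:

```latex
% Power-jump processes of the driving Levy process X:
\[
  X^{(1)}_t = X_t, \qquad
  X^{(i)}_t = \sum_{0 < s \le t} (\Delta X_s)^i, \quad i \ge 2,
\]
% and their compensated versions, which are martingales under suitable
% integrability conditions:
\[
  Y^{(i)}_t = X^{(i)}_t - \mathbb{E}\!\left[ X^{(i)}_t \right].
\]
```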

March 2, 2007

Nonparametric Regression Models for Nonstationary Variables with Applications in Economics and Finance

Zongwu Cai University of North Carolina at Charlotte, Mathematics

Abstract

In this talk, I will discuss how to use a nonparametric regression model to forecast nonstationary economic and financial data, for example, to forecast the inflation rate using the velocity variable in economics, and to test the predictability of stock returns using the log dividend-price ratio, the log earnings-price ratio, the three-month T-bill rate, and/or the long-short yield spread.
A local linear approach is developed to estimate the unknown functionals. The consistency and asymptotic normality of the proposed estimators are established. Our asymptotic results show that the asymptotic bias is the same for all estimators of the coefficient functions, but the convergence rates differ sharply between stationary and nonstationary covariates: the estimators of the coefficient functions for nonstationary covariates converge faster than those for stationary covariates by a factor of $n^{-1/2}$. This finding appears to be new, and it leads to a two-stage approach for improving estimation efficiency.
When the coefficient function is a function of a nonstationary variable, our new findings are that the asymptotic bias term is the same as in the stationary case but the convergence rate differs, and further, the asymptotic distribution is not normal but mixed normal, associated with the local time of a standard Brownian motion. Moreover, the asymptotic behavior at the boundaries is investigated. The proposed methodology is illustrated with an economic time series that exhibits nonlinear and nonstationary behavior.
This is joint work with Qi Li, Department of Economics, Texas A&M University, and Peter M. Robinson, Department of Economics, London School of Economics.
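
To fix ideas, here is a minimal local linear estimator for a univariate regression function; it illustrates the generic estimator only, not the paper's treatment of nonstationary covariates or the two-stage refinement.

```python
import numpy as np

# Local linear estimator for y_t = m(z_t) + error, with a Gaussian kernel of
# bandwidth h: fit a weighted straight line around z0 and read off the
# intercept as m_hat(z0).
def local_linear(z, y, z0, h):
    u = (z - z0) / h
    w = np.exp(-0.5 * u**2)                       # Gaussian kernel weights
    X = np.column_stack([np.ones_like(z), z - z0])
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta[0]                                # intercept = m_hat(z0)

rng = np.random.default_rng(3)
z = np.cumsum(rng.normal(size=500)) / 10          # a near-nonstationary covariate
y = np.sin(z) + 0.1 * rng.normal(size=500)
grid = np.linspace(z.min(), z.max(), 5)
print([round(local_linear(z, y, z0, h=0.5), 3) for z0 in grid])
```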

March 9, 2007

Robust asset allocation using benchmarking

In this paper, we propose and analyze a new approach to finding robust portfolios for asset allocation problems. It differs from the usual worst-case approach in that a (dynamic) portfolio is evaluated not only by its performance when there is an adversarial opponent ("nature"), but also by its performance relative to a fully informed benchmark investor who behaves optimally given complete knowledge of the model (i.e., nature's decision). This relative-performance approach has several important properties: (i) optimal decisions are less pessimistic than portfolios obtained from the usual worst-case approach; (ii) the dynamic problem reduces to a convex static optimization problem under reasonable choices of the benchmark portfolio for important classes of models, including ambiguous jump-diffusions; and (iii) this static problem is dual to a Bayesian version of a single-period asset allocation problem in which the prior on the unknown parameters (for the dual problem) corresponds to the Lagrange multipliers in this duality relationship.
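
Schematically, and in notation we introduce only for illustration, the relative-performance criterion replaces the usual worst-case objective with a regret-type objective:

```latex
% Notation (ours): pi is the investor's dynamic portfolio, theta is nature's
% model choice, V^pi(theta) the resulting performance, and V^*(theta) the
% value achieved by the fully informed benchmark investor.
\[
  \inf_{\pi}\ \sup_{\theta}\ \Bigl( V^{*}(\theta) - V^{\pi}(\theta) \Bigr)
  \quad\text{instead of}\quad
  \sup_{\pi}\ \inf_{\theta}\ V^{\pi}(\theta).
\]
```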

March 30, 2007

Models for Option Prices

In order to remove the market incompleteness inherent in a stock price model with stochastic volatility and/or jumps, we suggest modeling simultaneously the price of the stock and the prices of sufficiently many options, in a way similar to what the Heath-Jarrow-Morton model achieves for interest rates. This gives rise to a number of mathematical difficulties, pertaining to the fact that option prices should be equal to, or closely related to, the stock price near maturity. Some of these difficulties are solved under appropriate assumptions, and we then obtain a model for which completeness holds.
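
For concreteness, the near-maturity constraint mentioned above can be written schematically as follows (the notation is ours):

```latex
% C_t(T, K): time-t price of a call with maturity T and strike K.
\[
  \lim_{t \uparrow T} C_t(T, K) = (S_T - K)^{+},
\]
% so any joint model of the stock and its options must enforce this
% consistency between option prices and the stock price near maturity.
```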

April 6, 2007

The Kelly Criterion and its variants: theory and practice in sports, lottery, futures, and options trading

William Ziemba University of British Columbia, Sauder School of Business

Abstract

In capital accumulation under uncertainty, a decision-maker must determine how much capital to invest in riskless and risky investment opportunities over time. The investment strategy yields a stream of capital, with investment decisions made so that the dynamic distribution of wealth has desirable properties. The distribution of accumulated capital at a fixed point in time and the distribution of the first passage time to a fixed level of accumulated capital are variables controlled by the investment decisions. An investment strategy which has many attractive and some unattractive properties is the growth-optimal strategy, where the expected logarithm of wealth is maximized. This strategy is also referred to as the Kelly strategy. It maximizes the rate of growth of accumulated capital. With the Kelly strategy, the first passage time to arbitrarily large wealth targets is minimized, and the probability of reaching those targets is maximized. However, the strategy is very aggressive since the Arrow-Pratt risk aversion index is essentially zero. Hence, the chances of losing a substantial portion of wealth are very high, particularly if the estimates of the returns distribution are in error. In the time domain, the chances are high that the first passage to subsistence wealth occurs before achieving the established wealth goals.
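
For a simple binary bet, the Kelly fraction has a well-known closed form; the sketch below computes it and compares the growth rates of full and half Kelly (the parameter values are illustrative):

```python
import numpy as np

# Kelly criterion for a binary bet: win b per unit staked with probability p,
# lose the stake with probability 1 - p. The growth-optimal fraction maximizes
# the expected log of wealth; a "fractional Kelly" strategy scales it down to
# temper the drawdowns discussed above.
def kelly_fraction(p, b):
    return (p * b - (1 - p)) / b              # classic Kelly formula

def growth_rate(f, p, b):
    return p * np.log(1 + f * b) + (1 - p) * np.log(1 - f)

p, b = 0.55, 1.0
f_star = kelly_fraction(p, b)
print("full Kelly:", f_star, "growth:", growth_rate(f_star, p, b))
print("half Kelly:", f_star / 2, "growth:", growth_rate(f_star / 2, p, b))
```
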
We survey the theoretical results and practical uses of the capital growth approach. Alternative formulations for capital growth models in discrete and continuous time are presented. Various criteria for performance and requirements for feasibility are related in an expected utility framework. Typically, there is a trade-off between growth and security, with the fraction invested in the optimal growth portfolio determined by the risk-aversion criteria. Models for calculating the optimal fractional Kelly investment under alternative performance criteria are formulated. The effect of estimation and modeling error on strategies and performance is discussed. Various applications of the capital growth approach are made to futures trading, lotto games, horseracing, and the fundamental problem of asset allocation among stocks, bonds and cash. We conclude with a discussion of some of the great investors and speculators, and how they used Kelly and fractional Kelly strategies in their investment programs.

Going from theory to practice can be exciting when real money is on the line. This presentation itemizes and discusses, from a theoretical and practical perspective, a list of lessons learned from 20 years of investing using Bayesian statistical forecasting techniques linked to mean-variance optimization systems for portfolio construction. Several simulations will be provided to illustrate some of the key points related to risk management, time decay of factor data, and other lessons from practical experience. The forecasting models focus on currencies, global government benchmark bonds, major equity indices, and a few commodities. The models use Bayesian inference (1) in the estimation of factor coefficients for the estimation of future excess returns for securities and (2) in the estimation of the forward-looking covariance matrix used in the portfolio optimization process. Zellner's seemingly unrelated regressions method is also used, as is Bayesian shrinkage. The mean-variance methodology uses a slightly modified objective function to go beyond the risk-return trade-off and also penalize transaction costs and size-unbalanced portfolios. The portfolio optimization process is not constrained except for the list of allowable securities in the portfolio, given the objective function. This is a multi-model approach, as experience has rejected the "Holy Grail" approach of building one model for all seasons, so several distinct and stylized models will be discussed.
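
As a toy illustration of the modified objective described above (not the speaker's actual system), one might penalize transaction costs and concentration alongside the usual mean-variance terms; all parameter values below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy modified mean-variance objective: expected return minus a risk penalty,
# a proportional transaction-cost penalty, and a concentration penalty for
# size-unbalanced portfolios.
def objective(w, w_prev, mu, Sigma, risk_aversion=4.0, tc=0.001, conc=0.1):
    return (w @ mu
            - 0.5 * risk_aversion * w @ Sigma @ w
            - tc * np.abs(w - w_prev).sum()
            - conc * (w ** 2).sum())

n = 5
mu = rng.normal(0.05, 0.02, n)                # forecast excess returns
A = rng.normal(size=(n, n))
Sigma = 0.04 * (A @ A.T) / n                  # forward-looking covariance
w_prev = np.full(n, 1.0 / n)                  # current holdings

# Crude random search over long-only portfolios, just to exercise the objective.
best = max((rng.dirichlet(np.ones(n)) for _ in range(5000)),
           key=lambda w: objective(w, w_prev, mu, Sigma))
print(np.round(best, 3))
```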

The Gaussian copula remains a standard model for pricing multi-name credit derivatives and measuring portfolio credit risk. In practice, the model is most widely used in its single-factor form, though this model is too simplistic to match the pattern of implied correlations observed in market prices of CDOs, and too simplistic for credible risk measurement. We discuss the use of multifactor versions of the model. An obstacle to using a multifactor model is the efficient calculation of the loss distribution. We develop two fast and accurate approximations for this problem. The first method is a correlation expansion technique that approximates multifactor models in powers of a parameter that scales correlation; this reduces pricing to more tractable single-factor models. The second method approximates the characteristic function of the loss distribution in a multifactor model and applies numerical transform inversion. We analyze the errors in both methods and illustrate their performance numerically. This talk is based on joint work with Sira Suchintabandid.
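
As background, the single-factor baseline that the talk's approximations extend admits a simple semi-analytic loss distribution for a homogeneous pool; the sketch below mixes conditional binomial distributions over the common factor using Gauss-Hermite quadrature (all parameters are illustrative):

```python
import numpy as np
from scipy.stats import norm, binom

# Single-factor Gaussian copula loss distribution for a homogeneous pool:
# conditional on the common factor Z, defaults are independent, so the loss
# distribution is a mixture of binomials over Z ~ N(0, 1).
def loss_distribution(n_names=125, p=0.02, rho=0.3, n_quad=200):
    z, w = np.polynomial.hermite_e.hermegauss(n_quad)
    w = w / np.sqrt(2.0 * np.pi)              # quadrature weights sum to 1
    # Conditional default probability given the common factor Z = z.
    p_z = norm.cdf((norm.ppf(p) - np.sqrt(rho) * z) / np.sqrt(1.0 - rho))
    k = np.arange(n_names + 1)
    return (binom.pmf(k[:, None], n_names, p_z[None, :]) * w).sum(axis=1)

pmf = loss_distribution()
print("P(no defaults):", pmf[0])
print("E[defaults]:", (np.arange(len(pmf)) * pmf).sum())   # ~ 125 * 0.02 = 2.5
```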

May 18, 2007

How Do Waiting Times or Durations Between Trades of Underlying Securities Affect Option Prices?

Alvaro Cartea and Thilo Meyer-Brandis, Birkbeck College, University of London, Commodities Finance Centre and School of Economics, Mathematics, and Statistics

Abstract

We propose a model for stock price dynamics that explicitly incorporates (random) waiting times, also known as durations, and show how option prices are calculated. We use ultra-high-frequency data for blue-chip companies to justify a particular choice of waiting-time or duration distribution, and then calibrate risk-neutral parameters from options data. We also show that implied volatilities may be explained by the presence of durations between trades.
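
A minimal Monte Carlo sketch of the idea, under assumptions of our own choosing (Weibull durations and i.i.d. Gaussian log-price moves per trade; terminal prices are crudely recentred to the forward rather than properly risk-neutralized):

```python
import math
import numpy as np

rng = np.random.default_rng(5)

# Option pricing with random waiting times between trades: draw Weibull
# durations, count the random number of trades N in [0, T], and use the fact
# that the sum of N i.i.d. Gaussian log-price moves is N(0, N * s^2).
def price_call(S0=100.0, K=100.0, T=1.0, r=0.03, n_paths=10_000,
               trades_per_year=250, weib_shape=0.8, move_std=0.01):
    scale = (1.0 / trades_per_year) / math.gamma(1 + 1 / weib_shape)
    waits = scale * rng.weibull(weib_shape, size=(n_paths, 4 * trades_per_year))
    n_trades = (np.cumsum(waits, axis=1) <= T).sum(axis=1)
    log_ret = move_std * np.sqrt(n_trades) * rng.normal(size=n_paths)
    ST = S0 * np.exp(log_ret)
    ST *= S0 * np.exp(r * T) / ST.mean()   # crude recentring to the forward
    return np.exp(-r * T) * np.maximum(ST - K, 0).mean()

print("call price:", price_call())
```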