Compstat 2002

XV. Conference on Computational Statistics

Berlin, August 24–28

Computational Finance Research Conference

Berlin, August 24–25

Contents

1 Welcome to Berlin

2 Organization

3 Useful Information

4 Social Programme

5 Conference Schedule

6 Computational Finance

7 Computational Finance Conference – List of abstracts

1 Welcome to Berlin

We are delighted to be hosting the COMPSTAT conference here at the Humboldt-Universität zu Berlin. For the second time the conference is in Berlin, but for the first time in the former Eastern part of the city: a dream of many IASC members became reality! More wishes for this 15th COMPSTAT may become real, since this COMPSTAT differs from all the former conferences. There are several main changes:

1. We have only accepted electronic submissions.
2. All papers were graded and handled electronically.
3. Registration and workflow were entirely web-based.
4. We have changed the conference model to a long-weekend type.
5. We provide the proceedings on-line and on CD.
We have undergone this effort to signal to the statistical and computer science community that COMPSTAT and IASC are the number one markets and organizations for conferences and top publication activities in Computational Statistics. You will be given the opportunity to express your opinion in a questionnaire. We had 280 submissions, of which 90 papers were selected for long presentations, 74 as short communications and 63 as posters. Given the high quality of the papers we have also created a CD with the short communications. In cooperation with Physica and Springer Verlag we realized the platform. This technique allows the conference visitor and reader of our COMPSTAT proceedings to read the book in parallel on the website www.mdtech.de. From this website, and also from the CD that is enclosed in all conference bags, the reader may rerun examples interactively and follow the computational arguments given in the papers.

This conference was only possible through the combined efforts of a number of different people. First I would like to personally thank the Scientific Programme Committee (SPC), who dedicated so much of their time to selecting and judging the high-quality submissions. I thank them for all of their patience and initiative in working with the new on-line grading system. Their comments and feedback have been extremely valuable and highly appreciated. I would also like to thank the session chairs and discussants, whose participation is vital to the success of our conference.

Many thanks go to the many people who helped to bring this conference together through numerous hours of planning and organizing. These are my colleagues of the Local Organizing Committee (LOC), who managed the boat trip, the conference dinner with its musical framework, the tour to Potsdam and the internal framework. These are of course also the support staff, the Master's and Ph.D. students who helped in numerous ways to run this COMPSTAT 2002 smoothly. In particular these are Uwe Ziegenhagen and Benjamin Schüler, who were the hands of the LOC and were crucial to the on-line implementations. The willingness of all these people to dedicate their evenings and weekends to this great conference will certainly bear fruit in the future. Finally, I would like to express my appreciation and deep gratitude to our sponsors, without whom this conference would not have been possible.

Berlin, 2nd October 2002
Wolfgang Härdle

2 Organization

Local Organizing Committee

The preparation of the conference was only possible through the combined effort of several institutes: the Institute for Statistics and Econometrics of Humboldt-Universität zu Berlin, the Institute for Statistics of Freie Universität Berlin and the Weierstrass Institute for Applied Analysis and Stochastics.

The Local Organizing Committee (LOC) was responsible for the functional organization of Compstat 2002, including the selection of the most suitable locations, preparation of the internet site and conference software, arrangement of the social programme, production and publication of the proceedings volume, organization of the software and book exhibitions, and coordinating the contact between invited speakers, discussants, contributing authors, participants, sponsors, publishers and exhibitors. The LOC consists of:

Dr. Yasemin Boztug
Prof. Dr. Herbert Büning
Prof. Dr. Wolfgang Härdle
Prof. Dr. Lutz Hildebrandt
Dr. Sigbert Klinke
Prof. Dr. Uwe Küchler
Prof. Dr. Bernd Rönz
Dr. Peter Schirmbacher
Dipl. Ing. Benjamin Schüler
Prof. Dr. Vladimir Spokoiny
Prof. Dr. Hans Gerhard Strohe
Prof. Dr. Jürgen Wolters
Cand. econ. Uwe Ziegenhagen

3 Useful Information

Registration desk

The registration desk in the foyer of the main building (Unter den Linden 6) is open:

Saturday, August 24 from 8:00 until 18:00
Sunday, August 25 from 8:00 until 18:00
Monday, August 26 from 8:00 until 14:00
Tuesday, August 27 from 8:00 until 18:00
Wednesday, August 28 from 8:00 until 13:00

Participation identification

Meeting badges are essential for admission to the meeting venues, the academic sessions and the social events. We would therefore like to ask you to wear your badge at all times.

Accompanying persons

Accompanying persons are welcome to attend the Welcome Reception in the Sauriersaal at the Museum for Natural History on August 24th. For the boat tour with the conference dinner on Monday, accompanying persons are asked to buy a ticket. Dinner tickets can be purchased at the conference desk in the main university building. All tickets for the Potsdam tour are sold out, but due to cancellations the staff at the conference desk might be able to offer you a ticket.

Coffee breaks

Drinks will be served in the conference tent in the garden of the university building. Coffee, tea, soft drinks and juices will be available.

Workshops and Tutorials

XploRe-Course

XploRe combines classical and modern statistical procedures with sophisticated, interactive graphics. XploRe is a basis for statistical analysis, research and teaching. Its purpose lies in the exploration and analysis of data, as well as in the development of new techniques. The statistical methods of XploRe are provided by various quantlets.

In this workshop we give an overview of the functions and features of XploRe and demonstrate detailed examples of data analysis with XploRe.

August 27, room 3038 from 16:30 until 18:00

XploRe-Usermeeting

During the Computational Statistics Conference 2002 held in Berlin, and proudly sponsored by MD*Tech, we are planning to organize the first XploRe User Group (XUG) meeting. Our planned topics are:

1. Future of XploRe (codename: Ixylon)
2. Suggestions to improve the user interface
3. Suggestions for further statistical methods
4. Requests for comments
5. Open discussion

August 27, room 3038 from 14:30 until 16:00

XML-Course

XML, a markup language for presenting information as semi-structured documents, is often called an "enabling technology". To cite the Oxford English Dictionary: Enable, v.: To supply with the requisite means or opportunities to an end or for an object. But markup languages do not, by themselves, give off-the-shelf answers to questions of electronic or web publishing or content dissemination. They provide an opportunity: an opportunity to look at the information from a more abstract level. Not as pages, lines, tables or graphics, but as packets with labels. Once information is "tagged", it has been classified into distinct logical elements, independently of any layout or visualization concepts. This turns the information into a reusable miniature library: searchable, retrievable, usable.

This 4-hour tutorial will give an introduction to the concepts and languages that stand behind the eXtensible Markup Language. It will show how XML works, how server-side usage can be implemented, and what the advantages of this technology are in comparison to other standards. Example implementations will go into the details of XML usage. They will demonstrate how XML technology can be used for applications in the field of computational statistics. Tickets can be purchased at the conference desk.

August 28, room 2097 from 12:00 until 16:30
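The "packets with labels" idea can be illustrated with a few lines of Python; the element and attribute names below are purely illustrative, not taken from the tutorial:

```python
import xml.etree.ElementTree as ET

# A small statistical result marked up as labelled, layout-independent
# elements; tag and attribute names here are invented for illustration.
doc = """
<results>
  <estimate name="beta1" value="0.42" se="0.07"/>
  <estimate name="beta2" value="-1.10" se="0.21"/>
</results>
"""

root = ET.fromstring(doc)
# Once tagged, the data can be searched and retrieved independently of
# any particular layout or rendering.
values = {e.get("name"): float(e.get("value")) for e in root.iter("estimate")}
print(values["beta1"])  # 0.42
```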

Prizes

There will be three prizes:

- the Young Authors Prize for the best FULL or SHORT contribution by an author younger than 35 years
- the Poster Competition prize for the best poster presented during the conference
- the MD*Tech Software Award, sponsored by MD*Tech, for the best software contribution

The winners of the prizes will be announced during the Closing Session on Wednesday.

Internet access

Access to the internet and email will be available via workstations, kindly sponsored by SUN Microsystems. Furthermore, there are several internet cafes near the Humboldt-Universität zu Berlin. The computers are in the Computer Center (Humboldt-Galerie). Please check the door of the Computer Center for opening hours.

Liability

The Humboldt-Universität zu Berlin will not assume any responsibility for accident, loss or damage, or for delays or modifications in the programme, caused by unforeseen circumstances. We will not assume indemnities requested by contractors or participants in the case of cancellation of the Meeting due to unforeseen circumstances.

Bank and exchange

Official opening hours of banks in Germany vary. Some exchange offices located at the larger train stations (Friedrichstrasse and Alexanderplatz) are open on weekends. It is also possible to change foreign currency into Euro in many hotels, but for a higher transaction fee.

Electricity

Electric sockets in Germany carry 220V/50Hz and conform to the standard continental type. Travel adaptors may be useful for electric appliances with other standards and can be bought, e.g., in the Saturn department store at Alexanderplatz.

Shopping hours

At the bigger shopping centers and malls, shops are open until 8 p.m., on Saturdays until 4 p.m. Other shops close at 6:30 p.m. Generally, shops do not close for lunch breaks. The high temple of the consumer religion is the KaDeWe department store on Wittenbergplatz (U-Bahn station), the largest department store in Europe. The newly opened complex on Potsdamer Platz offers more than 100 new shops and a multitude of restaurants, cafes and pubs.

Markets

Each weekend many flea markets invite you to look for great bargains. The most popular one is the "Kunst- und Trödelmarkt" on Straße des 17. Juni, but there are also stalls along the Kupfergraben near the Museum Island and Humboldt-Universität, offering art objects, books and records.

Phone Numbers of Taxi-Companies

Funk-Taxi Berlin: (030) 26 10 26
Spree-Funk: (030) 44 33 22

Pharmacies

Pharmacies can be found all over Berlin. For overnight service there are always one or two pharmacies open in every district. All pharmacies post signs directing you to the nearest open one.

Hotline: 0 11 89

Emergency Numbers

4 Social Programme

Boat tour with conference dinner

The boat trip through Berlin will take place on the afternoon of August 26th, 2002. We will board the MS "Brandenburg" at the "Berliner Dom", which is near the Humboldt-Universität, and start the cruise in the historical centre of Berlin: Museumsinsel, the theatre quarter, station Friedrichstrasse, the former border between the Eastern and the Western part of Berlin. After visiting the Reichstag (parliament), the seat of the German Chancellor and the construction site for the new central station "Lehrter Stadtbahnhof", we turn around at the Humboldt harbour and go back to the historical centre, where we pass the St. Nicolas quarter, the Alexanderplatz and the Berlin Town Hall. After passing the remains of the Berlin Wall at the East Side Gallery and the industrial districts of East Berlin, we cruise to Köpenick, where Berlin's green heart is beating, and see the beautiful castle. We visit the big and the small Müggelsee, take a look at "Neu Venedig" and, after going around the "Müggelberge" mountains, head towards the Marriott Courtyard Hotel in Köpenick for an excellent dinner.

Potsdam Tour

The Brandenburg state capital was the residence of Prussian kings and German Kaisers before World War I, which is why the city is surrounded by picturesque parks and palaces. One of the smaller palaces, which is nevertheless a masterpiece of German Rococo architecture, is Sanssouci. It was built for King Frederick II the Great by his friend and architect Georg Wenzeslaus von Knobelsdorff around 1746. The palace gave its name to the huge Sanssouci park, which includes several other palaces, original buildings and hundreds of sculptures and monuments. Besides the Sanssouci park, you will visit the historical city centre with the so-called Dutch Quarter and the classicist St. Nicolai church, which was built by the most creative Prussian architect, Karl Friedrich Schinkel. Furthermore, you will take a walk in the park "Neuer Garten" with the comparatively new palace Cecilienhof, built for the last imperial crown prince Wilhelm von Preußen and his wife, Duchess Cecilie von Mecklenburg-Schwerin. Here the leaders of the victorious powers held the Potsdam Conference after World War II in order to restructure post-war Central Europe. The bus tour to Potsdam starts and ends in front of the main building of the Humboldt-Universität zu Berlin.

Sightseeing

Brandenburger Tor, Unter den Linden, Friedrichstraße, Alexanderplatz ... you could continue the enumeration of first-class sights in Berlin's old and new centre endlessly; an excursion through this historical as well as lively district belongs to every Berlin tour. In the avenues Unter den Linden and Karl-Liebknecht-Straße every building has its own story to tell: the government and embassy buildings, the Staatsoper and Komische Oper (national and comic opera), the Maxim-Gorki-Theater at the Lustgarten park, the Humboldt University, the Museum Island with exhibitions of worldwide rank, Berlin's cathedral, the former "Palast der Republik", which replaced the demolished city castle, the Zeughaus and the Neue Wache, to name only the most significant attractions.

The Brandenburger Tor, symbol of Berlin and of German separation and reunification, is surely the city's most famous building. In one of its wings you will find an office of Berlin's tourist information. Alexanderplatz square, with its coolish but impressive tower blocks, is surmounted by the 365-meter-high Fernsehturm (television tower), the city's highest building.

The shopping and strolling avenue Friedrichstraße heads south towards the former border crossing Checkpoint Charlie (see Kreuzberg) and north towards Oranienburger Straße. This former Jewish quarter has developed into a vital clubbing site that is overtopped by the New Synagogue's golden dome. Exclusive and eccentric shops, chic cocktail bars and scruffy backyard romance meet in and around the Hackesche Höfe. Around Schiffbauerdamm, near the Deutsches Theater, the Charité clinic and the new government quarter, you will often meet prominent politicians.

If you prefer to avoid the turbulence, you might wish to visit the "Dorotheenstädtischer Friedhof" cemetery, where outstanding personalities like Hegel, Brecht and the architect Schinkel are buried.

Mitte has a lot to offer south of Unter den Linden / Karl-Liebknecht-Straße, too: next to the majestic Rotes Rathaus townhall, the Nikolaiviertel quarter has preserved the charm of a small town of the 18th century. Not far from the conspicuous dome of St. Hedwig's cathedral you will encounter one of Europe's most beautiful squares: the Schinkel-designed Gendarmenmarkt with the Schauspielhaus theatre and concert hall and the German and French domes. Between the districts of Mitte and Tiergarten spreads Potsdamer Platz, one of Berlin's centres.

Public transport and tickets

Berlin has three different fare zones (A, B, C):

Zone A: the area within the Berlin urban rail (S-Bahn) ring line.
Zone B: the area outside the ring up to the city border.
Zone C: the area surrounding Berlin (3 honeycombs). This sub-area is divided into 8 parts, each belonging to an administrative district. The Potsdam-Mittelmark area is included in the city district of Potsdam.

With the AB ticket, you can always be sure of having the right fare when travelling in Berlin. Single-fare tickets are valid for 2 hours, whereas short-trip tickets can be used for at most three stops. For further information about tickets, timetables, stops, etc., see: http://www.bvg.de/e_index.html.

Aug 26 2002

09:00-09:45 Keynote Lecture (Audimax) Chair: Michael G. Schimek

09:00 Supervised Learning from Micro-Array Data

Trevor Hastie

09:45-10:45 Applications 1 (2091) Chair: Anna Bartkowiak

09:45 Development of a Framework for Analyzing Process Monitoring Data with Applications to Semiconductor Manufacturing Process

Yeo-Hun Yoon

10:15 A 2 step experimental method for the analysis of multiple tables of categorical variables

Juan Ignacio Modroño

10:30 A computer and statistics analysis of the 2000 United States pres- idential election

William Conley

09:45-10:30 Invited Session 5 (3094) Chair: Wenceslao Manteiga

09:45 Trans-Dimensional Markov Chains and their Applications in Statis- tics

Steve Brooks

09:45-10:45 Spatial 1 (3038) Chair: Antony Unwin

09:45 Testing for simplification in spatial models

Richard Martin

10:15 A study on the spatial distribution of convenience stores in the Tokyo metropolitan area

10:00-12:15 Financial Econometrics (2091) Chair: Jörg Feuerhake

10:35 An algorithm to detect multiple change-points in the residual vari- ance of financial time series

Andreas Stadie

10:55 Exchange rate returns and time varying correlations

Edy Zahnd

11:15 Likelihood inferences in GARCH-BL models

Giuseppe Storti,Cosimo Vitale

11:35 Online forecasting of house prices with the Kalman filter

Axel Werwatz

14:00-15:15 Risk II (2091) Chair: Karel Komorad

14:00 Portfolio optimization under credit risk

Rudi Zagst

14:35 Fitting a normal-Pareto distribution to the residuals of financial data

Faans Steyn

14:55 Credit contagion and aggregate losses

Stefan Weber

15:30-17:05 Derivatives II (2091) Chair: Stefan Jaschke

15:30 A PDE Based Implementation of the Hull&White Model for Cash- flow Derivatives

Sascha Meyer

16:05 Testing the Diffusion Coefficient

Torsten Kleinow

16:25 Using spreadsheets for financial statistics

Gökhan Aydinli

16:45 Implied trinomial trees and their implementations with XploRe

Karel Komorad

17:05 Nonparametric Moment Estimation for Short Term Interest Rate Options

Jun Zheng


7 Computational Finance Conference – List of abstracts

On Explicit Option Pricing in OU-type and related Stochastic Volatility Models

Robert G. Tompkins rtompkins@ins.at, Vienna University of Technology
Friedrich Hubalek, Vienna University of Technology

Given that the traded prices for options are inconsistent with the assumptions of Geometric Brownian Motion and constant volatility, a number of authors have considered alternative modelling approaches to Black and Scholes (1973). An extremely popular approach has been to include stochastic volatility when pricing options. Heston (1993), among others, has derived a closed form solution for such a model. Given that the price process is not necessarily continuous, one approach has been to include jumps in the underlying asset price process. The first to consider this was Merton (1976) in a jump diffusion context. Recently, Bates (1996, 2000) and Barndorff-Nielsen and Shephard (2001) extended this approach to consider models incorporating non-normal price processes (mimicking jumps) with subordinated stochastic volatility processes.

In this paper, we consider closed form solutions for a model drawn from the general framework suggested by Barndorff-Nielsen and Shephard (2001). This model assumes that an Ornstein-Uhlenbeck process describes the volatility, with disturbances following a generalised hyperbolic distribution. As with the Madan, Carr and Chang (1998) Variance Gamma process, the variance of this model follows a Gamma distribution. We also consider alternative generalised hyperbolic distributions, including the inverse Gaussian distribution.

We provide a derivation of an explicit option pricing formula for European options which preserves the (minimal) martingale measures and relies upon Laplace inversion methods. Simulations show that this model can address the return skewness and strike-price biases in the Black and Scholes (1973) model. Empirical tests of the model for the S&P 500 and British

A Functional Principal Components Approach to Modelling Implied Volatilities

In this paper we propose a functional principal components approach to modelling implied volatilities. In a first step we regard the realizations of implied volatilities of certain maturities as random functions, and develop a functional principal components approach. We provide a stability analysis of the principal components and hypothesis tests for the principal functions. Secondly, we show how this can be used to model the several maturity strings jointly. Our results indicate that this may be a superior modelling approach to the linear specifications proposed in the earlier literature.
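The discretized version of this idea can be sketched in a few lines; this is not the authors' procedure, and all numbers below are invented for illustration. A panel of simulated volatility smiles driven by a random level and a random skew is decomposed via the eigenvectors of its sample covariance:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy panel: 250 days of an implied-volatility smile observed on a
# moneyness grid (all values are illustrative assumptions)
grid = np.linspace(0.8, 1.2, 21)
level = 0.20 + 0.02 * rng.standard_normal((250, 1))          # random level
skew = 0.05 * rng.standard_normal((250, 1)) * (grid - 1.0)   # random slope
smiles = level + skew + 0.3 * (grid - 1.0) ** 2              # fixed curvature

# Discretized functional PCA: eigendecomposition of the sample covariance
centered = smiles - smiles.mean(axis=0)
cov = centered.T @ centered / len(smiles)
eigval, eigvec = np.linalg.eigh(cov)
eigval, eigvec = eigval[::-1], eigvec[:, ::-1]   # sort descending

# Two principal functions (level and skew) explain nearly all variation
explained = eigval[:2].sum() / eigval.sum()
```

Since the simulated panel has exactly two random sources, the first two empirical principal functions recover essentially all of the variance; with real smiles one would instead inspect the eigenvalue decay.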

Efficient Price Caching for Derivatives Pricing

Market makers for derivatives products have to quote the prices of thousands of instruments continuously during the trading day while market parameters (underlying prices, implied volatilities) are fluctuating. Being able to quickly recalculate the product prices is essential for market makers to prevent trading losses, because their contributed prices are watched all day by competitors and other traders searching for arbitrage opportunities. Many papers in the standard computational finance literature address the question of how a single option can be priced a single time, quickly and precisely. Our paper discusses several methods to price many options very often, with slightly different market parameters, throughout the trading day. The basic idea is to calculate a large number of prices for each instrument on a market parameter grid during the night and to store the results in appropriate matrices (price caches). During the trading day, the precalculated prices are only read from the price caches. The different methods vary in the choice of coordinates and pricing models, the design of the price caches and the interpolation schemes that are applied to the price data. Numerical examples are given.
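A minimal sketch of the caching idea, assuming purely for illustration a Black-Scholes pricer and linear interpolation in the underlying price (the paper itself compares more refined coordinate and interpolation designs):

```python
import numpy as np
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    # Black-Scholes European call price (illustrative pricing model)
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Overnight: fill the price cache on a grid of underlying prices
K, T, r, sigma = 100.0, 0.5, 0.02, 0.25
grid = np.linspace(80.0, 120.0, 401)
cache = np.array([bs_call(S, K, T, r, sigma) for S in grid])

# During trading: a quote is a cheap table lookup plus interpolation,
# not a full model evaluation
def cached_price(S):
    return float(np.interp(S, grid, cache))
```

On this grid with 0.1 spacing, the linear-interpolation error stays far below typical quoting precision, while intraday repricing reduces to array reads.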


Multiscale Estimation of Processes related to the Fractional Black-Scholes Equation

Rosaura Fernandez-Pascual rpascual@ujaen.es, University of Jaen, Spain
M.D. Ruiz-Medina, University of Granada, Spain
J.M. Angulo, University of Granada, Spain

The Black-Scholes theory plays an important role in the derivation of volatility estimates (see Barles and Soner, 1998; Dempster and Richards, 2000; Pechtl, 1999; Smith, 2000, among others). Different generalizations of the Black-Scholes equation have been studied, involving, for example, stochastic volatility (see Bates, 1996; Ghysels, Harvey and Renault, 1996; Hobson and Rogers, 1998; Lewis, 2000), fractional derivatives (see Wang, Qiu and Ren, 2001; Wyss, 2000), fractal and multifractal processes (see Heyde, 1999; Mandelbrot, 1997), etc. In particular, in the case where fractional derivatives are considered, the Black-Scholes model is defined in terms of fractional integrated white noise when the derivatives are of order larger than or equal to 0.5, and in terms of fractional Brownian motion when the derivatives are of order less than 0.5. In this paper we address the filtering and prediction problems associated with these models using the theory of fractional generalized random fields (see Angulo, Ruiz-Medina and Anh,

Assessing the Quality of VaR Forecasts

The simplest models for Value-at-Risk prediction are based on the reduction of the dimensionality of the risk-factor space. We briefly discuss various types of models based on this approach. The main focus of the paper is the comparison of the quality of VaR forecasts. For this purpose, we adapt methods for the verification of probability forecasts which are often used in weather forecasting (Murphy and Winkler, 1992). These methods of comparing VaR forecasts are based on the probabilities of fixed intervals rather than on a single quantile of the underlying probability distribution, and thus allow one to assess the differences between the considered VaR models from yet another point of view.

Comparison of Fourier Inversion Methods for Delta-Gamma-Approximations to VaR

Different Fourier inversion techniques are compared in terms of their worst-case behavior in the context of computing Value at Risk in Delta-Gamma-Approximations.
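For readers unfamiliar with the setting: the Delta-Gamma approximation replaces the portfolio P&L by a quadratic form in the risk-factor changes, and the VaR quantile of this quadratic form is what the Fourier inversion techniques evaluate from its characteristic function. A Monte Carlo sketch of the same quantity, with all portfolio numbers invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy two-factor portfolio sensitivities (purely illustrative numbers)
delta = np.array([120.0, -80.0])               # first-order sensitivities
gamma = np.array([[2.0, 0.3], [0.3, 1.5]])     # second-order sensitivities
cov = np.array([[4e-4, 1e-4], [1e-4, 9e-4]])   # daily factor covariance

# Delta-Gamma approximation: dV ≈ delta'·dS + 0.5·dS'·Gamma·dS
dS = rng.multivariate_normal(np.zeros(2), cov, size=200_000)
pnl = dS @ delta + 0.5 * np.einsum("ni,ij,nj->n", dS, gamma, dS)

# 99% one-day VaR = negative of the 1% quantile of the approximated P&L;
# Fourier inversion computes this quantile analytically instead of by
# simulation
var_99 = -np.quantile(pnl, 0.01)
```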

Two-step Generalized Maximum Entropy Estimation of Dynamic Programming Models with Sample Selection Bias

Rosa Bernardini Papalia bernard@stat.unipg.it, Perugia University

Over the last decade a substantial literature has grown on the structural estimation of dynamic discrete choice models based on dynamic programming (DP) (Rust 1994; Hotz, Miller et al. 1992, 1993). They have found application in many fields, such as industrial organization, labor economics and public finance, among others. However, the computational burden of estimation has been an important constraint, with relevant implications for empirical work. In this paper we are concerned with estimation problems in the presence of corner solutions. In this case, the common approach of estimating the stochastic Euler equations (Hansen and Singleton, 1982) has to be extended to allow for discontinuity and non-differentiability in the utility function. However, the marginal conditions of optimality hold only at interior solutions, and a selection bias is introduced when the sub-sample of interior solutions is used. Moreover, not all the structural parameters can be identified by exploiting only the structure in the marginal conditions of optimality. This paper presents an estimation approach based on the Maximum Entropy Principle (MEP). The GME formalism represents a new basis to recover the unknown parameters and the unobserved variables. Following Hotz and Miller (1993), a representation of the conditional choice value functions in terms of choice probabilities is considered. We show how GME can be used to deal with the problem of defining the Euler equation for the corner-solution case, and we suggest a specification of the DP model based on the idea of introducing an additional constraint on the conditional choice value functions, which provides a correction for the sample selection bias. The presented two-step GME estimator is more flexible than standard ML and ME estimators. The estimation procedure has the advantage of being consistent with the underlying data generation process and eventually with the restrictions implied by economic theory or by some non-sample information. The GME method of estimation proceeds by specifying certain moments of unknown probability distributions and does not require the specification of the density function of the population. Since we can have a large number of density functions with the same moments, the MEP is used to obtain a unique distribution with these moments. In other words, using the MEP it is possible to indirectly determine the density function of the population. Unlike ML estimators, the GME approach does not require explicit distributional assumptions, performs well with small samples, and can incorporate inequality restrictions.

Spreadsheet-based option pricing under stochastic volatility

Christian Hafner Christian.Hafner@electrabel.com, Electrabel

Option pricing under stochastic volatility does not, in general, yield closed form solutions and often relies on time-consuming methods such as Monte Carlo simulation. In practice, this is the most important argument for still using Black-Scholes type formulae. In this talk it is argued that, using simple approximations of the Hull and White type, one can generate closed form expressions of sufficient accuracy for a variety of underlying processes such as GBM and OU. These approximations can conveniently be used in spreadsheet applications, as will be demonstrated in the talk.
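The Hull and White (1987) zero-correlation result behind such approximations mixes Black-Scholes prices over the distribution of the average variance over the option's life. A small sketch of that mixing effect; the lognormal variance law and all parameter values are assumptions for illustration, not taken from the talk:

```python
import numpy as np
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    # Plain Black-Scholes European call
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d1 - sigma * sqrt(T))

S, K, T, r = 100.0, 100.0, 1.0, 0.02
mean_var, vol_of_var = 0.04, 0.5
rng = np.random.default_rng(0)

# Hull-White (zero correlation): the option price equals the Black-Scholes
# price averaged over the distribution of the mean variance -- here a toy
# lognormal law with E[V] = mean_var
V = mean_var * np.exp(vol_of_var * rng.standard_normal(100_000)
                      - 0.5 * vol_of_var ** 2)
sv_price = float(np.mean([bs_call(S, K, T, r, sqrt(v)) for v in V]))
bs_price = bs_call(S, K, T, r, sqrt(mean_var))
# At the money, mixing over variance lowers the price relative to plain
# Black-Scholes evaluated at the square root of the mean variance (Jensen)
```

Closed-form approximations of the Hull-White type replace this averaging by a short series expansion in the moments of V, which is what makes them spreadsheet-friendly.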

Risk Analysis for Large Diversified Portfolios

The measurement of risk for the portfolios of large financial institutions plays a crucial role in the future development of financial technologies. Based on a new characterisation of asymptotic portfolios, highly efficient risk measurement methodologies can be derived. These are suitable for portfolios with hundreds or thousands of instruments. Value at Risk analysis can be performed without time-consuming simulations.

Evaluation of Out-of-Sample Probability Density Forecasts with Applications to Stock Prices

Yongmiao Hong, Cornell University

A recent important development in time series econometrics and financial econometrics is to forecast probability distributions and to track certain aspects of distribution forecasts, such as value at risk, to quantify portfolio risk. In this paper, a rigorously founded, generally applicable, omnibus procedure for evaluating out-of-sample probability density forecasts is proposed, using a newly developed generalized spectral approach. This procedure is supplemented with a class of separate inference procedures that focus on various specific aspects of density forecasts to reveal information on possible sources of suboptimal density forecasts. The finite sample performance of the proposed procedures is examined via simulation. Applications to S&P 500 index and IBM daily returns are considered, where a variety of popular density forecast models, including RiskMetrics of J.P. Morgan and various GARCH models, are evaluated using the proposed procedures. Evidence of various deficiencies in these forecast models is documented, suggesting room and directions for further improvement. In particular, it is found that poor modelling of the conditional mean is as important as poor modelling of the conditional variance and improper specification of the innovation distribution. This sheds some light on existing density forecast practice, which focuses on volatility dynamics and innovation distributions and pays little attention to conditional mean dynamics.

An Algorithm to Detect Multiple Change-points in the Residual Variance of Financial Time Series

Andreas Stadie astadie@uni-goettingen.de, University of Göttingen

The cumulative sums of squares (CUSUMSQ) provide a means of detecting change-points in the volatility (variance) of time series. An algorithm for this purpose was proposed by Inclan and Tiao (1994), and an alternative by Diebold, Hahn and Tay (1998). I describe a third algorithm that combines some of the features of the previous two. The results of a simulation experiment comparing the performance of the algorithms are presented. As an example, the algorithm is applied in the following context: a popular method of checking the fit of financial time series models is to examine the probability integral transform of the observations with respect to their forecast distribution. If the model is correct then the transformed values are iid U(0,1) distributed. Applying a second transformation, the inverse standard normal distribution function, produces "pseudo-residuals" that are iid N(0,1) distributed. Applying the above algorithm to the CUSUMSQ of the pseudo-residuals provides a method of detecting periods in which the model fails to predict the observed volatility in the series. This is illustrated for a moving-density model fitted to the daily returns of Deutsche Bank AG.
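The Inclan-Tiao building block underlying such algorithms can be sketched in a few lines; the simulated series and change-point location below are assumptions, and the author's combined three-way algorithm is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated returns with a variance change-point at t = 500 (assumption)
x = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(0.0, 2.0, 500)])

# Under a constant N(0,1) forecast distribution, the probability integral
# transform followed by the inverse standard normal maps each observation
# back to itself, so the pseudo-residuals are simply z = x here.
z = x

# Inclan-Tiao centered cumulative sum of squares
c = np.cumsum(z ** 2)
T = len(z)
D = c / c[-1] - np.arange(1, T + 1) / T
k_hat = int(np.argmax(np.abs(D))) + 1      # most likely change-point
stat = np.sqrt(T / 2.0) * np.abs(D).max()  # Inclan-Tiao test statistic
```

A statistic far above the asymptotic 5% critical value (about 1.36) flags a change in the residual variance, and `k_hat` locates it near the true break at t = 500.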

Exchange Rate Returns and Time-varying Correlations

Edy Zahnd edy.zahnd@vtx.ch, Rue des Parcs 67, 2000 Neuchâtel, Switzerland

We will estimate GARCH models without the restriction of constant conditional correlation.

Likelihood Inferences in GARCH-BL Models

Dr Giuseppe Storti storti@unisa.it, University of Salerno

In this paper we analyse the statistical properties of the GARCH Bilinear (GARCH-BL) model recently proposed by Storti and Vitale (2001). With respect to the standard GARCH model (Bollerslev, 1986), the GARCH-BL model offers two main advantages. First, it can capture leverage effects in the volatility process by including interaction terms between past returns and volatilities in the equation for the conditional variance. Second, it permits a more flexible parametric structure. Under the assumption of conditional normality, the log-likelihood function of the model can be maximized with respect to the unknown parameters by means of an EM-type algorithm. The procedure is illustrated in the paper. The availability of likelihood inference for this model makes it possible to define a formal testing procedure for verifying the hypothesis of asymmetry in the volatility process (that is, the presence of leverage effects). More precisely, the null hypothesis of no asymmetric effects is tested against the alternative of a GARCH-BL model of a predefined order. The modelling procedure is illustrated by means of an application to financial time series data. From a statistical point of view, the so-called leverage effect, often observed in financial time series, stems from a negative correlation between past shocks and volatility. Hence, we expect the effect of a negative return on current volatility to be larger than that associated with a positive return of the same magnitude.

In the last ten years, starting from the seminal paper of Nelson (1991), a substantial amount of research effort has been devoted to the analysis of conditionally heteroskedastic models allowing for similar asymmetric effects. Among others, we recall the GJR model of Glosten, Jagannathan and Runkle (1993), the Generalized Quadratic ARCH (GQARCH) model of Sentana (1995) and the Threshold GARCH (TGARCH) model of Rabemananjara and Zakoian (1993).
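For illustration, the kind of asymmetric conditional variance recursion discussed above can be sketched in a GJR-style form; the function name and parameter values are illustrative and not taken from any of the cited papers:

```python
import numpy as np

def gjr_garch_variance(returns, omega, alpha, gamma, beta, h0):
    """Conditional variance path of a GJR-GARCH(1,1):
    h_t = omega + (alpha + gamma * I[r_{t-1} < 0]) * r_{t-1}^2 + beta * h_{t-1}.
    A positive gamma makes negative returns raise volatility more than
    positive returns of the same magnitude (the leverage effect)."""
    h = np.empty(len(returns) + 1)
    h[0] = h0
    for t, r in enumerate(returns):
        h[t + 1] = omega + (alpha + gamma * (r < 0)) * r ** 2 + beta * h[t]
    return h
```

With gamma > 0, feeding in a return of -2% produces a strictly larger next-period variance than a return of +2%, which is exactly the asymmetry the tests described above aim to detect.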

Online Forecasting of House Prices with the Kalman Filter

We develop a statistical model of house price determination that is well grounded in economic theory.

The model is cast into state space form, which opens up the application of the Kalman filter to estimate the model and use it for online forecasting. These forecasts combine the model parameters with house characteristics provided by users through an HTML interface.

Point forecasts and prediction intervals are returned online. These provide useful information for potential buyers and sellers of a house in an otherwise opaque market.
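A minimal sketch of the underlying idea, assuming a simple local-level state space model rather than the authors' full house price specification (all names and noise variances here are illustrative):

```python
import numpy as np

def kalman_forecast(y, q, r, a0=0.0, p0=1e6):
    """Local-level model: state a_t is a random walk (variance q),
    observation y_t = a_t + noise (variance r). Runs the Kalman filter
    and returns the one-step-ahead point forecast plus a 95% prediction
    interval for the next observation."""
    a, p = a0, p0  # diffuse prior on the initial level
    for obs in y:
        p = p + q                    # predict
        k = p / (p + r)              # Kalman gain
        a = a + k * (obs - a)        # update state estimate
        p = (1 - k) * p              # update state variance
    f_var = p + q + r                # forecast variance of next observation
    half = 1.96 * np.sqrt(f_var)
    return a, (a - half, a + half)
```

In the online setting described above, the filtered state would be combined with user-supplied house characteristics before the point forecast and interval are returned.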

Portfolio Optimization under Credit Risk

Based on the models of Hull and White (1990) for the pricing of non-defaultable bonds and of Schmid and Zagst (2000) for the pricing of defaultable bonds, we develop a framework for the optimal allocation of assets from a universe of sovereign bonds with different times to maturity and issuer quality. Our methodology can also be applied to other asset classes such as corporate bonds. We estimate the model parameters by applying Kalman filtering methods as described in Schmid and Kalemanova (2001). Based on these estimates, we apply Monte Carlo simulation techniques to simulate the prices of a given set of bonds over a future time horizon. For each future time step and for each given portfolio composition, these scenarios yield distributions of future cash flows and portfolio values. We show how the portfolio composition can be optimized by maximizing the expected final value or return of the portfolio under given constraints, such as a minimum cash flow per period to cover the liabilities of a company and a maximum tolerated risk. To illustrate our methodology we present a case study for a portfolio consisting of German, Italian, and Greek sovereign bonds.
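A toy version of the simulate-then-optimize step might look as follows, assuming just two assets, a grid search over portfolio weights and a 5%-quantile floor as the risk constraint (all of this is illustrative, not the authors' model):

```python
import numpy as np

def optimise_weights(scenarios, floor, grid=21):
    """scenarios: (n_sims, 2) array of simulated gross returns per asset.
    Picks the weight vector on a simplex grid that maximises the expected
    terminal value subject to a 5%-quantile (VaR-style) floor."""
    best, best_w = -np.inf, None
    for w1 in np.linspace(0.0, 1.0, grid):
        w = np.array([w1, 1.0 - w1])
        vals = scenarios @ w                      # portfolio value per scenario
        if np.quantile(vals, 0.05) >= floor:      # risk constraint
            if vals.mean() > best:
                best, best_w = vals.mean(), w
    return best_w, best
```

In the paper's setting the scenarios would come from the Kalman-filter-calibrated bond price simulation, and the constraint set would also include the per-period cash flow requirements.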

Fitting a Normal-Pareto Distribution to the Residuals of Financial Data

The Normal-Pareto (NP) distribution assumes that, for negative log returns of financial series, the innovations after fitting AR-GARCH models have a normal distribution below a threshold value and a generalised Pareto distribution (GPD) above it. The threshold value can be estimated by maximum likelihood estimation (MLE). Monte Carlo simulations with normal as well as heavy-tailed error distributions are used to compare the fit of this distribution with other methods for calculating Value-at-Risk and Expected Shortfall. It is also applied to real time series such as stock exchange data.

Credit Contagion and Aggregate Losses

Credit contagion refers to the propagation of economic distress from one firm or sovereign government to another. In this paper we model credit contagion phenomena and study the fluctuation of aggregate credit losses on large portfolios of financial positions. The joint dynamics of firms' credit ratings is modeled by a voter process, which is well known in the theory of interacting particle systems. We clarify the structure of the equilibrium joint rating distribution using ergodic decomposition. We analyze the quantiles of the portfolio loss distribution and in particular their relation to the degree of model risk. After a proper re-scaling taking care of the fat tails of the contagion dynamics, we provide a normal approximation of both the equilibrium rating distribution and the portfolio loss distribution. Using large-deviations techniques, we consider the issue of gains from portfolio diversification.
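A minimal sketch of a voter process with two rating classes on a complete graph, just to convey the contagion mechanism (illustrative only; the paper's model is far richer):

```python
import numpy as np

def voter_step(states, rng):
    """One step of a voter process on a complete graph: a uniformly chosen
    firm adopts the rating of another uniformly chosen firm (contagion)."""
    i, j = rng.choice(len(states), size=2, replace=False)
    states[i] = states[j]
    return states

def simulate(n=20, steps=2000, seed=0):
    """Run the voter dynamics from a random initial rating configuration."""
    rng = np.random.default_rng(seed)
    states = rng.integers(0, 2, size=n)  # two rating classes, 0 and 1
    for _ in range(steps):
        voter_step(states, rng)
    return states
```

On a finite complete graph the process eventually reaches consensus; it is the behaviour on large portfolios, before and after re-scaling, that the paper analyses.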

A PDE Based Implementation of the Hull&White Model for Cashflow Derivatives

A new implementation of the one-dimensional Hull&White model is developed. It is motivated by a geometrical approach that constructs an invariant manifold for the future dynamics of forward zero coupon bond prices under the forward martingale measure. This reduces the option-pricing problem for cashflow derivatives to the solution of a series of heat equations. The heat equation is solved by a standard Crank-Nicolson scheme. The new method avoids the calibration used in traditional solution approaches such as trinomial trees. A generalisation of the reduction approach is also presented for some generalised Hull&White models.
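A bare-bones Crank-Nicolson scheme for the heat equation u_t = u_xx with zero Dirichlet boundaries, as a sketch of the numerical core only (grid sizes and names are illustrative, not the paper's implementation):

```python
import numpy as np

def crank_nicolson_heat(u0, dx, dt, steps):
    """Crank-Nicolson for u_t = u_xx on [0, 1] with u = 0 at both boundaries.
    u0: solution at the interior grid points; returns the interior values
    after `steps` time steps. Solves A u_new = B u_old each step, where A and
    B are the tridiagonal Crank-Nicolson matrices."""
    n = len(u0)
    r = dt / (2 * dx ** 2)
    off = np.ones(n - 1)
    A = np.diag((1 + 2 * r) * np.ones(n)) - r * (np.diag(off, 1) + np.diag(off, -1))
    B = np.diag((1 - 2 * r) * np.ones(n)) + r * (np.diag(off, 1) + np.diag(off, -1))
    u = np.asarray(u0, dtype=float)
    for _ in range(steps):
        u = np.linalg.solve(A, B @ u)
    return u
```

The scheme is unconditionally stable and second-order accurate in both space and time; a production implementation would use a tridiagonal (Thomas) solver instead of a dense solve.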

Testing the Diffusion Coefficient

In mathematical finance diffusion models are widely used, and a variety of different parametric models for the drift and diffusion coefficient coexist in the literature. Since derivative prices depend on the particular parametric model of the diffusion coefficient function, a misspecification of this function leads to misspecified option prices. We develop two tests for a parametric form of the diffusion coefficient. The finite-sample properties of the tests are investigated in a simulation study, and the tests are applied to the 7-day Eurodollar rate, the DAX and the NEMAX stock market indices. For all three processes we find in the empirical analysis that our tests reject all tested models.

Using Spreadsheets for Financial Statistics

MD*ReX is a client/server based spreadsheet solution designed for statistical analysis. Using Excel's well-known interface, this application provides intuitive access to XploRe's statistical methodology. Circumventing the numerical imprecision of Excel while exploiting its graphical capabilities is the development rationale for MD*ReX. The design addresses novice users of statistical applications as well as experienced researchers interested in developing their own methods. We will demonstrate the usage of MD*ReX in the context of financial statistics with applications to copula-based Value-at-Risk, methods of quantifying implied volatilities, and non-linear time series analysis.

Implied Trinomial Trees and their Implementation with XploRe

Option pricing has become one of the most commonly used tools for portfolio management and hedging. The original approach of Black and Scholes, which assumes a constant volatility, has come to be seen as implausible. Especially since the 1987 crash, the market's implied Black-Scholes volatilities have shown a negative relationship between implied volatilities and strike prices. This skew structure of the volatilities is commonly called "the volatility smile". Implied multinomial trees are one of many methods that enable the pricing of derivative securities consistent with the existing market. There is a variety of ways to construct implied trees. While binomial trees have just the minimal number of parameters to fit the smile, multinomial trees have additional degrees of freedom. However, any kind of implied tree converges to the same theoretical results. Derman, Kani and Chriss (1996), in the Quantitative Research group of Goldman Sachs, devised an algorithm for constructing Implied Trinomial Trees from current market option data. They used the additional parameters to choose the state space of all node prices conveniently; only the transition probabilities are constrained by market options. This flexibility can be advantageous in matching trees to smiles. This algorithmic approach is presented in this paper. No new theoretical aspects are added, but we show how to implement such a model in XploRe. This software seems very suitable for solving this type of problem. The whole procedure is then illustrated on real market data.

Nonparametric Moment Estimation for Short Term Interest Rate Options

The main feature of options is their forward-looking character. The development of the underlying asset process is uncertain, so the holder of an option has to form expectations of its future value. Researchers try to extract this information from market option prices. There exist various approaches to recovering the state price density at one particular time to maturity: for example, the famous Black-Scholes model (although researchers have found that this model cannot describe the underlying process precisely), the newer parametric method of a mixture of two log-normal densities, nonparametric methods such as those of Aït-Sahalia (1996) and Rookley (1997), and implied binomial trees (see e.g. Rubinstein (1994), Derman and Kani (1994), Barle and Cakici (1998)). In this paper, we estimate the variance, skewness and kurtosis of the state price density function nonparametrically for short term interest rate options, whose movements over time are found to be associated with interventions of the European Central Bank, such as adjustments of interest rates and interventions in the exchange markets.
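Once a state price density has been recovered on a grid, its moments follow by numerical integration; a generic sketch (the estimator below is illustrative, not the authors' nonparametric procedure):

```python
import numpy as np

def _trapz(y, x):
    # trapezoidal rule (avoids np.trapz, which was removed in NumPy 2.0)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def density_moments(x, f):
    """Variance, skewness and kurtosis of a density given on a grid x.
    The density is renormalised first, in case the grid truncates the tails."""
    f = np.asarray(f, dtype=float)
    f = f / _trapz(f, x)
    mean = _trapz(x * f, x)
    var = _trapz((x - mean) ** 2 * f, x)
    skew = _trapz((x - mean) ** 3 * f, x) / var ** 1.5
    kurt = _trapz((x - mean) ** 4 * f, x) / var ** 2
    return var, skew, kurt
```

Tracking these three numbers over time is one way to relate shifts in the option-implied density to events such as central bank interventions.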