
Since Michael Aichinger mentioned the famous Hofstadter butterfly in his recent post, I cannot resist jumping on the bandwagon - what is this famous butterfly?

In the 1970s, Douglas Hofstadter worked on a model describing electrons moving in a periodic lattice under the influence of a perpendicular magnetic field, as part of his PhD thesis supervised by G. Wannier. While conceptually simple, this system gives rise to stunningly complex physical behaviour - its energy spectrum as a function of the magnetic field is a fractal object. Hofstadter published the spectrum under the name "Gplot" in a 1976 paper. The term "fractal", coined by Benoit Mandelbrot, only entered English-language texts a few years later. The figure became known to a wider audience through Hofstadter's 1979 book "Gödel, Escher, Bach" and nowadays goes under the name "Hofstadter's butterfly". Some people have even gone as far as calling the fractal "a picture of god".

I don't want to put you on the rack any longer: here's what the "butterfly" looks like:

At the time of its inception, people were of course wondering whether the self-similar, recursive structure of the spectrum was just an artefact of the model, or whether it could really be realised in nature. When the x-axis of the plot is scaled in "natural", dimensionless units ("flux quanta per unit cell" in the figure above), one notices a peculiar grouping of the black areas in the graph: at 1/2 flux quantum per unit cell, the spectrum splits into 2 "bands" (dashed red line in the figure). At, say, 2/5 flux quanta per unit cell, it splits into 5 bands, grouped as 2 bands at top and bottom plus one in the middle (blue dashed line). At 4/9 flux quanta, there are 9 bands, grouped as 4 bands at top and bottom plus one in the middle. For irrational numbers, things are quite a bit more complicated - but does nature really "know" about the difference between rational and irrational numbers, and does it like to do fraction arithmetic?
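The band counting at rational flux can be reproduced from the tight-binding model behind the butterfly (the Harper equation). Here is a minimal sketch - the function name, the choice of Bloch-phase convention and the sampled momenta are mine, not Hofstadter's notation: at flux p/q flux quanta per unit cell, the q x q Bloch Hamiltonian yields exactly q eigenvalues, which trace out the q bands as the momenta are scanned.

```python
import numpy as np

def harper_eigs(p, q, kx=0.0, ky=0.0):
    """Eigenvalues of the q x q Harper/Hofstadter Bloch Hamiltonian
    at flux p/q flux quanta per unit cell."""
    n = np.arange(q)
    # on-site term 2*cos(2*pi*alpha*n + ky), hopping 1 between neighbours
    H = np.diag(2.0 * np.cos(2.0 * np.pi * p / q * n + ky)).astype(complex)
    H += np.diag(np.ones(q - 1), 1) + np.diag(np.ones(q - 1), -1)
    # Bloch phase closing the magnetic unit cell
    H[0, -1] += np.exp(1j * q * kx)
    H[-1, 0] += np.exp(-1j * q * kx)
    return np.linalg.eigvalsh(H)

# At flux 1/2 the spectrum splits into 2 bands (edges at +/- 2*sqrt(2));
# at flux 2/5 there are 5 bands, at flux 4/9 there are 9.
print(harper_eigs(1, 2))
print(len(harper_eigs(2, 5)), len(harper_eigs(4, 9)))
```

Scanning p/q over all small fractions and plotting the eigenvalues against p/q reproduces the butterfly.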

First glimpses of the recursive spectrum were found only in 2001, and since a 2013 Nature article things seem to be definitive - Hofstadter's elusive butterfly is not just a beautiful mathematical figure, but something that can really be measured in a laboratory.

But how does this peculiar fraction arithmetic come about? Stay tuned...

P.S.: Hofstadter's model was deliberately kept very simple, due to the computer power available at that time. Together with Eduardo Hernandez, Michael Aichinger and I recently tried to crack the "full problem" of solving Schrödinger's equation for that system, using a highly efficient method for certain types of eigenvalue problems.

The models we discussed so far in our blog posts did not include the possibility of jumps, i.e., discontinuous asset price processes. In real markets, however, such jumps do occur. While small jumps can be satisfactorily explained by diffusion, this is no longer possible for the more pronounced breaks in market behaviour that have happened several times in history (Black Monday, 9/11, Lehman Brothers,...). There are two different categories of financial models with jumps: The first are so-called jump diffusion models, where a diffusion process responsible for the "normal" evolution of prices is augmented by additional jumps added at random intervals. In these models, jumps represent rare events, such as crashes and large drawdowns.

This kind of evolution of the asset price can be represented by modelling the (log-)price as a Levy process - a stochastic process with independent, stationary increments - with a non-zero Gaussian component and a jump part. The jump part is a compound Poisson process with finitely many jumps in every time interval. Depending on the distribution used for the jump sizes, different jump diffusion models exist.
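As a sketch, such a path - diffusion plus compound Poisson jumps with Gaussian jump sizes, i.e. a Merton-style jump diffusion - can be simulated in a few lines. All parameter values below are hypothetical examples, not calibrated to any market:

```python
import numpy as np

rng = np.random.default_rng(42)

# hypothetical example parameters
mu, sigma = 0.05, 0.2          # drift and diffusion volatility
lam = 0.5                      # jump intensity: expected jumps per year
mu_J, sig_J = -0.1, 0.15       # mean and std of the Gaussian jump sizes

T, n_steps = 1.0, 252
dt = T / n_steps
X = np.empty(n_steps + 1)
X[0] = np.log(100.0)           # start at S = 100
for i in range(n_steps):
    n_jumps = rng.poisson(lam * dt)            # compound Poisson part
    jump = rng.normal(mu_J, sig_J, n_jumps).sum()
    diff = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.normal()
    X[i + 1] = X[i] + diff + jump
S = np.exp(X)                  # asset price path with occasional jumps
```

With lam = 0.5, a jump occurs on average once every two years, so most simulated years show pure diffusion and some show a visible break.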

The second category of jump models uses infinite activity Levy processes to propagate the state variable. Examples of models in this category are the Variance Gamma model and its relatives.

The explanations are driven by the field people come from. This one, from macroeconomist Dirk Bezemer, views it through the lens of money creation and policy: Debt, a Great Invention (first episode). I like it for its simplicity.

Those who are interested in complex systems, like myself, may highlight its principal characteristic as a universal system: solid enough to be stored (a store of value), liquid enough for transformations (a medium for economic transactions). Money is also a numeraire.

Universal systems are programmable. The money system is programmable too, but its programming constructs are still low-level.

Buying a hotel night at the hotel is different from buying an option on a hotel night at a booking platform. It is an option if cancellation is unconstrained (and it has an option price).

Cryptocurrencies may support such derivative transactions, but understanding their derivative nature needs deeper insight. For quant finance thinkers this is mother's milk.

But what I find interesting: we all have our individual valuation of money, based on the stories we tell ourselves about it. How we got it, how we use it - what does this tell about us?

So as on a macro level there is debt before money, on the personal level there are stories before money.

The French-Swiss film director Jean-Luc Godard once said

"It does not matter where it comes from; it matters where it goes to"

and he meant creative ideas, innovation, … advocating the open innovation system of arts.

For a company, this is true for investments - the Modigliani-Miller theorem states (simplified): whether you finance them by raising stock or selling debt does not matter (intuitive, because you always sell future business).

Where it goes to?
We at UnRisk reinvest margins predominantly in development. We barely invest in existing assets (and not at all in prestigious goods) … The simple story behind this: we like independence (no, UnRisk is not for you), doing exciting things, disrupting and reinventing ourselves, …

If we changed the story the money would change too.

This post is inspired by this post in Seth Godin's blog. Thinking about money is red-blooded - it has driven our business principles for years.

In contrast to Herbert, who is an avid cross-country skier, I try to avoid those dangerous activities (UnRisk them, so to say).

Anyway, in the late 1980s, the world production of cross-country skis was almost exclusively in Austria, and we (i.e. the Industrial Mathematics Institute, where I did my PhD) had a project with one of the Austrian ski producers.

At that time, cross-country skis were frequently produced from wood. In the above figure, the ski consists of three layers which, in the course of the production process, are laid upon the adjustable metal basis shape (here: the bottom sine wave) and then glued together. The glue hardens at a certain temperature (around 90 degrees Celsius), which is reached during a heating and pressing procedure.

As the different layers have different thermal elongation properties, the final shape of the ski differs from the basis shape (this is the same effect utilized in bimetallic strip thermometers). The question to us was: "How can we adjust the basis shape in such a way that the final ski (at snow temperature) matches a prescribed bending shape?" A classical inverse problem.

The forward problem would then be: Given the basis shape, calculate the bending of the ski.

My solution approach for the forward problem was the following:
Assume that each layer is a (thermoelastic) beam. Put layer 1 (the yellow one) on the basis shape. Calculate the top curve of layer 1 by doing some enveloping. The top curve of layer 1 is then the basis shape for the red layer 2. And so on.
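The enveloping step can be sketched numerically under a simplifying assumption (constant layer thickness measured along the normal): the top curve of a layer laid on a basis shape y0(x) is the offset curve at distance t along the unit normal. The sine basis shape and the numbers below are illustrative only, not the real press geometry:

```python
import numpy as np

x = np.linspace(0.0, 2.0, 400)          # position along the press (m)
y0 = 0.02 * np.sin(np.pi * x)           # illustrative sine-wave basis shape
t = 0.005                               # layer thickness (m)

dy = np.gradient(y0, x)                 # slope of the basis curve
norm = np.sqrt(1.0 + dy**2)
x_top = x - t * dy / norm               # offset along the unit normal ...
y_top = y0 + t / norm                   # ... at distance t
```

By construction, every point of the top curve lies exactly at distance t from the corresponding point of the basis curve; this top curve then serves as the basis shape for the next layer.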

Close the press and heat it. As long as the glue has not hardened, assume that the layers can slide against each other without friction. Each layer should assume the position with minimal deformation energy. At gluing temperature, fix the layers immediately against their neighbours.

Remove the heat and the press, and find the snow shape of the ski by minimizing the combined deformation energy of the glued layers.

My next post will discuss the success of this approach.

For those who cannot wait:
Andreas Binder: Adjustment of a ski-pressing machine, ECMI Newsletter, No. 7, 1990.

With no fewer than 70,000 pages of regulation and some record fines, 2013 will be a year to remember for many financial service professionals. Regulators' fines are getting bigger, but their targets are getting smaller; regulation came to the buy side; regulation is an iterative process; regulators get serious about data, ...

And the key themes of 2014?

With new arrivals of big names, 2014 is set to be another eventful year in the regulatory space. Regulators will get technical, regulators are going to get personal, we will hear the word "growth" a lot, and for the buy side the combined impact of MiFID, EMIR and AIFMD will mean a restructuring of the industry …

UnRisk was also asked (as a technology provider). We took our 2014 Agenda and elaborated our motivations and plans in more detail, with the key message: even more openness.

Summarizing our agenda: apply and unleash the new foundations for new complex applications (partially forced by regulation); explain how they work and why they are selected.

If regulators, as well as auditors, want to get more technical, we are pleased to partner. But our core business model is: arming Davids by providing know-how packages that help them to optimize their risk and work in compliance with the regulatory architecture.

Hi, I am Sascha Kratky. I work as a software engineer on the UnRisk team. Most of my development efforts go towards our flagship product UnRisk FACTORY, but I also contributed some code to our other products, the UnRisk PRICING ENGINE and UnRisk-Q.

The UnRisk FACTORY project was started in 2006 with 4 developers. We decided on using version control from the very beginning of our development and settled on using Subversion which was the de-facto standard for version control in software development in 2006. Over time around a dozen developers and two dogs have contributed to the project. At the time of this writing, our Subversion repository holds more than 27000 change sets.

Since version control is now standard in software development, visualization tools have appeared that turn the history of a project into a graphical animation. One such tool is Gource. Gource displays a software project as an animated tree with the root directory of the project at its centre. Directories are shown as branches with files as leaves. Files and directories appear in the animation when they are checked into the repository and disappear when they haven't been touched by a user for some time. Here is the resulting Gource animation for the UnRisk FACTORY development repository: 8 years of development time compressed into 8 minutes.

If video viewing does not work in your browser, you can download the mp4 video from here.

This week I took a holiday from computational finance. Some regular readers might remember my blog post about artificial graphene. We did a lot of calculations, analysed the data and found surprisingly good agreement with the experiment (by a group at Stanford). Our collaborator from the University of Tampere visited us last week and we finalised the paper.

Examples of model potentials for flakes of artificial graphene. Electrons are confined in the red (dark gray) regions between circular scattering centers represented by Fermi functions. (a-c) show the three smallest flakes (L1, L2, L3, see text) and (d) corresponds to the largest flake (L9). Note that the figures are not to scale: the size of the scatterers and their mutual distance are constant.

Although the artificial graphene flake is a finite-size system, it shows properties of its periodic counterpart (graphene) as its size increases, namely the formation of a Hofstadter butterfly and of a Dirac point and, in accordance with the experiment, a splitting of that point when a magnetic field is applied.

Density of states (integration over eigenstates) in the largest flake of artificial graphene (L9) as a function of the magnetic flux. A clear Hofstadter-butterfly-type pattern can be seen.

Density of states for different flakes at various magnetic fields. At zero flux the increasing flake size shows the formation of the Dirac point at E ≈ 16 a.u. The magnetic field splits the point in accordance with experimental data [K. K. Gomes, W. Mar, W. Ko, F. Guinea, and H. C. Manoharan, Nature 483, 306 (2012)].

Now and then I sit together with Andreas, drinking a little glass of wine (or two), and we let our thoughts fly - recalling happy and unexpected "wow" events, home runs, but also disruptions, unusual approaches, failures and what have you. Not always restricted to our work.

Recently, I read the April 2014 UK issue of Wired and a review of the book Secrets of Mental Maths.
I have not read the book yet, but the Wired review gives a few examples that show what the book is about: get ready to amaze your friends by calculating blazingly fast in your head.

Multiply by taking a shortcut - 53*11? Write it as 53*(10+1); after a little manipulation it's 500 + (5+3)*10 + 3 = 583. The shortcut: write "5(5+3)3". For 58*11, the middle digit sum overflows: "5(13)8" carries over to 638.
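The shortcut, including the carry handling, is easy to check in code (a small sketch; `times11` is my own helper name, not from the book):

```python
def times11(n):
    """Multiply a positive integer by 11 using the digit-sum shortcut."""
    digits = [int(d) for d in str(n)]
    # neighbour sums: leading digit, pairwise sums, trailing digit
    sums = [digits[0]] + [a + b for a, b in zip(digits, digits[1:])] + [digits[-1]]
    # propagate the "overflowing" carries from the right
    result, carry = [], 0
    for s in reversed(sums):
        s += carry
        result.append(s % 10)
        carry = s // 10
    if carry:
        result.append(carry)
    return int("".join(map(str, reversed(result))))

print(times11(53))  # 583: "5(5+3)3"
print(times11(58))  # 638: "5(13)8" with the carry applied
```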

There are other examples that work with cutting the calculation problem into small bits, using easy numbers, complements and what have you.

Behind all these tricks are the polynomial representations of numbers,
z := a_0 + a_1*10 + … + a_n*10^n, and the application of their ring features (expand, factor, rearrange, simplify, …). You can always find forms that are well suited for quick calculation.

If you have a symbolic computation system, like Mathematica, you can explore such schemes without cumbersome hand transformations. Kids could use that for explorative learning of some of the principles of mathematical thinking.

But such problem transformations also work on a much higher level. Take cutting in small bits.

Asymptotic Maths does exactly this: decompose the problem domain into partitions where closed form solutions are available and recombine the results in a clever way.

But even more generally: if we test new numerical schemes for instrument-model calculations, we decompose the domain into pieces where we know that the solvers are accurate and robust, and recombine them in a way that gives us enough reasoning that this will be the case for the whole domain.

We call this umbrella testing - if we want to test, say, a new QMC scheme, we test it across the frames where solutions of existing schemes (say, Adaptive Integration) are verified and validated.

And all this is made available to quants who buy UnRisk-Q. It provides not only a vast variety of deal types, models and methods (that are bank-proof); the UnRisk Financial Language also inherits all symbolic computation capabilities and more from Mathematica.

So quants can use the instant derivatives-and-risk universe and test and optimize their own schemes across it.

This is an underestimated benefit: a symbolic domain specific language that does quant finance and maths.

We develop and test a lot of UnRisk in UnRisk. And we become swifter and swifter at less cost.

Some weeks ago, I attended the meeting of the ECMI council in Limerick. I will represent Linz for the next 4 years in this council.

(It was my first time to travel to Ireland, and I came to the prejudice that the beer in Ireland is much better than the weather.)

King John's castle in Limerick. Image Source: Wikimedia Commons.

ECMI, the European Consortium for Mathematics in Industry, was founded in the mid-1980s by a handful of university institutes doing industrial mathematics, among them Oxford, Eindhoven, Kaiserslautern, Bari and Linz. Currently, ECMI members come from 18 European countries.

ECMI promotes mobility of students and researchers and the cooperation on research projects with industry. In my personal CV, ECMI plays a prominent role:

I attended the first ECMI modelling week (Bari, Italy, 1988).

My very first publication appeared in the ECMI newsletter (1990).

The good contacts between ECMI members were the informal foundation for my visiting researcher position at the Oxford Centre for Industrial and Applied Mathematics (OCIAM) in 1991 on the basis of a Kurt Gödel grant.

Next: The adjustment of a ski-pressing machine and the ideas I had for this topic in 1990.

I have been invited by a renowned machine tool maker to attend their strategic product development workshop - tomorrow. It is about the machine intelligence of metal-forming machines. I am quite proud that they trust I can contribute.

Hi, I am Michael Schwaiger, product manager of the UnRisk product family
and so responsible for our software products UnRisk FACTORY,
UnRisk PRICING ENGINE and UnRisk-Q.

This job is very exciting: It means

organizing workshops with our customers - at such workshops we
discuss the customers' needs and requirements as well as our
planned developments

discussing possible ways to satisfy the requirements of our
customers with the UnRisk development team

deciding which of these requirements will be implemented in the
upcoming versions of our software products

presenting the detailed specifications to our customers and asking
them for their feedback

performing some of the implementations myself (especially the ones
which connect UnRisk-Q with the UnRisk FACTORY - I like to write code in the
UnRisk Financial Language)

presenting the newly implemented features to our customers

But within the UnRisk world, being the product manager means more: I am
also responsible for our customer support. This means that when a customer
(having a question or problem) calls the UnRisk support hotline, he or
she dials my telephone number (not some anonymous number).

Most of the time I am able to help the customer immediately - if
necessary, I forward his or her problem to the responsible people within our
development team (so the customer always talks to experts - again: we do not
and will never have an outsourced call center).

From my point of view it is very important for a product manager to
answer customer support questions. It helps

to detect weaknesses of our software products - if the same questions
are asked many times, then the corresponding feature has not been
implemented or documented well enough

to see how our customers use our software products, which helps us
to detect possibilities to ease the usage for them

to get information about the requirements coming from the market: questions
like "we have to report these numbers - do you have experience?" help
us to quickly identify market requirements. Implementations satisfying these
needs make our products more attractive

to stay in contact with our customers

to make sure that the product manager knows each implemented
feature in detail

From my point of view, good product management is only
possible if one is also involved in customer support. To guarantee the quality
of our software, the contact with the users is the most important part of my work.

The challenge of being product manager and head of customer support
is a big one, but one which I really love (especially since I think that we
offer really good software ;-) ).

In my upcoming blog posts (the first will be on April 14) I will describe the
most challenging UnRisk support cases I have experienced within the last
13 years.

The key insight of this method lies in the close relation of the characteristic function with the series coefficients of the Fourier-cosine expansion of the density function. According to Fang, in most cases, the convergence rate of the COS method is exponential and the computational complexity is linear. Its range of application covers different underlying dynamics, including Levy processes and the Heston stochastic volatility model, and various types of option contracts.

The Fourier cosine expansion is an alternative to the numerical integration via FFT presented in my last blog post. The main idea of the method is to solve the inverse Fourier integral by reconstructing the whole integral, not just the integrand, from its Fourier-cosine series expansion. The series coefficients are extracted directly from the integrand. For mathematical details see the paper of Fang.

A criterion for the computational efficiency of the method is the number of summands in the expansion needed to obtain a reliable and accurate option value. As the option value is not known, one can either fix the number of summands N in advance or determine it by checking whether the absolute value of the characteristic function is smaller than a predefined tolerance, |Φ(ω)| < ε. The following table shows the relationship between ε and N.

The table shows the price of a European call option (S0 = 100, K = 100, r = 0.02, T = 365d) under a Heston model with parameters Θ = 0.05, κ = 1.5, σ = 0.1, ρ = 0.9, v0 = 0.03. For comparison, the analytical value of the call option, obtained with the pricing formula of the original paper, is V = 8.84849.
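To make the mechanics concrete, here is a minimal COS sketch for a European call under Black-Scholes dynamics, where the characteristic function is a simple exponential (the Heston characteristic function would slot in the same way). The cosine coefficients follow the Fang-Oosterlee formulas; the function name and all parameter values are my own examples, not the Heston setup from the table:

```python
import numpy as np

def cos_call(S0, K, r, sigma, T, n_terms=256, L=12.0):
    """European call via the COS method under Black-Scholes dynamics."""
    c1 = (r - 0.5 * sigma**2) * T              # first cumulant of the log-return
    c2 = sigma**2 * T                          # second cumulant
    a = c1 - L * np.sqrt(c2)                   # truncation interval [a, b]
    b = c1 + L * np.sqrt(c2)
    k = np.arange(n_terms)
    u = k * np.pi / (b - a)
    # characteristic function of the log-return ln(S_T/S_0)
    phi = np.exp(1j * u * c1 - 0.5 * sigma**2 * T * u**2)
    # cosine coefficients of the call payoff K*(e^y - 1)^+ on [0, b]
    chi = (np.cos(u * (b - a)) * np.exp(b) - np.cos(-u * a)
           + u * (np.sin(u * (b - a)) * np.exp(b) - np.sin(-u * a))) / (1.0 + u**2)
    psi = np.empty(n_terms)
    psi[0] = b
    psi[1:] = (np.sin(u[1:] * (b - a)) - np.sin(-u[1:] * a)) / u[1:]
    Vk = 2.0 / (b - a) * K * (chi - psi)
    x = np.log(S0 / K)                         # log-moneyness of the spot
    w = np.real(phi * np.exp(1j * u * (x - a)))
    w[0] *= 0.5                                # first summand gets weight 1/2
    return np.exp(-r * T) * np.dot(w, Vk)

print(cos_call(100.0, 100.0, 0.02, 0.25, 1.0))
```

With n_terms = 256 the result agrees with the analytical Black-Scholes price to far more digits than needed in practice - the exponential convergence mentioned above.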

In when three rights make a wrong I wrote about the risky horror of complicated models: a more complicated model may carry greater risk than a cruder one if it is abused or its users are not qualified to understand or solve it - knowingly or not.

When their underlyings are driven by stochastic processes with known characteristic functions, options can be priced by applying Fourier inversion methods. Two main trends are found in the literature, applying Fourier inversion either to the cumulative distribution function, which leads to Black-Scholes-style formulas, or to the probability density function. Apart from using quadrature rules to directly evaluate the inversion of the Fourier integral, the integration can be performed by Fast Fourier Transform algorithms or series expansions. An advantage both methods - the FFT method and the Fourier cosine expansion method - offer is their good parallelisability on CPUs as well as on graphics processing units.

Basically, one considers a truncated stock price domain and discretizes it to obtain equidistantly spaced grid points with corresponding instrument values V(y,T). To propagate the value of the option from t(m) to t(m-1), we apply an FFT to V(:,t(m)), multiply by the characteristic function and finally apply the inverse FFT to obtain V(:,t(m-1)).
With the Fast Fourier Transform (FFT), one transformation can be performed with a computational complexity of O(N log2 N). A major drawback of the FFT algorithm is the restriction to an equidistantly spaced grid in the price domain. In addition, the pricing of American/Bermudan and other exotic, in particular path-dependent, options can require long computation times.
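One such propagation step can be sketched under Black-Scholes dynamics, where a single step from T to 0 suffices and can be checked against the analytic price. A put payoff is used because it stays bounded at the grid edges, which mitigates the wrap-around error of the periodic FFT; the grid size and all parameters are illustrative examples:

```python
import numpy as np

S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0   # example parameters
N = 4096
width = 10.0                                   # width of the log-price grid
dx = width / N
x = np.log(S0) + (np.arange(N) - N // 2) * dx  # log-price grid, centred at ln(S0)
V_T = np.maximum(K - np.exp(x), 0.0)           # put payoff at maturity
u = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)      # angular frequencies
# characteristic function of the log-price increment over [0, T]
phi = np.exp(1j * u * (r - 0.5 * sigma**2) * T - 0.5 * sigma**2 * u**2 * T)
# FFT -> multiply by the characteristic function -> inverse FFT -> discount
V_0 = np.exp(-r * T) * np.real(np.fft.ifft(np.fft.fft(V_T) * phi))
price = V_0[N // 2]                            # value at x = ln(S0)
print(price)
```

For a Bermudan option, the same step would be repeated between exercise dates, applying the early-exercise condition on the grid after each step - which is where the long computation times mentioned above come from.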
In our Friday blog post we will focus on the aforementioned Fourier cosine expansion technique.

In our model blog post series we are currently discussing the Heston model. After a short description of the model and its characteristic function, today we will examine how the model parameters relate to the volatility surface.

My tool of choice for visualising this relationship is Mathematica with its interactive functionality. The linked CDF is from the Wolfram Demonstrations Project. For the calculations, characteristic function methods are used.

You can download the app here. The Wolfram CDF player to run the app can be downloaded here.

Working with young people is a priceless gift (beware: being over 65, I find 35 still young). In our environment, those young people have usually decided to swap an academic career for a commercial job.

They are energized by the motivation to explore new things and take actions pushing quantitative finance into a better direction. They are mathematicians, physicists and computer scientists.

We start with a short survey of methods for option pricing. The conditional expectation of the value of a contract payoff function under the risk-neutral measure can be linked to the solution of a partial (integro-)differential equation (PIDE). This PIDE can then be solved using discretisation schemes, such as Finite Differences (FD) and Finite Elements (FEM), or by Wavelet-based methods, together with appropriate boundary and terminal conditions. A direct discretisation of the underlying stochastic differential equation, on the other hand, leads to (Quasi-)Monte Carlo (QMC) methods. Both groups of numerical techniques - discretisation of the P(I)DE as well as discretisation of the SDE - are well known and widely used in quantitative finance. A third group of methods directly applies numerical integration techniques to the risk-neutral valuation formula for European options.

Direct integration techniques have often been limited to the valuation of vanilla options, but their efficiency makes them particularly suitable for calibration purposes.
A large part of state-of-the-art numerical integration techniques relies on a transformation to the Fourier domain: the probability density function f(y|x) appears in the integrand in the original pricing domain (for example the price or the log-price), but is not known analytically for many important pricing processes. The characteristic functions of these processes, on the other hand, can often be expressed analytically; the characteristic function of a real-valued random variable X is the Fourier transform of its distribution.

The probability density function and its corresponding characteristic function thus form a Fourier pair: φ(u) = E[e^{iuX}] = ∫ e^{iux} f(x) dx and f(x) = (1/2π) ∫ e^{-iux} φ(u) du.

Many probabilistic properties of random variables correspond to analytical properties of their characteristic functions, making them a very useful concept for studying random variables.
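This Fourier pair is easy to verify numerically. Taking the standard normal, whose characteristic function is φ(u) = exp(-u²/2), the inversion integral recovers the density - a small sketch using plain quadrature (the truncation range and grid are my own choices):

```python
import numpy as np

# phi(u) = exp(-u^2/2) is the characteristic function of the standard normal.
# Inversion: f(x) = (1/2pi) * integral of exp(-i*u*x) * phi(u) du.
u = np.linspace(-40.0, 40.0, 20001)
du = u[1] - u[0]
phi = np.exp(-0.5 * u**2)
xs = np.array([-1.0, 0.0, 2.0])
f = np.array([(phi * np.exp(-1j * u * x)).sum().real * du / (2.0 * np.pi)
              for x in xs])
exact = np.exp(-0.5 * xs**2) / np.sqrt(2.0 * np.pi)   # standard normal density
print(f)
print(exact)
```

The recovered values agree with the analytic density to machine-level accuracy, since the integrand is smooth and decays rapidly.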

The characteristic function of the Heston model is given by

where

In the next blog post on Friday we will discuss in more detail how the Heston parameters affect the form and properties of its characteristic function.

It began in 1997 with a workshop for a London-based trading desk of a large American bank.

The problem: they could not find an adequate pricing tool on the market for convertible bonds with exotic contract features. At this workshop, Andreas Binder proposed, on the fly, a method known from solving complex technical systems (Adaptive Integration).
They trusted our abilities and asked us to conduct a project, starting with an experimental prototype. From that point on we built the UnRiskVerse.