Pages

In To Tree or Not To Tree, Andreas pointed out that binomial trees are good because they are insightful, but that they have really bad computational behavior - so bad that they may destroy delta hedging (the heart of option theory) when applied to the analytics of more complex options.

Real options are different

A real option is the right to perform certain business initiatives in a capital investment project. There are real options referring to project size (expand or contract), life and timing (defer, abandon, sequence or parallelize), and operation (product or process flexibility). Those determine the option characteristics.

Real options are usually distinguished from financial options in that they, and their underlying, are typically not tradable. Most real options have a value but not a price. On the other hand, option holders (management) can influence the underlying project.

While financial options can help optimize the risk of a portfolio, real options can help maximize the value of a project by managing uncertainty and creating value through flexibility.

Real option valuation

Real options can be the underlying principle of agile practices. With this objective, insight is more important than computational quality. In real option valuation you have to deal with a co-evolution: you need to take into account the uncertain development of the parameters that determine the value of your project and the management decisions that influence them.

So, in general, real options are more complex than financial options. Consequently, they are more challenging when it comes to valuing the inputs (factors) and defining the option characteristics.

To Tree
You can model them by PDEs or apply forward techniques with (Least Squares) Monte Carlo methods, but most practitioners use binomial trees, as they allow for implementing decision rules (up and down probabilities under conditions, ..) at each node - although they cannot handle problems of higher dimension.
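To make the idea of rules at each node concrete, here is a minimal sketch (in Python, with invented numbers) of a Cox-Ross-Rubinstein tree for a project with an option to abandon for a fixed salvage value - the abandonment rule is checked at every node during rollback:

```python
import math

def real_option_tree(v0, sigma, t, steps, r, salvage):
    """Value a project with an option to abandon for a fixed salvage value,
    using a CRR binomial tree with a decision rule applied at every node."""
    dt = t / steps
    u = math.exp(sigma * math.sqrt(dt))
    d = 1 / u
    p = (math.exp(r * dt) - d) / (u - d)   # risk-neutral up probability
    disc = math.exp(-r * dt)
    # project values at maturity, with the abandonment rule applied
    values = [max(v0 * u**j * d**(steps - j), salvage) for j in range(steps + 1)]
    # roll back through the tree, re-checking the abandonment rule at each node
    for i in range(steps - 1, -1, -1):
        values = [max(disc * (p * values[j + 1] + (1 - p) * values[j]), salvage)
                  for j in range(i + 1)]
    return values[0]
```

With salvage = 0 the rollback just reproduces the initial project value; a positive salvage value adds the option premium on top.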

Real option analytics as sparring partner

One can see investment projects as cash conversion cycles - with many decision points. Real option analytics can act as a kind of sparring partner, telling the management: what if ...

The return is characterized by the investment, time and the distribution of the cash flows. You need to know the cash drivers and volatilities, formulate possible actions (your options) and know their influence on the cash flows.

If your project is innovative, you don't have a history. So you might need to simulate the project to get insight into the quantitative aspects of possible decisions. Trees are well suited for this purpose, right?

Optionality
You can buy antifragility. In finance, antifragility needs fragility, because hedgers need speculators as counterparties who accept the fragile side of a contract. The difficult thing about this is transparency: who has which fragile/antifragile position, and how do those positions cross-connect into fragility concentration or fragility diffusion and buffering? "Correct" pricing, valuation and risk analytics are vital to make the market a "fair" play.

Real options usually pertain to tangible assets such as capital equipment, rather than financial instruments. They are not a derivative instrument, but an actual option - one that has a value, and you gain from knowing which options you have and when to exercise them, in a tree of possible decisions in an uncertain world. If you compare this to a traditional discounted cash flow method, you cannot lose ... But

the real economy could learn from the innovations of the financial systems. It could maybe adopt the "fiction" that the option and the underlying are tradable, or "replicate" the cash flows of the option by a risk-free bond and proportions of the underlying?

But this is another story. Trees would be devalued as firewood again?

I am not a RO expert, so I have compiled info from Wikipedia and long discussions with Hermann Fuchs, a RO expert, running a financial controlling advisory firm here. But, I understand that we offer attractive options for a build-an-advanced-risk-management-system project.

Today I will show how this Web Service may be used to combine our two products, UnRisk FACTORY and UnRisk-Q. The necessary steps in this example are:

Set up a financial instrument in the UnRisk FACTORY (this can be done in a very convenient way)

Load this instrument into Mathematica by the use of the UnRisk Web Service

Set up an interest rate curve in Mathematica by the use of UnRisk-Q

Price this instrument in Mathematica

Step 1: Setting up the financial instrument (in our example it is a simple fixed rate bond) in the UnRisk FACTORY

Steps 2 to 4 are explained in the following Mathematica code

Conclusion: By the use of the UnRisk FACTORY the user can set up financial instruments in a very intuitive and convenient way. The UnRisk Web Service enables the user to import these financial instruments into the world of Mathematica. By the use of UnRisk-Q the user can perform valuations or analyze the behaviour of instrument prices under market scenarios, which can be set up in a very flexible way within Mathematica.

Once I've lived to 75, my approach to my health care will completely change. I won't actively end my life. But I won't try to prolong it either.

I am 70 and obviously not neutral to this topic. BTW, when I was 12, people at 35 looked so old, and old fashioned in their behavior, that I thought: "I hope to die before becoming that old". Now, I want to live long - but, I mean long and full, rich, exciting, mobile, recognized, loved, …

Death is a loss. But is living too long also a loss?

Loss is overrated

Beyond philosophical, psychological, socioeconomic and cultural aspects, this sounds to me like a question of prediction and risk management. It's about loss aversion (in the sense of Kahneman). Paradoxically, people who hate to realize a loss often take more risk when losses increase. Is this what Ezekiel Emanuel wants to avoid?

Optimal Risk

It is difficult to optimize risk if you don't have enough quantitative information. In Optimal Risk I have briefly described my attempt to find the optimal risk when I skate on cross-country ski trails. But life is more complex than cross-country skiing. It "grants" more unexpected events.

Long, but boring?

But, let me take the roulette metaphor. You know you can't win in the long run. The Kelly criterion (the Kelly bet on "red" is -1/19) tells you not to bet. But you can use a small fraction of your current bankroll to stay "long" at the casino, playing (just betting on "red"). Boring, isn't it? And the longer you play, the closer your bankroll will get to zero, and a fraction of it may become really too small to continue …
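A quick sketch of that arithmetic (Python; the -1/19 comes from American roulette's 18 red pockets out of 38, and the growth-rate function shows why even a tiny bet fraction loses in the long run):

```python
import math
from fractions import Fraction

def kelly_even_money(p):
    """Kelly fraction for an even-money bet won with probability p:
    f* = p - q = 2p - 1. A negative value means: do not bet at all."""
    return 2 * p - 1

def expected_log_growth(p, f):
    """Expected log growth of the bankroll per bet when staking fraction f."""
    return p * math.log(1 + f) + (1 - p) * math.log(1 - f)

p_red = Fraction(18, 38)            # American roulette: 18 red of 38 pockets
print(kelly_even_money(p_red))      # -1/19: the Kelly bet on "red"
print(expected_log_growth(float(p_red), 0.01))  # negative: bankroll drifts to zero
```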

A complete life?

I have most probably celebrated wine and dine events, slept, exercised, trained my brain, applied preventive medications … less than I should have, to prolong life as long as possible. And maybe I risked becoming slower, less creative and less productive earlier than necessary.

My statistical life expectancy is 78. But statistics also says: most probably, I will suffer from this and that "long" before. To NN Taleb, "long in history" means "long in the future" - but the future is unknown. You can't really predict it, but you can build it.

Logarithmic loss (LogLoss) vs 0/1 loss?

It's not quantifiable. But maybe I was lucky to have found a kind of optimal risk for a full, rich, … life. Maybe I have intuitively used a kind of LogLoss pay-off instead of the crisp loss (of living long) function Ezekiel Emanuel seems to "apply"? LogLoss heavily penalizes predictions that are confident but wrong, and "predictions" under its regime are not 0/1.
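A toy comparison in Python (nothing medical about it - it just shows the shape of the two loss functions): under LogLoss a confident wrong "prediction" is punished far harder than a cautious wrong one, while the crisp 0/1 loss treats both the same.

```python
import math

def log_loss(p, outcome):
    """Logarithmic loss of predicting probability p for a binary outcome.
    Confidently wrong predictions are penalized almost without bound."""
    p = min(max(p, 1e-15), 1 - 1e-15)   # clip away the infinities
    return -math.log(p) if outcome else -math.log(1 - p)

def zero_one_loss(p, outcome):
    """Crisp 0/1 loss of the implied hard decision (threshold 0.5)."""
    return int((p >= 0.5) != bool(outcome))

# the event does not happen (outcome = False):
print(log_loss(0.99, False))   # confident and wrong hurts a lot
print(log_loss(0.55, False))   # cautious and wrong hurts a little
print(zero_one_loss(0.99, False), zero_one_loss(0.55, False))  # both count as 1
```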

Inquisitiveness wins

However, there are too many things I haven't seen, understood, managed, … yet. Too many corners I haven't looked around. Life is never "complete". I still work. Good partners and friends will tell me when I should stop.

But what I certainly know: I do not want to live infinitely long. There is an individual age x_1 … But 75? Really?

This summer, I went to an opera event at the Salzburg Festival. Driving into Salzburg, we ran into really heavy rain and a hailstorm. I even feared some damage by hailstones (fortunately, my little Fiat 500 stayed unscathed). Stuck in heavy traffic, I had no chance to escape without causing (further) chaos. No chance to just park in one of the underpasses, no gas station in sight, ... For motorcyclists it must have been much worse.

A few minutes ago, I found this at "Slate". The umbrella sign guides to designated covered areas where motorcyclists can safely wait.

We have a lot of "forbidden" road signs - to our benefit (I guess). So, I find this one pretty cool.

These were the thicker books I read in the first half of this summer. The second half of the summer happened on a Thursday, so to say. It was chilly and raining … most of the time. Time for longer novels again, but I read only one:

Americanah, Chimamanda Ngozi Adichie - this is the (love) story of a Nigerian woman, Ifemelu, who left Africa for America, and her school friend Obinze, who only made it to the UK (illegally) … It's their plan to return together, but things do not go according to their plans. Where Obinze fails (he is deported), Ifemelu thrives. Back in Nigeria, Obinze finds a lucrative job and marries a beautiful woman …

When we first meet Ifemelu, she is getting her hair braided at an African hair salon, 13 years after coming to America. We read that she won a prestigious fellowship at Princeton and writes a popular blog: observations about American Blacks by a Non-American Black. Yet she decided to throw all this away and return home. And she returned. In America she was black - in Nigeria, she's an Americanah.

The hairdresser asks: why? So did I … but Chimamanda makes it clear through her characters.

Americanah is the first novel I have read by the award-winning Nigerian writer (it's her third). How did I discover it? It was selected as one of The 10 Best Books of 2013 by the NY Times … I read it in German.

And I really, really enjoyed reading it. A great story, a great analysis of complicated real-life situations (race and identity, love, ..), a virtuosically written text.

The trial between the city of Linz and Bawag PSK concerning the swap 4175 is still ongoing.

On June 30, 2014, the city filed a claim (the full text, in German, here) asserting that the judge was biased. A senate (consisting of three judges) of the commercial court in Vienna had to decide whether to allow this appeal or to dismiss it.

On Sept. 12, the appeal was dismissed (in German here). The senate found no evidence that the judge was biased.

The city of Linz will not appeal against this decision.

Therefore, the next step will be to decide if Uwe Wystup, who was named as one of two expert witnesses, is biased.

No doubt, without standardization we'd have no powerful interfaces that transform one world into another.

My industrial socialization, factory automation, was characterized by standardization. If you don't have standard machine elements, function complexes, mechanisms, ... you need to make everything yourself from scratch. Even languages that are understood by machine, robot, ... controls need to be standardized. Our high level task-oriented offline programming languages were compiled to standardized control code.

But we realized quickly: complex automated manufacturing tasks cannot be centrally supervised; they need to be organized as an interplay of systems with local intelligence. The bottom-up fashion.

This is where I come from. And I am still for standardization, but I've reservations about strict supervision and centralization.

In banking, standards consolidate transactions, rationalize accounts, integrate payment environments and what have you, but

Big regulatory wave

Regulatory bodies often use "(international) standards" to designate principles and rules of financial regulation and supervision for the general and detailed business processes in the financial sector.

After the financial crisis it seems regulation has become a synonym for centralization. It comes like a big wave and causes big changes in financial business principles, far beyond core capital rules, risk management requirements, …

It redefines game rules even in pricing.

ISO standards are voluntary
ISO standards are written international agreements on the use of technologies, methods and processes, adopted by consensus of the partners concerned - they support consistent technical implementation.

To me this is vital: it suggests an orthogonal engineering, implementation and management of technologies and solutions. Decentralized implementation does not preclude systemic use.

Consequently, standardized platforms do not kill innovation.

Central counter party - Unintended consequences?

My view on central clearing, ....
On a higher level, central clearing reduces counterparty exposure but may result in increased liquidity risk. This kind of centralization may drive a marginal-cost regime with margin compression (OTC revenue reduction) ….

Technology providers will not be able to influence the rules, but

Individualize with UnRisk

We will put our best efforts into supporting our small and medium-sized clients in evaluating their revenue impact and maybe refining product and sales strategies to match their business strengths.

Quants will become even more important as our partners. Instruments will become less complex but the valuation space will become massive. Market dynamics will change basic rates more frequently. Consequently, the methodology to price a simple swap changes fundamentally, portfolio optimization gets another meaning, …

We will soon offer the methodologies for these new regimes to be managed in our UnRisk Financial Language in combination with the UnRisk FACTORY Data Framework supporting the corresponding financial objects and data.

Designed to enable quants to build systems for better trading decisions and risk-informed sales strategies under a new (regulatory) regime.

In fall of 2010 we decided to go cross platform with our quantitative finance tool UnRisk-Q. The library was initially developed for Windows only, but the ongoing shift in platform popularity made us consider also offering it for Linux and Mac OS X. Mathematica, which forms the basis of UnRisk Financial Language, is also available for these three platforms.

When we started, the whole build process of UnRisk-Q was based on manually maintained Visual Studio C++ projects. We looked at different cross platform build tools and finally settled on using CMake as our build tool for the following reasons:

A CMake installation is fully self contained and does not depend on a third-party scripting language.

Once the build system was chosen, the existing C++ code needed to be made cross platform. This is a straightforward process, which requires replacing platform-specific code with platform-agnostic code where possible and insulating the platform-specific code that remains. In doing that, we often had to make changes to widely used project header files, which triggered a rebuild of the whole project. Since UnRisk-Q’s code base consists of about half a million lines of C++ code, this meant that we had to wait almost half an hour for a build to finish.

The Preprocessor Takes the Blame

A short C++ program of about 100 lines of source code is turned into a 40000-line compilation unit by the preprocessor, which handles the inclusion of standard headers. So all a C++ compiler does these days is continually parse massive compilation units. Since any complex C++ project consists of dozens of C++ source files, many of which use the same standard headers, the C++ compiler has to do a lot of redundant work.

The downsides of the preprocessor have been known for a long time. In his book The Design and Evolution of C++ Bjarne Stroustrup made the following statement about the preprocessor (Cpp): “Furthermore I am of the opinion that Cpp must be destroyed.” The book was released in 1994. 20 years later the preprocessor is still alive and kicking in the world of C++ programming.

The preprocessor is here to stay, so two different techniques have been developed to speed up preprocessing. The first one is the precompiled header (PCH); the other one is the single compilation unit, more commonly known as a “unity build”. Both techniques are good ideas in principle; however, they have failed to gain wide use in many C++ projects for the following reasons:

Precompiled headers require the creation and continuous manual maintenance of a prefix header.

Unity builds break the use of many C++ language features. They may cause unintended collisions of global variable and macro definitions. Thus unity builds rarely work without source code modifications.

Most C++ projects start out small and grow over time. When the need for adding PCH or unity build support is felt, it is too much work to incorporate it into the existing build system.

Given the modern build infrastructure that CMake provides, I thought that adding support for precompiled headers and unity builds should be as easy as stealing candy from a baby. I couldn’t have been more wrong. The existing solutions at that time were only hacks divorced from software engineering reality. So this was clearly a case where Jean-Baptiste Emanuel Zorg’s rule applies. On top of that, it was an interesting weekend project to take on.

Designing the Interface

Interface wise I wanted to be able to speed up the compilation of a CMake project by using one of the simplest technical interfaces known to man:

In programming terms, this means that if you have a CMake project which creates an executable:

add_executable(example main.cpp example.cpp log.cpp)

you just call a function with the corresponding CMake target:

cotire(example)

cotire is an acronym for compile time reducer. The function then should do its magic of speeding up the build process. It should hide all the nasty details of setting up the necessary build steps and should work seamlessly with the big four C++ compiler vendors: Clang, GCC, Intel and MSVC.

Once you have designed an interface that you think succinctly solves your problem, it is extremely important to fight the urge to make the interface more complicated than it needs to be just to make it cope with some edge cases. Giving in to that urge too early is the reason why software developers have to deal with subpar tools and libraries on a daily basis.

Implementation

A well designed interface should give you a crystal-clear view of the technical problems that need to be solved in order to make the interface work in reality. For cotire, the following problems needed to be solved:

Generate a unity build source file.

Add a new CMake target that lets you build the original target as a unity build.

Generate a prefix header.

Precompile the prefix header.

Apply the precompiled prefix header to the CMake target to make it compile faster.

Using CMake custom build rules, cotire sets up rules to have the build system generate the following files at build time:

The unity build source file is generated from the information in the CMake list file by querying the target’s SOURCES property. It consists of preprocessor include directives for each of the target source files. The files are included in the same order that is used in the CMake add_executable or add_library call.

This is the unity source generated for the example project under Linux:

The prefix header is then produced from the unity source file by running the unity source file through the preprocessor and keeping track of each header file used. Cotire will automatically choose headers that are outside of the project root directory and thus are likely to change only infrequently.

For a complex CMake target, the prefix header may contain system headers from many different software packages, as can be seen in the example prefix header below generated for one of UnRisk-Q’s core libraries under Linux:

The precompiled header, which is a binary file, is then produced from the generated prefix header by using a proprietary precompiling mechanism depending on the compiler used. For the precompiled header compilation, the compile options (flags, include directories and preprocessor defines) must match the target’s compile options exactly. Cotire extracts the necessary information automatically from the target’s build properties that CMake provides.

As a final step cotire then modifies the COMPILE_FLAGS property of the CMake target to force the inclusion of the precompiled header.

Speedup

With cotire we were able to cut the build time of the Windows version of UnRisk-Q by 40 percent:

With tools that are developed with a special in-house purpose in mind, it’s all too easy to fall into the “it works on my machine” trap. Therefore we also applied cotire to some popular open source projects in order to test its general-purpose applicability. One project we tested it on is LLVM. LLVM is a huge C++ project with close to a million lines of code, yet the change set needed to apply cotire to it is just 100 lines of code. A cotire PCH build reduces the build time for LLVM 3.4 by about 20 percent:

One project where unity builds work out of the box, without having to make changes to the source code, is an example text editor application for Qt5. Applying a cotire-generated precompiled header to this project reduces compile time by the usual 20 percent, but doing a cotire unity build results in a reduction of 70 percent:

Other users of cotire have reported even larger speedups with cotire unity builds.

Conclusion

As described in the book The Cathedral and the Bazaar, one of the lessons for creating good open source software is that every good work of software starts by scratching a developer’s personal itch. Cotire was released as an open source project in March 2012. Since then it has been adopted by hundreds of open and closed source projects that use CMake as a build system. Among those are projects from Facebook and Netflix.

I am currently furnishing my new office, and for my book shelf I decided to buy special editions of my favourite books. During my studies of physics I had a lot of different text books for the basic physics courses, like the Berkeley physics course or Tipler's book. And although many of them were great, none of them impressed me like The Feynman Lectures on Physics. Therefore I decided to buy the millennium edition of these books.

Between 1963 and 1965, Richard Feynman taught lectures to Caltech freshmen and sophomores - out of these lectures, the three volumes of the book have been created by him and his coauthors. Volume I concentrates on mechanics, radiation, and heat; Volume II on electromagnetism and matter; and Volume III on quantum mechanics.

I want to end today's blog post with a quote from Mark Kac:

"There are two kinds of geniuses: the 'ordinary' and the 'magicians'. An ordinary genius is a fellow whom you and I would be just as good as, if we were only many times better. There is no mystery as to how his mind works. Once we understand what they've done, we feel certain that we, too, could have done it. It is different with the magicians. Even after we understand what they have done it is completely dark. Richard Feynman is a magician of the highest calibre."

BTW, the Austrian electronics company abatec offers a Local Position Measurement system that is used in sports. In soccer, it measures the motions and positions of the players (and the ball) and combines them with their biometric data. A great tool to test different strategies in practice … It's used by top teams, like the national team of the Netherlands (which played a great World Championship in 2014).

So, using such techniques the network theory above could be enriched with quantitative information?

Last week, I gave a talk at the Wolfram Data Summit. Since I am a physicist without any formal education in the way of data science, these were two exciting days with lots of new input and food for thought. Having heard so much about the "internet of things" and geospatial data gave my motivation the final push to start a little private "data science project" that has been lurking in the back of my mind for quite a while...

If you have, up to now, pictured MathConsult'ers as nerds spending their days (and nights) in front of computers, with pizza boxes stacked to both sides, you couldn't be more wrong. OK - you might have been right about the nerd part, but there are actually lots of good sportsmen among my colleagues; in particular, many of us enjoy running. Of course, most of us own a few of those internet-of-things gadgets that record your running tracks, heart rate, and other running-related data.

What has always irked me about the software behind those gadgets is that I'd really like to know the velocity I was running at any given point of the track. Most applications, however, just show wild noise, and some show an over-smoothed curve that doesn't seem to have much to do with reality, but none seem to really get it right. The underlying problem has actually already been outlined by Andreas in his post on identifying implied volatility: while GPS-enabled watches can measure time very accurately, GPS coordinates are much, much less accurate (up to 7 meters off in the worst case). That means there is a lot of noise in the positions recorded along the running track, and the velocity is the time derivative of the distances between those track points. And taking the derivative of noise yields a hell of a lot more noise.

Before we can do savvy maths on the recorded data, we of course need to get the stuff into the computer and, most of the time, clean it a bit - in this respect, data from a running watch is no different from financial data. In this post, I'd like to concentrate on that data science aspect, and show you how easy it is to read, interpret and clean the data with Mathematica. While I'm using the data from my running watch as an example, the general problems encountered here apply to other data as well.

Most devices can export their data as so-called GPX files, which are simple XML files containing, among other data, the information on the GPS locations recorded along the running track. Importing the XML is trivial:

xml = Import[dir <> "activity_579570063.gpx", "XML"];

In the next step, we need to extract the relevant data: we want to have the time at which each track point was recorded, the geographic location (latitude and longitude) and also the elevation (I'll take care of the heart rate at a later time). Furthermore, we want to convert the timestamps (they are ISO date strings of the form "2014-08-31T16:14:45.000Z") to seconds elapsed since the start. We also need to clean the data a bit, since for some unknown reason, some track points are contained multiple times, and we need to eliminate the duplicates. All that can be done with this little piece of Wolfram Language code:
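For readers without Mathematica, the same steps - extracting timestamps, latitude/longitude and elevation, converting the ISO date strings to elapsed seconds, and dropping duplicate track points - can be sketched in Python. The GPX 1.1 namespace is the standard Topografix one; the exact file layout is an assumption:

```python
import xml.etree.ElementTree as ET
from datetime import datetime

NS = {"gpx": "http://www.topografix.com/GPX/1/1"}   # standard GPX 1.1 namespace

def read_track(gpx_text):
    """Parse GPX text into a deduplicated, time-ordered list of
    (seconds_since_start, latitude, longitude, elevation) tuples."""
    root = ET.fromstring(gpx_text)
    points, seen = [], set()
    for trkpt in root.iter("{http://www.topografix.com/GPX/1/1}trkpt"):
        lat = float(trkpt.get("lat"))
        lon = float(trkpt.get("lon"))
        ele = float(trkpt.findtext("gpx:ele", default="0", namespaces=NS))
        stamp = trkpt.findtext("gpx:time", namespaces=NS)
        # ISO date strings of the form "2014-08-31T16:14:45.000Z"
        t = datetime.strptime(stamp, "%Y-%m-%dT%H:%M:%S.%fZ")
        key = (stamp, lat, lon)
        if key in seen:             # some points are recorded multiple times
            continue
        seen.add(key)
        points.append((t, lat, lon, ele))
    t0 = points[0][0]
    return [((t - t0).total_seconds(), lat, lon, ele)
            for (t, lat, lon, ele) in points]
```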

Ah, right. I have been running alongside the Danube. Now, a naive way of directly calculating the velocity is to compute the time elapsed between two track points as well as the distance between them, and divide the distance by the time. Here's the Mathematica code (luckily, the built-in function GeoDistance takes care of calculating the distance between two points on the earth's surface for us):
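In Python, the naive calculation looks like this, with a haversine great-circle distance standing in for Mathematica's GeoDistance (the earth radius is the usual mean value; the track points are hypothetical):

```python
import math

def haversine(p, q):
    """Great-circle distance in meters between two (lat, lon) points."""
    R = 6371000.0                       # mean earth radius in meters
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

def naive_velocity(track):
    """track: list of (t_seconds, lat, lon); returns (t, m/s) per segment."""
    return [(t1, haversine((la0, lo0), (la1, lo1)) / (t1 - t0))
            for (t0, la0, lo0), (t1, la1, lo1) in zip(track, track[1:])]
```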

Good god. It seems I have not been running, but rather "oscillating". If you plotted the Fourier spectrum, you'd notice a distinct peak at a frequency of about 0.33 Hz - this is the frequency at which the GPS watch takes measurements (every three seconds). A naive way to get a smoother velocity would be to kill off the high frequencies with a kind of low-pass filter. That's simple to do in Mathematica:
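A crude stand-in for such a low-pass filter is a centered moving average, which damps everything oscillating faster than the window length (the window of five samples, i.e. 15 seconds at the watch's sampling rate, is an arbitrary choice):

```python
def moving_average(samples, window=5):
    """Simple low-pass filter: centered moving average over `window` samples,
    shrinking the window at both ends of the series."""
    half = window // 2
    out = []
    for i in range(len(samples)):
        lo, hi = max(0, i - half), min(len(samples), i + half + 1)
        out.append(sum(samples[lo:hi]) / (hi - lo))
    return out
```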

The orange curve is the smoothed velocity. It is much smoother, but I'm not really satisfied with it: it shows a slowly oscillating velocity (because only the high-frequency oscillations have been killed), which does not really match reality. In reality, runners move at an almost constant speed for some time, then perhaps switch to a higher speed for a few kilometers, ... and so on.

To do better, we need a kind of model for how the runner moves, and fit the model to the available data. I'll probably show this in another blog post some time, but would like to end this one on a more philosophical note:

More often than not, the problem is not being able to store or otherwise handle huge amounts of data - the evolution of computer hardware has already taken care of a wide range of such problems. The real problem often is to make sense of the data, to extract the relevant information. In the case of the data from the running watch, the situation seems to be simple enough at first sight: the watch actually records the data we want right away. Still, inevitable noise in the data forces us to make assumptions about the behavior of the runner (e.g., how fast, and how often does he change his velocity), and these assumptions of course will influence the conclusions we make from the data. Financial data, for instance, is incredibly noisy, too. In addition: what actually is the information we want to extract from that data?

A family saga telling the story of the aftermath of the fall of Granada (the demolition of Moorish culture). In short, it's about the reversal of tolerance into intolerance. The Muslim community has been shaken by the burning of their books, including the great Muslim writings on science, mathematics, optics, medicine, … It's the end of the 15th century, the final stages of the Reconquista. The start of a dark age of Christianity (Inquisition …).

Dark ages - War on Science
Ironically enough, the fall of Granada came centuries after the Islamic golden age (which lasted from approx. 800 to 1200). In Lost Enlightenment, Frederick Starr chronicles the long tradition of scientists, mathematicians, engineers, .. in Central Asia (Iranian- and Turkish-speaking regions). BTW, at that time the essentials of applied mathematics were invented.

But after these four centuries everything went wrong. The region went into a downturn, and Islamic science with it. Maybe because of the conquest of the Mongols? Not really.

Starr argues that there were anti-science movements much earlier. Some of the Muslim rulers told the rest: we see more and know better - only faith, not rationalism and science, can offer insight into the truths of the world.

It's only one example of the war for Tribal Reality (cementing the positions of rulers, preachers, culture leaders, ..), and a "War on Science" at large. There is more, also in Western cultures - I am really worried about that.

War on science in the small?

Scientific Method? There is a better way?
About 25 years ago, I conducted a project for a renowned Austrian ski maker. The objective: create models that describe the dependencies of ski test scores on ski properties - tests that are made on icy or powder-snow pistes, … for ladies, men, .... We took into account, in particular, geometric and related physical properties.

We applied multi-method machine learning, including the decision tree method ID3. From the setup, it is a simple supervised learning scheme. We achieved some good results, but our methods at that time did not extract that a parabolic sidecut is essential for turns. Why? The system did not "see" any ski samples that had significantly curved edges - they all had only slight sidecuts.

Three years later we had methods (like fuzzy ID3) that might have extracted the relation: precise in turns - significant sidecut. A carving ski invented? How? Fuzzy decision trees are computational, and you can calibrate fuzzy sets and membership functions ....

But to introduce fuzzy machine learning methods, you need fuzzy orderings and order relations. And to find a general framework for them, you need to apply the scientific method: have a hypothesis and test it.
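The split criterion at the heart of ID3 is information gain - the reduction in label entropy achieved by splitting the sample set on an attribute. A toy Python sketch (the ski samples, attribute names and scores are invented for illustration):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attribute):
    """ID3 split criterion: entropy reduction when splitting on `attribute`."""
    n = len(rows)
    split = {}
    for row, lbl in zip(rows, labels):
        split.setdefault(row[attribute], []).append(lbl)
    remainder = sum(len(part) / n * entropy(part) for part in split.values())
    return entropy(labels) - remainder

# invented ski samples: sidecut and flex vs. turning-test score
rows = [{"sidecut": "slight", "flex": "soft"},
        {"sidecut": "slight", "flex": "stiff"},
        {"sidecut": "marked", "flex": "soft"},
        {"sidecut": "marked", "flex": "stiff"}]
labels = ["poor", "poor", "good", "good"]
```

In this toy set, splitting on the sidecut separates the scores perfectly (gain of 1 bit), while the flex tells us nothing (gain 0) - exactly the kind of relation the original system could not extract, because no sample with a markedly curved edge existed.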

The dark of the data salt mines

There are many contributions suggesting that the scientific method is outdated - in the exabyte age, don't the data contain all the information required?

But even with the best machine learning methods, can we really transform data into knowledge? From my more than 25 years of experience: no. Theories need models that ride the data waves.

In my blog post two weeks ago I posted some figures with paths and exposures. Today I want to give a short example of how Monte Carlo simulation can be used to calculate the CVA of an option. To keep things simple, we will assume that only the counterparty can default (i.e., we will calculate the unilateral CVA).

The formula for unilateral CVA is given by

CVA = LGD · ∫₀ᵀ DF(t) EE(t) dPD(t),

where LGD is the loss given default, i.e. one minus the recovery rate, DF is the discount factor (we assume a deterministic yield) and PD is the default probability of our counterparty. The term EE(t) stands for the expected exposure at time t. The exposure describes the amount of money that the counterparty owes us if it defaults at time t. As we only want to calculate a single-instrument CVA, we do not need to take netting into account. Furthermore, we assume that no collateral agreements are in place. We can then calculate the positive exposure

E(t) = max(V(t), 0),

where V(t) is the mark-to-market value of the instrument at time t. As we model the underlying risk factor of the instrument by a stochastic process, V(t) and therefore also E(t) are random variables.

Equity paths: Black Scholes model with constant volatility.

We can calculate the expected exposure EE(t) by taking the expectation over all realisations of our random variable E(t). For a call option, the value of the trade at time t is simply given by the Black-Scholes call price

V(t) = C_BS(S(t), t; K, T),

where K is the strike price.

E(t) for the call option with strike K=100

EE(t)

Having calculated the exposure we put everything together with a discretised version of formula one
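The steps above can be sketched end-to-end in a few lines of Python. This is a minimal illustration, not our production implementation: all parameter values (spot, strike, rate, volatility, hazard rate, LGD) are invented, the hazard rate is flat, and the exposure at each step is simply the remaining Black-Scholes call value.

```python
# Hedged sketch: unilateral CVA of a European call via Monte Carlo.
# Paths follow Black-Scholes dynamics; the discretised sum approximates
# CVA = LGD * integral DF(t) EE(t) dPD(t) with a flat hazard rate.
import math
import random

def bs_call(S, K, r, sigma, tau):
    """Black-Scholes value of a European call with time to expiry tau."""
    if tau <= 0:
        return max(S - K, 0.0)
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * math.sqrt(tau))
    d2 = d1 - sigma * math.sqrt(tau)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return S * N(d1) - K * math.exp(-r * tau) * N(d2)

def unilateral_cva(S0=100.0, K=100.0, r=0.02, sigma=0.3, T=1.0,
                   lgd=0.6, hazard=0.03, n_paths=20000, n_steps=50, seed=42):
    rng = random.Random(seed)
    dt = T / n_steps
    ee = [0.0] * (n_steps + 1)               # expected exposure per time step
    for _ in range(n_paths):
        S = S0
        for i in range(1, n_steps + 1):
            z = rng.gauss(0.0, 1.0)
            S *= math.exp((r - 0.5 * sigma**2) * dt + sigma * math.sqrt(dt) * z)
            # exposure of a long call = its (positive) mark-to-market value
            ee[i] += bs_call(S, K, r, sigma, T - i * dt)
    ee = [e / n_paths for e in ee]
    # discretised CVA: sum over DF(t_i) * EE(t_i) * PD(t_{i-1}, t_i)
    cva = 0.0
    for i in range(1, n_steps + 1):
        df = math.exp(-r * i * dt)
        pd_inc = math.exp(-hazard * (i - 1) * dt) - math.exp(-hazard * i * dt)
        cva += lgd * df * ee[i] * pd_inc
    return cva
```

With these illustrative parameters the CVA comes out as a small fraction of the option value, as expected for a modest default probability.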

Yesterday, I experienced the Apple keynote 2014 on my iMac. It is amazing how perfectly it was staged again.

Apple wants to make things for those who see things differently. Again and again and again … In search of perfection?
And this got me brooding. Industrial work does tomorrow what it did yesterday, but faster, more precise, thinner, smaller, bigger, … cheaper. Only in the lab are we striving to find breakthroughs, new ways to solve new problems, and to do new things.

The "iPhone 6" is bigger and thinner? Is this important for those who see things differently? To be honest, although I expected it, I was a bit disappointed (at the beginning).

A Watch is not a tool
"Apple Watch". I don't wear a watch, because there are watches all around.

I own the WATCH by Flemming Bo Hansen. It's minimalist. It's represented at the Museum of Modern Art, NY. And, what a shame, I have put it into a drawer (and nearly forgot it).

A watch is not a tool - it clocks our life.

A computer is not a tool either
Hold on, Apple Watch is a computer on a wrist, and a computer is not a tool either - it is a universal machine to build tools for many uses.

I started my professional work in the stone age of IT. Since then, people have said, successively:

"Only a mainframe is a computer" - and then the mini computers entered the market
"Only a mini computer is a computer" - and then the micro computers entered the market
"Only a micro computer is a computer" - and then the workstations entered the market
"Only a workstation is a computer" - and then the PC entered the market
"Only a PC is a computer" - and then the tablet computers entered the market
"Only a tablet computer is a computer" - and then ….?

My first professional challenge was the migration of a mainframe-based system for offline CNC machine programming ("APT") to the first mini computer ("General Automation"). But I quickly realized: this is crazy - and I built a completely new system instead, exploiting the capabilities of a mini computer, like the first computer graphics tools … I built a new CNC programming language and all the algorithms to implement it.

And this kind of reinvention has happened again and again and again.

When things became smaller, connectivity became more important. "Sun" said: the network is the computer. And now we have entered the "cloudy" days.

But front-end devices keep becoming smaller and more intelligent. In a not so far future we may have beer glasses, … that are computers?

However, one of the challenges for a "wrist computer" is the interaction paradigm … which, on the other hand - if solved adequately - enables

New uses
It seems Apple wants to take the watch into new market segments, like Health and Fitness. By making things computational - transforming information into knowledge - they can provide better real-time advice. If you are running, bicycling, climbing, … you don't want to wear a tablet computer (a bigger-than-bigger smartphone); in a race you want to optimize your risk: go to the limit but not across.

Summarizing, I think this Apple announcement has been underestimated. It may be the beginning of making massive information computational - first on a smaller scale.

Maybe, it was not a keynote that shook the tech industry, like those with the announcements of iMac, iPod, iPhone, iPad, …

What has this all to do with quant finance?

IMO, the quant finance platform and system providers have developed powerful technologies that are all tied together into more or less comprehensive valuation, portfolio and risk management solutions or development systems. From professionals, for professionals. Following the regulatory framework that reduces optionality and introduces immense complexity to the valuation and data management space.

Less effort is applied to explaining, say, complex financial concepts to non-experts. Using new description and visualization techniques to explain when (and why) an investment will become unhealthy. In an offline (or semi-online) simulator, to get more insight into the possible optimization of risk and into the real investment and capital management cycles.

It's not so much about the innovativeness of the smart devices, but their paradigms of interaction …

I confess, this is also a challenge for us UnRiskers. Health and fitness of the financial system … Nothing for the wrist, but for a tablet computer and a bigger-than-bigger smartphone. We have implemented quite complex mathematical schemes on them … and we have clients and partners who use them for tailored solutions, but ….

p.s. I did not take the marketing into account here: is Apple Watch placed as a form of identity? I would not do it that way. It would produce too little volume and no change. However, the reaction to the announcement was quite surprising ….

To obtain such results, you need some serious mathematics. And this is exactly what we did.

In an abstract setting, we want to solve a nonlinear equation F(x) = y between Hilbert spaces X and Y. (Here: x is the local volatility, a parameter function acting on the two-dimensional space (S,t), and F is the operator that maps any local volatility function to the two-dimensional surface of call prices (K,T).) The right-hand side y itself (the call price function for the true local vol) cannot be measured exactly; it is noisy (with noise level delta), and available only at certain points.

Condition (b) might look unfamiliar. The reason for which we have to use the weak topology is that in infinite-dimensional settings, bounded sets in the strong topology need not have convergent subsequences.

For any parameter identification problem where the solution is theoretically infinite-dimensional, we have to analyse whether the above conditions on the operator are satisfied. For local volatility, this was carried out by Egger and Engl (Tikhonov Regularization Applied to the Inverse Problem of Option Pricing: Convergence Analysis and Rates, Inverse Problems, 2005).

At UnRisk, we do the math and the numerical implementation. This makes us deliver robust solutions. Can you say this for your system as well?

The myth of Pygmalion is about the Greek sculptor who fell in love with a statue he had carved.

The Pygmalion Effect

is about expectations that drive performance. In a workplace it should lead to a kind of co-evolution: leader expectations of team members may alter leader behavior. Like, the more a team member engages in learning, the more the leader expects. In turn, the team member participates in more learning activities … and the leader sets learning goals and allows for more learning opportunities.

Flip motivation
Instead of doing-what-we-love: we-love-what-we-do. Yes, we want to do things we are good at, but this is not enough. It's all about the results. Create value, solve problems, leverage skills, … do things that matter, for and with those who care.

(In ancient Greek mythology the sculpture came to life - Aphrodite had granted Pygmalion's wish.)

What drives us

Years ago, we unleashed the programming power of UnRisk, enabling quants to love what they do without needing to do all the plumbing. With UnRisk, quants can programmatically manipulate deal types with hybrid contract features, models and data, and intelligently aggregate the massive risk data the UnRisk FACTORY calculates.

In 2001, when we looked at the available quant finance technologies, we found products that were too black-box, too rigid to introduce new features, too service dependent to build our ideas atop.

Because we saw a need for a more flexible technology, we developed UnRisk. We wanted to provide solutions and development systems in one. To make them bank-proof, we provided solutions first and then offered the technology behind them.

We have more ideas than we can build
This is one reason why we love partnering. We believe in quant-powered innovation on a higher level. And we offer all types of programs that make partnerships profitable for our partners - and for us.

The movements of the term structure of interest rates can be explained quite well using only a few basic factors, like a parallel movement of the curve (shift), a flattening or steepening of the curve (twist) and a reshaping of the curve (butterfly). The factors, which can be calculated as the eigenvectors of the Gramian matrix of interest rate movements, provide a very powerful framework, since they significantly accelerate the calculation of the historical VaR for interest rate derivatives (a very time-consuming task if a full valuation approach is used).

The following questions occur:

How good is the explanation of interest rate movements using only a few factors?

How many factors do we have to use to get a good approximation?

How stable are the principal components over time?

Does the shape of the principal components depend on the currency?

How large is the approximation error of a historical VaR calculation using PCA compared to a full valuation approach?

Performing a principal component analysis (PCA) on the weekly changes of historical EUR yield curve data (between 2000 and 2007) given at 16 curve points (1W, 3M, 6M, 9M, 1Y, 2Y, 3Y, 4Y, 5Y, 7Y, 10Y, 15Y, 20Y, 25Y, 30Y, 50Y), the principal components have the following form:

As expected, the first three factors describe a shift, twist and butterfly movement of the curve.
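As a toy illustration of the mechanics (not the EUR data set above - the data here are synthetic weekly changes dominated by a parallel shift), the dominant factor can be extracted from the Gramian matrix with a simple power iteration:

```python
# Hedged sketch: extracting the dominant "shift" factor from yield-curve
# changes, PCA-style, with synthetic data and stdlib-only power iteration.
import math
import random

def dominant_factor(changes, iters=500):
    """Power iteration on the Gramian matrix of mean-free curve changes."""
    n = len(changes[0])
    means = [sum(c[j] for c in changes) / len(changes) for j in range(n)]
    X = [[c[j] - means[j] for j in range(n)] for c in changes]
    # Gramian matrix G = X^T X of the change vectors
    G = [[sum(x[i] * x[j] for x in X) for j in range(n)] for i in range(n)]
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(G[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

# synthetic weekly changes: mostly parallel shifts plus a little noise
rng = random.Random(0)
curve_points = 8
data = []
for _ in range(200):
    shift = rng.gauss(0.0, 0.10)          # dominant parallel move
    data.append([shift + rng.gauss(0.0, 0.01) for _ in range(curve_points)])

pc1 = dominant_factor(data)
```

Because the synthetic moves are mostly parallel, the first principal component comes out nearly flat across all curve points - the "shift" factor. The twist and butterfly factors would be the next eigenvectors, obtainable by deflating the Gramian and iterating again.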

In my next blog(s), I will try to find the answers to the questions above....

What is cloud computing?

Cloud computing lets people use the internet to tap into hardware, software and a range of related services on demand from powerful computers usually based in remote locations.

Successfully enabling the widespread adoption of cloud computing could add EUR 250 billion to European GDP by 2020, thanks to greater innovation and productivity, according to research conducted by International Data Corporation on behalf of the EC.

This would amount to more than 3.8 million new jobs, although this number does not include jobs lost to cloud-related business reorganisations and productivity gains.

How can one make such a forecast?

Is the cloud what risk managers should be thankful for?
Over the years, regulatory and business requirements have driven financial institutions to take leading positions in technology, implementing sophisticated new features and automated analytics solutions that are widely accessible.

UnRisk has all the technologies to run in a cloud - for development we offer webUnRisk, and the UnRisk FACTORY has a web front-end. Both can be seamlessly combined. But which clouds will run our advanced technologies and make our business shine?

But also due to data and security issues, I rather believe advanced risk management systems will run in on-site clouds of financial institutions, providing access via the intranet and to selected clients, business partners, … There we take care that our technologies run.

Two weeks ago, I applied some data smoothing techniques to improve the results of Dupire's formula when the data are noisy. The quality of the results improved slightly, but was not completely satisfying.

What would we like to obtain?
On the one hand, the model prices for the forward local volatility problem (computed, e.g., by finite elements) should fit the (noisy) market prices reasonably; on the other hand, the local volatility surface should not exhibit severe oscillations. It should be as smooth as possible and as unsmooth as necessary to fit the data.

Identifying local volatility is, at least in the infinite-dimensional setting (where you would know the call price for any (K, T)-combination; K is the strike, T the expiry), an ill-posed problem. This means that arbitrarily small noise (if of sufficiently high frequency) can lead to arbitrarily large errors in the solution. And it is a nonlinear problem, which makes it more complicated than, e.g., curve fitting in a Hull-White setting.

The technique to overcome these stability problems (or: one of the techniques) is Tikhonov regularisation. The following plot shows the result we obtain by regularisation on the same data as we used in the presmoothing example.
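The idea of Tikhonov regularisation can be illustrated on a much simpler problem than local volatility calibration - a hypothetical 1-D smoothing example, not our actual surface fit. We minimise a data-fit term plus a penalty on roughness, and the regularisation parameter alpha balances "as smooth as possible" against "as unsmooth as necessary":

```python
# Hedged sketch: Tikhonov regularisation on a toy 1-D smoothing problem.
# We minimise ||x - y||^2 + alpha * ||D2 x||^2, where D2 is the second-
# difference operator; the minimiser solves (I + alpha * D2^T D2) x = y.
import math
import random

def solve(A, b):
    """Gaussian elimination with partial pivoting (dense, small systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def tikhonov_smooth(y, alpha):
    n = len(y)
    # system matrix I + alpha * D2^T D2, built row by row of D2
    A = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(1, n - 1):          # D2 row k: x[k-1] - 2 x[k] + x[k+1]
        row = [0.0] * n
        row[k - 1], row[k], row[k + 1] = 1.0, -2.0, 1.0
        for i in range(n):
            for j in range(n):
                A[i][j] += alpha * row[i] * row[j]
    return solve(A, y)

rng = random.Random(1)
true = [math.sin(2 * math.pi * i / 19) for i in range(20)]
noisy = [t + rng.gauss(0.0, 0.2) for t in true]
smooth = tikhonov_smooth(noisy, alpha=5.0)
```

By construction, the regularised solution is guaranteed to be no rougher than the noisy data, and for small noise it stays close to it - the same trade-off that stabilises the local volatility identification, where the penalty acts on the volatility surface instead of a 1-D vector.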

Next week, I will have a closer look at the theory behind this improvement.

In Feb 2008, I reviewed an innovation project under the Sixth Framework Programme of the European Commission. Three days at the research institute in Martigny, CH.

It's a beautiful high-alpine region, so I decided to pack my cross country skis into the car and stayed at a nice small Swiss Lodge hotel in Champex-Lac.

The views into the highest part of the Alps are stunning, the cross country ski tracks go through the natural beauty and are well diversified for all risk levels, ...

As usual when selecting a hotel, I went through the menus and wine lists of the restaurants. At the selected one I found some outstanding entries of wines from the Valais region. But I knew there are these rare wines from Visperterminen - the highest vineyards in Europe. So high that only a 300-days-of-sun-a-year microclimate makes it possible to grow wine there. I find Chanton the top maker. Their rarities include Heida, Lafnetscha, Himbertscha, … whites of autochthonous grapes. They are not only rare, but outstanding.

I called the restaurant owner and asked whether she could help me get those Chanton specialities. "Let's see". On the first evening, when I took my seat at the large window of the restaurant with the most stunning view … "this is our special wine list for you", she said, smiling. It contained all the Chanton rarities … Consequently, I sat there every evening of my stay … and I packed a selection of the remaining bottles into my car to take away (to Austria).

This is Switzerland - they like to do business, serve clients (who care) and do it with understatement.

Here is a great blog post: Daniel Davies about Switzerland. It's a "foreign view" - he got to know the Swiss while working for Credit Suisse for five years. The sum of all my visits is far shorter (fortunately, quite a lot of them UnRisk-related), but I agree.

In Switzerland, law is changeable, and the country has a long direct-democratic tradition of changing it by relying on the wisdom of the crowds. But Switzerland also has a long tradition of hospitality to original thinkers. To NN Taleb, "long in history" means (most probably) "long in the future" - an important sign that a political or socio-economic system belongs to the "Antifragile".

p.s. Shhh, now I can order Chanton's rarities online … and get info about more Swiss wine specialities from Chandra Kurt ….

Last week, at the IPTA 2014 conference in Bristol (where I tried to speak about math without formulas), additional mathematical meat was put on the bones by colleagues from the Industrial Mathematics Institute.

Daniela Saxenhuber won a best poster award for her conference poster "A fast reconstruction method for complex adaptive optics systems". Congratulations to her!

In the first part of this blog episode, we gave an overview of the two interfacing technologies that the Wolfram Language provides for calling C++ programs: MathLink and LibraryLink. MathLink's main advantage is its robustness, whereas LibraryLink offers high-speed and memory-efficient execution. Providing support for both technologies at the same time, which is desirable from a software engineering perspective, requires considerable work from developers.

And now the conclusion

It turns out there is a way to have the best of both worlds with little effort. The idea is to replace the two separate wrapper functions that are needed for MathLink and LibraryLink with a single universal wrapper function, which can be easily adapted to both interfacing technologies. In this blog post we’ll rewrite the add_scalar_to_vector integration example from the first part using such a universal wrapper function.

First let’s recap the LibraryLink based wrapper function from the first part:

The wrapper function conforms to the MArgument signature required by LibraryLink. The dynamic library generated from that wrapper function was loaded into the Mathematica kernel session with LibraryFunctionLoad:

Your culture will adapt to service ours

LibraryLink also supports an alternate signature for wrapper functions that is based on the data type MLINK. This type represents a MathLink connection that lets you send any Wolfram Language expression to your library and get back any Wolfram Language expression as a result. Let us rewrite AddScalarToVectorLL with the MLINK signature:

The implementation of the universal wrapper function is straightforward. It uses the MathLink API to read a vector and a scalar from the underlying link and to write the result back on the link. A std::unique_ptr with a user-supplied deleter lambda function ensures that the vector read with MLGetReal64List is cleaned up automatically with MLReleaseReal64List when the function returns.

The LibraryLink MLINK signature wrapper function must be loaded into the Mathematica kernel session with a slightly different command:

LibraryFunctionLoad now uses the Mathematica expression LinkObject as a data type, which corresponds to MLINK on the C++ side. To ensure that the native function is invoked with the correct arguments (a vector and a scalar), we set up a regular Mathematica function pattern which calls the native function loaded from the dynamic library.

To use the universal wrapper function from the MathLink executable, we have to make some changes to the MathLink template file we presented in the first part:

In the data type mapping we now declare the :ArgumentTypes: as Manual, because the universal wrapper function takes care of reading all the arguments from the link. The MathLink wrapper function AddScalarToVectorML simply invokes the universal wrapper with the underlying MathLink connection stdlink. To make MathLink error handling work correctly, we have to return a $Failed symbol if the universal wrapper returns an error.

We wish to improve ourselves

When a new C++ function needs to be integrated into the Wolfram Language, the LibraryLink and MathLink wrapper functions can be copied almost verbatim. Development effort only goes into writing a new universal wrapper function.

As far as the amount of code we had to write is concerned, for the simple function add_scalar_to_vector it does not make much of a difference. However, if you have to integrate dozens of C++ functions with long parameter lists, as is the case with UnRisk-Q, a great deal of tedious work can be saved with a universal wrapper function.

Show me the code

You can download a self-contained CMake project which demonstrates how to build a MathLink executable and a LibraryLink dynamic library using a universal wrapper function. You need the following third-party software packages:

Mathematica ≥ 8

CMake ≥ 2.8.9

Clang ≥ 3.1 or GCC ≥ 4.6 or Visual Studio C++ ≥ 2010

The file CMakeLists.txt in the zip archive contains instructions on how to build and run the tests for Windows, Linux and OS X.

Performance

We’ll now evaluate the performance of the different linking technologies presented in this blog episode:

MathLink using the universal wrapper function

MLINK based LibraryLink using the universal wrapper function

MArgument based LibraryLink with the custom wrapper function from part one

The actual test code for the different linking technologies can be found in the file CMakeLists.txt in the zip archive.

The following chart shows the execution times of the different linking technologies. The results were obtained with Mathematica 10 on a MacBook Pro Mid 2010 running OS X 10.9. The code was compiled with Clang 3.4. The results for Linux and Windows are similar.

Summing up, we can draw the following conclusions concerning the communication overhead of integrating a native C++ function into the Wolfram Language:

Using a universal wrapper function, you get a speedup of about 50 percent by moving from MathLink to LibraryLink with almost no additional development effort.

An additional speedup of about 50 percent is possible if you go to the extra effort of writing an MArgument-based wrapper function.