No known species of reindeer can fly. But there are at least 300 000 species of living organisms yet to be classified, and while most of these are insects and germs, this does not completely rule out flying reindeer which only Santa has ever seen.

There are 2 billion children (persons under 18) in the world. But since Santa doesn't appear to handle the Muslim, Hindu, Jewish and Buddhist children, that reduces the workload to 15% of the total - 378 million according to the Population Reference Bureau. At an average (census) rate of 3.5 children per household, that's 108 million homes. One presumes there is at least one good child in each.

Santa has 31 hours of Christmas to work with, thanks to the different time zones and the rotation of the earth, assuming he travels east to west (which seems logical). This works out to 967.7 visits per second. This is to say that for each Christian household with good children, Santa has 1/1000th of a second to park, hop out of the sleigh, jump down the chimney, fill the stockings, distribute the remaining presents under the tree, eat whatever snacks have been left, get back up the chimney, get back into the sleigh and move on to the next house. Assuming that each of these 108 million stops is evenly distributed around the earth (which, of course, we know to be false, but for the purpose of our calculation we will accept), we are now talking about 1.2 miles per household, a total trip of 129 million miles, not counting stops to do what most of us must do at least once every 31 hours, plus feeding and so on. This means that Santa's sleigh is moving at about 1200 miles per second, 6 000 times the speed of sound. For purposes of comparison, the fastest man-made vehicle on earth, the Ulysses space probe, moves at a poky 27.4 miles per second - a conventional reindeer can run, tops, 15 miles per hour.

The payload on the sleigh adds another interesting element. Assuming that each child gets nothing more than a medium-sized Lego set (2 pounds), the sleigh is carrying 321 300 tons (assuming not all children are good), not counting Santa, who is invariably described as overweight. On land, conventional reindeer can pull no more than 300 pounds. Even granting that "flying reindeer" could pull ten times the normal amount, we cannot do the job with eight, or even nine. We need 107 100 reindeer. This increases the payload - not even counting the weight of the sleigh - to 329 868 tons. Again, for comparison - this is four times the weight of the Queen Elizabeth.

329 868 tons traveling at 1200 miles per second creates enormous air resistance - this will heat the reindeer up in the same fashion as spacecraft re-entering the earth's atmosphere. The lead pair of reindeer will absorb 14.3 quintillion joules of energy. Per second. Each. In short, they will burst into flame almost instantaneously, exposing the reindeer behind them, and creating deafening sonic booms in their wake. The entire reindeer team will be vaporized within 4.26 thousandths of a second. Santa, meanwhile, will be subjected to centrifugal forces 17 500 times greater than gravity. A 250 pound Santa (which seems ludicrously slim) would be pinned to the back of his sleigh by 4 315 015 pounds of force.
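For readers who want to check the arithmetic, the headline numbers can be reproduced in a few lines (the 31-hour window and the 1.2 miles per household are the essay's own assumptions):

```python
# the essay's own assumptions
children = 378_000_000          # children on Santa's list
children_per_home = 3.5
hours = 31                      # hours of Christmas, thanks to time zones
miles_per_household = 1.2

homes = children / children_per_home          # 108 million homes
seconds = hours * 3600
visits_per_second = homes / seconds           # roughly 967.7 visits per second
total_miles = homes * miles_per_household     # roughly 129 million miles
speed = total_miles / seconds                 # miles per second

print(f"{visits_per_second:.1f} visits/s, {total_miles/1e6:.0f} million miles, {speed:.0f} mi/s")
```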

Another 2014 story is credit/debit valuation adjustment. When regulators push to the limit (and sometimes beyond) of reasonableness, when computational requirements get higher and higher, when millions of scenario values have to be calculated, then it is worthwhile to take a closer look at the UnRisk option.

In UnRisk's CVA project, co-funded by the Austrian Research Promotion Agency, we have been working (and the work is still ongoing) on bringing the xVA challenges down to earth.

UnRisk is not only about clever math, but also about stable and up-to-date realisations in modern software environments. Being a stone-age Fortran programmer myself, I enjoyed Sascha's post on the goats, wolves and lions problem very much.

There were more targets achieved by the UnRisk team in 2014: the releases of the UnRisk Engine version 8 and of the UnRisk FACTORY versions 5.1 and 5.2, the implementation of an HDF5 file format as a basis for the CVA calculations and more things to come.

Starting from r(0), simulate the paths according to the formula above for k=1..M.

Calculate the cash flows CF of the Instrument at the corresponding cash flow dates

Using the generated paths, calculate the discount factors DF to the cash flow dates and discount the cash flows to time t0

Calculate the fair value of the interest rate derivative in each path by summing the discounted cash flows from the previous step:

Calculate the fair value of the interest rate derivative as the arithmetic mean of the simulated fair values of each path, i.e.
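The steps above can be sketched as follows. The short-rate dynamics are not shown here, so a Vasicek model is assumed as a stand-in for "the formula above", and the instrument is a toy 5-year annual fixed-coupon bond - both are illustrative assumptions, not the UnRisk implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# model assumptions (stand-ins; the post does not specify the dynamics)
r0, kappa, theta, sigma = 0.02, 0.5, 0.03, 0.01
dt, n_steps, n_paths = 1 / 365, 5 * 365, 2000

# simulate short-rate paths r(t_k) for k = 1..M (Vasicek, Euler scheme)
z = rng.standard_normal((n_paths, n_steps))
r = np.empty((n_paths, n_steps + 1))
r[:, 0] = r0
for k in range(n_steps):
    r[:, k + 1] = r[:, k] + kappa * (theta - r[:, k]) * dt + sigma * np.sqrt(dt) * z[:, k]

# cash flows CF of a toy instrument: 3% annual coupon, redemption at year 5
cf_steps = [365 * y for y in range(1, 6)]
cash_flows = np.array([3.0, 3.0, 3.0, 3.0, 103.0])

# discount factors DF(t0, t_cf) = exp(-integral of r dt) along each path
integral = np.cumsum(r[:, 1:] * dt, axis=1)
df = np.exp(-integral[:, [s - 1 for s in cf_steps]])

# fair value per path = sum of discounted cash flows
pv_paths = df @ cash_flows

# fair value = arithmetic mean over all simulated paths
fair_value = pv_paths.mean()
print(round(fair_value, 2))
```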

The only difference of QMC simulation is that it uses deterministic low-discrepancy points instead of the random points used to generate the paths in the Monte Carlo algorithm. These points are chosen to be better equidistributed in a given domain by avoiding large gaps between the points. The advantage of QMC simulation is that it can result in better accuracy and faster convergence compared to the Monte Carlo simulation technique.

The following picture shows the dependence of the MC/QMC valuation result on the number of chosen paths for a vanilla floater which matures in 30 years and pays the Euribor 12M reference rate annually. The time step in the simulation method is chosen to be 1 day. One can see that using QMC, a much lower number of paths is needed to achieve an accurate price.
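To illustrate the convergence claim with something self-contained, here is a toy comparison (not the floater valuation from the picture): a 2D Halton low-discrepancy sequence versus pseudo-random points for integrating a smooth function over the unit square.

```python
import numpy as np

def halton(n, base):
    """First n points of the one-dimensional Halton (van der Corput) sequence."""
    seq = np.zeros(n)
    for i in range(n):
        f, x, k = 1.0, 0.0, i + 1
        while k > 0:
            f /= base
            x += f * (k % base)
            k //= base
        seq[i] = x
    return seq

n = 1024
rng = np.random.default_rng(1)

# integrate f(x, y) = exp(x + y) over the unit square; exact value is (e - 1)^2
exact = (np.e - 1) ** 2
f = lambda u, v: np.exp(u + v)

u_mc, v_mc = rng.random(n), rng.random(n)          # pseudo-random points
u_qmc, v_qmc = halton(n, 2), halton(n, 3)          # low-discrepancy points

err_mc = abs(f(u_mc, v_mc).mean() - exact)
err_qmc = abs(f(u_qmc, v_qmc).mean() - exact)
print(f"MC error: {err_mc:.2e}, QMC error: {err_qmc:.2e}")
```

For smooth integrands the low-discrepancy error typically decays close to O(log(N)^d / N), compared to O(1/sqrt(N)) for plain Monte Carlo - which is why far fewer paths suffice.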

In Berlin, one of the agenda points was to formulate the next steps for the Mathematics for Industry network which will be funded by the European Union within the COST framework.

To be more specific, the trans-domain action 1409 (TD1409) is named Mathematics for Industry Network (MI-NET), with the objective of encouraging interaction between mathematicians and industrialists, in particular through (1) industry-driven problem-solving workshops and (2) academia-driven training and secondments.

I was nominated by the national COST coordinator to become a member of the management committee of this COST action, and I am looking forward to the interactions with my colleagues.

I supported the team's decisions, but when I read Michael's post on HDF5 yesterday, it set me brooding again. A correct and timely decision, made with a careful view into the most probable future of data management. But what about the regulatory wishes and the abilities of technology providers?

No data silos, banks!

The regulatory bodies do not like fluid data - they want it solid…they want evidence of every transaction. And we created the UnRisk FACTORY database that stores every detail of each valuation transaction forever. Every one! And clearly, it is strictly SQL compliant, and far beyond that we provide functions in our UnRisk Financial Language (UnRisk-Q) that enable users to manipulate its objects and data programmatically.

…selecting momentary technologies blindly may make it impossible to achieve the ambitious goals. Data and valuation management need to be integrated carefully, and an exposure modeling engine needs to work event-driven. In this respect we are in the middle of the xVA project. Manage the valuation side first - and do it the UnRisk way: build a sound foundation for a really tall building.

And this is what we did.

The new regime needs trust
Of course, we'll make inputs, results and important (meta)information available. But what was still possible with our VaR Universe...storing every detail...like VaR deltas…in SQL-retrievable form...may be impossible under the new regime.

But, UnRisk Financial Language users will have the required access and much more…functions to aggregate and evaluate risk, margin...data and what have you.

So, ironically, regulatory bodies may have boycotted a part of their own transparency requests?

However, IMO, it needs more trust from all parties from the beginning…and the view behind the curtain will become even more important. You can't keep millions of valuations to get a single price…evident? But we can explain what we do and how our interim data are calculated.

With our pioneer clients we are already going through the programs…and courses and workouts will become part of our know-how packages.

The options of future data management?
The world of data management is changing. Analytical data platforms, NoSQL databases…are hot topics. But what I see at the core: the new computing muscles do not only crunch numbers lightning fast, they will come with very large RAM.

This affects software architectures, functionality and scalability. Those RAM capacities may become the basis for NoSQL databases…ending up, however, with disk-less databases.

There may be many avenues to pursue…but it's no mistake to think of a NoSQL world.

It's unprecedentedly fast again
Many years ago we turned UnRisk into gridUnRisk, performing single valuations on computational kernels in parallel. Then we started making things inherently parallel. Now we are accelerating the data management immensely.

Prepared for any future. Luckily we've chosen the right architectures and technologies from the beginning.

Right now the UnRisk team is working on the development of a powerful xVA Engine (the corresponding new UnRisk products will be released in 2015). In order to be able to handle the huge amounts of generated data

positive exposures

negative exposures

realizations of all underlying risk factors in all considered Monte Carlo paths

we decided on HDF5 as the best "Language" to connect the User Interface with the underlying Numeric Engines.

To be honest, HDF5 is not a language - it is a file format.

An HDF5 file has the advantage that it may become very big without loss of speed in accessing the data within the file. Using special programs (e.g. HDFView) one can easily take a look at the contents of such a file.

The UnRisk xVA calculation workflow consists of the following steps:

Read in the User Input containing the usual UnRisk objects

Transform this Input into the HDF5 Dialect and create the HDF5 file

Call the xVA Engine, which

reads in the contents of the file

calls the numeric engine

writes the output of the numeric engine into the HDF5 file

Transform the output of the xVA Engine back into the UnRisk Language

Return the results to the User
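The workflow above can be sketched with the Python h5py package; the group and dataset names below are made up for illustration and are not the actual UnRisk HDF5 dialect:

```python
import h5py
import numpy as np

# step 2: transform (toy) user input into an illustrative HDF5 layout
with h5py.File("xva_job.h5", "w") as f:
    inst = f.create_group("portfolio/netting_set_1/swap_1")
    inst.attrs["currency"] = "EUR"
    inst.create_dataset("fixed_leg_cashflows", data=np.array([3.0, 3.0, 103.0]))

# step 3: the engine reads the file, computes, and writes results back
with h5py.File("xva_job.h5", "a") as f:
    cf = f["portfolio/netting_set_1/swap_1/fixed_leg_cashflows"][...]
    # placeholder computation standing in for the numeric engine
    f.create_dataset("results/swap_1/exposure", data=np.maximum(cf - 100.0, 0.0))

# steps 4/5: read the results back out and return them to the user
with h5py.File("xva_job.h5", "r") as f:
    exposure = f["results/swap_1/exposure"][...]
print(exposure)
```

Because each step only touches the file, the three blocks could just as well run on three different machines at three different times.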

Here is a screenshot of the contents of such an HDF5 file (containing the xVA calculations for a portfolio consisting of several netting sets and instruments):

But why did we choose this workflow instead of using a simple function call?

The reasons are the following:

Reason 1: We are able to split our development team into two groups: the first one is responsible for steps 1, 2, 4 and 5, the second one is responsible for step 3. Both groups simply have to use the same "UnRisk HDF5 Dictionary".

Reason 2: The different steps may be performed asynchronously - meaning that the workflow could look as follows:

Create many HDF5 files on machine 1 (i.e. perform steps 1 and 2 from above for a list of user inputs)

Call the xVA Engine for each of these files on machine 2 at any time afterwards

Extract the calculation output on machine 3 at any time afterwards

Reason 3: Users may only want to use our xVA Engine - i.e. they want to perform steps 1, 2, 4 and 5 themselves. The only thing they have to learn is the UnRisk HDF5 dialect (we may support such customers with our Java functionality for writing to and reading from HDF5 files).

Reason 4: For debugging purposes it is very convenient that a customer only has to send his or her HDF5 file to us - we can immediately use this file to debug our xVA Engine.

Reason 5: If the user wants to add a new instrument to his or her portfolio, he or she simply has to add the instrument description to the HDF5 file of this portfolio (this may also be done via our user interfaces). By using the already existing results (of the calculations for the portfolio without this instrument), the performance of the calculations may be increased immensely.

We will keep you informed about our xVA developments and let you know as soon as the corresponding products are available. If you have any questions beforehand, feel free to contact us.

An introduction to the new Mathematica online courses that uni software plus GmbH provides free of charge for its customers

Around 80 people followed the invitation and got an impression of the new technologies Wolfram provides. These new technologies will also help us to further improve UnRisk and to define new ways of deployment.

It may be innovative, but innovation needs decelerators. Acceleration is great for many systems, but if you are in a fog of possibilities, you need to think a little more. Insight comes from inquiry and radical experimentation.

It's Sunday, so I think of cooking. There is fast cooking - ingredients cooked in the flame - and slow cooking - cooking in a way that allows flavors to mix in complex ways. Great chefs are good at slow cooking. They test creative new dishes thoroughly. And they promote the results to get eaters hooked on their innovations and give them time to adapt...

Why all the haste? It's a dramatic regime switch - why not implement a test phase?

Regulators get inevitably captured?
Is that the problem? The fear of being blamed for another great recession?

Academics call it "regulatory capture": the process by which regulators who are put in place to tame the wild beasts of business instead become tools of the corporations they should regulate, especially large incumbents.

Models and reasons are reviewed here. A few selected: regulators need information from the regulated, consequently interaction, cooperation…but there is also lobbying and there are career issues...

Only a scene in a big picture?

Big Business Capture Economists?
Beyond regulation…what if big business has also managed to bend the thinking of economists? This idea is published in Mark Buchanan's article "Has big business captured the economists?"

Are they [economists] free authors of their ideas, or are they, like regulators, significantly influenced in their thinking by their interaction with business interests?

There is empirical evidence that this happens…

Beware strict centralization?

I have only poor knowledge of the social and economic sciences, but I understand this: capture is not a risk but a danger (it can't be optimized).

I know that the bigger mistakes are often fixed late, and the only thing we can do is help the small and medium-sized financial market participants not only meet the regulatory requirements but also stabilize the core of their businesses...in competition with the big players who were selected to "save" it.

I've worked in the field that is called Artificial Intelligence for nearly 30 years now. At the frog level first, then at the bird level. From 1990 on, I emphasized machine learning...before that it was a kind of expert systems…

What we strived for were models and systems that were understandable and computational. This led us to multi-strategy and multi-model approaches implemented in our machine learning framework enabling us to do complex projects swifter. It has all types of statistics, fuzzy logic based machine learning, kernel methods (SVM), ANNs and more.

The idea that computerized systems are people has a long tradition. Programs were tested (the Turing test…) for whether they behave like a person. The idea was promoted that there's a strong relation between algorithms and life, and that computerized systems need all of our knowledge, expertise… to become intelligent…it was the expert system thinking.

It's easier to automate a university professor than a caterpillar driver…we said in the 80s.

Artificial Life

The expert system thinking was strictly top down. And it "died" because of its false promises.

Christopher Langton, Santa Fe Institute of Complex Systems, named the discipline that examines systems related to life, its processes and evolution, Artificial Life. The AL community applied genetic programming, a great technique for optimization and other uses, cellular automata...But the "creatures" that were created were not very intelligent.

(Later the field was extended to the logic of living systems in artificial environments - understanding complex information processing. Implemented as agent based systems).

We can create many, sufficiently intelligent, collaborating systems by fast evolution…we said in the 90s.

Thinking like humans?
Now, companies such as Google, Amazon…want to create a channel between people and algorithms. Rather than applying AI to improve search, they use better search to improve their AI.

Our brain has an enormous capacity - so we just need to rebuild it? Do three breakthroughs unleash the long-awaited arrival of AI?

Massive inherent parallelism - the new hybrid CPU/GPU muscles able to replicate powerful ANNs
Massive data - learning from examples
Better algorithms - ANNs have an enormous combinatorial complexity, so they need to be structured

Make AI consciousness-free

AI that is driven by these technologies in large nets will cognitize things, as things have been electrified. It will transform the internet. Our thinking will be extended with some extra intelligence. As in freestyle chess, where players use chess programs, people and systems will do tasks together.

I have written about the Good Use of Computers, starting with Polanyi's paradox and advocating the use of computers in difficult situations. IMO, this should be true for AI.

We can learn how to manage those difficulties and even learn more about intelligence. But in such a kind of co-evolution AI must be consciousness-free.

Make knowledge computational and behavior quantifiable

I talk about AI as a set of techniques, from mathematics, engineering, science…not a post-human species. And I believe in the intelligent combination of modeling, calibration, simulation…with an intelligent identification of parameters. On the individual, as well as on the systemic level. The storm of parallelism, bigger data and deeper ANNs alone will not be able to replicate complex real behavior.

We need to continue making knowledge computational and behavior quantifiable.

It will be short, because I've had similar thoughts that I put into various posts...about quant work under exogenous or endogenous influences, here, here, here, here.

Think like an entrepreneur. Think about delegating tasks. Think whether you could grow by partnering. What about finding the correct and robust numerical scheme, programming it…? When you delegate every job somebody else can do, you'll most probably find the most profitable job only you can do...and you'll have the time to do it. Validate models…use the right market data for model calibration…create the most advanced risk management process…aggregate risk data...prepare dynamic reports that behave like a program.

We will be pleased to help leverage this important job - building the decision support for optimal risk-taking.

When the Doves Disappeared, Sofi Oksanen - this is the story of two Estonian men...spanning the time from 1941 to 1963. A story about the fiercely principled freedom fighter Roland and his slippery cousin Edgar, who always stays close to those in power.

In 1941 they desert the Red Army. When the Nazi regime occupies Estonia, Roland goes into hiding and Edgar takes on a new identity as a loyal supporter. In 1963 Estonia is again under Soviet control…Edgar is now a Soviet apparatchik…

This is an artistically written book about a dark time. It's a historical novel, a crime story, a romance, a war story...

Outlaws, Javier Cercas - the story of the 16-year-old disaffected middle-class youth Ignacio, who falls in with the gang of the teenage criminal El Zarco and his gorgeous girl Tere. He crosses the border into their dangerous world, joining their crimes, which escalate swiftly.

25 years later, Ignacio has become a successful defense lawyer, and he is asked by Tere to defend Zarco…

The setting is the Catalan city of Girona in the late 70s, after Franco's death. Ignacio describes all this 30 years later in a series of interviews with an unnamed writer.

This book surveys the borders between right and wrong, respectability and crime…it is brilliantly plotted and I love the style.

Caused by the continuous development of new features and functionalities, the complexity of the UnRisk software package has grown rapidly over the last years. In many situations, additional features needed by our customers are integrated by modifying existing and tested source code.

The main problem is that one has to be very careful to guarantee that changes to the application do not break something that used to work before. To avoid such undesirable side effects of new implementations, a large portion of the UnRisk functionality is tested on different platforms (Windows, Linux and Mac OS X) via automatically scheduled unit tests. Within these tests, valuation and calibration results are compared to reference numbers, and if deviations are not within a given tolerance level, the corresponding test is flagged as failed.
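Such a tolerance-based comparison against reference numbers can be sketched in a few lines; the valuation function, instrument names and tolerance are placeholders, not the actual UnRisk test harness:

```python
import math

def value_instrument(instrument_id):
    # placeholder for the actual call into the pricing library
    return {"bond_1": 101.374, "swap_7": -2.518}[instrument_id]

# reference numbers recorded from a trusted earlier build
REFERENCE = {"bond_1": 101.374, "swap_7": -2.519}
TOLERANCE = 1e-2  # absolute tolerance; a real harness may use relative deviations

def run_regression():
    """Return the list of instruments whose value deviates beyond tolerance."""
    failed = []
    for instrument_id, expected in REFERENCE.items():
        actual = value_instrument(instrument_id)
        if not math.isclose(actual, expected, abs_tol=TOLERANCE):
            failed.append(instrument_id)
    return failed

print(run_regression())
```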

The developers who have modified the sources since the last build are notified by email about the status of the test results.

Here is an example of the test summary for the case where all unit tests succeeded:

Since the UnRisk package is built and tested on a daily basis, this helps a lot to immediately detect problems that occur in the software development process.

Our compass for 2012 was the all-new UnRisk, for 2013 accelerate, and for 2014 package and disseminate know-how.

2014 - we released the UnRisk FACTORY with a Bloomberg and an Excel Link, and (yesterday) UnRisk 8, the new pricing and calibration engines providing a multi-curve framework and eminently practical functions transforming lognormally distributed into normally distributed data spaces for interest-rate-model calibration and valuation. New UnRisk kernels are used for regime changes as required by the xVA project.

2015 - tie technologies intelligently together

With our mission to deliver the promise of serving individual requirements whilst driving generic technologies, we offer an expanding portfolio of products. They all have in common that they are solutions and development systems in one.

For years, we have built a technology stack that enables us and quant developers to create individual products swiftly, carefully choosing the mathematics and mapping every practical detail.

Our technology stack combines the UnRisk Financial Language implemented in UnRisk gridEngines for pricing and calibration, a portfolio-across-scenarios FACTORY, a VaR Universe, the UnRisk FACTORY Data Framework, UnRisk web and deployment services and an Excel Link...At the end of 2014 an xVA engine with emphasis on central counterparty risk valuation will be available.

The product portfolio will include UnRisk Quant, UnRisk Capital Manager, UnRisk Bank, UnRisk Auditor...and focus on technologies that are required for the purpose in the adequate deployment environments related to functionality, performance and usage.

What will drive us in 2015? Meet the individual requirements even more precisely by configuring our technology stack intelligently.

20-Nov-14 - we have released UnRisk PRICING ENGINE and UnRisk-Q version 8, introduced as UnRisk 8. This release is free for all UnRisk Premium Service customers and will be shipped to all new customers immediately. UnRisk was introduced in 2001. UnRisk 8 is now the 21st release.

What's new in UnRisk 8 has been compiled in Andreas' pre-announcement yesterday.

There's one thing to emphasize: UnRisk-Q is the core of our technology stack. UnRisk PRICING ENGINE is a solution but remains a technology, because our proprietary Excel Link provides a second front end, Excel, while the UnRisk Financial Language front end remains available.

It's perfect for quants who want to build validation and test books in Excel but develop new functionality atop UnRisk, or, say, front office practitioners who want to run dynamic workbooks but develop post processors aggregating results in a beyond-Excel way. Even better if both collaborate closely.

Tomorrow, we will release Version 8 of UnRisk. UnRisk 8 includes, as key features, the valuation of moderately structured fixed income instruments under a multi curve model, and the Bachelier model.

The multi-curve model allows the use of different interest rate curves (in the same currency) for discounting, e.g. the EONIA curve, and for determining variable cash flows, e.g. Libor 3M or Libor 6M.
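As a toy illustration of the multi-curve idea (flat curves and annual periods are simplifying assumptions, not part of the release): forward rates are taken from the projection curve, while discounting uses the OIS (EONIA-type) curve.

```python
import math

# flat toy curves: 1% EONIA (discounting), 2% Libor-type projection curve
def df_ois(t):
    return math.exp(-0.01 * t)

def df_proj(t):
    return math.exp(-0.02 * t)

# multi-curve value of a 5-year floating leg paying annually on notional 100
notional, value = 100.0, 0.0
for i in range(1, 6):
    tau = 1.0                                        # year fraction
    fwd = (df_proj(i - 1) / df_proj(i) - 1) / tau    # forward from the projection curve
    value += notional * tau * fwd * df_ois(i)        # discounted on the OIS curve
print(round(value, 4))
```

In a single-curve world the leg would simply be worth N(1 - P(0,5)); with two curves the forwards and the discounting decouple, which is exactly the point of the framework.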

The Bachelier model for caps, floors and swaptions can replace the Black76 model when interest rates are low. In Black vs Bachelier revisited, I pointed out the difficulties with Black76 when interest rates approach zero. In such cases, (Black) volatilities explode, and orders of magnitude of several 1000 percent for Black volatilities are quite common. With the Bachelier model and its data, which may be used as calibration input, negative interest rates may occur without nasty instabilities.
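For reference, the Bachelier (normal) price of a call on a forward F with strike K, normal volatility σ and expiry T is (F − K)Φ(d) + σ√T φ(d) with d = (F − K)/(σ√T). A minimal sketch (not the UnRisk implementation), which stays well defined for zero or negative rates:

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def bachelier_call(forward, strike, sigma_n, expiry, discount=1.0):
    """Bachelier (normal model) price of a call on a forward rate.

    sigma_n is an absolute (normal) volatility, e.g. 0.006 for 60 bp;
    forward and strike may be zero or negative without any problem."""
    s = sigma_n * math.sqrt(expiry)
    d = (forward - strike) / s
    return discount * ((forward - strike) * norm_cdf(d) + s * norm_pdf(d))

# a 1-year caplet on a forward of -0.1% struck at 0.0% - impossible in Black76
print(bachelier_call(-0.001, 0.0, 0.006, 1.0))
```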

In "No CEQ on board?" I pointedly suggested promoting quantitative managers to the C level. But this was the provocation phase. My strong belief is that the emergence of quantitative theories and methods will kill the tradition of strictly boss-driven organizations.

Traditional companies are "incremental". Strangely, only a few C level members tackle the challenge of innovation. They're trained for operational efficiency. Even in a crisis there are few who organize a bottom-up renewal.

Antipyramidal

I grew up in organizations where strategies were built at the top, big leaders controlled little leaders, team members competed for promotion…Tasks were assigned, rules defined actions. It was the perfect form of "plan-and-control": a pyramid. Only little space for change.

In an organizational pyramid, yesterday outweighs tomorrow. In a pyramid you can't enhance innovation, agility or engagement.

It is indispensable to reshape the organizational form.

Anticonform
Traditional managers want conformance to specifications, rules, deadlines, budgets, standards and principles. They declare "controlism" as the driving force of the organization. They hate failures and would never agree to "gain from disorder".

Make no mistake: control is important, but freedom is important as well.

Management needs to deal with the known and unknown, ruled and chaotic, (little) losses for (bigger) gains…

Antibureaucratic
Bureaucracy is the formal representation of the pyramid and the regime of conformance.

If we want to change the underlying form-and-ideology of management that causes the major problems, we may want to learn a little from the paradigms of modern risk management.

Duality - how to deal with the known and unknown
Boundaries - try to find the boundaries between the known and unknown
Optimization - optimization only works within the boundaries
Evolution - business in a networked world is of the co-evolution type
Game theory - a mathematical study of uncertainty caused by actions of others

This all needs quantitative skills. And if quantitative skills spread, management fades.

The program grid

IMO, quants that are self-esteemed become stronger and contribute more to a better life if they drive a co-evolution in what I call a "program grid": a grid of individuals sharing programs, information and skills, without giving away the very innovation making their solutions different. Program grids may be intra- or inter-organizational.

Technology stacks, know-how packages, workouts…destroy cold-blooded bureaucracy? If quants do not strive for getting picked but choose themselves, they will contribute to the (indispensable) change.

IMO, this is another example of why kids should learn programming early. It's fun and it builds "nowists"…people who create things quickly and improve constantly, without needing the permission of the preachers of ideology, rules…driving bottom-up innovation.

You are probably aware that Michael and I are doing some work on artificial graphene, a man-made material that mimics the electronic properties of real graphene - the material and our research project are explained in more detail in this blog post. In a nutshell, the system confines electrons to a hexagonally shaped "flake" with a lattice of so-called scatterers, that is, a lattice of small circular areas that are "forbidden" for the electrons.

I recently made plots of the electronic density (that is, the probability to find an electron at a certain point in the flake) for different eigenstates of the electronic wave functions. I found those plots so nice - from an artistic view point as well as a scientific one - that I thought I'd want to share them with you.

A short explanation for the scientifically minded readers: white means very high electron density; the color scale for decreasing density goes via orange and bluish colors to black, which means no electrons. The color scale is logarithmic, because I was not so much interested in the density as such, but in the areas where the density is zero - these areas are called the "nodes" of the wave functions.

The symmetry of these nodes is dictated by a competition between the hexagonal symmetry of the outer confinement and the symmetry of the lattice of scatterers (the wave function is forced to be zero there). This competition (physicists call such a system a "frustrated system") results in the kaleidoscope-like structure of the density of electrons in that material.

And a question came to my mind: is there optimal intelligence? Individuals differ from one another in their ability to understand complex ideas, to adapt effectively to the environment, to learn from experience, to engage in various forms of reasoning, to overcome obstacles by taking thought.

My simplified definition of intelligence: the capacity of knowledge and the ability to change…the intelligence of knowns and unknowns. This suggests two-sidedness and consequently a subject of optimization. If you have no knowledge, everything is change - if you know everything, why would you change?

Intelligent people want to change the underlying systems that are causing the major problems of our life. Some call this integral intelligence. What makes such radical innovation more systemic?

Know the system you want to change - but not too much
Prototype - expect the unknown
Organize a feed back cycle - learn

IMO, an approach of optimal intelligence.

Artificial Intelligence
In The Myth of AI (Edge), Jaron Lanier challenges the idea that computers are people. There's no doubt computers burst with knowledge - it's even computational…but...

I like the example of (Google) translation. Although back in the 50s, because of Chomsky's work, there was a notion of a compact and elegant core to language, for three decades the AI community was trying to create ideal translators. It was a reasonable hypothesis, but nobody could do it. The breakthrough came with the idea of statistical translation - from a huge set of examples provided by millions of human translators adding to and improving the example stack daily. It's not perfect, not artful…but readable. Great.

We've invented zillions of tests (the Turing test…) for algorithms to decide whether we want to call the computer that runs them a person. With this view we consequently love it, fear its misbehavior…

My simple question: what are the mechanisms to make them partners in an optimal intelligence - changing the underlying systems that are causing major problems of our human life?

The Tampere node of the National Doctoral Training Network in Condensed Matter and Materials Physics (CMMP) organized a three-day school on electronic structure methods with recognized speakers from both Finland and abroad. The school was targeted mainly at postgraduate students in related fields, but postdocs as well as motivated undergraduate students were also encouraged to participate.

I was asked to give an overview of the numerical methods the students can use not only in electronic structure theory but also in the (financial) industry. So I tried to cover many different topics, from inverse problems and Monte Carlo methods to PDEs. It was a nice experience to speak there and to motivate young people that the methods they learn for their master's or PhD theses will also be valuable for their lives after university.

In such different domains as statistical physics and spin glasses, neuroscience, social science, economics and finance, large ensembles of interacting individuals taking their decisions either in accordance with (mainstream) or against (hipsters) the majority are ubiquitous. Yet, trying hard to be different often ends up in hipsters consistently taking the same decisions - in other words, all looking alike.

In this case, I am not sure whether mathematics is required to predict the emergent dynamics.

It seems quite obvious to me: if you only listen to the mainstream, you create mainstream. To create trends, the mainstream usually acts focused and simple. To fight the mainstream, hipsters need to align and synchronize. To strengthen their non-conformity, they conform within their own system.

I refer to Sascha's plea, in "A Tree's a Tree", for the usage of the instant functions of great tools.

This is a passionate plea for the proof:

Forever, in infinitely many cases

A proof is the lazy brain's best friend - it saves it from the need to test a theorem, a transformation, a change, a program… in finitely many - but many! - cases. A proof says: correct in infinitely many cases. From "now" on, the semantics is functional, not necessarily operational.

A proven theorem can be pushed into the knowledge base and used as black box. It becomes a validated building block of the innovative mathematical spiral.

Mathematical thinking
When we want to solve 3*x=5, we may be aware that the nonzero rational numbers under multiplication, (Q\{0},*), form an Abelian group and each number is a basis for all the others. To solve the equation, we use an equivalence transformation to get a basis that has nice properties for finding the solution:
3*x=5 --> (3/3)*x=5/3 --> x=5/3.
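As a tiny illustration (in Python rather than a computer algebra system), the same step can be carried out with exact rationals:

```python
from fractions import Fraction

# multiply both sides of 3*x = 5 by the inverse of 3 in (Q\{0}, *)
x = Fraction(3) ** -1 * Fraction(5)
print(x)  # 5/3
```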

We apply the same principle if we want to solve a system of linear equations. Again, it's a helpful view to see the "unknowns" as weights for the column vectors, linearly combining them into the "goal vector".

Provided m=n and the column vectors form a basis (they are linearly independent), a unique solution exists. Again, we use equivalence transformations to get a basis with nice properties… In matrix language it's a triangular matrix… The core of the proof is the general principle of constructing the bases.
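The equivalence-transformation view can be sketched in a few lines of Python; a minimal, purely illustrative Gaussian elimination (not a production solver):

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting.

    Equivalence transformations (row operations) turn the column basis
    into a triangular one, from which the solution is read off by
    back substitution.
    """
    n = len(A)
    # build the augmented matrix [A | b]
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        # partial pivoting: bring the largest entry into the pivot position
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    # back substitution on the triangular system
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# the 1x1 case is exactly 3*x = 5 --> x = 5/3
print(solve([[3.0]], [5.0]))
print(solve([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0]))
```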

What about a system of multivariate polynomial equations? The Austrian mathematician Bruno Buchberger has shown that the same principle can be applied… proven by constructing Gröbner bases. In short, the transformation rewrites the system in such a way that one basis element is univariate.

Deep knowledge in ring theory, ideals…is required.

Algorithm generators
One of Bruno Buchberger's research projects is Theorema - in short, automated theorem proving, with a system built atop Mathematica. If the Theorema software can automatically create a constructive proof for the solution of polynomial equations (by constructing Gröbner bases), it generates the solver.

This is not required here, because GBs are already constructed. But, in general an automated theorem prover can become an algorithm generator.

Symbolic computation

The original objective of this field was

automating mathematics (not only computation)

empowering computer science with mathematical thinking and techniques

Remember, the term abstract data type can be regarded as a generalization of algebraic structures (lattices, rings, modules…); if you want, it's a cousin of "universal algebra" - a set of elements, a set of operations and a set of rules.

But, without a concrete model and implementation it remains "abstract nonsense".

What do we finally want? A language to program everything? The Wolfram Language and its implementation, Mathematica, are really great, but there's still a lot to do - implementing (correctly) what can be described in the language.

The first thing I did (in a small team) in my first job after my technical mathematics studies in the mid 70s: migrate an APT system from the IBM mainframe to a General Automation minicomputer - the first such migration worldwide.

APT stands for Automatically Programmed Tool, a high-level programming tool used to generate instructions for numerically controlled machine tools. APT was created at MIT in 1956.

It was clear that each APT program needed to run on the GA exactly as on the IBM - no subset. The APT compilers, interpreters and post-processors (generating the control code for the concrete machine tools) were written in FORTRAN and differed from those used in the IBM APT (because of limited resources on the GA, but also to apply our own ideas of geometric modeling…).

Was it innovative?

Later, we introduced a new language that was much more feature- and task-oriented… however, it targeted the same constructor… the control programs carrying out the task.

The paradox of copying

Jorge Luis Borges wrote a great short story, "Pierre Menard, Author of the Quixote". Menard, a fictional character, did not compose another "Quixote"; he produced a version that is re-written word for word. In this story, irony and paradox generate ambivalence. Menard's copy is not a mechanical transcription; it coincides with Miguel de Cervantes' Quixote…

(Borges, Quixote) is different from (Menard, Quixote), because of "knowledge". I think Borges states through the paradox of Menard: all texts are a kind of rewriting other texts. Literature is composed of versions? The paradox of Menard is pushing the limits to the absurd and impossible, but it is about the principles of writing...

However, Menard's version would become more "different" if he offered a complete thorough Quixote course, a reading tour, a blog, a magazine, "how to write Quixote alike books" workouts…

The Workout in Computational Finance
What if someone rewrote Andreas' and Michael's great book, which explains why a thorough grounding in numerics is indispensable for evaluating pricing and risk models correctly and implementing them in high quality? The rewritten version would be different.

The book represents knowledge of the UnRisk Academy, which was established to disseminate this knowledge. It offers online and live seminars, workouts… and the real transformations made in response to the feedback of hundreds of practitioners who use UnRisk to carry out their tasks.

It's the knowledge and its forms of dissemination that make the innovativeness. Constructed and packaged.

In a recent blog post, Andreas asked the question whether the harmonic series of prime numbers converges. In a later blog post he sketched a proof. Do mathematicians ever move beyond the sketch stage with their proofs?

Andreas could have saved a lot of time by just typing the following code into Mathematica. It uses the Prime function, which gives the nth prime:
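As a language-agnostic sketch of what such a check computes (assuming the point is to look at the partial sums of 1/p), here is a Python version using a simple sieve; by Mertens' theorem the partial sums grow like log log N, i.e. without bound, so the series diverges, just extremely slowly:

```python
def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [i for i, is_p in enumerate(sieve) if is_p]

# partial sums of 1/p grow like log(log N) + M (Mertens' constant),
# so they creep upward without ever settling down
for N in (10, 100, 10_000, 100_000):
    s = sum(1.0 / p for p in primes_up_to(N))
    print(N, round(s, 4))
```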

Case closed.

In that vein, all series problems can be answered with a quote by Ronald Reagan: “A tree’s a tree. How many more do you need to look at?”

You may have observed that there is not so much physics in UnRisk Insight currently. The simple reason: Michael works hard to transform some new ideas for counterparty risk valuation into programs swiftly. They are of eminent practical relevance and were triggered at a recent workshop with Solventis, here in Linz.

"There is no such thing as an abstract program" is one of the basic insights of a new fundamental theory of physics, constructor theory, developed by David Deutsch, Chiara Marletto… at Oxford University.

I am a lousy physicist, but I dare to write a little about this theory, because I found one example that I (hopefully) understand.

Robot control
You can write an offline, task-oriented robot program, but its constructor is the robot control. It's the entity that carries out the given task ("…pick a part from the box, put it on the palette…") repeatedly. The robot control is the foundational element - the constructor.

The robot control uses models that are calibrated and constantly re-calibrated to the real working space and situation. It may need sophisticated feature recognition...

Tasks
In constructor theory, a transformation or change is described as a task. A constructor is a physical entity which is able to carry out the task repeatedly. A task is only possible if a constructor is capable of carrying it out.

It sounds so simple, but it goes beyond Popper's science theory of falsification, because it touches information, computation and knowledge on a fundamental level. If we, for example, think of the idea of entropy in a thermodynamic system, the link to information is strong… (oh, I'm already on icy terrain)

I take the practical view: there is no such thing as an abstract program…

In mathematics, BTW, there is no theorem without a model and a theorem comes to life only based on its operational semantics - the evaluation, the computation...

However, one try: if I understand it correctly, knowledge can be instantiated in our physical world, and the better the instantiation, the better the knowledge. Doesn't this sound quite evolutionary?

The evolution of the option theory
In the introductory book Quantitative Methods for Financial Markets (for students!), Andreas wrote: "the principles of risk-neutral valuation transform the market into a 'fair game'". This rule had been instantiated by the Black-Scholes formula by 1987. But with the introduction of far out-of-the-money options, the smile was explored.

In the following, increasingly complex option models were/are introduced - among them models that cannot be validated (impossible to calibrate and re-calibrate..).

In the sense of the constructor theory, the task they represent is "impossible". Too complex models are a fundamental trap.

How to avoid them is not easy, because you need to know in depth, where the computational limits are. And the borders are moving...

Archplot is the classic story structure. It features a single protagonist. The lead character pursues an object of desire (an advanced risk management process?), confronting external forces (a strategy, project roles, management principles…). The story ends with an irreversible change in the life of the protagonist. It's causal, real and linear…

Archplot is the human life story. As humans we may find radical change difficult, but we want the protagonist to change from the beginning to the end. We want characters taking on myriad challenges...

Miniplot characters struggle with their inner demons and move through the world avoiding external confrontations. They're passive, not active. Inside, they fight for their lives. Miniplot usually offers "open" ends.

Antiplot fights the story itself, it breaks all rules. No requirement for causality, nor a constant reality, no time constraints and the protagonists are the same at the end of the story. They never fight any forces. They just remain as they ever were.

Choose the Archplot form

Pursue the objective of becoming a CEQ, saving the life of your financial institution, managing the transformation of your knowledge into margin.

I am a marketer at heart. But my trick is to speak for the UnRisk project.

Projects…things to be created, financed and shipped. Sometimes they influence a life, other times, they fade. UnRisk influences my business life.

In the late 90s I helped get a contract from the London-based trading desk of an American bank: pricing sophisticated convertible bonds.

Luckily for me, the cooperation shifted from a one-time engagement to a long-term affiliation in an exciting project: UnRisk.

It's different building a consortium from conducting someone else's project - you jointly get the idea, see an outcome, share a vision, build the technology, build the tools, plant the seeds for growth, are selected or rejected, your clients shape you and your ideas, the tools build you, you identify your "dream client" and "dream partner", you refine your brand promise, you stop listening to focus groups only, you know the financial impact of your decisions, you get the cash flow right… you reinvent your technologies and tools…

UnRisk has quite a few different faces: UnRisk FACTORY with its web interface, UnRisk ENGINE with Excel, UnRisk Q with Mathematica and the UnRisk library that lies behind all of these and is used by the UnRisk development team.

The UnRisk user community is quite heterogeneous: there are UnRisk users coming from accounting or controlling, there are quantitative analysts, risk managers, treasurers, traders.

And they all have their preferred ways to work.

With UnRisk FACTORY 5.2. and the UnRisk Excel link for the UnRisk FACTORY, we have closed the gap between two widely used interfaces and thus reduced possible sources of errors in communication.

Not with people like you, some business professionals may say… you are mathematicians, but we need to do real business…

Not with people like you, some mathematicians may say… you are transforming mathematics, a culture technique, into margins…

With people like you, say those who care… you provide know-how packages and respond to our requirements swiftly…

It has never been so easy to connect globally and find the right partners for learning, developing, marketing... but traditional thinking still makes us cling to preferences for neighborhood or major places, known cultural backgrounds… or scale?

I've maybe said it too often, but unleashing the programming power behind UnRisk is our chosen path for growth. It's the result of long-term (mathematical) thinking. It's our approach to risk optimization. Moderate growth in a constant feedback loop.

Quantsourcing empowered by UnRisk technology stack

Our offer includes not only a technology license and a development partnership; it includes a brand name, a marketing and sales approach, a promotion mix… a business development partnership.

Our technology stack combines the UnRisk Financial Language, implemented in the UnRisk gridEngines for pricing and calibration, a portfolio-across-scenario FACTORY, a VaR Universe, the UnRisk FACTORY Data Framework, UnRisk Web and deployment services and, since yesterday, an Excel Link that links not only to the PRICING ENGINE but also to the FACTORY. At the end of 2014, an engine with emphasis on counterparty risk valuation will be available.

Don't start from scratch, our technology stack and products are amazing…and working with us is not too bad.

VaR Results of Portfolios including the contribution VaRs of the underlying instruments

In addition, this new functionality allows us to offer the following service to our customers:

If a customer wants to see information in a certain way that is not part of the basic functionality of this, let's call it, "UnRisk FACTORY for Excel Link", we can easily implement the new functionality within days and, in most cases, without extra costs for the customer.

At the end of this post I give a small example of such an extension:

The user wants to know

"What are the aggregated expected cashflows of the instruments from my portfolio for a list of given date intervals?"

The implementation of the corresponding function, which consists of 12 lines of Mathematica code, took me around 45 minutes. The main steps are:

Extract the valuation results of the portfolio

Loop over all underlying instruments and extract the expected cashflows
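The Mathematica code itself isn't shown here; purely as an illustration of those two steps, here is a hypothetical Python sketch (the data layout, instrument names and function names are made up for the example, not the UnRisk FACTORY API):

```python
from datetime import date

# hypothetical example data: expected cashflows per instrument as (date, amount)
portfolio = {
    "bond_A": [(date(2015, 1, 15), 300.0), (date(2015, 7, 15), 300.0)],
    "swap_B": [(date(2015, 3, 1), -120.0), (date(2015, 9, 1), 80.0)],
}

def aggregate_cashflows(portfolio, intervals):
    """Sum the expected cashflows of all instruments per [start, end) interval."""
    totals = {iv: 0.0 for iv in intervals}
    for flows in portfolio.values():      # loop over the underlying instruments
        for d, amount in flows:           # ... and extract their expected cashflows
            for start, end in intervals:
                if start <= d < end:
                    totals[(start, end)] += amount
                    break
    return totals

intervals = [(date(2015, 1, 1), date(2015, 7, 1)),
             (date(2015, 7, 1), date(2016, 1, 1))]
print(aggregate_cashflows(portfolio, intervals))
```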

This question is raised by Freakonomics in its newest radio podcast - the transcript can be read here.

Immortal, unconstrainedly mobile and absolutely wise

Freakonomics likes the idea of timelessness, unconstrained mobility and absolute wisdom. But then they question the economic side…

Jim Jarmusch made a great film about vampires: Only Lovers Left Alive. A quiet and dark film with a feeling of timelessness… giving the impression that any world is important. The two immortal lovers show us the highest culture of absolute wisdom, connectedness… But however immortal and wise vampires are, they are condemned to live at night and to buy (on dark markets) or steal blood… whatever constraints they have removed, they are stuck with one rare resource… to get it they even risk transforming others into vampires and creating competition... In the film the lovers come close to dying of hunger… not enough energy left to do what they need… at the last second they find the perfect victim… Only the imperfect diversify… and live?

The spotlight of the Nov-14 issue of the HBR Magazine is the "Internet of Everything". We strive for understanding and knowing everything. The phrase "internet of things" has arisen to highlight new opportunities: exploiting smart, connected products transforming data into knowledge.

But isn't absolute wisdom also absolute boredom? Isn't uncertainty good? Remember, we only learn from turbulence and gain from disorder. What are we going to do if the data tell us everything? Will data then become to us what blood is to the vampires? Will the vampires ever get a free market of real blood... will we get a free market of informative data?

Co-evolution in the programming grid
The internet of everything will help to establish a co-evolution of, say, weather forecasting and energy optimization... but for finance and economics we should not forget modeling, parameter identification, simulation… speculation and verification.

IMO, we need co-evolution at another level: co-programming for new insight. Let our breakthroughs explore new problems at a higher level… let us find abstractions from applying examples… and share ideas and skills.

In my Merlot post I announced that I would write about this indigenous Friulian wine. No, sorry... this is the story of how I got another outstanding rare wine.

No, I'm not an elitist. I don't like rare wines because they are rare. But, my wine preferences include wines from autochthonous grapes...and their outstanding exemplars are often rare.

Time to out my wine preferences.

Wine genres?
I like reading and love music (from John Adams to John Zorn). Literature and music are categorized by genres. And this inspired me to think of wine genres - without naming them. Even more, I understand a wine as a story. Genre is a difficult foundation of story to wrap my mind around. And so it is for wine.

I borrowed the concrete idea from Shawn Coyne's great blog, The Story Grid ("Genre's Five Leaf Clover").

Honestly, when I read some of the tasting notes, I have to smile about the creativity… when the wine "sings in the glass", or (a Chambertin!) "shows a nuanced smell of a wet dog pelt" (which dog - does it have a name?)…

And, especially, why they fit so well to this or that dish. I do not care much about this. I eat the dish and then I drink the wine. So, is the wine then concluding the last dish or preparing for the next? Food companionship is not a genre criterion for me.

An example from one of my favorite regions, the Rhône: I give preference to North Rhône wines over Châteauneuf-du-Pape, and the white over the red… This leads to the non-theatrical white Hermitage (like the affordable Ferraton Miaux) or the expensive Condrieu Château Grillet.

How to get the Pignolo that fits for my favorite genre?
The first time I came to Cormons I had nothing but the wine books and the drinking experience of what I call the big-label wines... Jerman, with Vintage Tunina as a prototype.

But, I was lucky to select Aquila D'Oro at Castello di Trussio for a wine and dine evening at this first visit. The owner of the restaurant (and castle), Giorgio Tuti, introduced us to the indigenous wines from great vintners: Ribolla from Gravner and Radikon, Tocai (now Friulano) from Vie de Romans…

But at the beginning, Giorgio Tuti and we were mutually risk-averse and exchanged only "safe" opinions. Later, when we knew each other better, he rolled his eyes imperceptibly when I asked for a dramatic Pinot Grigio from Ronco del Gelso… and served a documentary-clear, onion-colored Pinot Grigio from Pierpaolo Pecorari instead.

My preference for Borgo del Tiglio has its roots at that time - result of guided exploration, wine by wine.

The first Pignolo was from Dorigo. On a later visit he served a Pignolo magnum from another cult property: Moschioni. We already knew Moschioni's Refosco and Schioppettino and were surprised how clear and floral the Pignolo was.

2004, Jerman's Pignolo Special Edition for Giorgio Tuti
Giorgio Tuti sold some land around the Castello di Trussio to Jerman. And they came to an agreement that Jerman would plant Pignolo at the most qualified corner… and Giorgio would get a special edition of a selected year.

What I've suppressed: in less favorable years the Pignolo can be a bit rough…2004 was perfect (Pignolo's quality is volatile).

Last week, Giorgio Tuti sold me three bottles of his special edition. I will wait a few years before opening them… I hope.

PureTech, Giving Life to Science, is a science and technology development and commercialization company tackling tomorrow's biggest healthcare problems.

Their purpose is

radical innovation in health

and PureTech

has a thematic, problem-driven approach to starting companies, proposing non-obvious solutions rooted in academic research and developing them together with a brilliant group of cross-disciplinary experts

In short, PureTech focuses on taking science and engineering, primarily in the healthcare area, and developing innovative products and companies. Yet another incubator? No, much more…

PureProd?

In my factory automation time (25 years ago) I dreamed of establishing a "walk-in center" for complicated discrete manufacturing problems: with industrial-scale flexible manufacturing islands and labs… with researchers from distinguished academic institutions and industry, and practitioners from manufacturers... finding new operation plans, creating new tools and set-ups... and running concrete experiments on the most complicated parts.

It never materialized, because I failed to convince the authorities to make it happen (manufacturers and manufacturing system providers were enthusiastic)…but the idea was a kind of "PureProd".

PureIndMath

MathConsult, Andreas Binder CEO, is a spin-off company of the Industrial Mathematics Institute of the Johannes Kepler University of Linz. They also partner with the Radon Institute for Computational and Applied Mathematics (RICAM) of the Austrian Academy of Sciences. 100+ mathematicians and physicists work in this IndMath center in Linz - 25 of them at MathConsult.

MathConsult transforms its core competencies - numerical simulation and inverse problems - into complex systems and products for concrete industrial partners in the areas of metallurgy/chemical engineering, multi-physics problems, plastic deformation, dynamical multi-body systems, adaptive optics… Their key technologies embrace hundreds of cross-sectoral mathematical software programs, and there is translational research going on within MathConsult to create new systems for their partners.

PureQuant?

Those libraries were the key to adding a new competency: Computational Finance. Some of the approaches are presented here.

We built UnRisk and created the UnRisk consortium for technology development and commercialization. UnRisk concentrates on derivative and risk analytics and we've decided to unleash our technologies and provide the corresponding know how packages. What we haven't done: intensive fund raising (one of the PureTech strengths). But we offer options, like project-for-product cost arrangements…However, we think, we are a new kind of quant finance company. But this is "in the small".In the large, quant finance lacks radical innovation. Consequently, regulators decided to force a kind of bureaucratic regime by standardization and centralization. Nothing to become mad about…what should they do instead?Quantifying behavioral risk?There's the big discussion about state risk and behavioral risk. Fama got the nobel prize for showing that there's only state risk (EMH) and Shiller for emphasizing on behavioral risk.But behavioral risk will never be a topic that goes beyond intellectual discussions at market risk cocktail parties, if it doesn't become computational.And this is really hard work. A mathematical Hercules task. It can't be done by single groups alone. It needs collaboration and antidisciplinary. The mathematics may be influences by game theory, evolution theory…and probably the approach of cellular automata…but also "pure mathematics" will play an important role in the sense that its models may speak out a behavior (not only a state transition…).Don't do it alone - cooperate. cooperate. cooperate.But, this will only happen when the financial circles recognize that strict competition is an innovation killer. If we do not cooperate more, financal markets will be increasingly conducted by regulation and run into the next crisis based on regulatory arbitrage...PureTech is a great company.p.s. as Andreas posted yesterday we've received a research grant for doing the counter party risk valuation "the UnRisk way". If you want to partner, we will be happy to...