Beautiful Curves, Graph Theory and “Tech Debt”

Last week, I attended three great industry collaborative events in London, each providing a distinct lens on model-based financial technology. Applying the Chatham House Rule, I’ll try to summarise each event’s highlights.

As each event focussed on distinct components of the same financial services analytics landscape – business need and opportunity, algorithms, models, software development and the technology stack – I conclude by examining how an ingredient from each event can inform a Solvency II-driven asset management and risk infrastructure.

A significant discussion examined the impact of Solvency II: how regulated actuarial and insurance firms, facing solvency challenges around long-duration insurance contracts (annuities and unit-linked investments, for example), pass investment and risk requirements on to their affiliate asset managers. Curve-dependent swap valuation was deemed challenging for insurers and the buy-side, given Solvency needs, though perhaps more familiar to the sell-side post-Basel and Dodd-Frank. The sell-side were applying curve methodologies, particularly to facilitate Credit Support Annex [CSA] contracts and collateral optimization strategies, and to model the associated counterparty risks: Credit Valuation Adjustment [CVA] and Funding Value Adjustment [FVA].

This results in an increased focus, among all participants, on good, validated multiple-curve calculations. “OIS-based valuation on its own”, to paraphrase one panellist, “is insufficient”. This extends notions of “big data” into “derived data”, posing challenges to the mathematicians deriving curves, the database experts managing them, and the business in consistently embedding such methodologies within its risk, valuation, compliance and asset management processes.
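To make the multiple-curve point concrete, here is a minimal, purely illustrative sketch: log-linear interpolation of discount factors over invented pillar quotes, with a projection curve kept separate from the OIS discount curve. All numbers are hypothetical and no vendor or panellist methodology is implied.

```python
import math

def make_curve(pillars):
    """Build a discount-factor curve from (time_in_years, df) pillars,
    using log-linear interpolation between pillars (a common, simple choice)."""
    times = [t for t, _ in pillars]
    lndfs = [math.log(d) for _, d in pillars]

    def df(t):
        if t <= times[0]:
            return math.exp(lndfs[0] * t / times[0])
        for i in range(1, len(times)):
            if t <= times[i]:
                w = (t - times[i - 1]) / (times[i] - times[i - 1])
                return math.exp(lndfs[i - 1] + w * (lndfs[i] - lndfs[i - 1]))
        # flat extrapolation in the zero rate beyond the last pillar
        z = -lndfs[-1] / times[-1]
        return math.exp(-z * t)
    return df

# Hypothetical pillar quotes: an OIS discount curve and a (lower-priced)
# projection curve -- the gap between them is why "OIS on its own
# is insufficient".
ois = make_curve([(1.0, 0.990), (5.0, 0.940), (10.0, 0.870)])
proj = make_curve([(1.0, 0.985), (5.0, 0.925), (10.0, 0.845)])

# Implied simple forward rate over [4y, 5y] from the projection curve,
# with the resulting cashflow discounted back on the OIS curve:
fwd = proj(4.0) / proj(5.0) - 1.0
pv = fwd * 1_000_000 * ois(5.0)
```

The separation of projection and discounting is the “opened box” here: each curve can be inspected, validated and swapped independently.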

Several panellists noted disconnects between actuaries’ notions of risk and those of their asset manager peers, reflected in their respective technology stacks and their investment and risk preferences.

Philosophies of good risk management software were discussed, particularly on the sell-side. Expanding desk-level, bottom-up platforms in some institutions differed from top-down, one-size-fits-all enterprise systems in others. However, one panellist emphasized “a black box won’t cut it; you need to open the box”, curve calculations being one critical case in point.

This next session was run by one of the more established MeetUps in the industry.

Graph theory and Probabilistic Graphical Modelling [PGM] are emerging in finance, and both are closely related to the world of linear algebra.

As the speaker suggested (a good overview can be found here), its finance origins lie primarily in systemic risk, driven by central banks and regulators concerned with assessing risks and stresses between economic participants. Think of Lorenz’s butterfly effect, where one event, say a default, can cause a chain of (adverse) contagious events.

The speaker argued for PGM’s relevance to portfolio management. He posed several fascinating recent examples, such as the potential impact of Scottish independence on a portfolio’s risk sensitivity. PGM-modelled extreme events could thus overlay portfolio theory, for example by supplying a PGM-oriented utility function into an optimization. Very interesting, though I struggle conceptually with using analyst opinion to calibrate the probability and likely sequence of event causality for an event – Scottish independence – that hasn’t happened since the times of Robert the Bruce, Edward Balliol and David II. For sure, independence might impact Scottish and UK debt and in turn stock performance, but quantifying that extreme event feels subjective.
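A contagion chain like the one discussed can be sketched as a toy graphical model. Every probability below is an invented, analyst-opinion-style input – which is precisely where the calibration difficulty lies:

```python
# A toy probabilistic graphical model for event contagion:
# Independence -> DebtRepricing -> EquitySelloff (a simple chain).
# All probabilities are hypothetical "analyst opinion" inputs.

p_indep = 0.20                              # P(independence)
p_debt_given = {True: 0.60, False: 0.05}    # P(debt repricing | independence?)
p_sell_given = {True: 0.50, False: 0.10}    # P(equity selloff | debt repricing?)

def p_selloff():
    """Marginal P(equity selloff), by enumerating the chain."""
    total = 0.0
    for indep in (True, False):
        p_i = p_indep if indep else 1.0 - p_indep
        for debt in (True, False):
            p_d = p_debt_given[indep] if debt else 1.0 - p_debt_given[indep]
            total += p_i * p_d * p_sell_given[debt]
    return total

print(round(p_selloff(), 4))  # -> 0.164
```

Enumeration works here because the chain is tiny; real systemic-risk graphs need proper inference machinery, but the structure – conditional probabilities propagated along edges – is the same.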

That said, against this backdrop, I can see how the approach can inform new thinking around asset management and perhaps even its regulation, particularly assessing impacts of predictable and unpredictable events on portfolios while also ensuring quant risk models incorporate sensible human judgment.

Finally, I attended a highly enjoyable, constructive and extremely interactive round table discussion featuring IT technologists from start-ups, consultancies and large banks, brought together by The Realization Group, affiliated to The Trading Mesh forum. This particular round table examined the fascinating world of non-functional requirements.

For novices of the jargon – I realise I am one – functional requirements differ from non-functional requirements approximately as follows. Functional requirements define “what” the system is meant to do, e.g. price instrument X using methodology Y and create Report Z which serves compliance regulation a.5. Non-functional requirements inform the “how”, namely the architecture. For example, a system must respond with latency x, carry user load y at peak, and support a potential future use case z which will need IT resources a and b. Part of the discussion premise was that business users tend to see functional requirements, while technologists see non-functional. The latter, if implemented badly, can be costly as the original system fails to scale and evolve to new demands. As a result, the group reinforced the need for IT and business to collaborate better. However, there were also suggestions that the classification of functional and non-functional was artificial. One FX IT lead noted that the systems he managed accommodated both functional and non-functional as normal, but that performance, for him, was business advantage.

The group further examined the semantics of the perceived differences, for example, how the technologist term “tech debt” equates to the business-friendly term “opportunity cost”.

This gave me insights into an analytics server product I work with, one that implements models into web, database, messaging system, reporting and other architectures, crossing research and IT functions. Typically, I see three “non-functional” request types relating to this particular product.

First, the customer (usually the IT department) prescribes specific requirements around, say, anticipated load, security protocols and latency response times, often with some idea of future scaling ambition. One common example: scaling asset allocation methodologies – some standard, some more niche (the functional requirements element) – to wealth managers, sales teams and clients directly, with each group adding particular IT performance challenges. In the light of this round table, I wonder who has defined the “non-functional” needs of the requirements we see. Is it IT without the input of the business, because this is deemed IT’s responsibility? Or have there indeed been constructive discussions between IT and the sponsoring business unit?

Second, the customer specifies scenarios along the lines of the above, but wants to iterate around them. Perhaps they are considering computationally difficult large-scale optimization methods over a substantial multi-asset universe, and they want to architect for the perceived trade-offs between a “challenging” yet business-differentiating algorithm and the accompanying performance impacts and IT capability. This sensitivity towards optimised numerical programming usually involves a quant developer, typically as part of a wider project team.

Third, the customer makes no specification. They want their IP in the market as soon as possible. In the case of a small firm, that might be understandable. We can guide, but they need to be fast and first to market, and they rationally (and quickly) assess risk accordingly. For larger organizations who carry risk, we, like many of the consultants at the round table, want to stress the importance of good architecture as the system potentially scales out in the future, protecting the customer’s – and our – credibility, and managing the attendant risks.

So, three collaborative events, each unique in its own way: curves and risk management, graph theory, and non-functional requirements, including the fascinating concept of “tech debt”.

Can we blend the key tenets of all three events in a single example? I think we can. Let me try to paint a picture.

Say we are asset managers with a new mandate from a Solvency II-compliant insurer to incorporate long-duration liabilities into our asset allocation projections, and possibly bespoke hedging instruments too. Our current risk management system supports multiple portfolio optimization methods, risk models and appropriate benchmarking. However, we have no curve infrastructure beyond what we take from our data provider(s), nor do we have swap valuation capabilities able to calculate CVA and FVA adequately. As our mandate spans decades, we are exposed to extreme events and perhaps want to future-proof our system to incorporate probabilistic graphical models. In addition, our insurance investor has to demonstrate a relatively transparent, good risk management process to their regulator. The following might be somewhat simplistic, but hopefully the key themes resonate.

We want to construct a curve object component that allows us to assess portfolio projections against nominal rates, inflation and market expectations, with the capability to select between multiple curve methods.
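Such a curve object might look like the following minimal sketch – a zero-rate curve with a selectable interpolation method, the “open box” the panellist asked for. Method names and pillar quotes are illustrative, not a standard:

```python
import math

class Curve:
    """A minimal curve object: zero rates at pillar tenors, with a
    selectable interpolation method. Method names are illustrative."""

    def __init__(self, tenors, zeros, method="linear_zero"):
        self.tenors, self.zeros, self.method = tenors, zeros, method

    def zero(self, t):
        """Interpolated continuously-compounded zero rate at time t (years)."""
        ts, zs = self.tenors, self.zeros
        if t <= ts[0]:
            return zs[0]
        if t >= ts[-1]:
            return zs[-1]          # flat extrapolation
        for i in range(1, len(ts)):
            if t <= ts[i]:
                w = (t - ts[i - 1]) / (ts[i] - ts[i - 1])
                if self.method == "linear_zero":
                    return zs[i - 1] + w * (zs[i] - zs[i - 1])
                if self.method == "log_linear_df":
                    # linear in t*z, i.e. log-linear in the discount factor
                    lo, hi = ts[i - 1] * zs[i - 1], ts[i] * zs[i]
                    return (lo + w * (hi - lo)) / t
                raise ValueError(f"unknown method: {self.method}")

    def discount(self, t):
        return math.exp(-self.zero(t) * t)

# Hypothetical nominal pillar rates for a long-duration projection:
nominal = Curve([1, 5, 10, 30], [0.010, 0.018, 0.024, 0.028])
print(round(nominal.discount(20.0), 4))  # -> 0.5945
```

Keeping the method a named, swappable choice is what makes the calculation auditable for a Solvency II-minded client, rather than an opaque data-provider feed.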

We want a view in our portfolio and risk dashboard to incorporate our curves into a long duration asset projection scenario analysis. This capability should extend into our (automated) reporting system.

We want to construct some dedicated swap pricing and risk sensitivity routines, perhaps in the medium term incorporated into our production analytics suite.
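A deliberately simplified sketch of such a routine: a receive-fixed annual swap valued off flat placeholder curves, with projection and discounting separated. Curve levels are invented, and the CVA/FVA adjustments discussed earlier are omitted:

```python
import math

def discount(t, zero=0.020):
    """Flat OIS discount curve -- a placeholder for the real curve object."""
    return math.exp(-zero * t)

def forward(t0, t1, zero=0.025):
    """Simple forward rate over [t0, t1] implied by a flat projection curve."""
    df0, df1 = math.exp(-zero * t0), math.exp(-zero * t1)
    return (df0 / df1 - 1.0) / (t1 - t0)

def swap_pv(notional, fixed_rate, years):
    """Receive-fixed annual swap: PV(fixed leg) - PV(floating leg),
    discounting on OIS while projecting on the separate forward curve."""
    pv_fixed = sum(notional * fixed_rate * discount(t)
                   for t in range(1, years + 1))
    pv_float = sum(notional * forward(t - 1, t) * discount(t)
                   for t in range(1, years + 1))
    return pv_fixed - pv_float

pv = swap_pv(1_000_000, 0.025, 10)
```

Risk sensitivities then fall out naturally by bumping a curve input and repricing; in production this would operate on proper curve objects rather than flat rates.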

We want to ensure our asset allocation routine can take a PGM-informed projection, to inform our medium term asset allocation and liquidity provision in times of stress.
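One hedged sketch of what “PGM-informed” could mean in practice: blend a normal regime with a PGM-supplied stress regime, weighted by the model’s event probability, inside a toy mean-variance utility. Every input below is hypothetical:

```python
# Feed a PGM-supplied stress probability into a toy equity/bond
# allocation, comparing the chosen equity weight with and without the
# stress overlay. All inputs are hypothetical.

p_stress = 0.15                              # event probability from the PGM
base   = {"equity": 0.06, "bonds": 0.02}     # expected returns, normal regime
stress = {"equity": -0.25, "bonds": 0.04}    # expected returns, stress regime
vol    = {"equity": 0.18, "bonds": 0.05}
risk_aversion = 4.0

def utility(w_eq, p):
    """Mean-variance utility of an equity/bond split, blending regimes
    by stress probability p (correlations ignored for brevity)."""
    w = {"equity": w_eq, "bonds": 1.0 - w_eq}
    exp_ret = sum(w[a] * ((1.0 - p) * base[a] + p * stress[a]) for a in w)
    var = sum((w[a] * vol[a]) ** 2 for a in w)
    return exp_ret - 0.5 * risk_aversion * var

grid = [x / 20 for x in range(21)]           # candidate equity weights 0..100%
best_naive  = max(grid, key=lambda w: utility(w, 0.0))
best_stress = max(grid, key=lambda w: utility(w, p_stress))
# The stress-aware allocation holds less equity than the naive one,
# which is exactly the liquidity-in-stress behaviour we want.
```

The point is not the toy numbers but the plumbing: the PGM output enters as an ordinary parameter of the utility function, so the allocation routine need not know how the probability was derived.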

We want to make our dashboard available to our (insurance) client. The dashboard should be available via an extranet page to which our clients have security privileges, and should satisfy peak load (x concurrent sessions, as a batched analysis beyond normal loads), perhaps for end-of-month compliance reporting. As we take our dashboard to other prospective clients, we anticipate future growth in its use, and therefore system load, by an order of magnitude.

We therefore have functional and non-functional requirements to inform our risk-based conversations with our insurer client, perhaps involving our IT team and technology advisors. As they are insurers, they should by default be risk-sensitive and happy to consider “tech debt” concepts.

In conclusion, the three different community events I attended – involving multiple participants from all strands of the industry, absolutely great to see – can inform different perspectives on financial analytics technology. However, only by incorporating advances and best practices from all, specifically developing models to serve business need, embracing innovative algorithms in risk-appropriate cultures, and defining requirements to reduce tech debt and opportunity costs, can we create, scale, deploy and maintain good, powerful systems.

The events I attended last week were terrific. Long live the vendors, communities, forums and participants who sponsored, facilitated and helped make the collaborations constructive. Hopefully the conversations will continue.

How do the approaches above resonate with your experiences?

What is your approach to curve modelling?

Have you applied graph theory in finance?

How do you account for non-functional requirements and tech debt?

What financial communities, MeetUps and forums do you participate in?