A Recommender System (RS) provides suggestions for items that may be of use to a user. This definition has three parts that need unpacking: item, user and useful suggestion.

Item is a general term for anything an RS recommends to users. It might be stocks, books, videos or anything else.

A user of an RS is usually an individual who lacks the experience or competence to choose items on their own. For instance, the popular online book store Amazon provides recommendations based on what a user has bought and viewed in the past.

To be useful, a suggestion (or recommendation) should support some decision-making process: what book to buy, what news to read, what to do in spare time. In its simplest form, it is a ranked list of items, where the ranking predicts which items are most suitable for the user.

An RS should track how users interact with items. In a book store, for instance, one might view a book title, look inside the book and/or buy it. Viewing a title can be considered an implicit sign of preference. Thus a typical recommender system deals with three entities: users, items and user-to-item actions (transactions).
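As a toy sketch of these three entities (the action names and weights below are my own assumptions, not a standard scheme), one can store transactions as (user, item, action) triples and rank items by a weighted popularity score:

```python
from collections import defaultdict

# Weights for signals of preference: viewing a title is a weak implicit
# signal, buying is a strong one. The values are assumed for illustration.
ACTION_WEIGHTS = {"view": 1.0, "look_inside": 2.0, "buy": 5.0}

def rank_items(transactions):
    """transactions: list of (user, item, action) triples.
    Returns (item, score) pairs sorted by total score, highest first."""
    scores = defaultdict(float)
    for user, item, action in transactions:
        scores[item] += ACTION_WEIGHTS[action]
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

transactions = [
    ("alice", "book_a", "view"),
    ("alice", "book_a", "buy"),
    ("bob", "book_b", "look_inside"),
    ("bob", "book_a", "view"),
]
print(rank_items(transactions))  # book_a scores 7.0, book_b scores 2.0
```

This is non-personalised (every user sees the same list); it only illustrates the data an RS works with.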

For an RS implementation to be successful, it should achieve one or more business goals:

Sell more items

Sell more diverse items

Increase user satisfaction and fidelity

Understand user behaviour and habits

Those goals can be approached by implementing one or more of the following tasks:

Find some good items: Create a ranked list of items along with predictions of how much the user would like them. This is the main task of many RSs.

Find all good items: Create a complete ranked list of items. This is usually required when the number of items is small and the user benefits from seeing the full ranking. Such RSs are quite common in financial applications, which often need to examine and rank all possible scenarios.

Annotate items in context: Given an existing context, emphasise items based on long-term user preferences. For instance, such an RS might emphasise TV shows in an EPG (electronic programme guide) based on previous user behaviour.

Recommend a bundle: Suggest a group of items that fit well together. You’ll find such bundles at cable internet providers, travel agencies, etc. For instance, airlines are starting to recommend accommodation and car hire during ticket purchase.

Recommend a sequence: Recommend a sequence of items that is pleasing as a whole. For instance, a recommended track of courses at a university might depend not only on the chosen major, but also on the courses already completed.

Browsing: Help the user browse the items most likely to be of interest in the current browsing session.

Improve user profile: This is a continuous task of any RS: it collects information about the user’s actions in order to provide more personalised recommendations.
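As an illustration of the first task, “find some good items”, here is a minimal user-based collaborative filtering sketch. The ratings data and the cosine-similarity weighting are made up for the example; the point is only the shape of the output, a ranked list with predicted scores:

```python
import math

# Toy user -> item ratings on a 1-5 scale (assumed data).
ratings = {
    "alice": {"book_a": 5, "book_b": 3},
    "bob":   {"book_a": 4, "book_b": 2, "book_c": 5},
    "carol": {"book_a": 5, "book_c": 4},
}

def cosine(u, v):
    """Cosine similarity between two sparse rating vectors."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    num = sum(u[i] * v[i] for i in common)
    den = math.sqrt(sum(x * x for x in u.values())) * \
          math.sqrt(sum(x * x for x in v.values()))
    return num / den

def recommend(user):
    """Rank items the user has not rated by similarity-weighted ratings."""
    me = ratings[user]
    preds = {}
    for other, theirs in ratings.items():
        if other == user:
            continue
        sim = cosine(me, theirs)
        for item, r in theirs.items():
            if item not in me:
                num, den = preds.get(item, (0.0, 0.0))
                preds[item] = (num + sim * r, den + sim)
    scored = {i: n / d for i, (n, d) in preds.items() if d > 0}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

print(recommend("alice"))  # book_c with a predicted rating between 4 and 5
```

A real RS would add rating normalisation, neighbourhood selection and far more, but the interface is the same: items the user has not seen, ranked with predicted preference.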

That is pretty much what one can expect from a recommender system. In the next posts I plan to cover:

Most probably you’ve noticed that the blog was not updated last year: I’ve been writing a book about programming and financial mathematics.

It was not easy for me. When I received the proposal to write a book in this area, I hesitated over whether it was something I could cope with. For a first-time writer it is a daunting task: you have to keep to a schedule and write consistently, at least half a page of text every day. Some parts of the book were easy to write, as I had already written about those topics. Others were awful to finish, as I did not really understand how to explain the mathematics and its links to Haskell in plain and clear language.

But finally the book is out and available in book stores like Amazon, O’Reilly and Safari Books Online.

I’m quite sure that I will now start writing this blog again. The new projects are quite far from financial mathematics, but they are still about mathematics and big data. Please also check our new company website to see what is in progress now.

Temperature is one of the principal quantities of thermodynamics. It is a macroscopic intensive variable: it does not depend on the bulk amount of elementary entities contained in the system. Let’s try to carry the physical definition over to the trading world. Thermodynamics defines temperature as

$$\frac{1}{T} = \frac{\partial S}{\partial U},$$

where $S$ is the entropy and $U$ is the internal energy of the system.

In statistical mechanics, entropy is a measure of the number of ways in which a system may be arranged, often taken as a measure of “disorder” (the higher the entropy, the higher the disorder). This definition makes the entropy proportional to the natural logarithm of the number of possible microscopic configurations (microstates) that could give rise to the observed macroscopic state (macrostate) of the system. For the sake of simplicity we take the constant of proportionality to be one:

$$S = \ln \Omega,$$

where $\Omega$ is the number of microstates.

The order book is in fact the set of all buy and sell orders. Let’s denote it as $\{(b_i, B_i)\}_{i=1}^{n} \cup \{(s_j, S_j)\}_{j=1}^{m}$, where $b_i$ ($s_j$) is the price and $B_i$ ($S_j$) is the number of contracts at that price for buy (sell) orders. Let’s normalise the amounts by the total number of buy and sell contracts, $T_b = \sum_i B_i$ and $T_s = \sum_j S_j$:

$$p_i = \frac{B_i}{T_b}, \qquad q_j = \frac{S_j}{T_s}.$$

Thus the entropy becomes the sum of the entropies of the buy and sell sides:

$$S = -\sum_i p_i \ln p_i - \sum_j q_j \ln q_j.$$
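With the book in this normalised form, the entropy is straightforward to compute. A minimal sketch with made-up prices and volumes (all volumes assumed positive):

```python
import math

def side_entropy(volumes):
    """Shannon entropy of one side of the book, with volumes
    normalised by that side's total number of contracts."""
    total = sum(volumes)
    return -sum(v / total * math.log(v / total) for v in volumes)

# Toy order book: (price, contracts) per level, for buy and sell sides.
buys  = [(99.0, 10), (98.5, 20), (98.0, 30)]
sells = [(100.0, 15), (100.5, 15)]

# Total entropy is the sum of the buy-side and sell-side entropies.
S = side_entropy([v for _, v in buys]) + side_entropy([v for _, v in sells])
print(S)
```

Note that a sell side split evenly over two levels contributes exactly $\ln 2$, the maximum for two levels, i.e. the most “disordered” configuration.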

The internal energy is the total energy contained in a thermodynamical system. It is the energy needed to create the system, excluding the energy needed to displace the system’s surroundings, any energy associated with the motion of the system as a whole, and any energy due to external force fields. Thus, to create the order book one needs all the money of the buy side and ownership of the securities of the sell side. There could be doubts about how to price the securities of the sell side, but we’ll take the easiest approach and value them at their quoted prices:

$$U = \sum_i b_i B_i + \sum_j s_j S_j.$$

Let’s try to derive a formula for the temperature under the given assumptions. First, the total differentials of the entropy and the internal energy should be obtained:

$$dS = \sum_i \frac{\partial S}{\partial B_i}\,dB_i + \sum_j \frac{\partial S}{\partial S_j}\,dS_j, \qquad dU = \sum_i b_i\,dB_i + \sum_j s_j\,dS_j.$$

Then we can find the derivative of the entropy with respect to the energy from the definition of the total derivative:

$$\frac{1}{T} = \frac{dS}{dU},$$

where, with $p_i = B_i/T_b$ and $q_j = S_j/T_s$,

$$\frac{\partial S}{\partial B_i} = \frac{1}{T_b}\Bigl(\sum_k p_k \ln p_k - \ln p_i\Bigr), \qquad \frac{\partial S}{\partial S_j} = \frac{1}{T_s}\Bigl(\sum_k q_k \ln q_k - \ln q_j\Bigr).$$

And substitution into the total derivative yields the formula for the temperature:

$$T = \frac{dU}{dS} = \frac{\sum_i b_i\,dB_i + \sum_j s_j\,dS_j}{\sum_i \frac{1}{T_b}\bigl(\sum_k p_k \ln p_k - \ln p_i\bigr)\,dB_i + \sum_j \frac{1}{T_s}\bigl(\sum_k q_k \ln q_k - \ln q_j\bigr)\,dS_j}.$$
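One way to sanity-check the temperature numerically is to perturb a single level of the book and take the ratio of the induced changes, since $\Delta U / \Delta S$ approximates $T$ along that particular direction of change. This is a finite-difference sketch of my own, with made-up prices and volumes:

```python
import math

def entropy(buys, sells):
    """Sum of buy-side and sell-side Shannon entropies."""
    def side(vols):
        t = sum(vols)
        return -sum(v / t * math.log(v / t) for v in vols)
    return side([v for _, v in buys]) + side([v for _, v in sells])

def energy(buys, sells):
    """Internal energy: money of the buy side plus the value of the
    sell side, priced at the quoted levels."""
    return sum(p * v for p, v in buys) + sum(p * v for p, v in sells)

buys  = [(99.0, 10.0), (98.5, 20.0), (98.0, 30.0)]
sells = [(100.0, 15.0), (100.5, 15.0)]

# Perturb the first buy level by a tiny number of contracts.
eps = 1e-6
bumped = [(buys[0][0], buys[0][1] + eps)] + buys[1:]

dS = entropy(bumped, sells) - entropy(buys, sells)
dU = energy(bumped, sells) - energy(buys, sells)
print(dU / dS)  # temperature along this particular perturbation
```

Note that because $dS/dU$ is a ratio of differentials, the value depends on which levels of the book are perturbed; this code probes only one direction.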

In physics, action is an attribute of the dynamics of a system. By definition, it is a functional over the trajectory (history) of the system:

$$\mathcal{S}[q] = \int_{t_1}^{t_2} L(q, \dot{q}, t)\,dt,$$

where $L$ is the Lagrangian. The real beauty of this description lies in the well-developed and well-studied machinery for solving the resulting equations.
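That machinery is the stationary-action principle: writing $\mathcal{S} = \int L\,dt$ for the action, the trajectory the system actually follows makes the action stationary, which yields the Euler–Lagrange equations:

```latex
\delta \mathcal{S} = 0
\quad \Longrightarrow \quad
\frac{d}{dt}\,\frac{\partial L}{\partial \dot{q}_i}
- \frac{\partial L}{\partial q_i} = 0
```

Solving these ordinary differential equations gives the system’s equations of motion directly from the Lagrangian.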

A Markov chain is a discrete-time random process with the Markov property. Its components are states and the transition probabilities between them. The Markov property states that the probability of the next state depends only on the current state, not on the earlier history.

So a Markov chain is given by a set of states and all transition probabilities between those states.

Simple chain for drift

Let’s assume that estimation of the drift parameter leads to the following two states:

Positive, i.e. the drift is greater than or equal to zero

Negative, i.e. the drift is less than zero

So one could construct a Markov chain for these states, as shown below.
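A minimal sketch of such a two-state chain (the transition probabilities are made-up numbers, not estimates from market data): a transition matrix, a step function, and the long-run share of time spent in each state:

```python
# States: 0 = positive drift, 1 = negative drift.
# P[i][j] = probability of moving from state i to state j (assumed values).
P = [
    [0.8, 0.2],   # positive -> positive / negative
    [0.4, 0.6],   # negative -> positive / negative
]

def step(dist, P):
    """One step of the chain: multiply a state distribution by P."""
    return [
        sum(dist[i] * P[i][j] for i in range(len(P)))
        for j in range(len(P[0]))
    ]

# Iterate from an arbitrary start; the distribution converges to the
# stationary distribution pi, which satisfies pi = pi * P.
dist = [1.0, 0.0]
for _ in range(100):
    dist = step(dist, P)
print(dist)  # close to [2/3, 1/3] for these numbers
```

In a real application the entries of P would be estimated from historical drift-sign transitions rather than assumed.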