The Automated Trader Interview

"There was no budget constraint, just a goal," says Richard Franklin, CEO of Boronia Capital, summarising his brief to start with a 'clean slate' and deliver a fully automated and co-located trading operation. How often do you get to say that, and how often do you achieve such a goal - as Richard Franklin did without significant disruption, while maintaining a significant alpha generating outcome, and within a few years? Andy Webb went to meet Richard Franklin and Boronia Capital's head of research, Chris Mellen.


Andy Webb: Let's start with the background. Tell me about Boronia, and also about your own background, Richard.

Richard Franklin:
Boronia Capital is a CTA - managed futures - that started in 1991
developing a trading model based on the thesis work of our
co-founder, Richard Grinham. What's interesting is that this
model, a long volatility strategy, was developed in Australia
with no influence or direction from what was happening in the
northern hemisphere. At that time it was unique.

The company was previously called Grinham Managed Funds. The directors - Angus and Richard Grinham - decided to change the name to Boronia Capital in 2008 to reflect the growth of the company, removing the family name in recognition of the many other talented employees now contributing to its ongoing success.

From those early beginnings the core insight has been diversified across multiple timeframes and multiple markets covering all four asset classes - Equity Index futures, Interest Rate futures, Commodity futures and FX Spot (originally, FX futures). The fact that this
one model has traded these asset classes across timeframes
ranging from intraday to long term - three to four weeks - for
the last 20 years is, I think, a testament to its robustness.
Most of the work done over the last 10 years has been incremental and primarily for diversification. The market insights targeted by the model are as relevant today as they have been over the last 20 years, and the model remains the core insight traded today.

My background is in software system development
and specifically in the development of corporate treasury
systems. I started at Boronia seven years ago as head of
technology, and the brief was to develop the computer systems we
needed to fully automate trading in our long volatility strategy.
The directors had also made the decision to invest heavily in
building a research team, and our systems would need to support
their activity as well. It was also apparent to us that the
markets we traded were about to go through a fundamental
technological change, which suited our plans.

Andy Webb: A fundamental technological change?

Richard Franklin: The
major futures markets we traded were already fully electronic.
Commodity markets were moving from floor trading to electronic
trading, and multibank FX ECN-style platforms were gaining
significant traction for the buy side. Platforms like EBS and
Reuters were already used extensively by the sell side. We saw
this as a great opportunity for the buy side to automate, and
there was significant downward pressure on transaction costs.
Electronic trading was much cheaper than pit trading, but also
automated API-based execution was taking humans out of the cycle
completely.

Lower transaction costs meant that higher frequency trading was
no longer marginal, but also, as transaction costs were dropping,
the volume of market data was growing exponentially. We wanted
this data in real time for both our trading and execution
strategies - there was no way that manual systems would be able
to cope.

Also in those days, we were amazed to find that the major futures
exchanges were deleting their full depth-of-book market data. If
they did maintain the data, it was only for a very short period,
and then only five levels of depth. Chris and I did presentations
to these exchanges to convince them that this data was very
valuable and if they kept it we would buy it - and ultimately
increase our trading volumes if it could also be delivered in
real time. They were concerned about bandwidth, which was strange
given that co-location facilities were readily available.

Andy Webb: Chris, perhaps you could pick that up, but also, I haven't asked you about your background?

Chris Mellen: I was the
first full-time researcher employed by the company; I started
over ten years ago. At that time we were trading only the core
long volatility strategy and my initial work revolved around
supporting further development of that program. The primary
research objective back then, and still today, was to maximise
the risk-adjusted return of the Boronia program. That goal has
seen us continue to diversify our core long volatility model
across markets and timeframes.

Six years ago we decided to begin work on a non-core portfolio
that would support the core strategy during times when market
conditions were not well suited to a long-vol style of trading.
Essentially, the non-core models are intended to further improve
our risk-adjusted returns. The non-core income stream doesn't
meaningfully dilute our overall long-vol return profile - which
of course is important if we are to prevent style drift.

Today, the non-core models have evolved to be a diversified
collection of trading insights which share the important property
of delivering income streams that are both uncorrelated with
those of the core strategy and also subject to risk factors
different from those of the core strategy. Additionally, these
insights are, by selection, capable of delivering high Sharpe
Ratio outcomes - hence we are able to run them at a relatively
low level of overall risk - perhaps only 10%-15% of total program
risk - and yet still deliver meaningful returns.

We've also focused on continual improvements in the area of
algorithmic execution. Around 90 per cent of our trade execution
is now done algorithmically by a portfolio of execution models.
If we wished, we could be at 100 per cent algorithmic execution,
but we feel that retaining the human component delivers
diversity, and additionally, important and useful stand-alone
execution insights.

We have a research team of 20 people, assembled from scratch over
the past six to seven years. The people in the team have a
diverse range of academic and industry backgrounds with a
majority being PhD-qualified.

Andy Webb: I'm interested in both your research process and also the evolution of Boronia's technology and infrastructure. Chris, perhaps we could talk about the research process first?

Chris Mellen: From the
start we've taken a structured and academic approach to research.
We try to be rigorous around evaluating insights and have a focus
on making evidence-based decisions. As the team has grown, we
have developed a disciplined approach to project management - the
development and evaluation of each research insight being a
project. This helps us manage research risk and achieve research
goals in reasonable timeframes.

Today, our research process - including delivery to clients -
starts with us taking an idea or an insight through the research
work necessary to get to proof of concept [POC]. To meet our POC
requirements an idea must typically demonstrate: robust
walk-forward trading outcomes across markets and timeframes;
robustness in the model parameter space; ability to deliver an
income stream that has low correlation with other existing income
streams; and sufficient scalability. Then we move into model
specification and subsequent implementation in our automated
trading platform, Omega. The specification and implementation are carried out by the research and IT implementation teams.
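
For illustration only, a minimal walk-forward harness along the lines of that POC requirement might look like the Python sketch below. The fit and evaluate functions are hypothetical stand-ins for whatever model is being tested, not Boronia's own platform; the point is simply that calibration only ever sees in-sample data and results are collected out of sample.

```python
import numpy as np

def walk_forward(prices, fit, evaluate, train_len=1000, test_len=250):
    """Roll a train/test window through a price series and collect
    out-of-sample results. fit and evaluate are hypothetical stand-ins."""
    results = []
    start = 0
    while start + train_len + test_len <= len(prices):
        train = prices[start:start + train_len]
        test = prices[start + train_len:start + train_len + test_len]
        params = fit(train)                      # calibrate on in-sample data only
        results.append(evaluate(params, test))   # score on unseen data
        start += test_len                        # step forward by one test window
    return np.array(results)

# Toy usage: a do-nothing "model" evaluated on random-walk prices.
rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(size=5000))
fit = lambda train: {}                               # no parameters
evaluate = lambda params, test: test[-1] - test[0]   # naive buy-and-hold P&L
print(walk_forward(prices, fit, evaluate))
```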

The testing process involves the verification of platform
implementation through simulated live trading - with realistic
execution - on an in-house exchange. This lets us shake down the
system without risking any capital. The final step to verify
implementation is house-account trading, and the release to client trading is followed by ongoing performance verification, trade-level validation and management reporting.

Andy Webb: Can you talk about the team and also the tools you use?

Chris Mellen: Our
research team is an umbrella group that comprises the wide
variety of personnel who come together to get a trading system
from idea to product. When we're recruiting researchers, we take
a very open-minded approach. We might, for example, recruit from
academia with the expectation of zero prior experience in the
trading space. Here, we look for PhD-qualified individuals with
strong numerate or quantitative backgrounds and a proven research
potential. Typically they will have backgrounds in physics,
maths, computer science, engineering and, sometimes, finance.

As part of the recruitment process, we ask final-round applicants
to do something original with a set of time-series data that we
supply. They have three to four weeks, and then they give a
presentation on what they've found or constructed. It works very
well for both sides - the candidate gets to have a 'taste' of the
sort of work they are letting themselves in for, while we get to
see how well they are able to apply their skills, perseverance
and imagination. Most candidates seem to really enjoy this task -
and we certainly get a lot out of seeing what people can do.

We also recruit non-PhD-qualified individuals with three to five-plus years of significant, relevant industry experience and
typically a skill-set or knowledge base that is directly relevant
to what we do. Such recruits can hit the ground running and also
typically bring differing research viewpoints and model insights.

We encourage our researchers to use a set of standard languages
and programming environments in their research and development
tasks. Typically, these include Matlab, C++, Python/Ruby and SQL.
With regard to operating systems, we are in principle fairly agnostic. However, for what are probably historical reasons, Microsoft operating systems tend to dominate our desktop environment today.

It's probably no surprise to learn that over the years we have
used these languages and environments to build and develop a
range of specialised simulation and analysis platforms and
frameworks. It is these platforms that our researchers use to
test and analyse their trading insights. I find it interesting
that the available platforms tend to be tailored to suit
different types of data. For example, our platform for assessing
very-short-term trading insights operates at the tick-data level,
is consequently written in C++ for speed, is parallelised,
attempts to do tick-by-tick simulation of limit order book
queuing and trade filling, and also incorporates a simple
market-impact model. On the other hand, a simulation platform
designed for use with down-sampled data (for example, data
sampled at a daily frequency) might be written in Matlab or
Python/Ruby but might also have the flexibility of dealing with
many different types of data source in addition to simple price
time-series.
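
As an illustration of what "tick-by-tick simulation of limit order book queuing and trade filling" can involve, here is a deliberately simplified Python sketch. It tracks only the queue ahead of one resting order at a single price level; the class and fill logic are assumptions for illustration, not Boronia's C++ platform.

```python
class RestingOrderSim:
    """Toy model of queue position for one limit order at one price level.

    queue_ahead is the volume resting at our price that arrived before us.
    Cancels ahead of us and trades at our price both shrink that queue;
    once it is exhausted, further trade volume fills our order.
    """

    def __init__(self, our_qty, queue_ahead):
        self.our_qty = our_qty
        self.queue_ahead = queue_ahead
        self.filled = 0

    def on_cancel_ahead(self, qty):
        self.queue_ahead = max(0, self.queue_ahead - qty)

    def on_trade_at_our_price(self, qty):
        # Trade volume first consumes the queue ahead of us...
        consumed = min(qty, self.queue_ahead)
        self.queue_ahead -= consumed
        qty -= consumed
        # ...then fills our order.
        fill = min(qty, self.our_qty - self.filled)
        self.filled += fill
        return fill

# Usage: 5 lots resting behind 12 lots of queue; two trades print at our price.
sim = RestingOrderSim(our_qty=5, queue_ahead=12)
print(sim.on_trade_at_our_price(10))  # 0 filled, 2 lots still queued ahead
print(sim.on_trade_at_our_price(6))   # 4 lots filled
```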

Andy Webb: Tell me about the infrastructure that supports your research team.

Chris Mellen: Ten years ago, the infrastructure was very low-power - we dealt exclusively with daily time-series, and a workstation with 1GB of RAM and a 386 processor was more than sufficient for what we had to do. Soon, though, we began to look into the uses of tick data for research and simulation - and once we started down that path our research infrastructure had to grow to meet demand.

Within twelve months of my joining, we were using office
workstations to do simple after-hours distributed computing -
staff had to leave workstations on at night and over weekends. We
then moved to using a single rack of small-form-factor low-cost
PCs so that we had access to dedicated always-available compute
resources. From there we have continued to expand our compute
facilities into blade technology to the point where today we have
over 500 cores available for research compute use. To supply data
we have installed over 20TB of SAN storage linked via a 10Gb backplane and developed a proprietary distributable file-system. Supporting our data library is a dedicated data group: this full-time team is tasked with data acquisition, cleaning and delivery, and also assists in preliminary data exploration.

All research and analysis platforms have been developed in-house
using a mix of commercial and open-source software. An important
part of our research infrastructure is our proprietary simulation
platforms. These cover a range of simulation timescales, right
down to tick-by-tick. When operating at the tick-by-tick level we
directly simulate the execution process, including order filling,
market impact and interference between competing execution and
trading systems. Researchers with current responsibility for
models active in our production portfolio also have access to a
range of production-related evaluation and validation tools.

Andy Webb: Richard - I understand that Boronia was phone-trading from spreadsheets six years ago and today is fully automated and co-located. Talk me through the change process.

Richard Franklin: I
started with a clean slate. The goal was to fully automate our
trading - defined by the directors as "no human intervention".
The new system was also required to process market-depth data on
a tick-by-tick basis. Up until that point our systems had only
processed daily data. There was no budget constraint, just a
goal.

We used data-flow diagrams and business-process models to create a blueprint of how a fully integrated, fully automated system might look. At a high level, this translated into a number of services responsible for order generation, market monitoring, trade execution and allocation - and also management modules to monitor the system performance. This blueprint was important because we were embarking on a two- to three-year development - with more than ten developers - and we needed a target. The directors also needed something to agree to!

The other really important aspect of this planning was how we
would transition from the legacy applications to the new systems
while still trading 24/5. Developing a system of this size must
be done in phases, rather than as a big bang at the end, which
means that for about two years we were running on a hybrid of
legacy and new systems as we rolled out each new module. We had
to develop a number of interfaces between the old and new which
were eventually discarded as the project unfolded.

There was always pressure to get to the end on
time as the research team started rolling out new trading models
that were reliant on the new system being completed. The first
decision was the architecture - it needed to be service-oriented
and message-based. We did a cursory evaluation of what was on the
market at the time and basically the products that did what we
needed were priced for banks - way more than we could
justify.

The design of the system was the next most important decision -
we opted for persistence of data rather than speed of processing.
What I mean by this is that for every significant event in the
trading cycle we would write the current state - transaction
status - to disk so that if there was a system crash we could
always recover to an exact point. We were going paperless and
therefore needed this degree of certainty. If we had gone in the
speed direction, we would have done all our processing in-memory,
and written to disk after the fact.
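
A minimal sketch of the "persist every significant event" idea, in Python. The append-only JSON-lines journal and the field names are assumptions for illustration only; the point is that each state transition hits disk before the trading cycle moves on, so a crash can be recovered to an exact point by replaying the file.

```python
import json
import os

JOURNAL = "transactions.journal"   # hypothetical journal file name

def record_state(txn_id, status, **fields):
    """Append one transaction state transition and force it to disk
    before processing continues."""
    entry = {"txn": txn_id, "status": status, **fields}
    with open(JOURNAL, "a") as f:
        f.write(json.dumps(entry) + "\n")
        f.flush()
        os.fsync(f.fileno())       # durability before the next step

def recover():
    """Rebuild the last known state of every transaction after a crash."""
    state = {}
    if os.path.exists(JOURNAL):
        with open(JOURNAL) as f:
            for line in f:
                entry = json.loads(line)
                state[entry["txn"]] = entry
    return state

record_state("T1001", "ORDER_SENT", instrument="ES", qty=10)
record_state("T1001", "FILLED", price=4501.25)
print(recover()["T1001"]["status"])   # FILLED
```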

To process large amounts of tick data, it must either be stored and accessed in memory or held in a very fast database. Our minimum acceptable benchmark was 45k ticks per
second. The markets we trade average around 12k ticks per second
and we've seen peaks at 30k ticks per second. We opted for an
in-memory database and stopped testing when our systems could
handle 50k ticks per second without buffering because we had no
requirement for any more than this. We did evaluate a couple of
high-speed databases - excellent products, but once again priced
for banks.
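
To illustrate the kind of benchmark being described, here is a trivial Python throughput check against the 45k ticks-per-second floor. The tick structure and the deque standing in for the in-memory store are hypothetical; a production test would run against the real store, in C++, and measure sustained rather than burst rates.

```python
import time
from collections import deque

TARGET_TPS = 45_000          # minimum acceptable ticks per second

def benchmark(n_ticks=500_000):
    book = deque(maxlen=1_000_000)          # stand-in for the in-memory store
    ticks = [(i, 100.0 + (i % 7) * 0.25, 1 + i % 5) for i in range(n_ticks)]
    start = time.perf_counter()
    for t in ticks:
        book.append(t)                      # "process" the tick
    elapsed = time.perf_counter() - start
    tps = n_ticks / elapsed
    print(f"{tps:,.0f} ticks/sec ({'PASS' if tps >= TARGET_TPS else 'FAIL'})")

benchmark()
```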

All our execution was required to be API-based and multi-broker.
Single-broker platforms were not an option for us, primarily to
protect against operational risk. Also, we had no room on our
trading desks for single-bank UI-based trading. From the start we
made the decision to standardise on FIX for external messaging -
for futures trading, FX spot price delivery and FX spot trading.
We chose QuickFIX because it was open-source and developed in C++, which we were very comfortable with. At that time, only one of our execution brokers had a FIX-based API - today, it's commonplace. The architecture around our gateways was also an
important factor, and more so when we ultimately decentralised
our trading. Poor design would definitely have resulted in
unacceptable latency.
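
Boronia's gateways were built in C++, but to show the general shape of a QuickFIX integration, here is a minimal initiator skeleton using the QuickFIX Python bindings. The configuration file name is a placeholder and the callbacks are left mostly empty; it only illustrates where session management and application messages arrive.

```python
import quickfix as fix

class Gateway(fix.Application):
    """Skeleton FIX application: QuickFIX calls these hooks for
    session lifecycle events and message traffic."""
    def onCreate(self, sessionID): pass
    def onLogon(self, sessionID): print("logged on", sessionID)
    def onLogout(self, sessionID): print("logged out", sessionID)
    def toAdmin(self, message, sessionID): pass      # outgoing session-level
    def fromAdmin(self, message, sessionID): pass    # incoming session-level
    def toApp(self, message, sessionID): pass        # outgoing orders etc.
    def fromApp(self, message, sessionID):           # fills, rejects, quotes
        print("received", message)

settings = fix.SessionSettings("gateway.cfg")        # placeholder config file
app = Gateway()
store = fix.FileStoreFactory(settings)
log = fix.FileLogFactory(settings)
initiator = fix.SocketInitiator(app, store, settings, log)
initiator.start()
```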

More recently, we split the system between trading and
administrative functions. This meant we could co-locate our
trading systems with the execution monitors and gateway controls.
All subsequent trading activity is synchronised back to a
centralised database in real time. Allocations, performance,
slippage and many of the other management functions are linked to
the master database. The major consideration here was the
guaranteed delivery of trading data back to the master - over a
telephone line - and also DR planning if the connection was lost.
Closing down the trading was not an option.

Decentralisation of trading also brought with it other technical issues that required resolution. We use VaR as a top-down risk management tool and this means that current VaR positions, portfolio-wide, need to be up to date at all the trading sites in real time.
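
For readers unfamiliar with the mechanics, a one-day historical-simulation VaR for a futures portfolio can be computed along the lines of the generic Python sketch below. It is not Boronia's system; the positions, contract point values and scenarios are invented, and in a decentralised setup each site would recompute (or receive) the figure whenever positions change.

```python
import numpy as np

def portfolio_var(positions, scenario_returns, point_values, confidence=0.99):
    """Historical-simulation VaR.

    positions:        contracts held per market, shape (n_markets,)
    scenario_returns: historical price changes per market, shape (n_days, n_markets)
    point_values:     dollar value of a one-point move per contract
    """
    pnl_scenarios = scenario_returns @ (positions * point_values)
    return -np.percentile(pnl_scenarios, 100 * (1 - confidence))

rng = np.random.default_rng(1)
positions = np.array([50, -120, 30])                 # hypothetical book
scenarios = rng.normal(0, [15, 0.4, 1.2], size=(500, 3))
point_values = np.array([50, 1000, 1000])
print(f"99% one-day VaR: ${portfolio_var(positions, scenarios, point_values):,.0f}")
```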

Andy Webb: Is there a role for human traders in the commodity markets?

Richard Franklin:
Earlier I said that our goal was total automation. I also
mentioned that automated execution is preconfigured with an
appropriate mix of algorithms. To give you some colour, we have
three styles of algorithm: fast, where the objective is to take
some price risk off the table without impacting the market;
smart, where we use some of our IP to either accelerate or
decelerate execution in response to a short-term view on price
direction and volume; and VWAP for an average price over a
nominated period. A mix of these algorithms is pre-configured for
each market based on our knowledge of that market.
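
A stylised sketch of what a pre-configured per-market algorithm mix might look like. The weights and market codes are invented for illustration, and the "trader" slice simply anticipates the manual allocation described next; none of this is Boronia's actual configuration.

```python
# Hypothetical per-market execution mix: fractions sum to 1 for each market.
ALGO_MIX = {
    "ES": {"fast": 0.5, "smart": 0.4, "vwap": 0.1, "trader": 0.0},
    "KC": {"fast": 0.1, "smart": 0.3, "vwap": 0.3, "trader": 0.3},  # harder market
}

def split_order(market, qty):
    """Split a parent order across execution styles per the configured mix."""
    mix = ALGO_MIX[market]
    slices = {style: int(round(qty * w)) for style, w in mix.items()}
    slices["smart"] += qty - sum(slices.values())   # rounding residual to 'smart'
    return {s: q for s, q in slices.items() if q > 0}

print(split_order("KC", 37))   # e.g. {'fast': 4, 'smart': 11, 'vwap': 11, 'trader': 11}
```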

Now, having said that, we get value by allocating a proportion of
our trade execution to a trader. We found that this allocation
works best, not surprisingly, for the more difficult markets to
trade and especially in our longer term models which generally
trade in lower volumes. How this works is that the execution
algorithm allocates a proportion of each order to be filled by
traders. They decide the quantity of the order to place in the
market each time, the price and also the speed with which they
move the order in response to the movement of the market price.
This is all done on a UI which is generating all the FIX messages
in the background - nothing is done by phone.

To answer your question, over the last six years we have
transitioned from seven to three traders. In the same period we
have gone from no IT support during European/US trading to full
IT support. The three support staff have "Series 3" licences, so they can help with execution if necessary, although that is not their primary responsibility. We don't have traders during the Asian time zone, so the three traders cover from the start of the European session until the end of the US session. During this time they will execute their allocation
of the trading in addition to fulfilling their other
responsibilities.

We find that if traders are doing some of the execution they are
in tune with what's happening in the market and can make good
judgments as to how the automated trading is travelling. We can
evaluate the automated strategies against the traders and get
some valuable insights for improving the algorithms, especially
in the trickier markets. Also, human trading mixes things up a
bit and disguises the footprint. And it's very easy for the
personal relationships we have with our brokers to break down
when the computers take over all the execution - today we would
only execute trades by phone in a DR scenario. In the futures
space we've had long-term relationships with UBS, DB and NewEdge
and more recently with GS.

The traders observe and report on trading
activity for each of the models that are in production - insights
from the coal-face can be very useful. They also provide insight
into the mix of execution algorithms that have been configured
for each market. They have the tools to observe and analyse the actual execution, the algorithmic response to what's happening in the market, and slippage. In a sense the traders are
managing the execution process rather than simply participating
in it. Our objectives are to maximise capacity, minimise impact,
hide our footprint and minimise slippage, and somewhere in that
mix, we approach what we would define as best execution.

Andy Webb: Interesting. Where do you stand on latency?

Richard Franklin: As I mentioned before, we are co-located but
this doesn't mean that low latency is critical. We derive most
benefit from being close to the data as opposed to close to the
market. Our systems process every tick in real time and this
ultimately translates into about 15 gigabytes of market data
daily just from Chicago. It makes no sense to ship this quantity
of data over telephone lines, process it and then send a trade
back to the exchange. It's much smarter to process the data -
delivered over fibre optic connections - close to the exchange,
generate, place and fill the trades and then send the results of
the trading over a telephone line to a master database soon after
the fact.

We take care to be near the front in the latency race, but none
of our models directly exploit latency as a means of gaining
their edge. In fact, we take care during the research process to
understand the effect of latency on model performance and will
tend to avoid trading insights with undue sensitivity in this
area. We feel that the gains from being at the front in the
latency race are too expensive for the likely magnitude of
returns generated - remember, we are a $1+ billion fund. We
prefer to focus on developing robust trading insights driven by
market fundamentals.

Andy Webb: Okay. I hear what you're saying about latency, but I'm interested in your experience with co-location - the benefits, the practical issues.

Richard Franklin: The biggest benefit we derive is that we are now
on a level playing field. Boronia is an Australian company and
all our staff are based in Sydney. Today's technology provides us
with the ability to put a rack of servers in Chicago and that
puts us in the same space as our peers. We are no longer trading
one or two seconds behind everyone else.

We chose to co-locate in a facility where we self-monitor the
performance of the infrastructure. We needed to do this anyway,
and much of our monitoring is built into our software systems. We
pay an hourly rate for "smart hands" but have not used this
service other than for equipment installation. All the equipment
in the rack is IP-addressable, which means we can press the
buttons ourselves remotely. Co-location provides the option for
sponsored direct access to the exchange. We chose not to do this,
preferring to have support for trade execution. Our execution
brokers are also in the same facility and our connection to them
is also by fibre optic cross-connects. We've had no downtime in two-plus years.

We have only had good experiences with co-location. Rack space
rental is excellent value. Our infrastructure consists of eight
servers, two high speed switches and two fibre-optic converters.
There is no single point of failure. We test this infrastructure
each quarter as part of our regular DR testing.

There are no negatives to co-location, but the biggest issue that
we needed to address was DR-related. If we lose visibility in the
co-location site, shutting down the trading is not an option. Our
bottom-up risk management means that we have stop-loss protection
for every position we hold. We can't just shut this down if we
can't see what's going on. We implemented a bi-directional
heartbeat strategy to continuously check the health at both ends
and designed into the systems an appropriate response if things
went wrong, including a means by which we could ship the book
back to base if required. This was a lot of work and it also
needs to be tested regularly.
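
A minimal sketch of a bi-directional heartbeat along the lines described: each end sends a periodic ping and watches how stale the other side has become, escalating to a configured response after a timeout. The transport, addresses, intervals and the response itself are all assumptions for illustration.

```python
import socket
import threading
import time

PEER = ("203.0.113.10", 9100)   # placeholder address of the other site
TIMEOUT = 5.0                   # seconds without a heartbeat before escalating

last_seen = time.monotonic()

def send_loop(sock):
    while True:
        sock.sendto(b"HB", PEER)        # outbound heartbeat
        time.sleep(1.0)

def recv_loop(sock):
    global last_seen
    while True:
        sock.recvfrom(64)               # any datagram from the peer counts
        last_seen = time.monotonic()

def watchdog():
    while True:
        if time.monotonic() - last_seen > TIMEOUT:
            handle_loss_of_visibility()
        time.sleep(1.0)

def handle_loss_of_visibility():
    # Placeholder for the designed response, e.g. shipping the book back to base.
    print("peer heartbeat lost - triggering contingency")

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 9100))
threading.Thread(target=send_loop, args=(sock,)).start()
threading.Thread(target=recv_loop, args=(sock,)).start()
threading.Thread(target=watchdog).start()
```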

Andy Webb: If you were able to go back and start again, what would you do differently - if anything?

Richard Franklin: Our
trading systems have reached a level of maturity now where work
on the existing modules is generally related to re-factoring for
incremental improvements. We have been very fortunate that the
work coming out of our research team has generally fitted into
the existing design. Having said that, and with full hindsight,
I'm happy to talk about what we would have done differently.

There are a number of generic products that we would seriously
consider using today if we were starting from scratch. Object
Trading and Wombat seem to have great DMA technology, for
example. Cameron and a couple of other systems have excellent
reputations with FIX. Progress Apama, One Tick and StreamBase all
appear to have excellent CEP products, and we really like the look of Solace's hardware-based messaging product. We would have
saved a lot of time using these products out of the box.

The other area where we could have saved a lot of time and effort was global infrastructure, and specifically co-location infrastructure, including access to exchanges and
connection bandwidth. I'm not talking about the software here,
just the hardware. We spent time with 7 Ticks, and must have had
a brain-snap not to go with them - opting to do it ourselves
instead. As we expand our co-location initiative we'll revisit
this decision.

Andy Webb: Going back to research, Chris, you mentioned verification and validation. Could you expand on that?

Chris Mellen: We
consider it to be pretty important to track and evaluate the
in-production performance of our various trading models. We look
to validate models at two different levels. First, trading
behaviour. Is each model undertaking the mechanical operations
that we expect? We perform a daily comparison between the trading
actions of each production model and those of the offline
research reference model. Any statistically significant deviations between the two are flagged as discrepancies and followed up.

Secondly, trading outcomes. Are these matching, in a statistical
sense, our backtested expectations? Significant discrepancies
might suggest that our backtest models need to be re-examined, or
that in some way our production implementation is failing to
match our assumptions and remedial action is required.
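
As an illustration of that second check, a rough Python sketch of flagging a production/backtest divergence: compare daily P&L from the live model with the reference model's expectation and flag when the mean gap sits several standard errors from zero. The threshold, data layout and toy numbers are assumptions, not Boronia's actual tests.

```python
import numpy as np

def divergence_flag(live_pnl, reference_pnl, z_threshold=3.0):
    """Flag statistically significant divergence between production and
    reference daily P&L series for the same model over the same period."""
    diff = np.asarray(live_pnl) - np.asarray(reference_pnl)
    se = diff.std(ddof=1) / np.sqrt(len(diff))     # standard error of mean difference
    z = diff.mean() / se if se > 0 else 0.0
    return abs(z) > z_threshold, z

rng = np.random.default_rng(2)
ref = rng.normal(1000, 5000, size=250)             # reference-model daily P&L
live = ref + rng.normal(-400, 500, size=250)       # production with a systematic shortfall
print(divergence_flag(live, ref))                  # flags the drift
```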

Andy Webb: I'd be keen to hear your take on the market over the last four years relative to the nearly twenty years that Boronia has been trading. Has there been a fundamental shift in the trading paradigm, and how have your models evolved over this period?

Chris Mellen: Certainly
the way that market participants interact with the market has
evolved over the past four years. I'm thinking of the increasing
use of algorithmic execution platforms and also the significant
numbers of participants now pursuing HF-style trading strategies.

We don't consider that the basic drivers of market dynamics have
changed in any fundamental way. Speculators and hedgers still
need to put risk on the table in order to achieve desired
outcomes and need to take particular actions to control exposure
to this risk once it is in place. Hence, market participants
still tend to respond in a mandated way to certain types of
market events and price movements. This mandated behaviour leads,
in turn, to deterministic price outcomes that we believe we can
understand and profitably target.

What is unusual about the last four years relative to the
previous twenty has been the level of external forcing of global
financial markets. Debt crises, bank collapses, large-scale
government bailouts and programmes of quantitative easing have all
been sources of exogenous news that have significantly perturbed
market dynamics.

Many classes of quantitative trading strategy -
Long Volatility among them - are designed to exploit endogenously
driven market dynamics. It is probably not surprising that trading conditions have been challenging, but it is difficult to believe that recent levels of government intervention are sustainable. At some point the level of external forcing of market dynamics must begin to fall away to something lower and more sustainable, and at that point we would expect market dynamics to begin to approach something more like what we saw pre-2008.

Andy Webb: What about model decay?

Chris Mellen: Discussions on model decay do seem to crop up from time to time
when you are operating in the short-term trading space. Usually
when the sector is going through a period of delivering less than
stellar returns!

It's a subject that seems to get substantially less air time in
discussions of longer-term trading insights. In the CTA space
there are substantial trend-following programs that have for years been attempting to exploit essentially the same insight, and yet
only rarely is the continued usefulness of these trend-following
models seriously questioned.

I think there are a few aspects to be remarked upon. The most
robust predictive models will generally be those that attempt to
exploit some fundamental driver of market dynamics or behaviour,
and it is always comforting if this insight can be explained in a
parsimonious way. I think this is why people find so much comfort
in the trend-following insight - it is a concept that is easily
understood and there seems to be the belief that there will
always be trends that can be exploited by a well-tuned trading
program.

So the first goal of model building is to make sure you
understand as well as you can the insight that your model is
exploiting and to gain some confidence that this insight is
unlikely to simply suddenly disappear. Sometimes this
understanding can be couched in terms of behavioural finance, at
other times explained via macro-economic orthodoxies, or else you
can look for basic market-microstructure drivers of the response
you are observing.

Once you have a model you believe is robust it is useful to gain
an understanding of the model's profit set and also the
particular set of risk factors that will tend to lead to poor
trading outcomes. By 'profit set' I mean the set of market states
that correspond to when the model is expected to deliver returns.
The set of model risk factors should include all known items that
can lead to worse than expected outcomes when trading your model.

So we now have an insight that we believe to be in some way
robust. We also feel we have a good understanding of when the
model is expected to deliver returns and what risk factors might
crop up to prevent these returns actually appearing. Only now are
we in a position to assess model decay.

My point is, in assessing model decay it is not enough to observe
that the model hasn't made money for a while. You need also to
understand if opportunities for that model have actually been
present in the marketplace. If these opportunities have not been
present then any expectation of positive returns is probably
unwarranted. On the other hand, it could be that opportunities
were present but that the influence of a particular set of risk
factors overwhelmed the profit-making opportunities.

You would begin to suspect model decay if you began to notice a
preponderance of events from the model's profit set that fail to
be successfully exploited. However, there are other ways in which
a model can cease to be effective. It might be that the
identified model risk factors become more prevalent over time and
somehow begin to dominate the achievable returns. Alternatively,
it might be that the frequency of events that lie in the
opportunity set begins to fall off, and so the profit making
ability of the model diminishes in this way.
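
One simple way to operationalise that kind of assessment: look only at the events that fall in the model's profit set and track the conditional hit-rate against its long-run baseline. The Python sketch below is a generic illustration of that idea with made-up data; a falling conditional hit-rate hints at decay, while a shortage of profit-set events points instead to an opportunity drought.

```python
import numpy as np

def conditional_hit_rate(in_profit_set, returns, window=100):
    """Rolling hit-rate of the model measured only on profit-set events."""
    mask = np.asarray(in_profit_set, dtype=bool)
    wins = (np.asarray(returns) > 0) & mask
    rates = []
    for end in range(window, len(returns) + 1):
        m = mask[end - window:end]
        rates.append(wins[end - window:end].sum() / max(m.sum(), 1))
    return np.array(rates)

rng = np.random.default_rng(3)
in_set = rng.random(1000) < 0.3                     # ~30% of events are opportunities
rets = np.where(in_set, rng.normal(0.4, 1.0, 1000), rng.normal(0.0, 1.0, 1000))
print(conditional_hit_rate(in_set, rets)[-5:])      # recent conditional hit-rates
```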

Given the care with which we choose and backtest our models and
the importance we place on the strength of the 'insight' behind
the model, our observations have tended to be that it is
transitory periods of unsuitable market conditions that are the
primary driver of poor trading outcomes, rather than long-term
model decay.

So Boronia's position is that we have in our portfolio a
fairly robust set of insights that have not obviously been
subject to model decay in the time we have been trading them.
However, any model will go through periods of difficult trading,
driven usually by transitory market conditions or a lack of
particular profit opportunities. Our primary approach to
mitigating the downside effects of difficult market conditions is
to look to build diversity into our trading portfolio - in the
markets we trade but also in the timescales over which we look to
exploit our insights.

Andy Webb: Richard, Chris, this has been very interesting. Thank you very much.