Meanderful

Pages

Thursday, 8 December 2016

IEX gets far too much attention as it stands, but here is another reluctant meandering.

IEX is a small, expensive dark pool that was improperly licensed as a public exchange by an SEC swayed by the fan boys and girls of Michael Lewis' false and improper "Flash Boys" narrative, a book written to support his friends who had invested in Spread Networks and the InvestorSexChange.

Phew, now that I've got that off my chest, most of the problem is not IEX but Reg NMS. The big mistake made by the SEC was giving protected quote status to IEX's delayed quotes. To me, the other main errors in the approval of IEX were not rule breaks but improper facets within Reg NMS. IIROC, the Canadian regulator, has handled things better than the SEC by not giving protected quote status to speed-bumped markets there.

Now there are two main pieces of somewhat old news to cover here:

IEX's filing for its new Primary Peg (PP) order type to join its Discretionary Peg (DPEG) order type in fading; and,

the update to IEX's crumbling quote indicator formula.

Primary Peg

Basically, with PP you sit on the NMS bid or ask, the NBBO, unless IEX uses its crumbling quote indicator to look 350 microseconds into the future and decides you might be at risk of being traded through, or adversely selected, in which case it moves you one tick to safety. When the crumbling quote indicator says it's safe to go back, IEX will put your order back at the NBBO. It is an automatic fade based on IEX's formula and its 350 microsecond look into the future. Remember the order is non-displayed, unlike CHX's LTAD, and thus will not interfere with the SIP feed.
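The resting logic just described can be sketched in a few lines. This is a hypothetical illustration only: the function name, tick size, and API are invented; IEX's actual matching-engine implementation is not public.

```python
# Hypothetical sketch of the Primary Peg resting logic (names and tick size
# invented for illustration; IEX's actual implementation is not public).

TICK = 0.01  # assume a one-cent tick

def primary_peg_price(near_side_px, crumbling, side="buy"):
    """Where a Primary Peg order rests.

    near_side_px: the NBB for a buy order, the NBO for a sell order.
    When the crumbling quote indicator fires, the order retreats one
    tick to safety; when it clears, the order returns to the NBBO.
    """
    if not crumbling:
        return near_side_px
    return near_side_px - TICK if side == "buy" else near_side_px + TICK

# NBB at 10.00: the indicator fires and the buy order fades to 9.99.
print(primary_peg_price(10.00, crumbling=True))   # 9.99
print(primary_peg_price(10.00, crumbling=False))  # 10.0
```

The point of the sketch is how little the "reaching up" marketing adds: it is a conditional one-tick fade, nothing more.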

It's a good order type for an HFT market maker. It gives you some adverse selection protection and lets you sit on the NBBO to graft out a living from the spread. Unfortunately for the market maker, your order will be prioritised behind displayed orders, but you don't have to worry too much about that as most of the activity, as you can see in the following chart, is not lit.

(click to enlarge)

IEX has some cunning bullsh*t in their application as it talks about reaching up to the BBO when the market is stable rather than thinking of it as a non-displayed quote fade. Schmarketing. Fading is fading. Go on. Admit it.

Marketing doesn't really matter as the effect is not a lot different to the DPEG in terms of NMS interaction. Thus, I can't see this order type not being approved. The SEC has already fumbled the ball. It's no worse than what already exists at IEX. Do you also think it remains hypocritical of the "Puzzle Masters" to introduce yet another complex order type despite the "Flash Boys" pledge to only have simple order types? If I were forced to trade on this exchange, I would certainly use the PP, provided their hideously expensive transaction costs did not rule out such an application.

Innovation killing innovation

That brings me to my main problem with the PP. It is a good bit of innovation that kills innovation.

Normally a trader or broker would build their own algorithmic way of avoiding adverse selection. This is the usual activity you see in a market, with little micro-structural avalanches of cancellations as traders avoid being the dumb bunny traded through at best. It might be something as simple as saying: if there are only 10 shares left on the bid, bail. It could be what I'm used to doing, which is applying a machine-learnt algorithm to the security or contract to determine your adverse selection criteria.
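Both flavours of home-grown protection can be sketched simply. Everything below is illustrative: the thresholds, features, and weights are invented for the example, not taken from any real desk.

```python
# Illustrative only: two home-grown ways a trader might decide to pull a
# resting bid before being traded through. Thresholds and weights invented.
import math

def should_bail_simple(bid_size_remaining, threshold=10):
    """The 'only 10 left on the bid' rule: flee when the queue thins out."""
    return bid_size_remaining <= threshold

def should_bail_model(features, weights, cutoff=0.5):
    """A machine-learnt flavour: pull the order when a fitted logistic
    score for adverse selection exceeds a per-instrument cutoff."""
    z = sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z)) > cutoff

print(should_bail_simple(8))    # True: the queue is nearly gone
print(should_bail_simple(500))  # False: plenty of cover left
```

The second function is the important one for the argument that follows: the features, weights, and cutoff can all be fitted per instrument, which is exactly what IEX's one-formula-for-all approach forecloses.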

The logistic regression formula IEX uses for all stocks cannot possibly be perfect because all stocks don't behave in the same way. One size does not fit all. So their innovation is flawed from the outset. Perhaps good enough for some market participants, but not for others. The IEX speed-bump prevents you innovating yourself as you can't look into the future as IEX does. You're frozen out. You are forced to use their innovation which kills your pathway for any innovation you might have in mind. This usurping of brokers and traders is not healthy. No innovation for you. Like it or lump it. This is what I mean by IEX's innovation killing innovation.

It is kind of ironic that the SEC has asked for innovation and it is getting innovation that kills further innovation. It is the wrong multiplier to harness. You should be seeking to harness the multiplier of innovation from the many brokers and traders. Don't impede widespread innovation.

Parasitic darkness

So be it. I expect this order type will act to further darken the trading at IEX. Public price discovery will be increasingly impeded.

At some stage, the SEC needs to consider how much dark and parasitic activity should be allowed at a public exchange. I would argue that zero would be the appropriate number. However, parasitic trading is not necessarily bad, as I have pointed out many times. Index funds are likewise parasitic, yet helpful to many. Perhaps a limited number, such as ten or twenty-five percent, would be an acceptable threshold. That said, my gut feel is that if three quarters of the flow were parasitic the markets may remain functioning OK. Perhaps even ninety percent. My gut feel is not something I'd like to rely on. I do believe the role of public price discovery is too important to subvert in this manner. Zero dark, not thirty, should be the benchmark from a public policy viewpoint.

If you want dark, go to the dark corner: the ATS corner. Why be public if you don't have public pricing? We have to remember the beneficial role public markets play. IEX is not fulfilling that role.

Anyway, let's spell out the newish formula. The updated factors include:

a Boolean indicator that equals 1 if the last two quotation updates have been quotations of protected markets moving away from the near side of the market on the same side of the market and at the same price; and,

the number of these three (3) venues that moved away from the near side of the market on the same side of the market and at the same price in the prior one (1) millisecond: XNGS, EDGX, BATS.

The quote stability threshold changed from 0.32 to 0.6.

It's interesting IEX picks on XNGS, EDGX, and BATS in their formula. Make of that what you will.

It's a reasonable formula they have extracted from their logistic regression, but I suspect most of us could come up with a better one in our sleep. The biggest problem with this approach is that it treats all trading instruments the same. We all know that big caps and small caps trade quite differently. There are many other reasons to differentiate adverse selection criteria beyond IEX's approach. Why not a random forest or a deep learning parameter set for each instrument? Tens of thousands of pages of appendices for the numeric weights in SEC filings would be a fun way to send a perverse message to the regulator on the wrongheadedness of this innovation-killing innovation.

Now that BATS has been purchased, there seems to be a vacancy in the market for a real exchange that supports improving price discovery and efficiency. IEX hampers both price discovery and efficiency. It is time the SEC thought long and hard about this issue. The SEC needs to revise their methodology for approving public exchanges.

Particularly in the comments of that piece. The SIP times are improving dramatically and you may be gamed if you don't plan to have multiple sites beyond the necessary IEX co-location. Yes, co-location is required at IEX despite the rubbish you may have been told by IEX staff. SIP gaming is yet another wrinkle to keep in mind when you co-locate to trade at IEX.

Bigger fish

Let's not get too carried away though by all the hypocrisy from IEX. There are much bigger fish to fry. The equity markets work pretty well, despite the need for modification. It is much worse outside equity markets. Why are we still paying outrageous spreads in foreign currency when we do foreign transfers? Some markets still live in the dark ages. We should pause and be thankful that equities are at least a little enlightened.

It looks like US District Court Judge Richard Sullivan is on the ball. His Honour has reportedly cast some significant doubt on aspects of the CFTC's argument. His Honour did admonish both sides in the process but interrupted the CFTC significantly in its closing. From "Judge Casts Doubt on CFTC's Manipulation Case Against Trader Wilson" by Alexander Osipovich, WSJ:

“There are multiple elements to market manipulation and it’s not clear to me that you’ve proven a central one, which is artificiality,” Judge Sullivan said in a Manhattan courtroom

...

But on Wednesday the judge’s toughest questions were aimed at the CFTC. He admonished the agency’s lawyers for focusing on what they described as “illegitimate” bids and sidestepping the issue of whether DRW had caused an artificial price.

“You keep using ‘illegitimacy’, which is a very fuzzy term, to somehow be the equivalent of artificiality,” Judge Sullivan said. Arguing that DRW’s bids created an artificial price because they were made with an illicit intent was “so circular as to be nonsensical,” he added.

A New York judge has raised piercing doubts about a US financial regulator’s reasoning as it pursues a high-stakes case against DRW, one of the world’s leading derivatives traders.

His battery of questions aimed at lawyers from the Commodity Futures Trading Commission suggested they may struggle to win their first market-manipulation trial since 2008.

...

US District Judge Richard Sullivan repeatedly interrupted CFTC lawyers as they made closing arguments, his tone caustic at times. As Daniel Ullman of the CFTC tried to explain why DRW’s bids broke the law, Judge Sullivan said: “That’s economics. I don’t think you folks believe in it much.”

Later he told Mr Ullman his logic was “so circular as to be nonsensical”.

But Sullivan repeatedly interrupted her, questioning why no one would take DRW up on its bids if it was offering a higher price in what was an illiquid market. That could mean, he said, that DRW's bids were actually too low to attract a counterparty.

"If that's the case, then it seems to me the entire theory of artificiality goes out the window," Sullivan said.

Sullivan repeatedly interrupted the government lawyers’ closing arguments, peppering them with skeptical questions.

...

Still, Sullivan questioned whether the CFTC had presented enough evidence to prove DRW’s strategy created artificial prices in the market. He said the logical inference from the lack of other bidders for the contract was that DRW’s bidding prices were too low, rather than unjustly high.

He also noted a lack of testimony from witnesses that could have addressed unanswered questions in the case, and generally criticized the regulators’ grasp of economics and view of how financial markets work.

Monday, 5 December 2016

The CFTC does some excellent work. Look here to see some of the many essential enforcement actions the CFTC undertakes. But then you get this complaint.

The currently unfolding case against DRW is not such a piece of excellent work. Let's meander through why I and others may think this by examining some of my somewhat limited understanding of the details - which is all that is required.

The handling of such cases by the press, the financial press in particular, needs to be examined. Read those articles and you may think DRW is the devil incarnate. Not the best reporting. It is the duty of the press to report properly on obvious regulatory bullying and malfeasance. The fourth estate has an important role in keeping the government and associated regulators in check. The vast majority of the press has dropped the ball on this one with their recent coverage.

I feel the press has reported with unbecoming glee on the trials and tribulations of Don Wilson in these events. Although DRW is not quite an HFT in my mind, more a large trader with some occasional HFT characteristics, it seems the press derives some schadenfreude from sticking the knife into the firm despite its well-regarded integrity and charitable works. The press has a tendency to sidle up to regulators and buy their stories without a proper critical eye. This may be rational, as regulators run many cases, or stories, and if this is your beat, you don't want to be frozen out. Perhaps, thus, the bias, or prejudice, is natural, as harmful as it is. The press needs to do better.

Exceptions to this, in the early daze of the drama, came from the courts of Matt Levine:

Things were a bit more roundabout in the beginning. In the Matt Levine articles referenced above, you can see that the CFTC was threatening a suit. DRW took action to prevent it in Chicago. The CFTC went around this by filing in the Southern District of New York. DRW tries to get it thrown out. DRW fails on dismissal but succeeds in getting a proper definition of a manipulative trade. That clarification was that the CFTC has to show that a trader intended to create an "artificial" price in order to prove attempted market manipulation. Quite a bit of skirmishing.

Importantly, Bradley Hope reported in his WSJ piece,

"A key piece of evidence the firm said can prove its trading was appropriate is a September 2011 review by the National Futures Association of the timing of DRW’s orders over a period that included the same days the CFTC alleged DRW manipulated the market. The review concluded that the manner in which DRW traded reduced the likelihood that the firm manipulated prices. The National Futures Association is a self-regulatory agency that polices the futures industry. The association declined to comment.

DRW’s lawyers received the document from the CFTC as part of a discovery process late last year in anticipation of the trial"

So even a fellow regulator said, nothing to see here, move along.

As the Streetwise Professor says,

"CFTC apparently believes that the swap futures and the swaps are equivalent, and hence DRW should have been entering quotes equal to swap yields. By entering quotes that differed from swap rates, DRW was distorting the settlement price, in the CFTC’s mind anyways.

Put prosaically, in a way that Gary Gensler (the lover of apple analogies) can understand, CFTC is alleging that apples and oranges are the same, and that if you bid or offer apples at a price different than the market price for oranges, you are manipulating.

Seriously.

The reality, of course, is that apples and oranges are different, and that it would be stupid, and perhaps manipulative, to quote apples at the market price for oranges.

The CFTC is completely confused."

It really is trading 101. Different products. Different cash-flows. Different risks. You should probably expect different pricing, no? Print out the cash-flows on a timeline for the two products; hold them up to the light; and, surprise, they won't perfectly overlap. They're not identical.

Think about one of the simplest of trades: a stock index arb to the futures equivalent. There is a cost of carry to the basket of physical stocks, a question of dividends, and also the margining risk of the daily funding calls on the futures. When you look at the different timings of cash-flows you have to consider your yield curve to get your interest rate calculation right. That is, even such a simple trade, without convexity bias, has quirks you have to carefully calibrate.
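The cost-of-carry piece of that "simple" trade can be put on the back of an envelope. The numbers below are invented for illustration, and the formula deliberately ignores the convexity bias and daily margin funding just mentioned, which is precisely why two superficially "equivalent" products can deserve different prices.

```python
import math

# Back-of-envelope cost-of-carry for an index future, with invented numbers.
# Deliberately ignores the convexity bias and daily margin funding noted
# above - the very quirks that make "equivalent" products price differently.

def fair_futures_price(spot, funding_rate, div_yield, t_years):
    """Continuous-compounding fair value: F = S * exp((r - q) * T)."""
    return spot * math.exp((funding_rate - div_yield) * t_years)

# Spot 2000, 2% funding, 1.5% dividend yield, three months to expiry:
print(round(fair_futures_price(2000.0, 0.02, 0.015, 0.25), 2))  # 2002.5
```

Even here, getting the right point on the yield curve for each cash-flow date matters; a flat rate is itself an approximation.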

DRW simply did their homework. They even published their homework and circulated it. The bids they put into the market were at a more correct price, seeking a convergence to the correct price. This is how markets are meant to work.

The dumb bunnies on the other side of the trade, MF Global, may they rest in peace, and Jefferies, didn't do their homework. They believed the exchange accurately represented the product as being the same as OTC product it was seeking to duplicate. Lazy or dumb?

As recorded in phone call transcripts, when Laurie Ferber, MF Global's general counsel complained, "You guys have been putting up prices at 2:45 the last several days", Don Wilson answered, "We'd be happy to trade on any of those prices. All day long."

Back to the case: the CFTC legal eagle, Aitan Goelman, the same guy that helped prosecute the Oklahoma City bomber as a rookie team member - fact is always stranger than fiction - allowed the use of incendiary language, such as "Banging the close", "distorted the market price for their personal benefit", "There is no invisible hand here...It is DRW's hand, pointing to the prices it wanted and setting them up illegally", and "brazenly" engaging in market manipulation. Colourful language in court often points to a weak case, as emotion rather than fact becomes the vehicle of reliance. It may work if US District Judge Richard Sullivan thinks the worst of the trading community. A good prosecutor would understand this and thus just give the judge the line of reasoning he needs to write up, if His Honour so honourably chooses. Court facts and real world facts sadly do not always align. Judges are human too.

Can you ever bid a higher price and be the best bid if you think the market is under priced? How would prices ever move if it was improper to put your bid where your mouth is?

The CFTC colourfully refers to DRW inflating bids more than 1,000 times over 118 days, with DRW placing bids in the 15 minute settlement window used by the Nasdaq OMX Futures Exchange. To me, the bids were proper, actionable, and reasonably priced. In a world where transactions are competed down to the last nanosecond, hundreds of billions of nanoseconds, minutes even, of availability sat in the market bleating for a trade. If you didn't like the price, sell to DRW. Where were you?

You have to be a bit old to remember the NatWest saga from around 1997 (number 41 in this trading loss list), where an element within NatWest smugly thought they had a better way of pricing options. NatWest went after the "suckers" in the market with their newfangled formula. After a tidy $200M loss, they found out that their clever trick was not so clever. The sun does not rise in the west.

If the prices were so fake, why did DRW try to do another billion dollars with MF Global that fell apart due to the unavailability of staff due to a blizzard? Really? Is this the best the CFTC can do?

An important question that needs to be asked and answered is: why has the CFTC gone so hard on DRW, especially when the CFTC is so obviously wrong? All I can think is that there is an ego somewhere in the upper echelons of the regulator's organisation that can't be satiated. The lack of logic must point to such a particularly human reason. Perhaps that is the real story. These actions certainly detract from the normally good work of the CFTC. Such ego-led, blind bravado is not something a regulator should be known for.

There are many reasons why the CFTC should not exist as an independent entity: simultaneous FX prosecutions, the lack of coordination around the SEC's fractured and failing CAT, and so on. Long after delta one desks emerged, it is surely time the SEC and CFTC merged and took a holistic view of the regulation of markets. The CFTC certainly needs a better understanding of derivative pricing.

The press needs to look underneath the covers of such cases. Ego driven drivel from regulators shouldn't just be reported as fact. The fourth estate's recent reporting, especially compared to earlier reporting, shows it is failing in its investigative responsibilities which is no surprise in a post-truth era. Truthiness and integrity should matter more.

The bottom line? Not only are there too many cracks in the regulatory framework, there are also too many crackpots at the CFTC. It's time for the CFTC to show some leadership and apologise.

Monday, 28 November 2016

I received something that suggested I may be interested in the US Securities and Exchange Commission's (SEC) app for Android.

I was not sure if that would really suit, but let's have a short meander. Here is the link which gives you the details of the app which looks like this:

(US SEC Android app - click to enlarge)

"Installs: 50 - 100"

Wow, that seems low for an official SEC app. Is it really that boring?

I seem to have four Android devices registered in my name. My ancient Samsung tablet that still works albeit slowly; a really old Alcatel phone that is kind of laughably slow; my broken Samsung Note III which no longer works; and, my three week old Xiaomi Redmi Note 3, which has replaced my broken Note III. Surely the SEC's app should work on one of them? No.

Not a good sign?

The SEC's app claims compatibility with Android 3.0 and up, which should cover my devices. They are a mixture of various Android 4s and an Android 6, but no.

Really? None of the above?

Wow. That is a big fail since the app's release in September 2015.

In general, I'm pretty impressed with the SEC's work, but this is a bit of a sad mistake. That is, if it is indeed true and not some elaborate hoax.

Friday, 25 November 2016

I thought I'd written enough about CHX's LTAD letters to the SEC. So for a new letter from Alex Jacobson, I just tweeted the rebuttal Alex wrote addressing the principal LTAD opposing letter writers:

Citadel is a liquidity provider. As such, Citadel competes with CHX in providing liquidity. If LTAD narrows NBBO spreads, then Citadel’s proﬁt margins as a liquidity provider, both upstairs and on exchange, will narrow. The Commission should recognize that Citadel's profitability will be reduced if LTAD is approved and LTAD achieves its stated goals. Citadel’s comments must be seen as self-serving.

Hudson River Trading is another liquidity provider which competes with CHX. It stands to reason that Hudson River Trading would also oppose LTAD, because LTAD will enhance competition among orders and narrow Hudson River Trading’s proﬁt margins. Hudson River Trading’s comments must also be seen as self-serving.

The New York Stock Exchange competes with CHX. Of course NYSE would oppose something that would give liquidity providers at a competing exchange a competitive advantage over competitors at NYSE or ARCA. Once again, these comments must be seen as self-serving.

along with the twitter commentary, 'interesting' rebuttal of LTAD foes.

That tweet attracted a little more interest than normal. Kipp Rogers pointed out that this Alex Jacobson was likely the Alex closely associated with CHX, per this LinkedIn profile:

So, I guess we have to discount the support and the anti-anti nature of the commentary in the letter. Alex has every right to speak or shout as he sees fit, but perhaps he should have left this argument to official CHX channels.

David Weisberger suggested on twitter that, "but the issue is their quotes should not be protected- IF they display more size let the market decide their value". I'm not sure the SEC has the latitude to not protect the quotes within Reg NMS, but that would be appropriate if approved. I find David's "show me the data" argument is usually a strong one but often over-emphasised. Here it falls flat to my way of thinking. A black swan is not needed to tell you that a drunk, blind person driving a car is dangerous. A less facetious example would be that formal methods are typically more solid than a bunch of unit tests. Theory matters. LTAD may be good for CHX, it may not, but it is unequivocally bad for the NMS.

The negatives to the last-look-like LTAD fall into two generic classes. First, last-look is about slowing down trading, and slower trading is just bad economics. Perhaps not for CHX, but certainly for the NMS. If in doubt, please read Kipp Rogers' excellent post on the matter. Efficiency of price discovery affects us all. Let's not stuff it up.

Secondly, LTAD makes it easier for bad things to happen such as spoofing, baiting, zero risk execution quoting for SIP revenue, and extending the hall of mirrors to encourage false routing and get to that aforementioned price discovery inefficiency. I won't repeat the arguments here. Links to the previous discussions are below. In summary, although the support for LTAD is weak and the case against is strong, the SEC could use a flawed "de minimis" argument to approve LTAD, perhaps conditionally. They have the power. Let's hope that mistake doesn't eventuate.

At least change the name from the misnomer Liquidity Taking Access Delay. Please. It is somewhat offensive to those parties providing liquidity sought by those parties posting prices on CHX.

Happy trading,

--Matt.

PS: Think about this nasty LTAD trick: quote inside the NBBO; always cancel your inside quote if the external BBO doesn't come to you - after ~349 microseconds; otherwise be first at the new NBBO; and, trades are forced to route by the LTAD, so chaos ensues more effectively than Maxwell Smart could imagine. Potentially not a spoof or bait, as you intend to trade if and only if the conditions are met, but that is a little tenuous. It is a more subtle abuse than simply quoting and cancelling with zero risk to gain SIP revenue...
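The trick in the PS can be laid out schematically. This is pure illustration: the timings and the decision function are invented, not CHX's actual mechanics. The key asymmetry is that under LTAD only liquidity-taking orders are delayed, so an inside quoter can re-decide within the ~350 microsecond window before any taker can reach it.

```python
# Schematic of the LTAD trick above (pure illustration; parameters invented).
# Only liquidity-taking orders are delayed under LTAD, so an inside quoter
# can re-decide within the window before any marketable order arrives.

LTAD_US = 350  # the access delay, in microseconds

def quoter_action(elapsed_us, external_bbo_came_to_us):
    """What the inside quoter does as the access delay runs down."""
    if elapsed_us < LTAD_US - 1:        # still inside the window: sit tight
        return "rest inside NBBO"
    if external_bbo_came_to_us:         # market moved to our price: queue first
        return "stand first at new NBBO"
    return "cancel"                     # ~349us later: pull the quote risk-free

print(quoter_action(100, False))  # rest inside NBBO
print(quoter_action(349, False))  # cancel
print(quoter_action(349, True))   # stand first at new NBBO
```

Either branch at ~349 microseconds leaves the quoter with essentially zero execution risk, which is the heart of the objection.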

Monday, 21 November 2016

This week UBS stays #1 with a 17.5% share of ATS tier 1 stocks. UBS had the highest absolute volume in shares matched for any week since 29th February 2016. It was a big week.

In terms of positioning in the top 10: Goldman Sachs dropped out of the top 10. Level rose to #7, swapping positions with Bids which fell to #8. Barclays rose a place to sneak into the top 10. Here is a list of the top 10 venues, market shares, and volumes traded from today's Finra release:

UBS : 17.5% 638M
CS : 10.8% 394M
DB : 8.7% 315M
MS : 7.6% 276M
JPM : 7.0% 254M
ML : 5.3% 194M
LVL +1 : 5.3% 192M
BIDS -1 : 4.5% 164M
KCG : 4.5% 163M
BRC +1 : 4.0% 145M

Here is a chart of ATS tier 1 stock volume market shares with further details in the legend:

(click to enlarge)

Post IEX becoming a public exchange, the concentration at the top of the ATS pool has increased, as the math requires. That happens when you take out a significant chunk of volume from one of the top ATS venues by volume.
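The arithmetic the paragraph alludes to is simple renormalisation: when a large venue leaves the ATS pool, the survivors split the same pie, so their measured shares rise with no new volume. The shares below are invented round numbers purely for illustration.

```python
# Why removing a big ATS mechanically raises measured concentration:
# the remaining venues split the same pie. Shares below are invented.

def renormalised_shares(shares, removed):
    """Market shares of the remaining venues after one venue departs."""
    remaining = {k: v for k, v in shares.items() if k != removed}
    total = sum(remaining.values())
    return {k: v / total for k, v in remaining.items()}

pool = {"UBS": 0.15, "IEX": 0.10, "others": 0.75}
after = renormalised_shares(pool, "IEX")
print(round(after["UBS"], 4))  # 0.1667: the share rises with no new volume
```

So an increase in top-ten concentration after IEX's departure tells you about the denominator, not about any change in behaviour at the surviving venues.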

The following chart shows the increase in concentration in the top ten:

(click to enlarge)

The top 15 ATSs all have average trade sizes of less than 1,000 shares, with most hovering around the 200 mark. BIDS and CrossStream are the stand-out exceptions in the top fifteen.

(click to enlarge)

None of the venues with average trade sizes above 1,000 shares are in the top 15 venues:

DLTA DEALERWEB 381,271
LQNT LIQUIDNET ATS 44,229
LMNX LUMINEX TRADING & ANALYTICS LLC 39,002
LQFI LIQUIFI 18,718
BLKX BLOCKCROSS 11,084
LQNA LIQUIDNET H2O 10,303
AQUA AQUA 8,323
XIST INSTINET CROSSING 6,738
WDNX XE 1,244

Whilst Luminex has a large average trade size of 39k shares, it only traded 8.4M shares for the week, or 0.2% of the ATS tier 1 market share. It barely trades.

ITG has improved slightly, but their struggle continues. ITG is pretty close to regaining 3%. The firm remains well short of its pre-scandal 4.95% May 2015 peak.

No numbers for RCSL RIVERCROSS this week.

Most of IEX's volume is dark and thus parasitic to the market. You have to wonder if it is appropriate IEX became a public exchange. There is nothing fundamentally wrong with dark or hidden volume. Parasitic is often good. For example, index funds. Whilst index funds are parasitic, no one is seriously considering the banning of index funds. Your parasite just has to be careful not to destroy the host.

It is arguable that public exchanges have an important role in open, fair, and transparent price discovery. IEX and its overwhelmingly parasitic darkness suggests its platform may not be fulfilling the important obligations we expect a public exchange to fulfil with its fills.

(click to enlarge)

--Matt.

_________OTC Transparency data is provided via http://www.finra.org/industry/OTC-Transparency and is copyrighted by FINRA 2016.

Friday, 18 November 2016

The Chicago Stock Exchange's (CHX) misnamed Liquidity Taking Access Delay (LTAD) order type has had a further letter of support. This time from Interactive Brokers on Nov 8.

The firm agrees with CTC's supportive letter and emphasises that it should be OK for market makers to laugh at you whilst they pull their orders in response to external stimulus, so your effort to route to them, as required by the Gods of best execution, is for nought but frustration.

The potential 50-50 sharing of market data revenue from CHX for zero execution risk would likely be small enough not to influence IB's encouragement.

Notably, the letters against LTAD still outweigh the letters of support, both numerically and in quality.

However, if the SEC decides the delay complies with its view of contextual "de minimis" and fairness, the order type may still be approved. That is the crux of the IB argument:

"The CHX proposal is similar to the more broad based, recently SEC approved 350 microsecond delay introduced by IEX. The proposed delay falls squarely within the SEC's own interpretation of Rule 611 under Regulation NMS (Order Protection Rule) allowing for a de minimis intentional delay in order processing."

Which one is real?

Let's hope the SEC doesn't fall for that and it recognises the LTAD request does not properly address the many potential concerns.

The house of mirrors being constructed by the SEC does not need another set of panels.

Wednesday, 16 November 2016

On the 15th of November, yesterday, the SEC finalised and released its 979-page Order approving the Consolidated Audit Trail (CAT) Plan. An important piece of context is that the CAT plan was "inspired" somewhat by the so-called Flash Crash of May 6, 2010. You no doubt know the old, almost true adage: if you can't measure, you can't manage. The CAT plan is about addressing the measuring, as otherwise the regulators can't manage. Sounds reasonable.

Let's meander through why someone of my lack of stature may misinterpret such good intentions.

At a high level, would it allow a proper and thorough analysis of the May 2010 Flash Crash? No.

Why not? It doesn't cover the necessary financial instruments of concern that day. Even if the CAT did cover the necessary instruments, the plan does not have enough determinism in its event ordering to reasonably argue about a system where one million events a second are not uncommon.
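The determinism point is easy to demonstrate. At a million events per second, adjacent events are about one microsecond apart, so even a modest clock-synchronisation tolerance scrambles the recorded ordering. The simulation below uses invented parameters purely to show the scale of the problem.

```python
import random

# At 1M events/sec, adjacent events are ~1us apart. A modest clock-sync
# error then scrambles the recorded order. Toy simulation, invented numbers.

def fraction_misordered(n_events, clock_error_us, seed=42):
    """Share of adjacent event pairs whose recorded order is inverted."""
    rng = random.Random(seed)
    true_times = [float(i) for i in range(n_events)]  # events 1us apart
    recorded = [t + rng.uniform(-clock_error_us, clock_error_us)
                for t in true_times]
    flips = sum(1 for a, b in zip(recorded, recorded[1:]) if a > b)
    return flips / (n_events - 1)

# With a +/-50 microsecond clock tolerance, adjacent-event ordering is
# close to a coin flip - no basis for causal argument about a crash:
print(fraction_misordered(100_000, 50.0))
```

If the recorded order of adjacent events is near-random, arguing about cause and effect in a flash-crash reconstruction from that data is hopeless.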

Generously you may think, well, at least they are trying and it is at least a start? That may be a nice, homely sentiment, but the fact is that the predicted costs are kind of OUTRAGEOUS for the lack of return. Back in 2014, Scott Patterson and Bradley Hope, then of the WSJ, reported on the projected costs.

So, for $300M to $1,000M in up front costs, plus recurring costs of $30-60M a year, you get a decrepit, one-eyed king with a drunk stagger, stagger, roll in the land of blind regulators. Sadly, it is your typical racially-neutral, non-specifically gendered elephant designed by a committee which was distractedly thinking about bike sheds. Considering the stupidity and the scale of the proposed plan, the costs are actually not that outrageous. A plan that might actually work and yield beneficial outcomes comes out, on the back of my envelope, at around $50M up front with around $5M a year in recurring costs to be covered. I guess that is why vendors love committees. Meet the spec and bank the money.

So, the SEC thinks $4.1 billion for set-up and the first year of operation. Crazy CAT.

So what is wrong with the plan? I'm not sure I'm qualified to address all the bits and pieces as I haven't thoroughly read the thousands of pages of proposals and submissions, but hey, this is the Internet. Here are three bits I don't like much, which the SEC did indeed consider in some detail.

Coverage

Early in the morning of the Flash Crash on the 6th of May 2010, there was pressure coming into the market on many fronts. The pressure in the foreign exchange markets, particularly relating to the USD and JPY, was noted by many, including the SEC & CFTC, along with premium spikes in CDS, Greek sovereign risk in particular. An execution of 75,000 E-minis on CME, pinned on Waddell & Reed's volume participation algo, plus spoofing by the likes of Navinder Singh Sarao, copped some curious post-hoc blame from authorities.

Are you with me? Yeah, you got it. Not a mention of any instruments covered in the NMS CAT plan at all there. CDS, bonds, FX, futures... all not covered. Certainly chaos in equities and option markets ensued. This is yet another reason why the CFTC and SEC should be combined.

"Rule 613 and the CAT NMS Plan do not require the reporting of audit trail data on the trading of futures. One commenter, noting that the CAT NMS Plan does not require any information about stock index futures or options on index futures, stated that incorporating futures data into CAT would “create a more comprehensive audit trail, which would further enhance the SROs’ and Commission’s surveillance programs.”

As noted above, the Participants, within six months of the CAT NMS Plan’s approval by the Commission, will provide the Discussion Document that will include a discussion of how additional securities and transactions could be incorporated into CAT. In their response, the Participants recognized that “the reporting of additional asset classes and types of transactions is important for cross-market surveillance.” Further, the Participants stated their belief that the Commission also recognizes “the importance of gradually expanding the scope of the CAT,” and cited the Adopting Release, wherein the Commission directed the Commission Staff “to work with the SROs, the CFTC staff, and other regulators and market participants to determine how other asset classes, such as futures, might be added to the consolidated audit trail.” Accordingly, the Participants stated that they intend to assess whether it would be appropriate to expand the scope of the CAT to include futures, at a later date."

The SEC concluded, importantly, on page 342,

"The Commission believes that the omission of futures data from the CAT NMS Plan is reasonable, particularly in light of limitations on the Commission’s jurisdiction."

Ugh.

Appropriately time-stamped CFTC venues, for products such as listed futures, swaps, and options instruments, should be captured along with a significant portion of OTC deals, including bonds, to get an appropriate global market vista. A regulator should have the power either to tap the same sources of data as market data vendors, or to mandate that time-stamped data be provided to the Plan for non-commercial purposes.

It's not rocket science. Crazy CAT.

Error rate

The SEC decided an error rate of 5% was a good enough target. If you were having a coffee with your data person and she reported that only 5% of your data was corrupt, you'd be in the unfortunate position of paying for the dry cleaning of the shirt you just spat your coffee on.

From the press release,

"The CAT NMS plan would set an initial maximum error rate of five percent for data reported to the central repository, subject to quality assurance testing, adjustments at each initial launch date for CAT reporters and periodic review by the operating committee. The CAT NMS plan also discusses a phased approach to lowering the maximum error rate for data reported to the central repository."

The mind boggles. The insomniacs amongst you can read some of the discussion around this from page 342 in the SEC order such as page 347's,

"The Commission believes that the proposed 5% initial maximum Error Rate is reasonable and strikes an appropriate balance between: (1) ensuring that the initial submissions to the Central Repository by CAT Reporters are sufficiently accurate for regulatory use; and (2) providing CAT Reporters with time to adjust to the new more comprehensive regulatory reporting mechanism. The Commission understands that the Participants considered relevant historical information related to OATS reporting error rates, particularly when new reporting requirements were introduced, and believes this is a reasonable basis for setting the initial maximum Error Rates for CAT Data."

Put your hand up if you think a maximum 5% error rate is sufficient for regulatory use? Yeah, neither do I. Market ticks make up less than 5% of reportable events, so an error on every single market tick would still be within tolerance under the CAT Plan. Crazy CAT.
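To see the scale of the tolerance, here is a back-of-envelope sketch. The event rate is the one-million-events-per-second figure used elsewhere in this piece, taken as an assumption for illustration, not an official CAT number:

```python
# Back-of-envelope: what a 5% maximum error rate means at market scale.
# EVENTS_PER_SECOND is an assumed busy-second rate, not a CAT figure.

EVENTS_PER_SECOND = 1_000_000
MAX_ERROR_RATE = 0.05            # CAT NMS Plan initial maximum error rate

errors_per_second = EVENTS_PER_SECOND * MAX_ERROR_RATE
errors_per_session = errors_per_second * 6.5 * 3600   # 6.5 hour session

print(f"{errors_per_second:,.0f} bad records per busy second")
print(f"{errors_per_session:,.0f} bad records if sustained all session")
```

That is 50,000 bad records in a single busy second, and over a billion if a busy day sustained that rate. "Sufficient for regulatory use" is doing a lot of work.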

Determinism through timing

One of the objectives of the CAT is to have some analytical capabilities to examine significant NMS events. This implies a causal relationship being searched for. To determine causes, as such, you would prefer causal ordering, otherwise known as virtual synchrony, across the system, which is impossible at this scale. So you are coming from a point of view that you cannot have systematic determinism, but you'd like to get as close to that as is viable with reasonable cost effectiveness in mind.

One way of doing that is to have accurate time-stamps across the entire system so you can put everything in the right order. That is a bit messier than it seems. Given you might have 1-10M events in a not so unusual second, you may think you can order things if you have a corresponding 1 or 0.1 microsecond time-stamp. That is not really the case. Micro-bursts are a big feature in markets, where low latency switches queue up competing packets that arrive in the same nanosecond or multi-nanosecond time-slice. What you do know, however, is the processing order the matching engines deal with, and this gives you the determinism you need if you have a valid order processing model for an exchange.
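The micro-burst problem can be sketched in a few lines. The arrival times below are hypothetical, but the collapse they show is exactly what happens when a burst lands inside one timestamp tick:

```python
# Sketch: why timestamp granularity alone cannot order a micro-burst.
# Hypothetical arrival times, truncated to a 1 microsecond clock.

from collections import Counter

# True arrival order, in nanoseconds, of four packets in one burst
arrivals_ns = [1_000_000_010, 1_000_000_120, 1_000_000_480, 1_000_000_730]

# What a 1 us resolution clock records for each packet
stamps_us = [t // 1_000 for t in arrivals_ns]

# All four events collapse onto the same microsecond stamp...
print(Counter(stamps_us))  # -> Counter({1000000: 4})

# ...so sorting by timestamp cannot recover the true order; only the
# venue's own matching-engine processing order can disambiguate.
```

This is why a valid order processing model for the venue matters more than ever-finer client-side stamps.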

It is important to distinguish accuracy from precision, and to note that neither is the same as the significant digits or granularity of a representation. The precision within a venue will allow you to order events better at that venue. At a little tech start-up I used to run, the team developed a timing precision of around 1-20 picoseconds, less than 0.0002 microseconds, with relatively inexpensive off the shelf FPGA hardware. Part of that was the Picosecond Over Ethernet Timing (POET) project, which was demo'd on 1G Ethernet. It was fun stuff where we could pour cold water over an optical fibre and reliably record the corresponding physical shrinkage via timing differences. This is similar in many ways to the independent CERN White Rabbit project. The point being that it is not so expensive to do extremely precise timing anymore. The CAT doesn't need it. A relatively inexpensive off the shelf solution lets you timestamp many comms links with 1 nanosecond precision with only a five nanosecond bump in the wire. Five nanoseconds is not too much to impose in overhead. Technology for precision has moved on, and it is not outrageously expensive.

Accuracy and precision

Accuracy is a harder proposition. The specification for GPS is 100ns of accuracy. That is a bound, with reality being a lot better in practice. Even sub-nanosecond accuracy is possible with careful statistical GPS interpretation. Such statistical interpretation is the kind of algorithmic approach that is useful for measuring precise distances for infrastructure movement surveys, such as dam walls. Xiaoguang Luo's excellent PhD research, "GPS Stochastic Modelling: Signal Quality Measures and ARMA Processes", uses ARMA modelling to effectively handle site-specific multipath effects, satellite geometry, and variable atmospheric conditions, and is also a good reference for many existing state of the art GPS tuning adjustments - well worth a read.
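A toy sketch of why statistics beats the raw spec: averaging many fixes narrows the estimate like one over the square root of the sample count. This assumes independent, zero-mean receiver noise, which real GPS violates via correlated multipath and atmospheric terms, which is precisely why ARMA-style modelling like Luo's is needed in practice. All numbers here are illustrative:

```python
# Sketch: averaging noisy GPS fixes to beat the per-fix accuracy spec.
# Assumes independent zero-mean noise; real GPS error is correlated,
# hence the ARMA modelling discussed above. Numbers are illustrative.

import random
import statistics

random.seed(42)
TRUE_OFFSET_NS = 37.0    # hypothetical local clock offset vs GPS time
NOISE_SD_NS = 100.0      # raw per-fix error, roughly the GPS spec bound

fixes = [random.gauss(TRUE_OFFSET_NS, NOISE_SD_NS) for _ in range(10_000)]
estimate = statistics.mean(fixes)

# Standard error of the mean shrinks ~1/sqrt(N): ~1 ns from 10,000 fixes
print(f"estimated offset: {estimate:.1f} ns (true {TRUE_OFFSET_NS} ns)")
```

Under the independence assumption, ten thousand 100ns-grade fixes give a roughly 1ns-grade offset estimate, which is the flavour of how sub-nanosecond accuracy becomes plausible.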

Endrun CDMA timer - accuracy ~10 µs

If you can't use GPS directly, then you can pilfer a GPS measurement embedded in a mobile phone network's Code Division Multiple Access (CDMA) signals in various countries, such as the US, Japan, and Korea. The CDMA phone protocol uses an embedded GPS time signal for its timing magic. EndRun Technologies has a device, which I have used in a few countries, to get accurate enough time stamps. No, you don't need to pay to subscribe to the network. The GPS numbers are floating about in the Ether for you. The CDMA standard says things should be good for seven microseconds of accuracy.

In Korea, one microsecond was achieved for me, and it was around six microseconds in Canada from my hazy memory. This is very convenient if your mobile phone works in a data centre, especially if roof or window access for GPS is impossible, or prohibitively expensive, for the additional accuracy.

The point to this meandering about timing is that it is pretty easy to time-stamp packets to 10 microseconds of accuracy in the US. If you mandate GPS and hardware then 100ns of accuracy is trivial and not outrageously expensive. You can go below 1ns of accuracy but that is not trivial.

Precision wise, with hardware, 20 nanoseconds of precision in time-stamping is normal, with one nanosecond precision being good (such as with a Metamako MetaConnect/Mux/App), and less than 0.02 nanoseconds of precision is not so hard, and a lot of fun, if you want to set yourself a stretch goal.

For the CAT, the SEC talks somewhat imprecisely about timing with three numbers: 100 microseconds of accuracy for the CAT participants, think stock and option venues, 50,000 microseconds for brokers and dealers, and 1,000,000 microseconds, yeah, a whole second, for manually entered orders. The SEC refers to the precision of the time-stamps from participants being at least 1,000 microseconds, or better if it is convenient to co-opt better internal precision. That is arse-about: 100,000 nanoseconds of accuracy paired with 1,000,000 nanoseconds of precision, when the precision should be the finer of the two. It shows an unfortunate lack of understanding of the problem domain. You should probably do better if you're thinking of spending a billion dollars.

If your impressive stamina has led you to still be reading, you can read the details of the SEC's thinking from page 360 onward,

"The Participants, however, represented that they all currently operate pursuant to a clock synchronization standard that is within 100 microseconds of the time maintained by NIST, at least with respect to their electronic systems. Accordingly, the Participants recommended that the Commission amend the Plan to require that Participants adhere to the 100 microsecond standard of clock synchronization with regard to their electronic systems, but not their manual systems, such as the manual systems operated on the trading floor, manual order entry devices, and certain other systems."

100 microseconds of accuracy is not unreasonable, but it is a little disappointing for stock and option exchanges given that 100 nanoseconds of accuracy is reasonably trivial. As the famous clip from Grace Hopper below shows, microseconds and nanoseconds are a little different to each other. Even an Admiral or General could understand the difference with a little grace from Grace. So should an SEC decision maker.

However, continuing onto page 365,

"For the initial implementation of the CAT, however, the Commission believes a 50 millisecond clock synchronization standard for Industry Members is reasonable at this time. While the Commission believes that regulators’ ability to sequence orders accurately in certain cases could improve if the clock synchronization for Industry Members were finer, the Commission is sensitive to the costs associated with requiring a finer clock synchronization for Industry Members at this time, and believes that a standard of 50 milliseconds for Industry Members will allow regulators to sequence orders and events with a level of accuracy that is acceptable for the initial phases of CAT reporting."

That is, for Industry Members 50 milliseconds, or 50,000,000 nanoseconds, is OK. Such a long time seems a cruel joke. A message can go from Boston to LA and back in less time. Good luck with your NMS determinism now. Fifty milliseconds may represent on the order of a million reportable events for you to untangle. The SEC's confidence in sequence ordering seems a little misplaced. That said, large swathes of those events would be reasonably ordered, as they will not only fall under the 100 microseconds of accuracy banner but the precision and expected venue oriented monotonic symbol ordering should provide some descrambling comfort. Especially with respect to exchanges that use Nasdaq systems, which tend to have nanosecond precision on their time-stamps. So, it is pretty bad, though not quite as diabolical as it seems. Just a bit diabolical.
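As a rough sketch of the untangling problem, here is how many events can land inside a single 50 ms uncertainty window at various assumed message rates. The rates are illustrative assumptions, not CAT or feed specifications:

```python
# Sketch: how many reportable events one 50 ms synchronization window
# can scramble. Event rates are assumptions for illustration only.

WINDOW_MS = 50
rates_per_second = {
    "steady tape": 100_000,
    "busy second": 1_000_000,
    "feed burst": 10_000_000,
}

for label, rate in rates_per_second.items():
    ambiguous = int(rate * WINDOW_MS / 1_000)
    print(f"{label}: up to {ambiguous:,} events in one 50 ms window")
```

Even the modest assumed rates put tens to hundreds of thousands of events inside one window of broker-side timestamp ambiguity.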

The other point is that those manual orders with one second of accuracy should also be caught in electronic transactions for the pitiless, pit-less communities. So, that is not so bad either, if they can be linked up.

The SEC plan refers to something approximating but not quite precision as "Timestamp Granularity" from page 369,

"Specifically, the Plan requires CAT Reporters to record and report the time of each Reportable Event using timestamps reflecting current industry standards (which must be at least to the millisecond) or, if a CAT Reporter uses timestamps in increments finer than milliseconds, such [finer increments. The Plan also] provides that such [Manual Order] events must be recorded in increments up to and including one second, provided that CAT Reporters record and report the time the event is captured electronically in an order handling and execution system (“Electronic Capture Time”) in milliseconds (“Manual Order Event Approach”)."

Resolution of representation and precision are being confused here. Precision of one millisecond, or 1,000,000 nanoseconds, is a bit of a joke, though some exchanges, such as Nasdaq, will be obliged to report improved time-stamps, except when they might not have to, if they can argue an undue burden by waving their hands in the direction of their order handling and execution systems. Here is the clarification from page 374,

"In response to the commenters that stated it would be costly for CAT Reporters to report using timestamps to the same granularity they use in their normal practice, the Commission believes it is appropriate to make a clarifying change to the Plan. The CAT NMS Plan provides that to the extent that any CAT Reporter utilizes timestamps in increments finer than one millisecond such CAT Reporter must utilize such finer increment when reporting CAT Data to the Central Repository. Rule 613(d)(3), however, required that a finer increment must be used only to the extent that “the relevant order handling and execution systems of any CAT Reporter utilizes timestamps finer than a millisecond.” Accordingly, the Commission is amending Section 6.8(b) of the Plan to limit the circumstances in which a CAT Reporter must report using an increment finer than a millisecond to when a CAT Reporter utilizes a finer increment for its order handling and execution systems. The Commission finds that this modification is appropriate in light of the increased burdens placed on CAT Reporters by the additional systems changes that would otherwise be required in order to report in finer increments. With this modification, reporting in a finer increment than a millisecond would not be a costly undertaking, and the Commission therefore believes that this approach will improve the accuracy of order event records, particularly those occurring rapidly across multiple markets, without imposing undue burdens on market participants."

This is all a bit silly. It would not have been unreasonable to stick with UTC time-stamps to nanosecond granularity and a precision of 20 nanoseconds, with venue monotonic ordering guarantees, for at least up to Tier #2, though it should be at least Tier #4. The accuracy for such venues or institutions should be GPS-like, of the order of 100ns, which is relatively trivial. Just put a hardware time-stamper with GPS sync in your processing stream on the network and post-process it into the correct CAT Plan formatted file for transfer overnight.

It would have also been useful to add embedded timing traces to the inter-party protocols, over time, so that ordering between sources and destinations could be better managed, leading to a much improved picture of determinism. In that regard, the CAT should at least capture once-a-second timing bounces between parties, including the CAT data centre, so time-stamp reconciliations and error corrections are possible, along with clock accuracy and precision audits.
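One way such a timing bounce could work is the classic NTP-style four-timestamp exchange, which lets each pair of parties estimate their relative clock offset and path delay after the fact. This is a sketch of the standard calculation, not anything specified in the CAT Plan; the timestamps below are hypothetical nanosecond values:

```python
# Sketch of a "timing bounce": the classic NTP four-timestamp exchange.
# Each party reads its own clock; the formulas recover offset and delay,
# enabling after-the-fact reconciliation and clock accuracy audits.

def offset_and_delay(t1, t2, t3, t4):
    """t1: client send, t2: server receive, t3: server send,
    t4: client receive -- each read on that party's own clock (ns)."""
    offset = ((t2 - t1) + (t3 - t4)) / 2   # server clock minus client clock
    delay = (t4 - t1) - (t3 - t2)          # round-trip network time
    return offset, delay

# Hypothetical: client clock 250 ns behind server, 1,000 ns one-way path
t1 = 0
t2 = t1 + 1_000 + 250    # arrival, read on the server's clock
t3 = t2 + 100            # server turnaround
t4 = t1 + 2_000 + 100    # return, read on the client's clock

print(offset_and_delay(t1, t2, t3, t4))  # -> (250.0, 2000)
```

Logging these four numbers once a second per party pair would be cheap, and would give the CAT exactly the reconciliation and audit material argued for above.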

For the CAT Plan, the focus should have been on determinism in the NMS processes to the best extent possible. Much of that may have been solved by just adding timing at the trade matching pools or processes rather than bothering brokers or traders with additional costs. Embedded event sequencing on client messages would tie together client events for NMS purposes quite simply. That is, I'm unconvinced that adding timing to all the NMS clients is indeed useful for NMS purposes. It is undoubtedly useful for broker or institutional client handling audits, but the two different audit concepts should not be confounded.

There is so much improvement that could have been achieved with a more thoughtful approach. If you have deterministic ordering at a venue and appropriate modelling of the order handling, then the outputs of the models, the quotes and transactions flying out, should be reproducible. Only excellent exchanges could do this, so many exchanges will not have such determinism baked into their systems, but it should be a goal for the SEC to drive that modelling through as much as possible for the best possible outcome. Perhaps the SEC should even mandate a simple algorithmic model for new order types to be made available publicly? That would be a real boon to the industry, especially in understanding the hundreds of obscure order types and their intricate rules. Personally, I'd prefer it if they'd ban everything except simple limit orders ;-) Mortal human beings have a hard time understanding the couple of hundred order types that exist across all trading venues. If you think this article is long and boring, go read all the order type specifications and I'll visit you in your padded market structure institutional cell.

Crazy CAT.

Conclusion

The bottom line is that the SEC seems to have lacked the expertise to push forward a proper mandate to build a truly useful CAT Plan. I certainly feel they have been bushwhacked a little by an industry committed to making the CAT contain as little value as they can get away with. The CAT Plan could have been much cheaper, simpler, and better. As it stands, it would have helped a little, but not much, for the 2010 Flash Crash and it is of questionable value to the industry given its expected costs.

As an HFT type, I'm glad there will be less understanding as that means profitable opportunities are more likely to persist for longer. Understanding comes to those that pay attention to the details. A lack of understanding in a crisis is the unfortunate, cynical reality of this CAT Plan. For the industry, and the good of the USA, it seems that a great opportunity for NMS insight and improvement has been missed.

Wednesday, 9 November 2016

Metamako posted new results for their MetaMux 48, achieving 69 nanoseconds for multiplexing multiple 10G Ethernet packets onto one outgoing 10G Ethernet line. There is nothing faster for trading - apart from not switching or multiplexing, of course. It looks great for hooking up multiple trading connections to an exchange.

They also announced a firmware app for their platforms, MetaFilter, that filters stuff, unsurprisingly. Good for market data. Metamako is claiming filtering at around 95ns which is pretty nice,

"MetaFilter allows providers of market data and venue connectivity to select the messages that each of their clients’ applications wish to receive. Each of our devices can have up to 31 individual custom configurations of market data"

Though it's not clear how well it may handle arbitrary layer 3 protocols for symbol based filtering. I suspect it may not be able to stretch that far. I'll ask and find out:

Update: basic MAC and IP address filtering only right now; with UDP and TCP port filtering likely to come later. If you want symbol - or other complex MD - filtering then try Enyx or Algo Logic, who partner with Metamako, with more sophisticated market data logic.

Still, the best feature remains one nanosecond accurate timestamping on layer one line replicas. That is a killer and cost effective feature.

CTC makes points including that spreads may be tighter, with more liquidity, thanks to the LTAD's effort to stave off adverse selection. CTC doesn't address the issue of ephemeral non-tradeable quotes turning the NMS into a hall of mirrors. It doesn't deal with the SIP revenue advantages for CHX and those organisations, such as CTC, with which CHX may share such revenue. It doesn't address the potential for abuse by market participants. So, it is a little light on addressing the problems, but it is probably the best individual letter in support and worth a read.

One small point that I've made before: CTC makes the same tired mistake of suggesting the SEC tied "de minimis" to one millisecond. I understand this to be wrong. When the SEC dropped this interpretation, it applied a more geographical, travel-time context. This geographical interpretation may even imply delays of coast-to-coast times of twenty or forty milliseconds being justifiable in an appropriate context. That interpretation came out on the same day as IEX's approval. It is here:

"For example, intentional access delays that are well within the geographic and technological latencies experienced by market participants when routing orders are de minimis to the extent they would not impair a market participant’s ability to access a displayed quotation consistent with the goals of Rule 611."

In this context, it is worth considering an example given by the SEC of an existing NMS delay on pages 15-16,

"..any market participant co-located with the major exchanges’ data centers in northern New Jersey necessarily encounters delays of 3-4 milliseconds – due to geography alone – in accessing the protected quotations of securities traded on the Chicago Stock Exchange’s matching engine in Chicago."

It is 4,129 kilometres from Boston to Los Angeles. That is a round trip, by standard single mode optical fibre, of at least 41.29 milliseconds, or around 27.5 ms by microwave, millimetre wave, laser, or hollow core or few mode fibre. Hence, a justifiably reasonable interpretation of "geographic" delay may include a forty millisecond delay. 40,000 microseconds is a long, long time in a world where exchange transactions may be measured in tens of microseconds.
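Those round-trip numbers are easy to reproduce. The sketch below uses a refractive index of roughly 1.5 for standard fibre, so light travels at about two thirds of its vacuum speed, while microwave, laser, and hollow-core paths run near vacuum speed:

```python
# Sketch reproducing the Boston-LA round-trip latency numbers above.
# Standard fibre slows light by its refractive index (~1.5); microwave
# and hollow-core paths travel at roughly vacuum speed.

DISTANCE_KM = 4_129
C_KM_PER_MS = 299_792.458 / 1_000   # speed of light in vacuum, km per ms
FIBRE_INDEX = 1.5                   # approximate index for standard fibre

fibre_rtt_ms = 2 * DISTANCE_KM * FIBRE_INDEX / C_KM_PER_MS
air_rtt_ms = 2 * DISTANCE_KM / C_KM_PER_MS

print(f"fibre round trip:         {fibre_rtt_ms:.1f} ms")  # ~41.3 ms
print(f"line-of-sight round trip: {air_rtt_ms:.1f} ms")    # ~27.5 ms
```

Both figures dwarf the microsecond scales at which modern matching engines operate, which is the whole point of the geographic de minimis argument.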

In the CHX LTAD context, it is noteworthy that the SEC interpreted the potential for malicious activity enabled by such delays as not, by itself, determinative in opposing the introduction of a delay. Malicious remains malicious regardless of delay. The SEC does indicate that the rule's application should include the consideration of such potential,

"The Commission notes that, pursuant to Section 19(b) and Rule 19b-4, the proposing exchange would be required to consider and address in its rule change filing the potential for abuse of any proposed access delay, which would then be subject to notice, comment, and Commission review. Further, even after the rule change became effective, the Commission believes it would be incumbent on the exchange to remain vigilant in surveilling for abuses and violative conduct of its access delay rule, and consider amending its access delay if necessary, among other considerations, for the protection of investors and the public interest."

Back on the subject of de minimis: here is the tail end of the SEC document,

"At this time, the Commission is not adopting the proposed guidance under this interpretation that delays of less than one millisecond are de minimis. The Commission believes that, in light of the evolving nature of technology and the markets, and the need to assess the impact of intentional access delays on the markets, establishing a bright line de minimis threshold is not appropriate at this time. Rather, the Commission believes that the interpretation is best focused on whether an intentional delay is so short as to not frustrate the purposes of Rule 611 by impairing fair and efficient access to an exchange’s quotations. As it makes findings as to whether particular access delays are de minimis in the context of individual exchange proposals, the Commission recognizes that such findings create common standards that must be applied fairly and consistently to all market participants."

So, no explicit one millisecond is de minimis. It could be more. It could be less. Context and details matter. Let's stop the talk about one millisecond being de minimis. Pretty please?