
What Drives the Complexity and Speed of our Markets?

Gregg E. Berman

Associate Director, Office of Analytics and Research
Division of Trading and Markets
U.S. Securities and Exchange Commission

North American Trading Architecture Summit, New York

April 15, 2014

Good afternoon.  I’d like to begin by thanking WatersTechnology for the invitation to speak with you this afternoon about technology and market structure.  It’s been nine months since my last presentation on these topics --- since then, a lot has happened in the markets, and even more has happened at the SEC. 

My talk today will revolve around two related themes: market complexity, and market speed.  Of late there has been considerable public debate regarding the net effects that certain technological advances, especially those related to speed and complexity, have had on our markets --- and whether things somehow have gone too far.  Technology mishaps at various equity and options exchanges, at large broker-dealers, and even at one of the consolidated tape systems, only serve to bolster these concerns further.

So it is with the events of the past year or so as a backdrop that I find myself asking the following questions:  Is our current market structure “too complex?” Is it “too fast?” And if so, why?

Fortunately, there are a lot of facts that can be brought to bear on answering these questions.  However, before proceeding, I must remind you that the SEC, as a matter of policy, disclaims responsibility for any private statement by any of its employees and the views I express today do not necessarily reflect the views of the Commission or any of my colleagues at the Commission.  And unless otherwise noted, my use of the word “we” generally refers to SEC staff, and not the Commission itself.

Let’s begin by parsing my questions.  Note I am not asking if our market structure is just “complex” or “fast.”  Of course it is complex.  How could it not be?  Everything else in the world is complex --- making phone calls, navigating a car, choosing what movie to record, and, apparently, picking the perfect bracket.

But what I want to know is whether or not our market structure is too complex or too fast. What does it mean for something to be too complex?  I consider a system to be too complex if it has more complexity, more moving parts, more…stuff, than required to meet the desires of its users.  Note I did not say the “needs” of its users.  It’s very difficult to ascertain what users truly need --- and I certainly would not want to attempt to define what I think investors and other market participants actually need.  It is however much easier to determine what market participants desire because we can observe what they say and do.

For example, my mobile phone is, by any measure, ridiculously complex.  It contains microprocessors, memory, wireless connectivity, a GPS receiver, and a color touch screen.  My home phone has absolutely none of these things but yet I use it quite successfully to make and receive calls all the time.  So does that mean my mobile phone is actually too complex?  Absolutely not.  That’s because I also use my mobile phone to listen to Green Day, watch the Avengers, find my way around New York City, and, yes, play Candy Crush.  Do I need to do any of these things?  Of course not --- mobile telecommunication worked just fine prior to the invention of the smart phone.  But I have a desire to do these things, and therefore my phone needs to be as complex as it is in order to allow me to do so.

So now that I’ve explained what I mean by my question, let’s pull some observations together that bear on the markets.  I’ll begin by noting the obvious: technological advances presently allow market participants to submit many thousands of quotes in less than a second to our national equity and options exchanges.  More so, these same technologies allow quotes to be canceled in milliseconds and even microseconds.  If that sounds fast, it’s because it is fast.  In fact it is very fast.  But is it too fast?  

As many of you probably know, at the beginning of 2013 the SEC implemented some of its own advanced technologies that allow us to analyze vast amounts of exchange-based equity data in an unprecedented fashion.  It is called MIDAS, which is an acronym for Market Information and Data Analytics System.[1]  There are a lot of market structure topics we can directly address using MIDAS, and one of the first questions we explored under the category of market speed was whether quote cancellations were indeed too fast.

We started by trying to define what we mean in this context by “too fast,” and came up with the following idea:  Let’s measure the speed at which quotes are canceled, as well as the speed at which market participants can lift quotes before they are canceled.  If the speed of cancellation is much quicker than the speed at which those quotes can be accessed, then I would say quote cancellations are not only fast, but perhaps they are too fast.  However, if market participants can lift quotes just as quickly as others can cancel them, I would say that the cancellations might be fast, but not necessarily too fast.
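The comparison described here can be sketched in a few lines of Python.  To be clear, the lifetime samples and thresholds below are invented purely for illustration; they are not MIDAS data.

```python
# Sketch: compare the speed of cancellations vs. executions by looking at
# cumulative distributions of quote lifetimes.  All data here is illustrative.

def cumulative_share_within(lifetimes_ms, threshold_ms):
    """Fraction of quote lifetimes at or below a threshold (milliseconds)."""
    return sum(1 for t in lifetimes_ms if t <= threshold_ms) / len(lifetimes_ms)

# Hypothetical lifetimes (ms): one sample for quotes that were canceled,
# one for quotes that were hit by another participant and traded.
canceled_ms = [5, 40, 300, 500, 2000, 7000]
executed_ms = [30, 45, 450, 900, 5000, 6000]

# If cancels pile up at short lifetimes much faster than executions do,
# that would be evidence that cancellations are "too fast" in this sense.
for threshold in (50, 500):
    c = cumulative_share_within(canceled_ms, threshold)
    e = cumulative_share_within(executed_ms, threshold)
    print(f"<= {threshold} ms: {c:.0%} of cancels, {e:.0%} of executions")
```

The design choice mirrors the text: rather than judging speed in absolute terms, both sides of the market are measured on the same clock and compared at each threshold.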

This type of data-driven analysis can help inform the policy debate.  If quote cancellations are indeed too fast for the rest of the market to keep up, it might make sense to slow down this particular aspect of the markets, perhaps with some sort of minimum quote-life requirement.  But if the data show that at least some market participants can access quotes just as quickly as they can be canceled, this suggests that both sides of the market are very fast --- and if you want to slow down the market in a way that does not bias one side, you would need to address not only the speed of quote cancellations, but also the speed at which liquidity is taken.

We began our research last summer by measuring the lifetime of every individually-identifiable quote displayed on the proprietary feeds offered by our national equity exchanges.  We then created two distributions of these quote lifetimes --- one for quotes that were canceled, and one for quotes that were accessed by another market participant and resulted in a trade execution.  Our results were published last October [2013] as part of the SEC’s new market structure web site.[2]  And here’s what the data showed:

For the second quarter of 2013, more than one third of all displayed quotes and orders in corporate stocks stayed in force for at least 5 seconds before they were canceled.  And over 60% stayed in force for at least one half of a second, which is probably the lower bound of human-interaction time.   This is an important result.  It shows that the market may not be as dominated by millisecond and microsecond quotes as one might have thought.

But that still leaves 40% of all quotes being canceled in less than one half of a second, including some that are indeed canceled within a few milliseconds and even a few microseconds.  So to help put these numbers into perspective we lined up the quote cancellation distributions with the “trading life” distributions and here’s what we found:   Though about 39% of all canceled orders were in force for half of one second or less, about 27% of executed trades were the result of some participant hitting posted orders within that same time period.  Similarly, though 23% of all cancellations occurred within 50 milliseconds, approximately 19% of all trades occurred within that same 50-millisecond window.

These results suggest that at this time, the speed of systems that take liquidity by accessing displayed quotes seems to be keeping up with the speed at which those quotes can be canceled.  Thus, if you would like to slow the market down, you have to address both liquidity takers and liquidity providers.   Solutions that simply attempt to address the speed of cancellations are likely missing half of the speed story.

For those of you in the audience today, I would hope that these results are not a surprise.  This morning we heard from panelists covering the next wave of innovation in trading technologies, including advances in automated trading, smart algorithms, and machine learning, as well as how traders can automatically select and implement the optimal algorithmic strategy.   In the afternoon we heard from panelists covering latency, high-speed data, and co-location.  To me, the most interesting thing about the panels was not just the topics but the participants themselves.   Panelists ranged from quote makers to quote takers; from firms that work in the high-frequency trading space, to those that provide algorithms to asset managers.

I think that these observations about today’s conference, coupled with our data analyses, suggest that there may be a lot more to the debate about market structure, speed, and complexity, than just high-speed cancellations. So let’s head back to the data to see what else we can learn.

Our next set of speed-related measurements was in part based on questions about the data we published in October.  Numerous market participants, noting that our quote lifetime distributions included quotes at all levels of the book, asked if we could provide distinct distributions for quotes at or near the top of the book instead of anywhere in the depth of book.  Fortunately, we had planned for this in our initial analyses and adding these extra dimensions was readily accomplished.

Fresh out of the lab using data from the fourth quarter of 2013, we published our findings just last month [March, 2014].[3]  I find the results to be very interesting.  First, the data show that indeed there are differences in the speed distributions when comparing quotes entered at the best bid or offer, inside the spread, outside the spread, or very far outside the spread.  In general, quote lifetimes, whether canceled or traded against, are faster when originally posted at or inside the prevailing spread, and slower when posted away from the spread.  In fact, for quotes posted inside the spread, the speed of those seeking to hit those quotes not only keeps up with the speed of cancellations, but actually surpasses it below the 50 millisecond level.

Results for quotes displayed at the prevailing spread show the opposite trend.  Slightly more than half of all canceled quotes have a lifetime of one half-second or less.  But only one quarter of quotes that result in executed trades have a lifetime of one half-second or less.  At 50 milliseconds the pattern is similar --- 38% of all canceled quotes have lifetimes of 50 milliseconds or less, but only 17% of trades occur at those same speeds.

However, even though market participants asked us to perform this analysis so that they could see the lifetime distributions of just those quotes posted at or inside the spread, it turns out that the most interesting results are related to quotes away from the spread.  And here, believe it or not, is the result:

First, nearly two thirds of all quote cancellations are for quotes that are originally posted outside of the best bid or ask.  Second, only 16% of all trades are the result of market participants interacting with these away quotes.  And third, the lifetimes of quotes posted away from the prevailing spread are much longer, and hence the speed of cancellations is much slower, than for quotes posted at or inside the best bid or offer.

I think this very straightforward measurement speaks volumes about the way market participants actually trade: the data show that the majority of all displayed quoting activity occurs in the depth-of-book, away from the inside spread.  But these quotes are only accessed a minority of the time by any market participants.  And it’s not because of speed --- the data clearly show that these quotes last a lot longer before they are canceled than quotes posted at or inside the spread.

So now I have to ask, what is the point of having a market structure that supports a depth of book if it is not typically accessed?  We all know that size and spread are supposed to be related.  If you need to trade more shares than are posted at the prevailing spread, you can if you are willing to pay a penny or two more.  But apparently, this is far from the norm.

Frankly, I find these results rather unsettling.  They suggest that modern market structure has evolved to the point where liquidity takers, including buy-side participants, focus their trading efforts on nothing more than what’s available at the NBBO[4].  But that’s not necessarily how market makers are posting their liquidity.  I’m starting to wonder whether there is some fundamental mismatch between the nature of liquidity takers and liquidity makers.

And that’s what I think may be one of the driving forces underlying how technology has changed market structure.  It’s much more than just the automation of quotes and cancels, in spite of the seemingly exclusive fixation on this topic by much of the media and various outspoken market pundits.   It’s also about the automated algorithms used by asset managers to trade in size.  It’s about the way large orders are chopped up into small blocks to avoid moving the spread.  It’s about trading only at the NBBO even if there may be much more size just a penny or two away.

Here’s an interesting point to note: if you send an ISO[5] order to a market center it will sweep the book all the way to your limit price, lifting every single share posted one, two, or three cents away from the NBBO.  You are guaranteed to be filled at depth since quotes cannot be canceled once such an order is received by the exchange.  Ironically, this is very clear in the types of mini flash-crashes that are caused by outsized market orders.  They literally eat through the depth of book lifting every displayed and non-displayed quote without the possibility that those quotes can be canceled.  And yet it seems that hardly anyone bothers to purposely access the depth of book in such a straightforward manner.
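The sweep mechanics can be illustrated with a toy sketch.  The book, prices, and matching logic below are simplified stand-ins of my own, not any exchange's actual matching engine.

```python
# Toy illustration of how a sweep-to-limit order lifts resting depth.
# Prices, sizes, and logic are invented for illustration only.

def sweep_to_limit(asks, shares_wanted, limit_price):
    """Lift displayed ask levels, best price first, up to a limit price.

    `asks` is a list of (price, size) levels sorted best-first.
    Returns a list of (price, shares_filled) fills.
    """
    fills = []
    remaining = shares_wanted
    for price, size in asks:
        if price > limit_price or remaining == 0:
            break  # stop at the limit price or once the order is filled
        take = min(size, remaining)
        fills.append((price, take))
        remaining -= take
    return fills

# Ask side: 200 shares at the offer, then depth one and two cents away.
asks = [(10.00, 200), (10.01, 500), (10.02, 800)]
print(sweep_to_limit(asks, 1000, 10.02))  # fills at every level up to the limit
```

The point of the sketch is the contrast drawn in the text: the resting depth is simply there to be taken, level by level, for anyone willing to pay a penny or two past the NBBO.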

For me, these types of observations raise a specific concern about the current public debate on market structure:  I worry that it may be too narrowly focused and myopic.  This is because trades are, of course, only executed on an exchange when a liquidity-taker meets a liquidity maker.  As they say, “it takes two to tango.” Some party needs to provide liquidity, and another needs to take it.  If there are things we want to change about our market structure we must look at both sides of this equation, including why and how market participants on both sides interact with the markets.  Focusing separately on just one or the other misses the entire point of how buyers and sellers are brought together.

But why has the market evolved in such an NBBO-focused fashion?  Many would say that buy-side algos need to do what they do because of the nature and speed of those who quote and cancel their quotes across many different exchanges. That’s certainly a fair point, and in fact many would say that it is common knowledge.  But is it the only reason?  Is the development of ever-more sophisticated buy-side algos and technology simply a necessary response to the technologies used by algorithmic market makers and those who may quote and cancel very quickly?  Or does the use of buy-side algos influence market structure, the speed of the market, and the complexity of the markets themselves just as much?

Consider the fact that many asset managers often utilize off-exchange venues to avoid, for example, high-frequency traders who quote fast, cancel fast, and build algorithms that respond to the types of patterns a buy-side institution would imprint on the market when trying to trade in size.  Now that seems to make sense --- until you look at the data regarding off-exchange trading.

Last October [2013] we published a white-paper in which SEC economists used FINRA OATS[6] data to reconstruct a week of all the order flow that occurred across registered alternative trading systems (“ATS”), also known as dark pools.[7]   For this exercise they did not review all off-exchange orders and trades, just the portion that took place in an ATS.  The results therefore exclude any internalization of order flow by over-the-counter market makers, or any other off-exchange, non-ATS order flow.

And here’s what they found:  In the week examined, about 28% of all market-wide volume was executed off-exchange.  About 40% of that 28% was conducted on an ATS.  That translates into 11% of total market volume.  And do you know what the average order size was?  Across all ATS, the average order size was only 374 shares.  More so, over 60% of all orders entering an ATS during the review period were for exactly 100 shares.  Note that I am not referring to the size of trades filled on an ATS, but rather the size of orders sent to an ATS.
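The 11% figure is simply the product of the two shares just cited, which a one-line check confirms:

```python
# Share of total market volume executed on an ATS, from the two figures cited:
# ~28% of volume trades off-exchange, and ~40% of that is conducted on an ATS.
off_exchange_share = 0.28
ats_share_of_off_exchange = 0.40

ats_share_of_total = off_exchange_share * ats_share_of_off_exchange
print(f"{ats_share_of_total:.1%} of total market volume")  # ~11%
```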

These data suggest that the buy-side uses the same algorithms in dark pools that they use on lit exchanges to slice up large orders into much smaller pieces to trade at the NBBO.  And that gives me some pause, because it means there must be something more to the use of these algos than simply stating they are a necessary defense against the way quotes and trades are done on an exchange.  Is trading in small size at the NBBO the only option, even in a dark pool?

If so, I’d like to know why.  Is this supported by transaction cost analyses?  Or is it simply driven by the historic way that these costs are computed?  Perhaps it is driven by how traders are compensated – don’t cross the spread to get it done; instead, see how much you can tease out of the NBBO before it moves away from you.

And of course all of this is facilitated by technology, without which it would be impossible to create and submit so many child orders to so many different venues at the same time.  But it doesn’t stop there.  Technology has also enabled an entirely different form of off-exchange trading.

As I just mentioned, the October [2013] analysis of off-exchange trading showed that approximately 40% of all off-exchange trades were executed on a registered ATS.  But what about the other 60% of off-exchange trades, which we estimated for the week studied to be a full 17% of total market volume?  To shed light on the nature of these trades our economists returned once again to the FINRA OATS data for a further detailed analysis, the results of which were just published on our web site as part of the March [2014] update.[8]

And here’s what we found:  For the week studied, just over one third of the 17% of non-ATS trade volume can be associated with retail over-the-counter market makers, which equates to about 6.4% of total market volume; furthermore, these firms also handle significant institutional order flow.  We believe the other nearly-two thirds consists primarily of institutional order flow.  More specifically, about 10.6% of total market volume in our sample seems to have been executed not at a public exchange, not in a registered ATS, and does not seem to be associated with traditional retail order flow.  This non-ATS primarily institutional flow is almost as large as the 11% of volume that does flow through a registered ATS.
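These figures can be cross-checked with simple arithmetic on the percentages quoted above:

```python
# Reconstructing the off-exchange breakdown cited in the text (one sample week).
# All values are fractions of total market volume, taken from the speech.
non_ats = 0.17        # off-exchange volume not executed on a registered ATS
retail_otc = 0.064    # portion tied to retail over-the-counter market makers

institutional_non_ats = non_ats - retail_otc
print(f"non-ATS institutional flow: {institutional_non_ats:.1%} of total volume")
# ~10.6%, nearly as large as the 11% that flows through a registered ATS
```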

In other words, almost half of all off-exchange institutional trades during our sample week seem to have been executed outside of a registered ATS platform.  Rather, they were executed by broker-dealers as part of smart-order router networks. 

Is this the wave of the future?  I don’t know.  Does this add another set of complexities to the markets?  Probably.  But asset managers and other buy-side institutions apparently find value in this method of trading since they direct their flow to such smart-order routing systems.  For those interested in more descriptive statistics and comparisons of the different types of off-exchange ATS and non-ATS trading systems, you’ll find the analysis on the Commission’s web site provides a treasure trove of data.

Based on what I’ve discussed so far, it seems that a number of the complexities of market structure may, at least in part, be driven by the complex desires of market participants themselves --- and I hope I’ve been able to show that these desires include those of buy-side participants, not just the intermediaries.

If we return to my smart-phone analogy, there is still one large piece of the puzzle to discuss before I conclude.  If market structure is akin to the operating system on a smart phone, and market participants are the “users” of these phones, what about the applications themselves?  What is their market analogue?

The answer is of course the products that trade on our markets.  In the world of listed equities, this includes the corporate stocks of large cap companies, the corporate stocks of mid and small cap companies, other instruments such as rights, warrants, and preferred shares, and… exchange-traded products.  You simply cannot have a discussion about market structure without considering the nature of the products themselves.  These are the “apps” that users want to “run.”  And in the same way that smart-phone apps ultimately drive the nature and complexity of the smart phone itself, I believe the products that investors desire to trade on our exchanges create very complex requirements for market structure.

Exchange-traded products, including exchange-traded funds, exchange-traded notes, and other similar vehicles, have grown tremendously over the past decade.  There are currently over 1,000 of these products representing about $1.6 trillion of market capitalization.  Investors of all shapes and sizes, including individual retail investors, large asset managers, hedge funds, and pension plans, often use them to gain access to baskets of financial instruments representing a wide variety of exposures across different sectors, markets, and asset classes.  Alongside Google, IBM, and Intel, you’ll find exchange-traded products for large-cap US stocks, non-US stocks, corporate bonds, muni-bonds, mortgage-backed securities, currencies, options, futures, and commodities.  All of these products trade on the same markets and go through the same pipes, the same routers, and the same matching engines.

What makes these products so interesting to investors is that they are designed to track specific sets of other financial instruments.  But this tracking does not occur magically.  The price of an exchange-traded product only tracks the value of its underlying holdings if there are market participants who find it profitable to engage in arbitrage strategies that literally force convergence.   Similarly, market makers who provide liquidity for exchange-traded products often hedge their exposures by entering into offsetting positions in the underlying holdings or other correlated products.

These practices create linkages in the markets that are required if investors want to receive fair prices for buying or selling an exchange-traded product.  I’m sure many of you are familiar with the type of arbitrage and hedging that occurs between stock-based exchange-traded funds, such as the heavily traded S&P 500 SPY SPDR (“spider”) that is designed to track the performance of the 500 stocks that compose the S&P 500 index.  You would of course expect that this creates quoting and trading linkages between some subset of these stocks and the fund itself.

And the same is true for commodity-based exchange-traded products.  For example, one would naturally expect that the GLD SPDR, which holds gold bullion, creates quoting and trading linkages between this equity and gold itself.  But did you know that GLD is not the only exchange-traded product that is linked to gold?  I recently checked a popular online database that lists all exchange-traded products and was surprised to find that there were actually 14 separate exchange-traded products with assets greater than $10 million that are tied to gold.  Some are leveraged, some are inverse-leveraged, and some are currency-translated.  All of them trade on the same equity exchanges as do corporate stocks.  And we desire that each of them is priced fairly based on the intraday value of gold.

This means that the price of a product that tracks gold should not rise at the same time as the price of a product that tracks the inverse of gold.  If that happened, at least one of these products must be trading at a premium or discount to its underlying value.  What prevents the prices of these products from getting too far out of kilter is that traders and market makers compete to “arbitrage away” differences and earn a profit in doing so.  If my math is correct, the 14 exchange-traded products that are directly related to the price of gold create 91 distinct pairs of arbitrage relationships with each other, plus additional arbitrage relationships with the underlying gold in the cash and futures markets.
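That 91 is just the number of ways to choose 2 products from 14.  A quick check (assuming Python 3.8+ for `math.comb`):

```python
from math import comb  # exact binomial coefficient, Python 3.8+

# 14 gold-linked exchange-traded products pair off into C(14, 2) distinct
# two-way arbitrage relationships with one another.
n_products = 14
pairs = comb(n_products, 2)
print(pairs)  # 91
```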

Keeping all of these products in line requires a lot of quoting, canceling, and re-quoting, even for those that trade infrequently.   More so, this quoting is often accompanied by related liquidity-taking trades that are used to realize an arbitrage relationship or hedge a new exposure.  Those who can execute on the liquidity-taking side faster can quote tighter since they take on less volatility risk.  Similarly, those who can compute the underlying value of the product faster can re-adjust their quotes faster, and therefore quote tighter.

Now if all this sounds a little theoretical and abstract, I assure you it has real-world implications.  If my argument is correct, the rate of quote cancellations for exchange-traded products should be greater than the rate for corporate stocks.  And that’s something we can directly measure.

Over the 21-month period from April, 2012, through December, 2013, we compute a relatively steady cancel-to-trade message ratio of about 20-to-1 for corporate stocks.[9]  Over that same period the ratio for exchange-traded products is three to four times greater, implying 60-80 cancel messages for each trade message.  The results are even more dramatic when normalized by volume.  For every 1000 shares quoted in corporate stocks about 30 shares are traded.  For 1000 shares quoted in exchange-traded products that number is just 3 shares.
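The two metrics in this paragraph are simple ratios.  The sketch below shows how they are formed; the message and share counts are made up, chosen only to be consistent with the corporate-stock figures above.

```python
# Two metrics from the text: the cancel-to-trade message ratio, and shares
# traded per 1,000 shares quoted.  The input counts here are hypothetical.

def cancel_to_trade_ratio(cancel_msgs, trade_msgs):
    """Cancel messages observed per trade message."""
    return cancel_msgs / trade_msgs

def shares_traded_per_thousand_quoted(shares_traded, shares_quoted):
    """Shares traded for every 1,000 shares quoted."""
    return 1000 * shares_traded / shares_quoted

# Hypothetical corporate-stock counts consistent with the ~20-to-1 figure:
print(cancel_to_trade_ratio(2_000_000, 100_000))
# And with ~30 shares traded per 1,000 quoted:
print(shares_traded_per_thousand_quoted(300_000, 10_000_000))
```

By either metric, a three-to-tenfold difference between exchange-traded products and corporate stocks falls straight out of the ratio once the underlying counts differ by that factor.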

Please recognize that I am not saying these numbers suggest anything problematic or worrisome about exchange-traded products.   What I am saying is that by construction these products require market participants to engage in a lot of active quoting and canceling if investors want to receive fair prices when they buy or sell these products.

Does this make our markets more complex?  I think it does in the sense that more linkages are created and more speed, and certainly more computing power, is needed to continue to ensure prices stay in line.

Taken together, the observations I’ve shared with you today lead me to the following conclusion:  Since the flash crash of May, 2010, I’ve read many, many articles questioning whether advances in technology have led to more complicated and quicker markets that are now, perhaps, too complex and too fast for their own good.  But this assumes that the exclusive driver of all this complexity and speed is that some set of market participants continuously seek to outperform each other by developing faster and newer technologies.

Though there is definitely truth and merit in this reasoning, I think it is incomplete.  What’s missing from the argument is that as much as technology has driven complexity, I believe the desires of investors and investment managers --- how they want to trade, the products they create, what they want to buy --- require an unavoidable increase in the complexity of our markets, and in a very real sense are also driving the need for more and faster technologies.

As market participants continue to discuss and debate a wide variety of topics related to market structure my hope is that they will consider the impact that their desires have on market structure as much as market structure has on them.

[1]  See the SEC’s market structure web site for information on MIDAS and other market structure analyses.

[4]  NBBO = National Best Bid and Offer

[5]  ISO = Intermarket Sweep Order

[6]  OATS = FINRA’s Order Audit Trail System

[9]  See Data Visualization: Market Activity Overview, presently updated through December 2013
