Ciena Webinar with Tabb Group and Spread Networks

August 26, 2010

On September 1, I will be representing Spread Networks in a webinar on "Ultra-Low-Latency Networks & Financial Trading – Fast Networks for Fast Markets" with Ciena and Tabb Group.


Q&A with Ciena

August 26, 2010

I just completed a Q&A with Ciena, posted at: http://blog.cienacommunity.com/n/blogs/blog.aspx?nav=main&webtag=cienablog01&entry=78


Spread Networks emerges from stealth

June 25, 2010

I have been working with Spread Networks, which has executed on a very simple but exciting idea to reduce trading latency:  find the shortest possible path between the major exchange hubs in NY and Chicago, dig a trench, lay conduit in it, and pull a new, state-of-the-art fiber-optic cable.

For more detail, take a look at spreadnetworks.com


Building a Low Latency Trading Network, or “Why is my latency so high?”

April 5, 2010

In recent posts I have commented on topics in electronic trading on the one hand, and telecommunications on the other.  It is time to unite those two threads.

I have also been following discussions on LinkedIn regarding low latency networking, and have been working on related projects with clients.

Building a low latency network is not a “black art”, but it does require a solid understanding of how networks are built, as well as an appreciation for the tradeoffs that telecommunications carriers make when building networks.

I will start with some general principles and then focus in more detail on the specifics.

The first thing to understand is that any engineering project involves trading off objectives.  As the old joke goes, “I can build it fast, cheap, or reliable, pick any two.”

Let’s look at some examples for the design of networks:

Most telecom networks (i.e. networks built by AT&T, Verizon, etc.) are designed to scale well, to VERY large numbers of users.  After all, network construction is capital intensive and tends to reward firms that can service a large number of users.  Additionally, networks generally follow Metcalfe's law, which says that a network becomes more valuable as the number of users increases.

But the design choices that enable scale are bad for latency.  A basic design principle to achieve scale is to employ hierarchical designs.  Rather than attempting to connect every user directly to every other user (in a “full mesh”), hierarchical networks collect traffic locally, aggregate it, deliver it to regional concentration centers, and then route traffic back out to local centers and out to the users.  This kind of “hub and spoke” design (looking much like a family tree) scales well, but forces traffic to take paths that are longer than necessary and to pass through routers and switches that add latency.  If you have spent any time transiting ORD, DFW, or ATL, you will know what I mean.  What works well for airline economics (and helps keep fares low) isn’t so great if you are in a hurry.
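
To make that penalty concrete, here is a minimal Python sketch (purely illustrative) comparing a direct point-to-point path with a hierarchical path between the same two endpoints. The fiber distances and the per-device delay are assumptions chosen as round numbers, not measurements of any carrier's network.

```python
KM_PER_MS = 200.0  # light in fiber travels at roughly 200,000 km/s, i.e. ~200 km per ms

def one_way_latency_ms(fiber_km, device_hops, per_hop_ms):
    """Propagation delay plus per-device (router/switch) delay along one path."""
    return fiber_km / KM_PER_MS + device_hops * per_hop_ms

# Direct point-to-point path: assumed ~1,330 km of fiber, one device at each end.
direct = one_way_latency_ms(fiber_km=1330, device_hops=2, per_hop_ms=0.05)

# Hierarchical path: the same traffic detours through aggregation and regional
# hubs, adding fiber mileage and several routers/switches along the way.
hierarchical = one_way_latency_ms(fiber_km=1750, device_hops=8, per_hop_ms=0.05)

print(f"direct path:       {direct:.2f} ms one-way")
print(f"hierarchical path: {hierarchical:.2f} ms one-way")
```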

More generally, telecommunications carriers optimize their networks for:

  • Transporting large amounts of bandwidth.  This is no surprise, as most telecommunications revenue is ultimately denominated in units of bandwidth.
  • Multi-service provisioning, i.e. the ability to carry (and sell) many different services on one network.  Carriers make substantial investments in their networks, and they want to spread those costs over as many customers and services as possible.  T1s, T3s, cell phone backhaul, traditional phone calls, text messages, internet traffic…  The more services, the more revenue.  To achieve this, carriers build networks using layers of routers, switches, muxes, and multi-service access nodes.  These enable carriers to provision everything from T1 to text messaging, but add overhead and latency.
  • Servicing population and business centers.  Willie Sutton robbed banks because that is where the money was.  Telecommunications companies build networks to serve population centers.  When laying fiber from NY to Chicago, telecommunications companies try to pass through as many population centers as possible along the way.  So a typical network that carries traffic from NY to Chicago might go up to Binghamton, over to Buffalo, then down to Cleveland before it goes on to Chicago (or down to Philadelphia, over to Pittsburgh, up through Akron and Cleveland, etc.)  Take a look at the cable routing in any country, and the lines are far from straight; they zig and zag to stop by as many cities as possible.  Who cares if it adds a few milliseconds?  Your text message will still get there fast enough.
  • Cost.  The cheapest way to build long-haul networks is to use existing rights of way, which for the most part means rail lines.  Qwest started as a railroad spinout; so did Sprint.  Conveniently, railroads also zig and zag to visit key population and economic centers.  Try to take an Amtrak train from NY to Chicago without passing through Buffalo, Pittsburgh, or Washington DC (not to mention Albany or Philadelphia); you can't do it, and neither do most telecommunications networks.
  • Traffic patterns that are "typical", e.g. peaks from "American Idol" messaging or Mother's Day.  Since most of the carriers' money comes from a very broad market, they do not design their networks to consider the latency requirements or traffic patterns (e.g. peaks at the US market open rather than Mother's Day) of trading.
  • Operational efficiencies.  For example, telecom companies routinely "re-groom" circuits to free up capacity on certain links, to facilitate maintenance, etc.  Their operational convenience might mean a sudden change to your latency!

None of this is intended as a criticism of the traditional carriers.  It is a simple matter of the engineering and economic tradeoffs that drive the large carriers to build networks optimized for the people who pay them the most money.  And electronic trading is a very niche market for them.

Networks that have been built specifically for financial markets (e.g. BT Radianz, Savvis) do somewhat better.  The engineers who built those networks made tradeoffs differently than telecom companies usually do, optimizing for trading traffic, focusing on the geographic markets that matter to financial services, and focusing on services (e.g. IP multicast) that facilitate market data.  But while these networks do better than general purpose telecom networks, they are still not ideal for the current generation of high frequency, low latency trading.  While they were built for trading, they are still trying to address a larger marketplace than the ultra low latency HFT market, and they are international in scope.  For example, one of the main objectives of RadianzNet was to provide the ability to rapidly connect financial customers to one another, and so a large “community” was key to the success of that objective.  But scalability to large communities gets in the way of reducing latency.

So when BT Radianz developed its low-latency network "Ultra Access", and when NYSE developed SFTI, the engineers removed layers of routing and generally made decisions that favored reduced latency at the cost of scalability.  (For comparison, BT Radianz' RadianzNet was designed to scale globally to tens or hundreds of thousands of clients, while BT Radianz Ultra was designed only to scale to thousands of clients in localized geographies.)

So how do you build a low-latency network for trading?

Well, the first conclusion is that if you REALLY care about latency, you are going to have to build it yourself.  While networks like Radianz and Savvis are great for quickly connecting counterparties, and they reduce the IT load relative to firms that manage their own networks, if you really want the lowest latency, a shared network is just not going to cut it.

So where do you start?  First, simplify the topology.  While hierarchical designs are cost effective and enable scale, the lowest latency design is a simple point-to-point network.  In practice for most firms that will mean multiple point-to-point links, one to each venue. That is more expensive than traditional networks, and doesn’t scale well, but there are only a handful of low latency matching engines to consider.

Next, strip out every possible layer of equipment from the design that you can.  The fastest solution is a direct connection from a LAN port on a server (e.g. 1G or 10G Ethernet) directly to the WAN link.

What kind of a WAN link?  In almost every case this will be either a "lit" wavelength provided by a telecom company, or a dark fiber that you light yourself (a traditional T3, OC3, OC12, or similar service will add overhead that you don't want).  By buying a lit wavelength (a 1G or 10G Ethernet wavelength), you are eliminating almost all the carrier overhead (SONET muxes, Multi-Service Access Nodes, etc.)

You may even want to go further and buy dark fiber.  Simply put, dark fiber is fiber installed by a carrier that you light yourself.  It may seem that there is little difference in having a carrier light the fiber versus lighting it yourself, but there may be a number of benefits to leasing dark fiber and lighting it. All things being equal, dark fiber will provide a lower latency solution:

  • Dark fiber allows you to isolate traffic from different trading strategies and different matching engines onto separate wavelengths at minimal marginal cost (i.e. once the fiber and equipment are leased/purchased, the cost of adding wavelengths is typically very low).  This eliminates all possible sources of queuing delay that can occur when multiple trading strategies are using the same wavelength (see the sketch after this list).
  • You can select your own equipment to light the fiber, and there ARE real differences in latency among different equipment vendors and configurations.  (Carriers rarely choose or deploy optical equipment based on latency, focusing much more on the number of wavelengths that can be supported, the variety of interfaces supported, operational considerations, etc.)  If you light it yourself you can optimize for latency rather than optimizing for bandwidth and multi-protocol support.
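
On the queuing point in the first bullet, here is a small illustrative sketch of serialization and queuing delay on a shared wavelength. The frame size, link speed, and burst length are assumptions chosen only to show the shape of the problem.

```python
def serialization_delay_us(frame_bytes, link_gbps):
    """Time to clock one frame onto the wire, in microseconds."""
    return frame_bytes * 8 / (link_gbps * 1e3)

FRAME_BYTES = 1500  # a typical full-size Ethernet frame

# On a dedicated wavelength, your order only pays its own serialization delay:
own = serialization_delay_us(FRAME_BYTES, link_gbps=10)
print(f"own frame at 10G: {own:.2f} us")

# On a shared wavelength, your order can queue behind another strategy's burst:
burst_frames = 50  # assumed burst from a co-resident market data or trading flow
print(f"waiting behind a {burst_frames}-frame burst: {burst_frames * own:.1f} us extra")
```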

Plus you know you have control and security, since you have dedicated fiber that nobody can tap, and you can have confidence that your network will not be re-routed or re-groomed.

Of course, all things are not always equal.  At this level of engineering, the speed of light becomes very important.  (The speed of light in fiber is not the number we all learned in high school, i.e. 299,792,458 meters per second.  Depending on the refractive index of the fiber, the speed of light in fiber is roughly 2/3 the speed of light in a vacuum.)  So a "dark fiber" route from NY to Chicago that is 100 miles longer than an otherwise comparable "lit" wavelength could be 1-2 msec slower (round trip) because of the increased mileage.  But all things being equal, you should be able to get lower latency and less queuing if you light it yourself.
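
As a quick worked example of that arithmetic (assuming a refractive index of about 1.5, which gives the roughly 2/3-of-c figure above):

```python
C_VACUUM_KM_PER_S = 299_792.458                  # speed of light in vacuum, in km/s
REFRACTIVE_INDEX = 1.5                           # assumed; actual fiber varies by type
v_fiber = C_VACUUM_KM_PER_S / REFRACTIVE_INDEX   # ~200,000 km/s in glass

extra_km = 100 * 1.609                           # the 100 extra route miles from the example

one_way_ms = extra_km / v_fiber * 1000
print(f"extra one-way delay:    {one_way_ms:.2f} ms")
print(f"extra round-trip delay: {2 * one_way_ms:.2f} ms")   # ~1.6 ms, i.e. in the 1-2 msec range
```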

So when designing your wide area links, consider dark fiber, but be very careful to understand the routing of that fiber, the optical mileage (which is longer than the route mileage to allow for some slack in the fiber) and the type of fiber (which determines things like the refractive index).  With lit services, make sure to get actual latency measurements, not just SLA numbers (that are usually padded to minimize risk for the carriers.)

Building an ultra low latency network isn’t a black art.  It just requires an in-depth understanding of how networks are built, right down to the fiber routing, and it isn’t the least expensive way to go.


Trading, Trade-Offs, and What Does Google Privacy Have to do with Flash Orders???

February 16, 2010

Larry Tabb made a comment the other day about my post on Google’s strategy that got me thinking…  Larry said “It used to be that we were supposed to care about privacy – seems more and more that we are willing to give up privacy for functionality, ease of use, and the ability to better understand what I as a consumer want.”

Very true.

The same is true about trading.

When deciding how much personal information to reveal on Google Buzz or Facebook, the correct answer will depend partly on who you are (there is a big generational element to this), how much information you have to protect, and what you want to achieve.

If you are 18, have very little private information, and want to socialize as much with as many people as possible, then you are likely to reveal a lot (maybe too much) on Facebook.

If you are an adult professional (with a lot more at stake) and you just want to know what your former college buddies are up to, you might reveal a lot less.

So it is with trading.

Every decision about trading tactics (i.e. how to get the trade done, not whether to buy or sell) should be driven by the investment strategy.  For example:

  • If you are buying a small quantity of a highly liquid stock because you anticipate positive earnings, then you might well choose to cross the spread and pay the offering price to get your order filled quickly before (you hope) the stock spikes up.  Because your order is immaterial to daily trading volume, you have little to lose and (you hope) a lot to gain by showing your entire order to the market.
  • On the other hand, if you are a large fund trying to accumulate a substantial portion of a small-cap (and probably illiquid) stock which you intend to hold for 2-3 years, you are more likely to let your order sit in a dark pool, go to a broker for capital commitment, slice your order up using an algorithm, etc..  In general you will be more passive and less likely to show your hand.  You certainly wouldn’t show the entire order on the exchange, because you have a lot to lose and little incentive to rush.

The European regulation that addresses "best execution" is MiFID.  MiFID recognizes that different investors have different strategies and therefore will make different trade-offs in their trading tactics.  It does not impose a single, uniform definition of "best execution", any more than Facebook has a single definition of "appropriate privacy."  Instead, MiFID recognizes that best execution is not limited to execution price but also includes cost, speed, likelihood of execution, likelihood of settlement, and any other factors deemed relevant.

In contrast, the US regulation that addresses “best execution” (RegNMS) defines it very narrowly as getting an order executed at or between the best bid and the best offer on a publicly displayed market.  Never mind that you may want to do 100,000 shares of a small cap name and there may only be 100 shares at the inside quote.

In practice, traders in the US markets need to make trade offs and answer a variety of questions, including:

  • How important is certainty of execution?
  • How important is speed of execution?
  • What is an acceptable market impact?
  • How much of their hand to show?
  • etc.

One of the most important tradeoffs is between how much of your hand to show (resulting in market impact) versus certainty and speed of execution: if you show your entire order to the market you have a much higher probability of getting it filled quickly, but you will move the market.

It is instructive to consider some of the recent innovations in the US equities markets (or innovations that, while not strictly recent, have become more popular in the last few years) in light of these trade offs.

Take algorithms.  Algorithms allow a trader to slice a large order into many small orders and spread them out over time, usually to be executed on displayed markets that only offer liquidity in small quantities.  The trader can thus comply with best execution obligations (i.e. each individual “child” order is executed at or inside the NBBO) while minimizing market impact and information leakage.  (If you were to look at the average price achieved by all child orders that make up a single parent order, the parent order may well be executed at an average price better or worse than the NBBO at the time the parent order was created.  So if the client were buying and the market rose dramatically over the life of the parent order, did the client get “best execution” just because each individual child order looks good?)
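
For readers who want to see the mechanics, here is a minimal sketch (in Python) of the slicing idea: a TWAP-style scheduler that splits a parent order into evenly spaced child orders. It is illustrative only; a real algorithm would track fills, watch the order book, randomize timing, and enforce limit prices, and the symbol and quantities here are invented.

```python
from dataclasses import dataclass

@dataclass
class ChildOrder:
    symbol: str
    side: str
    quantity: int
    release_minute: int  # minutes after the parent order starts working

def slice_parent(symbol: str, side: str, parent_qty: int,
                 horizon_minutes: int, num_slices: int) -> list[ChildOrder]:
    """Split a parent order into evenly sized child orders spread over time."""
    base, remainder = divmod(parent_qty, num_slices)
    interval = horizon_minutes / num_slices
    children = []
    for i in range(num_slices):
        qty = base + (1 if i < remainder else 0)
        children.append(ChildOrder(symbol, side, qty, round(i * interval)))
    return children

# Buy 100,000 shares over a 6.5-hour trading day in 20 child orders.
for child in slice_parent("XYZ", "buy", 100_000, horizon_minutes=390, num_slices=20):
    print(child)
```

Each child order can then be worked at or inside the NBBO, which is how the parent order stays compliant with best execution obligations while limiting market impact and information leakage.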

Dark pools provide another alternative.  In general they offer less certainty or speed of execution, but also lower market impact.  An order may sit for days or weeks in a dark pool until it is matched.  (Your mileage may vary…  On a pure institutional/block system like LiquidNet, the order is likely to sit until it is matched by a contra-side institutional order.  On Millennium, it may be executed incrementally as it is matched against algorithms, but exposure to such smaller orders also creates greater risk of information leakage and gaming.)

Now let’s take a look at flash orders.

In a nutshell, flash orders allow a client to send an order to an exchange, and optionally permit the exchange to briefly "flash" the order to liquidity providers who can fill the order at a better price.  This improves the certainty of execution, and may provide price improvement over the NBBO, but it increases information leakage.  Should the client care?  Well, if he is not concerned with information leakage (maybe he only had that one order) and he wants to "get done" now, then he is probably happy to use a flash order.  If, on the other hand, his order is one of many child orders for a larger parent order, he may prefer to be more passive and decrease information leakage.

Some teenagers expose all their photos and personal information to the world (including, ahem, college admissions officers), while others only permit their friends access to their information.

On the face of it, this is good.  Flash orders provide a useful tool that allows a knowledgeable investor to make a reasonable tradeoff.

But consider Google Buzz.  Google Buzz was rolled out last week as Google's social network, its answer to Facebook and LinkedIn and MySpace.  Cleverly, Google leveraged users' Gmail contacts to automatically create "followers" on Google Buzz, and (by default) allowed any user of Buzz to see any other user's followers.  Google very quickly found out that many users don't want the world to know who all of their Gmail contacts are.  Especially if that list includes doctors, lovers, bookies, or other relationships best kept private.

So how does this relate to Flash Orders?

Google's Buzz appears to be a reasonably clever service, and it actually does allow users to control their privacy.  But Google erred by making the default privacy settings very non-private, and by making the privacy controls too difficult to find.  They made a decision on behalf of their customers to expose those customers' information.  They made the trade off.

Likewise, the problem with flash orders is really not flash orders themselves.  The problem is that in many cases a broker chooses to submit an order as a flash order, with the client never realizing (never mind understanding) the trade off that the broker has made on the client's behalf.  Of course this is just a symptom of some more fundamental problems, namely:

  • RegNMS dictates a very narrow definition of best execution, which has had the unintended consequence of encouraging innovation that helps the customer achieve their execution objectives despite this regulatory constraint.  Maybe we should have a broader definition of best execution that incorporates the client's trading objectives.
  • Brokers' incentives are not aligned with their clients' objectives, which is why brokers receive rebates for order flow, use flash orders, and generally do things that maximize profits for brokers rather than for customers (the phrase that comes to mind is "where are all the customers' yachts?")
  • Even asset managers, who act as the proxy for most investors, have incentives (maximizing assets under management) that are at cross-purposes to their clients' objectives (maximizing risk-adjusted return).

But those issues are a subject for another post.

Brennan


Why is Google getting into the telco business?

February 11, 2010

Recently I have been focusing heavily on trading and market microstructure, but my other hat is in the internet/telecommunications world, and yesterday’s announcement by Google piqued my interest.

Why would Google announce that it is getting into the broadband business (see http://www.google.com/appserve/fiberrfi/public/overview)?

To understand that, you need to understand what is strategic to Google (for background, Chris Dixon sets the table very nicely at http://cdixon.org/2009/12/30/whats-strategic-for-google/)

Microsoft wants you to live in Windows and Office, hence anything that makes those less relevant (see e.g. Netscape) is a threat.

Apple wants you to live in MacOS and iPhoneOS, buy your media through iTunes, etc.  (Apple is really a lot like Microsoft in that way, they just execute a lot better, and a lot of that has to do with the fact that they never thought of themselves as just a software company like Microsoft; they think of themselves as a total integrated experience company.)  Anything that disrupts or takes you away from that total integrated experience (insanely great, as designed by Apple) is a threat (see e.g. Flash).  This is why Apple bought Quattro Wireless (a mobile ad company):  so you can consume ads within apps on your iPhone/iPad instead of on the web… or more to the point, so that iPhone/iPad app developers have a way to monetize their apps and thus an incentive to develop for iPhone/iPad instead of for the web.

So what does Google want?  Google wants you to live on the web (unlike Apple or Microsoft which really want the web to be secondary to their platform) where they can deliver targeted search-based ads.  Let’s consider some strategic moves by Google:

  • Google and Apple are inherently at odds with each other, since the more time you spend in (for example) iPhone OS and iPhone apps, the less time you spend in the browser (looking at Google delivered ads.)  Which is why Google is promoting Android, a web-centric phone OS.  When you use an iPhone app, you aren’t on the web consuming Google ads (the internet is there in most apps, it is just buried as a communications layer that you don’t see.)
  • Google wants fast, ubiquitous, cheap, and open internet connectivity, which facilitates you spending time in web-centric applications (like Google search, Docs, Google mail, Google Apps…) and viewing ads delivered by Google.  This is why Google pushes net neutrality… That commoditizes internet  access and prevents ISPs from disintermediating Google.  That is also why Google bid for wireless spectrum… to push the market to provide open access over wireless.
  • With Google voice, the web is the platform… Since I started using Google voice, I find myself spending much more time in Google contacts, placing calls via the web, etc…  Google voice has shifted my phone experience to the web.  Where it sells ads…

So why is Google "planning to build, and test ultra-high speed broadband networks" (which, by the way, will also be "open, non-discriminatory, and transparent", i.e. an embodiment of net neutrality)?

Not because Google wants to be a telco.

Partly because Google wants to create some competition to the incumbent cable companies and telecom companies.  But Google can’t create a material level of competition.

What Google can do is to create pressure, through the media and through regulators, on the cable companies and telecom companies.  That pushes those companies to provide high-speed, open broadband networks (which, not coincidentally, make it much easier to live on the web, happily using Gmail, Google Apps, and consuming Google ads.)  Imagine your congressman or city councilwoman asking the local cable company when they come up to renew their license “Why can Google offer an open internet service with 1 gigabit fiber-to-the-home connections when you only offer a crummy slow 5 megabit connection as part of a bundle with 6 movie channels????”  Watch the cableco executives squirm as they try to explain that one.

So does Google want to be a telecom company?  No, they want to offer proof points and create pressure for faster and more open internet connections, so you can live on the web and consume ads delivered by Google.

Not that there is anything wrong with that.

Brennan


Buy-Side Tech: High Frequency Trading

January 20, 2010

On February 4, I will be speaking on a panel in Chicago on best practices in high frequency trading.  Great location, since HFT is often associated with cash equities, but a lot of the interesting opportunity (and a lot of the action) is in futures and in arbitrage between futures and cash markets.

More detail at: http://worldrg.com/showConference.cfm?confcode=FW10009

Brennan


High Frequency Trading: Market Structure, Technology & Regulation

November 25, 2009

On December 9 I will be chairing a panel on “Emerging Technologies Enabling High Frequency Trading”.  Still lining up the panelists, with some great candidates and looking forward to an interesting panel.

http://www.cmconsortium.com/high-frequency-trading

 


Capital Markets Consortium Seminar on Dark Pools

November 6, 2009

On November 18, I will be chairing a conference on Dark Pools, as well as one of its panels.

http://www.cmconsortium.com/dark-pool


Definitions: High Frequency Trading, Flash Orders, Dark Pools, Algorithms

October 30, 2009

As I have been following the commentary in the popular press on high frequency trading, dark pools, etc., I have noticed a lot of confusion on the terminology and what these things mean.  In particular I have seen reporters, public officials, and others talk about program trading when they really mean algorithmic trading, criticize high frequency trading for the (perceived) sins of “flash orders”, and generally conflate all of these things together.  A recent article in the NY Times (http://topics.nytimes.com/topics/reference/timestopics/subjects/h/high_frequency_algorithmic_trading/index.html) for example, says “Powerful algorithms — “algos,” in industry parlance — execute millions of orders a second and scan dozens of public and private marketplaces simultaneously. They can spot trends before other investors can blink, changing orders and strategies within milliseconds.” Actually that isn’t what “algos” are, at least not in common industry parlance.

In general the confusion stems from the fact that computers play a role in all of these, and so they tend to all get lumped together.  In order to have a healthy debate on US equity market structure, we should all have a good understanding of what these things are, so I will use this post to explain some key terms.  Others may use these terms differently, but I have outlined below what I consider to be widely accepted (by actual practitioners) definitions.

One important distinction to understand with any computer based trading is the phase of the trading process that is being automated.  At a high level:

  1. Pre-Trade: An investor (whether an individual investor, a portfolio manager at a fund company, or a computer program acting on behalf of an investor) performs some analysis that leads to a decision on whether to buy or sell a stock.  This is where “alpha” is created.
  2. Trade Execution: Once that decision has been made, a trader (or a computer) is responsible for implementing that decision, and uses discretion to decide when to place an order to buy or sell, what type of order to place, where to place the order, and the size of the order.  It is important to understand at this phase of the cycle that the trader is not deciding whether to buy or sell, but how to do so in the way that best meets the investment objectives that led to the buy/sell decision.  (For example, if the investor decided to buy because he is hoping for a positive earnings report the next day, the trader may want to buy more aggressively to get into the market ahead of that news.  If the investor decided to buy a large block because he has a 5-year view on the company's outlook, the trader may take a slower and more passive approach to avoid bidding up the price of the stock.)  The ideal execution tactic should support the investment strategy.
  3. Matching: Once the trader places his order, a variety of different mechanisms may be employed to actually match a buy order with a sell order.  Before the widespread adoption of computers, this matching function was performed by specialists and floor traders on the floor of physical exchanges, and by market makers and brokers over the telephone.

For each of the different techniques described below, I will identify where they fit in this simplified three-step process.  Since these different techniques evolved over time, I will describe them in (roughly) the historical sequence in which they appeared in the market.

Program Trading: Whether you call it Program Trading, Basket Trading, or List Trading, it is one of the oldest forms of trading using computer technology.  Often used as a term by the media to describe ALL forms of electronic trading, “program trading” best describes when a trader submits a list (or “basket”) of orders for simultaneous (or near simultaneous) execution.  Program trades can be used to achieve a number of investment objectives including transitions (when a plan sponsor moves assets from one money manager to another), rebalancing, moving funds into or out of an index, etc.. “Program trading” is fundamentally a mechanism to execute a series of trades across a portfolio of stocks.   The New York Stock Exchange defines program trading as “a wide range of portfolio trading strategies involving the purchase or sale of 15 or more stocks having a total market value of $1 million or more”.  While program trading is generally automated today through the use of computers, the fundamental strategies (e.g. of tracking an index) preceded widespread automation.  Automation just makes program trading faster and easier.  More important, program trading is used as a mechanism to implement an investment strategy.  It is not a strategy in itself, and therefore fits into phase #2 in the process outlined above.  Portfolio Insurance, commonly blamed for the market crash in 1987, is one (but only one) application of program trading.
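
As a toy illustration of the NYSE definition quoted above, here is a small Python check; the basket contents are made up for the example.

```python
def is_program_trade(basket: dict[str, tuple[int, float]]) -> bool:
    """basket maps symbol -> (shares, price per share).
    Flags baskets of 15+ stocks with total market value of $1 million or more."""
    total_value = sum(shares * price for shares, price in basket.values())
    return len(basket) >= 15 and total_value >= 1_000_000

# 20 invented names at $80,000 apiece: 20 names, $1.6 million total -> True
basket = {f"STK{i:02d}": (2_000, 40.0) for i in range(20)}
print(is_program_trade(basket))
```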

Electronic Trading: While “programs” could be executed with minimal computer technology, and were around even when most trades were executed by specialists on the (physical) floor (or market makers in the over the counter markets), the next phase in automation was when computers were introduced into the actual process of matching buy and sell orders.  So the first proper use of the term “electronic trading”, and still its best definition, is the use of computers to  match orders, i.e. step #3 in the process above.  In the US, services such as Globex and Instinet were pioneers in electronic trading.  Exchanges in Europe were among the first to go electronic, eventually followed by the US exchanges (in part by merging with existing non-exchange based electronic markets.)  Some of these systems were and are continuous real-time systems, others are “point in time” matching systems (or “crossing networks”), and while they take different approaches, they are all fundamentally matching buy and sell orders electronically.  Today most trading of simple securities such as equities is electronic, with lower rates of technology adoption in markets where instruments are less well standardized (e.g. credit derivatives.)

Dark Pools: While the name sounds sinister, Dark Pools developed to automate the function of the block trading desks that used to mint large amounts of money for sell-side firms.  Originally called "crossing networks" (early examples included Lattice, ITG Posit, the Instinet Cross, etc.), they matched (or "crossed") large blocks of stock.  Today they include ITG Posit, Pipeline, Goldman Sachs Sigma-X, UBS PIN, Credit Suisse Crossfinder, NYFIX Millennium, and many more.  The appeal of dark pools is not that they allow traders to hide surreptitious or illegal activity, but that they allow traders to buy and sell large blocks of stock without moving the market.  Originally dark pools only matched large blocks of stock.  More recently dark pools opened up to algorithms (see below), which allows traders to expose large blocks of stock to the orders generated by algorithms without moving the market (because the block of stock is not displayed, nobody will see a 10,000 share sell order suddenly appear at the NBBO and watch the price drop in reaction).  They allow algorithms the opportunity to trade against the large blocks and benefit from price improvement (i.e. the ability to get a price in between the best bid/offer).  Some dark pools are operated by independents (e.g. LiquidNet, Pipeline), others are vehicles for "internalization" by brokers (i.e. they allow brokers to trade their customers' orders against each other and against their own inventory, providing opportunity for price improvement and reducing exchange fees).  Dark pools implement the third step (matching) in the 3-step process described above.

Algorithmic Trading: Once the function of matching orders was automated, networks were established to connect to these markets, and programmable interfaces such as FIX were developed (as opposed to the dedicated screens through which early electronic platforms were accessed), it became possible to automate the function of delivering orders to an electronic market.  One of the jobs of a trader is to manage the flow of orders into the market so as to achieve "best execution" (a topic for another post).  With electronic interfaces in place, it became a (relatively) straightforward process for programmers to develop automated systems that took an order from a customer or portfolio manager, sliced that order into smaller pieces (which would have less market impact) and sent them into an execution venue.  This function is performed by computer algorithms, and came to be known as algorithmic trading.  Those purists with degrees in Computer Science may protest that ALL functions performed by computers are executed by algorithms, and an introductory course in algorithms is mandated in all Computer Science curricula offered today.  Mathematicians would argue for an even broader application of the term algorithm, and they would be right.  But in the trading world, the term "algorithm" is generally understood to mean automation of the (very tactical) process of placing a (usually largish) order into the market, often by means of breaking it into smaller chunks and managing the timing of those "child orders" into the marketplace so as to achieve a particular objective.  (That objective is generally formulated as a benchmark to be tracked, such as Volume Weighted Average Price.)  In other words, algorithms are used to automate step #2 in the process above, and contrary to the assertion in the NY Times:

  • They don’t “execute millions of orders a second” (they generally spread the execution of an order out over hours, or even days and weeks, placing “child orders” into the markets with intervals of minutes or hours.)
  • Just like human traders, they do "scan dozens of public and private marketplaces simultaneously", both to assess the amount of liquidity in the market (e.g. to avoid placing orders too large or too frequently and thus causing prices to move) and to determine where best to place the order.
  • They do try to detect “trends before other investors can blink”, but primarily to avoid getting poor executions, e.g. to avoid accidentally buying at the peak of a transient spike in price (not to scalp investors).
  • They generally don’t “change orders and strategies within milliseconds”, and while they might change orders (e.g. if the market starts moving against them) they do so to implement a clearly defined strategy (e.g. “buy 100,000 shares passively without moving the market” or “sell 10,000 shares at the daily volume weighted average price” or “buy 20,000 shares at as close a price to now as you can.”)

Algorithmic trading came about for four reasons:

  • First, algorithms are simply a way of automating what traders already did.  That is to say, looking at multiple markets and determining where best to place an order (called "smart order routing" when computers do it; see the sketch after this list), and breaking large orders into smaller chunks that can be released into the market at the optimal time.
  • Second, as the trading process has come under the microscope to be measured, and as benchmarks such as VWAP have been widely adopted, algorithms provide a simple and low cost way to execute against a benchmark.
  • Third, as the buy side has assumed more responsibility for its own trading, algorithms provide a low-cost way to execute trades across multiple brokers without hiring large trading staffs.
  • Finally, RegNMS has imposed "best execution" rules that require trades to be executed at the National Best Bid/Offer, which has resulted in smaller orders being shown at the NBBO, and as a consequence has driven traders to slice institutional orders up into retail-sized chunks (to match the orders at the best bid/offer).
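
Here is the minimal smart-order-routing sketch referenced in the first bullet: given displayed offers from several venues, route a marketable buy order to the best-priced venue first and spill over as needed. The venue names, prices, and sizes are invented, and a real router would also weigh fees, latency, and fill probability.

```python
def route_buy(quantity: int, offers: list[tuple[str, float, int]]) -> list[tuple[str, int, float]]:
    """offers: (venue, offer price, displayed size). Returns (venue, shares, price) routes."""
    routes = []
    remaining = quantity
    for venue, price, size in sorted(offers, key=lambda o: o[1]):  # best (lowest) offer first
        if remaining <= 0:
            break
        take = min(remaining, size)
        routes.append((venue, take, price))
        remaining -= take
    return routes

offers = [("VENUE_A", 20.02, 300), ("VENUE_B", 20.01, 200), ("VENUE_C", 20.03, 1000)]
print(route_buy(600, offers))
# [('VENUE_B', 200, 20.01), ('VENUE_A', 300, 20.02), ('VENUE_C', 100, 20.03)]
```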

Strategy or Black Box Trading: While algorithms are focused on the tactics of trading (i.e. given a decision to buy or sell a quantity of a security, how is that decision best effected), strategies or black box trading systems are one step higher in the food chain.  Such systems continually scan streams of market data, analyze them for patterns, and make decisions on whether and how much of a security (or usually a set of securities) to buy and/or sell.  They fit into step #1 of our 3-step process.  This includes strategies such as “High Frequency Trading” and statistical arbitrage (which may or may not be high frequency).  They are quantitatively driven techniques, implemented using high-speed computers.

High Frequency Trading: High frequency trading, very simply, encompasses a range of trading strategies (and therefore fits into step #1, i.e. pre-trade) that involve the rapid buying and selling of securities (and often the rapid posting and cancellation of orders as well).  Broadly there are three classes of strategies pursued.  These strategies are not exclusively high-frequency, although they are used by high frequency traders (among others):

  1. Automated market making, where the HFT trader posts buy and sell orders simultaneously, makes some money (maybe) on the spread, and makes some money on rebates paid by exchanges in return for posting orders.  Like the market makers of old, HFT firms make money on some trades, lose on others, but expect to make a net profit across a large number of trades.
  2. Predictive traders, where the HFT firm employs software that does try to "spot trends before other investors can blink" and, like all momentum traders, tries to buy before the price has run up and sell out before it crashes back down.  In a way they are like the many day traders in 1999 who bought internet stocks in the expectation that prices would run up, and tried to sell them before everyone headed for the exits.  While this time the game is measured in milliseconds, the winners are still those who bet right, and get out early enough.  Of course some are stuck holding the bag after the price has collapsed.  And this time it is all done using computers.
  3. Arbitrage traders, who look for short-lived inefficiencies in the markets, buy the (relatively) undervalued asset while simultaneously selling the (relatively) overvalued asset, and unwind the trade when prices come back into equilibrium.  Simple examples are pairs trades (i.e. a pair of securities between which some price relationship should hold, such as options with differing durations, or two different share classes of the same stock); a minimal sketch of such a signal follows this list.  More complex examples are statistical arbitrage, where the relationship is between complex baskets of securities.
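
Here is the minimal pairs-signal sketch promised in item 3. The lookback window, entry threshold, and price series are assumptions chosen purely for illustration; real statistical arbitrage models are far more involved.

```python
from statistics import mean, stdev

def pairs_signal(prices_a: list[float], prices_b: list[float], z_entry: float = 2.0) -> str:
    """Flag when the spread between two related instruments strays far from its recent average."""
    spreads = [a - b for a, b in zip(prices_a, prices_b)]
    mu, sigma = mean(spreads[:-1]), stdev(spreads[:-1])   # history excludes the latest tick
    z = (spreads[-1] - mu) / sigma
    if z > z_entry:
        return "sell A / buy B (spread rich)"
    if z < -z_entry:
        return "buy A / sell B (spread cheap)"
    return "no trade"

# Invented prices: A suddenly looks rich versus B relative to recent history.
a = [50.00, 50.10, 50.05, 49.90, 50.20, 50.00, 51.20]
b = [49.80, 49.85, 49.90, 49.75, 49.90, 49.85, 49.90]
print(pairs_signal(a, b))   # sell A / buy B (spread rich)
```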

Unlike algorithmic trading, where computerized techniques are used to establish or exit from a long or short position, and where portfolio turnover ranges from high to low (with average holdings potentially multi-year), high frequency strategies are neither long nor short but market neutral, portfolio turnover is extremely high (average holding periods are measured in seconds or milliseconds), and the strategies aim to end the day "flat".

Direct/Sponsored/Naked Access: As trading has become more electronic, many buy-side institutions have chosen to take on the trading function that was historically performed by sell-side traders.  In some cases this is to better control the execution of their trades; in other cases it is simply to reduce costs (or for the broker to reduce costs by pushing clients from full-service to a low-touch model).

  • The first, and still most pervasive, form of this is direct market access, where the broker provides the buy-side institution with some combination of a terminal (execution management system) and connectivity; the client sends orders electronically to the broker, where they pass through the broker's order management, risk management, and compliance systems, and on to the exchange for execution.
  • As clients became more sensitive to speed and latency, many brokers offered their clients sponsored access, where the client connects directly to the exchange, using the broker's membership and clearing through the broker, but bypassing networks that route from the client through the broker's data center and on into the exchange.  Instead, the clients either collocate with the exchange or connect directly.  In this model, the client still sends their orders through the broker's risk management and compliance systems, either through software written by the broker (e.g. Lime Brokerage, which also provides the hosting/collocation) or in many cases by specialized vendors such as FTEN.  This software is deployed at the same location as the buy-side institution's servers (typically collocated at an exchange).
  • Finally, there are some clients who have an intense need for speed, and who have built systems that are as lean and fast as possible.  These institutions use "Naked Access", where they collocate their servers at an exchange, connect directly into the exchange using the broker's sponsorship and clearing, and where there is no brokerage system/software in between performing risk management or compliance functions.

Flash Orders: Probably the most provocative innovation in 2009 (although the concept is not new, and has existed in derivatives markets well before being adopted by the cash equities markets) is the flash order.  The idea of a flash order is very simple:  A trader can (optionally) send his order to an exchange or ECN and specify that the order is a flash order.  When the exchange receives the order, if it cannot be immediately matched, it is “flashed” (shown) electronically for a very brief period of time to firms that have signed up to receive flash orders.  Those firms have a brief amount of time in which to respond to that order with a matching order, which allows the original order to get a better price than it might otherwise have received.  If none reply in time, the order is routed out to another exchange.  The firms that receive flash orders are typically High Frequency Trading firms pursuing an “automated market making” strategy (see above).  It is important to note that the firms sending flash orders are typically not high-frequency traders, and only some high frequency traders choose to receive and respond to flash orders.  Flash orders are an interesting example of where one firm (the firm submitting the flash order) is focused on the tactics of trade execution (#2 in the 3 step process) and seeking a good quality execution, and is interacting with another firm that is trading in the market as an inherent component of their strategy (i.e. the HFT firm).
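
To make that sequence concrete, here is a conceptual sketch of the flow in Python. It is not any exchange's actual matching logic; the 30 ms window, the firm names, and the responder behavior are all assumptions.

```python
import random

FLASH_WINDOW_MS = 30  # assumed flash duration, for illustration only

def handle_flash_order(order, local_book_match, flash_subscribers):
    """If an inbound flash order can't be matched locally, briefly expose it to
    subscribed liquidity providers before routing it out to another market."""
    if local_book_match(order):
        return "matched on local book"
    for firm in flash_subscribers:                       # order is flashed to each subscriber
        response = firm.respond(order, FLASH_WINDOW_MS)  # firm may return a matching price
        if response is not None:
            return f"filled by {firm.name} at {response:.3f} (possible price improvement)"
    return "no response in time; routed out to another market"

class MarketMaker:
    def __init__(self, name):
        self.name = name
    def respond(self, order, window_ms):
        # A subscriber might or might not respond inside the window (random here).
        return order["limit"] - 0.001 if random.random() < 0.5 else None

order = {"symbol": "XYZ", "side": "buy", "qty": 100, "limit": 20.02}
print(handle_flash_order(order, local_book_match=lambda o: False,
                         flash_subscribers=[MarketMaker("HFT_A"), MarketMaker("HFT_B")]))
```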

Hopefully this brief explanation provides a framework for understanding and assessing the various technology innovations that are stirring debate on our public markets.

(Disclosure:  I sit on the board of Marketcetera, which offers products to the strategy trading marketplace, and has relationships with the NYSE, Lime Brokerage, and others.)