Big bank mergers, big cuts in data spend? Not so fast, experts say

April 10, 2023

Brennan Carley was quoted in this recent article by Max Bowie at Waters Magazine, on how UBS’s acquisition of Credit Suisse will affect their systems and data spend:

Waters Data Management – Should I stay or should I go? What data execs can expect from the UBS–Credit Suisse merger

April 5, 2023

Brennan Carley spoke to Waters magazine recently on the people impact of UBS’s acquisition of Credit Suisse:

Waters Wrap: Big Tech, exchanges and a rapidly evolving market

December 21, 2022

For Waters’ 2022 wrap-up, Brennan Carley was interviewed for his thoughts on the Microsoft/LSEG deal:

WatersWrap: ASX’s Chess DLT meets calamitous fate—what can be learned?

November 22, 2022

Brennan Carley was recently interviewed by Anthony Malakian from Waters.

A New Chapter

October 22, 2021

Following the acquisition of Refinitiv by the London Stock Exchange Group, I will be leaving Refinitiv where I have spent the last ten years. For most of that time I ran the Enterprise Data Solutions business, and two years ago I took on a new challenge and created an entirely new function: the Consulting & Implementation Team which works with Refinitiv’s most strategic customers. I have had many great experiences at Refinitiv, and worked with some amazing people, and now it is time for a new chapter.

Beginning in 2022 I will once again be providing advisory services through Proton Advisors.

Architects of Electronic Trading

October 9, 2014

Check out Stephanie Hammer’s book, “Architects of Electronic Trading: Technology Leaders Who Are Shaping Today’s Financial Markets.”  I had the opportunity to contribute a chapter (Chapter 14, on FIX.)

How Fast is Enough?

October 31, 2011

In October of 1851, Julius Reuter used carrier pigeons between Brussels and Aachen, closing the gap in telegraph lines that connected Berlin and Paris.  This gave his customers a latency advantage, enabling traders in Paris to learn of news from Germany ahead of their competitors.

Since then, and especially in the last few years, many millions have been spent, and we are now measuring trading delays in microseconds instead of hours.  Much has been written on the topic of reducing latency in trading systems, which raises the question: When it comes to trading, how fast is fast enough, and where will it end?

A recent survey concluded that:

  • 71.6% of respondents rated latency as crucially important
  • Of those, 13.8% need the lowest possible latency
  • The other 57.8% indicated that they don’t necessarily need to be the very fastest, but that being slower negatively impacts trading profits.

So why the difference, and is it as simple as “need to be the fastest” or “fast is good but it doesn’t need to be the best”?

Let’s analyze this.

  • Different firms or trading desks have different strategies.  Some are engaged in pure latency arbitrage (when you see a price divergence of the same instrument traded in two markets, buy the cheaper one and sell the pricier one.)  Others have market making strategies, statistical arbitrage strategies, news-based strategies, and so on.
  • For any strategy, there is a signal that is an input to the strategy, from which the strategy ultimately makes a buy or sell (or do nothing) decision.  A signal could range from a price move on a market, to a bit of news on a news feed, to a research report published by an analyst.  The trading decision could be fully automated in a computer or it could be made by a human being; the principle is the same.
  • While different traders pursue different strategies, they are hardly all unique.  So it should be no surprise that when a signal is generated, there are multiple traders with strategies that will read that signal, make a trading decision, and generate orders into the market.  As those orders flow into the market, they will push the price of the security towards a new equilibrium point, until either the signal has been fully “priced in” to the security being traded, or until a newer signal is created in the market.  The first to trade on that signal will capture most of the “alpha” from the signal, and over time the alpha will decay.
  • Take for example a simple pairs trade, i.e. assume that there is a strong price correlation between security A and security B.  If the price of A moves up and B moves down, traders will buy B and sell A, pushing up the price of B and pushing down the price of A, until the prices come back into alignment or until some other event occurs.
  • So if your strategy arbitrages A and B, you are competing in the latency game with everyone else that trades that arbitrage.  But suppose also that while there is a correlation between A and B, there is also a correlation between B and C, and therefore between A and C.  You are now not only in a latency race with other traders who are trading A and B, but with those who are trading the arbitrage between B and C and those trading the arbitrage between A and C.  Essentially you are in a race with a set of strategies that are triggered by the same (or correlated) signals.
  • Once a signal occurs, the race begins, and the traders with the fastest systems will be the first to trade and capture the maximum possible alpha.  They, and other traders, will continue to trade until the alpha has decayed.
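
The pairs-trade logic described above can be sketched in a few lines.  This is a hypothetical illustration, not a real trading system: the function name, prices, and entry threshold are all assumptions made up for the example.

```python
# Hypothetical sketch of the pairs-trade logic described above.
# Prices and the entry threshold are illustrative, not real data.

def pairs_trade_signal(price_a, price_b, mean_spread, entry_threshold):
    """Return an action when the A-B spread diverges from its historical mean."""
    spread = price_a - price_b
    if spread - mean_spread > entry_threshold:
        return "sell A, buy B"   # A looks rich relative to B
    if mean_spread - spread > entry_threshold:
        return "buy A, sell B"   # A looks cheap relative to B
    return "do nothing"

# A has moved up while B has moved down, widening the spread past the threshold:
print(pairs_trade_signal(price_a=101.0, price_b=99.0,
                         mean_spread=0.0, entry_threshold=1.0))
# → sell A, buy B
```

In practice the mean spread and threshold would be estimated from historical data, and every other firm running this strategy sees the same divergence at (nearly) the same moment, which is exactly why the race below matters.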

So how fast is fast enough?  Very simply, to capture the maximum value of the trade, you need to be as fast as the fastest of the other traders who have comparable strategies, i.e. strategies that trade off the same or correlated signals.

What if you are not as fast as the fastest of your competitors?  As long as the alpha has not decayed completely, there will be opportunities for slower traders to pick up some of the remaining alpha.  Which raises the question: how quickly does alpha decay?

The answer to that question depends on two things:

  • How clear and unambiguous the signal is, which determines how long it takes for the market to digest, analyze, and process the signal.  In the case of a pure latency arbitrage strategy, the signal is the movement in the price of a common security on two venues, which is very clear and will immediately attract traders who will quickly (i.e. in microseconds) arbitrage away the price discrepancy.  At the other end of the spectrum, if the signal is an analyst’s report, investors will differ in their assessment of the true value of the security, and it will take longer (hours, days, perhaps weeks) before they have fully appreciated the impact of the report and it is reflected in the price of the security.
  • How many firms are trading on that signal and how fast they are.  The more firms that recognize and trade on a signal, and the quicker they are to send orders into the market, the faster the prices will converge and the alpha will decay.
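
One simple way to think about the combined effect of these two factors is an exponential decay model, where a single decay constant stands in for both signal clarity and the speed of the competing firms.  The model and the time constants below are illustrative assumptions, not measured values.

```python
import math

# Illustrative exponential model of alpha decay.  The decay constant tau is a
# stand-in for how clear the signal is and how fast the competing traders are;
# the specific values below are assumptions for the example.
def remaining_alpha(t_us, tau_us, alpha_0=1.0):
    """Fraction of the initial alpha left t_us microseconds after the signal."""
    return alpha_0 * math.exp(-t_us / tau_us)

# A crisp latency-arbitrage signal (tau = 100 microseconds) vs. a slow,
# ambiguous one (tau = 1 hour, i.e. 3.6e9 microseconds), both read 500
# microseconds after the signal fires:
print(remaining_alpha(t_us=500, tau_us=100))    # fast decay: under 1% left
print(remaining_alpha(t_us=500, tau_us=3.6e9))  # slow decay: essentially all left
```

The point of the sketch is the contrast: with a crisp signal, being half a millisecond behind leaves almost nothing to capture, while with an ambiguous signal the same delay costs almost nothing.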

So how fast is fast enough?  It depends on your strategy.  You need to be as fast as the fastest competitors who are trading “equivalent” strategies (i.e. strategies that are based on the same signals or on signals with strong correlation).  If you are not the fastest, you may still be able to capture some value.  In general, the more your strategy depends on clear and unambiguous trading signals, the more rapidly alpha will decay and therefore the more important it is to be at the very front of the pack.

As the fastest traders continue to invest in infrastructure to reduce latency, the rest of the players need to either step up to the new higher bar, or trade different strategies, typically those where the correlations are less obvious or weaker.  Even within those strategies, however, a competitor with an equivalent strategy and faster infrastructure will always gain a greater share of the profits.  150 years ago that advantage was measured in hours.  Now it is measured in microseconds, and firms at the leading edge are measuring in nanoseconds.  As long as someone is able to squeeze some more latency out of their system, the race will continue.

Caveat:  These comments relate exclusively to the ability to capture alpha from trading.  Investors who are looking to enter or exit a position (either long or short) trade with the objective of reducing market impact, which is a quite separate discussion.

Financial Information Forum Market Data Capacity Working Group

October 16, 2011

On Thursday, October 20, I will be speaking at the Financial Information Forum Market Data Capacity Working Group.  I started the working group in 1997, so it will be fun to go back after all these years!

Management Strategy in Technology Sectors

August 24, 2011

I am excited to announce that I will be teaching this fall at my alma mater, New York University.  I will be teaching a class in Management Strategy in Technology Sectors:

“This course provides an overview of the process of implementing a successful management strategy in an information-, technology- and knowledge-intensive environment. Fundamental topics include the development of strategic vision, objectives and plans; implementation of strategy and the evaluation of performance; industry and competitive analysis; SWOT analysis and competitive advantage and sustained advantage. Advanced concepts include strategic positioning in global markets, Internet strategy, strategy in diversified firms, and interactions between organizational structure and strategy and between ethics and strategy.”

Low Latency Networking… Will You Choose the Light Side or the Dark Side?

April 5, 2011

Unlike Star Wars, both the light side (lit wavelengths) and the dark side (dark fiber) can be good for your latency.  There is no single right answer, so in this article I will explain them both and highlight their respective strengths and weaknesses.  Let’s start with the basics.

Dark Fiber:

In the beginning there was darkness.   All fiber starts out dark, i.e. strands of fiber optic cable with no signal.  Telecommunications companies acquire rights of way (roads, rail lines, etc.), install conduit, and install fiber optic cable(s) in the conduit.  Cables may have from 100-800 strands per cable, each of which is “dark” when first installed.  Since spools of fiber have limited length, on all but the shortest routes strands are spliced together to create a continuous path for light to travel from end to end.  Cables also have “slack” along the route, to accommodate expansion and contraction of the earth, bridges, etc. (like expansion joints on bridges) and to make it easier to splice cables back together in the event of a cable cut.  (If there were no slack in the route, cable would be much more likely to stretch and break under tension, and it would be more difficult to splice the cable back together when it gets cut.)
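
All of that splicing and slack matters for latency, because propagation delay scales directly with the length of the glass the light actually travels through.  A back-of-the-envelope estimate is easy to sketch; the refractive index (roughly 1.47 for standard single-mode fiber) and the 5% slack figure below are illustrative assumptions, not a guarantee for any particular route.

```python
# Back-of-the-envelope one-way propagation delay for a fiber route.
# The refractive index (~1.47 for typical single-mode fiber) and the 5%
# slack allowance are assumptions for illustration.
C_VACUUM_KM_PER_S = 299_792.458   # speed of light in vacuum
REFRACTIVE_INDEX = 1.47           # light in fiber travels at c / n

def one_way_delay_us(route_km, slack_fraction=0.05):
    """One-way propagation delay in microseconds, including cable slack."""
    fiber_km = route_km * (1 + slack_fraction)
    speed_km_per_s = C_VACUUM_KM_PER_S / REFRACTIVE_INDEX
    return fiber_km / speed_km_per_s * 1e6

# A 1,000 km route works out to roughly 5 milliseconds one way:
print(round(one_way_delay_us(1000), 1))
```

Roughly 5 microseconds per kilometer of fiber is a useful rule of thumb; it also makes clear why route length (and every extra kilometer of slack or detour) dominates the latency budget on long-haul paths.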

Lit Waves:

To actually make all that fiber do something useful, telecommunications companies take pairs of strands (one for each direction that signals will travel) and “light” them with optical transmission equipment, literally sending waves of light along the path.  Data is encoded and transmitted over these waves, and may be presented by the optical transmission equipment in many standard formats such as Gigabit Ethernet, 10Gig Ethernet, and SONET (i.e. OC-3/12/48/192).  They then sell these as Ethernet or SONET services, or use them as transport for other services (e.g. IP/MPLS networks, voice transport, etc.)  Typically the optical transmission equipment used today is DWDM (Dense Wave Division Multiplexing) equipment, which allows telecommunications companies to transmit many (potentially hundreds) of waves on a single fiber pair, using different “colors” or frequencies of light for each wave.  (Since DWDM so dramatically expands the effective capacity of fiber optic cables, it has made it much more viable for telecommunications companies to sell unused strands of dark fiber.)

For routes longer than about 50-60 miles, things start getting a bit more complex.  As light travels along fiber optic cable, signals deteriorate, more so as there are more splices or if the quality of those splices is poor.  So telecommunications companies construct small data centers along their routes to house equipment for optical amplification (which takes place entirely within the optical realm) and regeneration (where the signal is converted to an electrical signal and a shiny clean new optical signal is generated.)  Just as highways have on-ramps and off-ramps to pick up and drop off traffic along the way, and interchanges to other highways, so too do long haul fiber networks.  In the case of fiber networks, telecommunications companies locate switching centers where it is convenient to interconnect with the telecommunications equivalent of on-ramps and off ramps (cell towers, internet peering points, local phone and cable companies, etc.) and to interconnect with other long-haul routes, using ROADMs (reconfigurable optical add-drop multiplexers) to provide the optical switching.

Do you choose the dark side or the light side?

So what does this mean for businesses that care about low latency?  As you might expect, there are some tradeoffs.

Dark Fiber: In general you will get better latency, more control, and greater economic scalability with a dark fiber network, at a cost of some complexity and initial expense.

  • With a dark fiber network, you get to make some of the decisions that the telecommunications companies would make on your behalf if you were buying lit waves.  You get to select the optical equipment and how it is configured.  You get to decide whether and where to use optical amplification vs. regeneration, and how many waves to configure on the network.  It is your private network and you control it.  As you might imagine, most telecommunications companies will opt for design decisions that maximize the amount of bandwidth and the number of services they can sell (since this maximizes their revenue), while you may choose to optimize for reduced latency.
  • You can also eliminate devices altogether (such as ROADMs) that add latency but don’t add functionality that you need.
  • While you can’t control it, you have greater visibility of the physical routing of your network, the type of fiber used, the amount of slack, and the location of facilities used for amplification and regeneration.
  • The initial cost of a dark fiber network is likely to be higher than the cost of a single lit wave (less so on short routes where equipment is a greater percentage of the overall cost than the fiber), but the cost is largely fixed.  If you own the fiber, the incremental cost of adding waves is very small, making it much easier and cheaper to add waves (e.g. when adding a new trading strategy, data feed, or trading desk.)  For large institutions, the total cost could well be less than buying a large number of individual waves.
  • The complexity of operating a dark fiber network is greater than that of buying lit waves, although some dark fiber providers will either operate the network for you or have partners that provide outsourced operations.
  • Since fiber cables are point to point and there aren’t routes between every possible pair of end-points, you may need to integrate multiple routes from multiple dark fiber providers to connect all of the end points you need.  If you are focusing on low latency trading, the dark fiber providers typically have routes that interconnect the key exchange colocation facilities, reducing the need for you to do this integration.
  • Finally you probably want resilience in the event of fiber cuts or equipment failures.  With dark fiber you have the benefit of knowing your exact fiber route, so you can ensure true diversity, but you also have the responsibility of sourcing and implementing an alternate route.

Lit Waves:  With lit wave services most of the complexity is hidden from you and the initial cost is lower, but the latency will generally be higher, the costs will not scale as well, and you lose some control.

  • With lit wave services, you are one of many customers using a fiber pair that the owner wants to manage for maximum revenue.  Since most telecommunications markets are not particularly latency sensitive, most telecommunications companies optimize their network design to maximize bandwidth, the number and type of services, and the number of interconnection points (each of which represents a revenue opportunity).  All of these add latency.
  • Since you are not consuming an entire fiber pair, your initial cost for a wave will be much lower, and if you only need a small number of waves your total cost will likely be lower.  But as you add waves, your costs can grow rapidly.
  • A lit wave service is engineered, operated, and managed entirely by the telecommunications company.  Your provider has already deployed and managed the equipment needed to provide the service, and integrated the various fiber routes necessary.  This simplifies your life, although it reduces your visibility of the network.  For example, the operator may choose to “groom” the circuit (move your traffic from one fiber to another, possibly to an entirely different route.)  This can affect your latency and could compromise your diversity.  With large telecommunications companies, even your account team probably doesn’t have visibility of the actual routes used to deliver your service.
  • Telecommunications companies can provide lit wave services that integrate multiple underlying fiber paths.   As a result, they can deliver to more end-points than most dark fiber providers, and they can provide “protected” services (composed of two or more diverse routes).

Both dark fiber and lit wave services will provide you with better latency than other alternatives such as IP or MPLS networks, and there is no single right choice for every application or every business.  Understanding the differences is a crucial first step to making the right decision for your business.