In this article, Mike O’Hara, CEO of HFT Review – in conversation with Arnaud Derasse, CEO of Enyx, Stephane Tyc, Co-Founder of Quincy Data and Ron Huizen, Vice President of Systems & Solutions at BittWare – looks at how the combination of microwave and FPGA technology is changing the landscape of market data distribution and consumption, particularly in high performance, low-latency trading environments.
In April 2013, Quincy Data, a US-based provider of ultra-low latency market data services, announced that they could deliver CME futures data directly into NASDAQ’s primary data centre in Carteret, NJ, rack-to-rack, within 4.16 milliseconds.
Considering the 734-mile distance between the two data centres – CME’s is located in Aurora, IL – this is an impressive achievement, given that the absolute theoretical minimum latency (based on the speed of light in a vacuum) is just 3.95 milliseconds. Since the April announcement, Quincy has lowered its latency further – to 4.09 ms – and the company plans to provide even faster delivery very soon.
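As a rough sanity check (a back-of-the-envelope sketch, not Quincy’s own calculation), that theoretical floor follows directly from the straight-line distance and the speed of light in a vacuum:

```python
# Back-of-the-envelope check of the theoretical latency floor (illustrative only).
# Assumes a straight-line 734-mile path and light propagating in a vacuum.
MILES_TO_KM = 1.609344
SPEED_OF_LIGHT_KM_PER_S = 299_792.458

distance_km = 734 * MILES_TO_KM                          # ~1,181 km
floor_ms = distance_km / SPEED_OF_LIGHT_KM_PER_S * 1_000

print(f"Theoretical one-way floor: {floor_ms:.2f} ms")   # ~3.94 ms with these rounded inputs
print(f"4.09 ms is ~{floor_ms / 4.09:.0%} of the speed of light in a vacuum")
```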
“It’s better to be fast 99% of the time than slow 99.999% of the time.”
Quincy Data mantra
Until relatively recently, distributing and processing market data at this kind of speed was unheard of. However, with the combination of two technologies, microwave and FPGA, it is now possible to distribute and process market data faster than ever before. Quincy’s service makes use of both of these technologies, being powered by the McKay Brothers Microwave Network and using FPGA feed handlers by Enyx and NovaSparks within the delivery chain.
In the high frequency trading (HFT) space, it is probably fair to say that there has been a great deal of hype surrounding FPGA and microwave technology in the last couple of years. But the latency figures speak for themselves and there is no shortage of demand where speed is concerned.
“If you have the best microwave link, there is never enough supply to satisfy the demand, so the take-up of this service has been incredibly fast”, says Quincy Data’s co-founder Stephane Tyc.
“There is very little bandwidth and a lot of data worldwide. Distributing the fastest data across many continents is very hard. Still”, he adds, “this is a very competitive space for vendors”.
There are some key differences to consider between short-haul and longer-haul microwave routes, according to Tyc.
“Intra-urban links are much easier to build than links over long distances, so the supply is much higher and therefore the latency difference between alternative links is likely to be small. But for the longer-haul networks between cities, the difference in latencies is much greater. There are big differences in reliability too”, he says.
Reliability is something that service providers like Quincy and their microwave partners McKay Brothers are constantly striving to improve, although microwave and millimetre wave links are unlikely to ever match the reliability of fibre networks, not least because of potential interruptions such as adverse weather conditions. But Quincy’s mantra is that it is better to be fast 99% of the time than slow 99.999% of the time.
Another issue to consider is the limited bandwidth that wireless offers compared to fibre, as microwave links generally carry only tens or hundreds of Mbps. But Tyc believes that this is less of a problem than it may have been in the past.
“The bandwidth constraints have a very interesting impact on the technology teams at the firms taking the service”, he says. “In the past, bandwidth was assumed to be almost infinite, so developers didn’t really worry about how packets were formed on the wire, they left all of that stuff to network engineers and – at most companies – the two live in different worlds. But now, software engineers working with market data can’t afford to ignore how packets are transmitted, so there are some very interesting rapprochements between network engineers and developers”.
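One reason the packet-level details matter so much on a constrained link is serialisation delay: the time a packet occupies the wire grows with its size and shrinks with the link rate. The sketch below uses illustrative link rates and packet sizes, not figures from Quincy or McKay Brothers:

```python
# Serialisation delay: the time a packet takes to be clocked onto the wire.
# Link rates and packet sizes below are illustrative assumptions only.
def serialisation_delay_us(packet_bytes: int, link_mbps: float) -> float:
    """Microseconds the link is occupied by one packet of the given size."""
    return packet_bytes * 8 / link_mbps   # bits / (bits per microsecond)

for link_mbps in (100, 10_000):           # e.g. a 100 Mbps microwave hop vs 10 Gbps fibre
    for packet_bytes in (64, 1_500):      # a small update vs a full-size Ethernet frame
        delay = serialisation_delay_us(packet_bytes, link_mbps)
        print(f"{packet_bytes:>5} bytes at {link_mbps:>6} Mbps -> {delay:8.2f} us on the wire")
```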
“If you have the best microwave link, there is never enough supply to satisfy the demand, so the take-up of this service has been incredibly fast”
Stephane Tyc, Quincy Data’s co-founder
Having such limited bandwidth means that service providers need to be able to allocate and share that bandwidth fairly across their paying customers, a function that is well suited to an FPGA-based approach, as Arnaud Derasse, CEO of Enyx, a provider of ultra-low latency solutions based around FPGA technology, explains.
“If a telco only has something like 100Mbps to share across more than ten customers, then their customers will obviously insist on the fairness of the sharing allocation. So the telco needs to be able to demonstrate that when they send a packet, that packet will go through the link and will be processed fairly relative to the other customers. They don’t want to have one customer sending huge packets and monopolising the link, for example. That’s why they need a system like ours to do the necessary segmentation and provide fair bandwidth management over that link”, he says.
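One standard way to enforce this kind of fairness is a deficit round-robin scheduler, in which each customer queue earns a fixed byte budget per round so that large packets cannot crowd out small ones. The sketch below illustrates the principle in software; it is not Enyx’s implementation, and the quantum and packet sizes are assumptions:

```python
# Minimal deficit round-robin scheduler sketch: each customer queue earns a fixed
# byte "quantum" per round, so large packets cannot monopolise the shared link.
# This illustrates the fairness principle only, not Enyx's hardware implementation.
from collections import deque

def deficit_round_robin(queues: dict[str, deque], quantum: int = 256):
    """Yield (customer, packet) in an order that shares bandwidth fairly by bytes."""
    deficits = {customer: 0 for customer in queues}
    while any(queues.values()):
        for customer, q in queues.items():
            if not q:
                deficits[customer] = 0
                continue
            deficits[customer] += quantum
            while q and len(q[0]) <= deficits[customer]:
                packet = q.popleft()
                deficits[customer] -= len(packet)
                yield customer, packet

# One customer sending jumbo packets, two sending small updates: output interleaves fairly.
queues = {
    "A": deque([b"x" * 1200, b"x" * 1200]),
    "B": deque([b"y" * 100] * 6),
    "C": deque([b"z" * 100] * 6),
}
for customer, packet in deficit_round_robin(queues):
    print(customer, len(packet))
```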
Another area where FPGAs can help firms make best use of limited bandwidth is around data compression.
“Because service providers are trying to send more and more data – on both fibre and microwave – they need good compression”, says Derasse. “But it has to be lossless compression, and for high performance trading it also has to be fast. At Enyx, we use specific streaming algorithms implemented in the FPGA hardware, which allow us to deliver compression in one microsecond and decompression in one microsecond at the other end”.
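Enyx’s hardware algorithms are proprietary, but the general idea of lossless streaming compression (each message decodable the moment it arrives, rather than after a block fills) can be sketched with a standard library such as zlib. The message contents below are invented for illustration:

```python
# Lossless streaming compression sketch using zlib with a sync flush per message,
# so the receiver can decode each message as soon as it arrives. This illustrates
# the concept only; Enyx's FPGA algorithms are proprietary and far lower latency.
import zlib

compressor = zlib.compressobj(level=1)      # favour speed over compression ratio
decompressor = zlib.decompressobj()

messages = [b"CME ESZ4 BID 4500.25 100;" * 8,   # invented, repetitive market-data-like payloads
            b"CME ESZ4 ASK 4500.50 200;" * 8,
            b"CME ESZ4 BID 4500.25 150;" * 8]

for msg in messages:
    wire = compressor.compress(msg) + compressor.flush(zlib.Z_SYNC_FLUSH)
    recovered = decompressor.decompress(wire)
    assert recovered == msg                 # lossless: the exact bytes come back
    print(f"{len(msg)} bytes in -> {len(wire)} bytes on the wire")
```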
FPGA technology has become more and more widely adopted amongst the low-latency trading community for a variety of purposes, including market data filtering, feed handling and pre-trade risk checks on order flow. All of these are a natural fit for FPGAs, with their ability to directly connect to the network, run at full line rate and act essentially as an Intelligent NIC. FPGA specialist BittWare works with a wide range of customers, including proprietary trading houses, hedge funds and banks, which use the firm’s FPGA boards for all of these kinds of applications and more.
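As a software analogue of what such an intelligent NIC does, the sketch below applies simple pre-trade limit checks to each outgoing order; on an FPGA these checks sit in the network datapath and run at line rate. The field names and limits are illustrative assumptions:

```python
# Software analogue of an FPGA pre-trade risk check. Field names and limits are
# illustrative assumptions; on an FPGA these checks run on every outgoing order
# in the NIC datapath at full line rate.
from dataclasses import dataclass

@dataclass
class Order:
    symbol: str
    qty: int
    price: float

MAX_ORDER_QTY = 1_000          # per-order size limit
MAX_NOTIONAL = 500_000.0       # per-order value limit
RESTRICTED = {"XYZ"}           # symbols the desk may not trade

def pre_trade_check(order: Order) -> bool:
    """Return True if the order may be released to the exchange."""
    if order.symbol in RESTRICTED:
        return False
    if order.qty > MAX_ORDER_QTY:
        return False
    if order.qty * order.price > MAX_NOTIONAL:
        return False
    return True

print(pre_trade_check(Order("ABC", 500, 100.0)))    # True: within all limits
print(pre_trade_check(Order("ABC", 5_000, 100.0)))  # False: breaches the size limit
```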
“Latency is still a major driving force to move from software-based systems to FPGA for these applications”, says BittWare’s Vice President of Systems & Solutions Ron Huizen. “But network density is also emerging as a key factor in favour of FPGAs. In other words, you can handle a lot more network feeds in the same rack space using FPGAs”.
FPGAs are also being explored as CPU accelerators for back-end analytics, not least because of their low power consumption, as Huizen explains:
“FPGAs are extremely powerful processing engines, and can outperform CPUs and GPUs while using a fraction of the power. For small shops, the monthly power bill may not be a concern, but for companies running huge server farms, power is a non-trivial operating cost, and can quickly dwarf the cost of the equipment itself”.
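To put that in perspective, a rough worked example (all figures below are assumptions chosen for illustration, not BittWare data) shows how quickly electricity can rival hardware spend at server-farm scale:

```python
# Illustrative arithmetic only: how power cost compares with hardware cost over a
# server's service life. All figures below are assumptions, not BittWare data.
SERVERS = 1_000
WATTS_PER_SERVER = 400          # draw under load
COOLING_OVERHEAD = 1.5          # PUE-style multiplier for cooling and distribution
PRICE_PER_KWH = 0.10            # USD
YEARS = 3

kwh = SERVERS * WATTS_PER_SERVER / 1_000 * COOLING_OVERHEAD * 24 * 365 * YEARS
print(f"Energy cost over {YEARS} years: ${kwh * PRICE_PER_KWH:,.0f}")
# With these assumptions: roughly $1.6M, which is why an FPGA doing the work of
# several CPU servers at a fraction of the power can pay for itself on the power bill.
```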
“The telco needs to be able to demonstrate that when they send a packet, that packet will go through the link and will be processed fairly relative to the other customers”.
Arnaud Derasse, CEO of Enyx
Working with such a wide range of customers, Huizen sees a variety of ways in which they implement functionality on FPGAs.
“How they proceed depends both on how familiar they are with FPGAs, and how complex a system they want to field”, says Huizen. “For customers not familiar with FPGA, making use of appliances or solution providers is a natural way to get the advantages of FPGA technology without having to ramp up a development team and undergo the implementation time, effort, and risk. And solution providers like Enyx can provide the full FPGA implementation on hardware of the customer’s choosing (preferably, of course, a BittWare board!)”.
Even for customers who have FPGA expertise in-house, however, there are limits to what they are prepared to do themselves, as Huizen points out.
“One thing we seldom see in the financial market is the customer designing their own FPGA boards”, he says. “And there are some critical pieces of Intellectual Property (IP), like TCP/IP offload engines, which almost everyone purchases as the development effort to roll your own is just too much”.
“Network density is also emerging as a key factor in favour of FPGAs. In other words, you can handle a lot more network feeds in the same rack space using FPGAs”.
Ron Huizen, BittWare’s Vice President of Systems & Solutions
With FPGAs becoming widely used to consume and process market data today, the last word goes to Stephane Tyc, who believes that the logical next step is full order handling via FPGA.
“We’re at the point with order handling now where we were with market data a few years ago, when market data was starting to become mature in FPGA but wasn’t yet widely adopted”, he says. “The question is not so much about the maturity of the product, it’s more about whether it has a fast adoption rate or not, because people have to understand the technology and become familiar and at ease with it. Order handling via FPGA is not something that’s going to be widely adopted overnight, but the technology is certainly there. That’s where the best prop trading firms already are, and that is what will be open to the wider market in the future”.