In this article, Mike O’Hara and Adam Cox of The Realization Group look at how regulation and technological advances are encouraging the buy-side to take matters into their own hands when it comes to trading infrastructure. Mike & Adam speak with Brian Ross of FIX Flyer, Jacob Loveless at Lucera and Stuart Turnham of Equinix. The benefits for buy-side players from this trend extend well beyond cost. As power shifts away from the sell-side and buy-side firms broaden their horizons in terms of connectivity, they are finding that new trading possibilities quickly open up. In a world where electronification is still spreading into new asset classes, agility is very much the name of the game.
Introduction
Technology and regulation are changing the way the market works in fundamental ways, and one of the areas where that change is most starkly visible is in the relationships between the buy-side, sell-side and vendor community. As regulators demand greater transparency and new technology offers more opportunities, large buy-side participants are suddenly finding they have far more choice when it comes to market access and how they do business.
This dynamic is seen clearly in a newfound focus on connectivity. Regulatory change is pushing the sell-side to unbundle its offerings, which in turn is prompting investment firms to consider arranging their own connectivity and ancillary services. In such an environment, large buy-side participants may be taking on some new costs but they are also reducing others. More importantly, they are gaining control.
A shifting power balance
In the not-too-distant past, a medium-sized buy-side firm had limited choices when it came to market connectivity. Brokers, and the OMS and EMS vendors they worked with, offered connectivity at a certain price and that pretty much was that.
“Buy-sides will look for cheaper connectivity solutions as they take on the cost and then will likely explore alternative trading partners to find the best liquidity choices.”
Brian Ross, CEO of FIX Flyer
But as the sell-side is being forced to itemise its services and charge for them separately, and as cloud-computing technology changes the very nature of the market infrastructure, buy-side firms are starting to see a new array of choices. Fixed connectivity on demand and the use of cross-connects are just two of the ways that they can change the way they access the market.
“Buy-side participants that today are already in the cloud using shared services for most of their IT needs are going to start looking for other connectivity models once they have to pay the bill,” says Brian Ross, CEO of FIX Flyer. “Buy-sides will look for cheaper connectivity solutions as they take on the cost and then will likely explore alternative trading partners to find the best liquidity choices.”
Stuart Turnham, Director of Enterprise and Financial Services for EMEA at Equinix, has noticed a similar change in the power dynamic.
“Changing market conditions have created an environment where financial services firms feel comfortable with the idea of outsourcing and openly purchasing data centre and interconnection services directly from a colocation provider, or from a technology services vendor colocated in the same data centre. We have seen this first hand with our buy-side clients, who are now in a more powerful position in the market when it comes to IT architecture,” Turnham says.
Cloud computing, Turnham says, effectively creates a kind of infrastructure shop where a lot of firms can plug and play. “Once regulation is defined and security concerns are mitigated, many financial services business applications and IT workloads will move to cloud-based platforms, including public cloud services. This will enable many buy-side firms to renovate existing IT models that are proving insufficient for the task post-Basel III – and therefore be agile enough to enter new markets globally and directly, without always having to rely on a brokerage arm.”
“The buy-side are now in a more powerful position in the market when it comes to IT architecture.”
Stuart Turnham, Director of Enterprise & Financial Services for EMEA at Equinix
Equinix, the leading global data centre and interconnection provider, is seeing an increase in the number of direct relationships it has with buy-side firms, including hedge funds and traditional asset managers. Outside the core product offering, the company finds itself essentially acting as a concierge, using its colocation facilities as a meeting place where market participants can connect directly, in close proximity to the supply chain members, counterparties, customers and partners they need, across multiple asset classes.
Turnham says buy-siders’ increased desire to take matters into their own hands appears to stem from two issues. “Increased business agility and cost control over their own infrastructure,” he says.
“In today’s world this is possible, as it’s a lot easier to provision the deployment of IT infrastructure to the architecture you desire than it used to be.”
But it is not only the ease with which the buy-side can deploy infrastructure that is putting the focus on costs. It is also due to a change in the way different firms work with each other. “The buy-side is effectively going to be looking at costs more,” Ross of FIX Flyer says.
“In the past, they didn’t push back on vendors because the sell-side, in the end, was paying for things, whereas now they have to look at more efficient models themselves.”
Ross says buy-side participants may not have focused on connectivity costs because they were using an outsourcing model in terms of their OMS or EMS providers. “A lot of them outsource their IT to essentially virtualised environments in the cloud. I think now they’re actually going to start looking at those costs a little more closely.”
In other words, the costs were always there. They simply weren’t always so visible to buy-side participants.
“They’ve probably ignored them in the past because they were likely getting paid through commissions or getting paid through the sell-side,” Ross says. “Now you start looking at different opportunities.”
A wider world
Once a buy-side participant starts to take responsibility for its connectivity and infrastructure, it starts to consider ways to enhance its performance.
“Since the trading infrastructure is no longer being paid for by a specific broker it opens the door to saying, ‘I can go to more partners and get better execution.’ I think that’s really where a change in connectivity has to happen,” Ross says.
Of course, with great buy-side power comes great buy-side responsibility.
“From a regulatory standpoint, across the whole of financial services there’s more reporting and compliance required than ever before,” Turnham of Equinix says.
This translates to a larger, and much more complex, flow of data that needs to be captured, analysed, stored and distributed.
“As a lot of this data is associated with more rigorous valuation processes and portfolio analysis, it needs to be reported to the regulators much faster and in much more detail. And although buy-side firms are adapting to these market conditions, many of their existing IT models are not – in particular those relying on batch jobs and Excel spreadsheets – as firms will require more computing power and data storage capacity than their current infrastructure and system architectures can provide,” Turnham says. “For them to invest in their own adequate infrastructure and technology maintenance would prove too costly an exercise, so they are looking to cloud computing – especially infrastructure-as-a-service and more niche software-as-a-service cloud providers – to help.”
In this brave new world of buy-side power, it may be helpful to think about what exact role technology plays for investment firms.
“We do three things in technology,” Jacob Loveless, CEO at infrastructure-as-a-service provider Lucera, says. “We move data, we work on data and we store data. That’s it. That’s our entire industry. There are networking companies, compute companies and storage companies. So making those three functions consumable as a service, that’s infrastructure-as-a-service. Then everything layers on top of that.”
“The most economical model for these things is for shared utilities that you pay for on a consumption basis. So it’s absolutely going to happen. Economics will force it to happen.”
Jacob Loveless, CEO at Lucera
Lucera sees the movement towards shared services as something that goes beyond the buy-side and is part of a larger picture, one where standardisation and market economics lead to efficiency.
For instance, a FIX engine is now commoditised so there is little commercial advantage to be had from differentiation.
“It’s a utility,” Loveless says. “There’s a reason why we don’t all have our own power generators. The most economical model for these things is for shared utilities that you pay for on a consumption basis. So it’s absolutely going to happen. Economics will force it to happen. It’s just a question of when.”
That question of when, thanks to regulatory pressure, seems to be now. Ross says MiFID II is pushing market participants to focus on their trading operations. “That opens the door for cloud providers offering infrastructure, software and connectivity as a service, which can help the buy-side and sell-side move from a CapEx to an efficient OpEx model.”
Regulatory push
The regulatory impetus towards new forms of connectivity is part of a broader drive to give the buy-side more choice.
For instance, MiFID II is encouraging the trend by pushing for the unbundling of research from other brokerage services. Ross says that unbundling may encourage buy-side participants to expand their trading partners, which in turn could lead to a rethink of the FIX connectivity price models that have been around for a decade or more.
“If there are cheaper ways to do it, then people are going to be a little bit more cognisant of that,” Ross says. “The whole MiFID II piece is about better trading models, and that comes with lowering overhead and being able to be more efficient there. So the cloud vendors are going to start changing the way FIX is handled; it won’t just be controlled by network providers and the EMS/OMS vendors moving into that space.”
But just because buy-side firms may want to cut out the ‘middle-man’ cost that a broker or EMS/OMS vendor represents, that does not mean they want to be managing a huge number of relationships themselves.
The end game appears to be one where the utility model dominates. This, according to Loveless, is where a firm essentially wants the infrastructure to work like indoor plumbing: turn it on and it works. Firms don’t want to deal with the complexity and cost of managing 30, 40 or 50 connections, whether locally in the data centre or across data centres.
Loveless says the hedge fund community has been at the forefront of this trend, recognising that the network is not necessarily a differentiator, and saving costs along the way.
“More than that, it buys you time to market. If I can spin up a client with the click of a button, that’s going to make a material impact on a business where spinning up a client used to take 45 days. That time to market is a powerful tool.”
In addition to the time-to-market factor is the flexibility in terms of services that comes with outsourcing. “It means you’re not stuck with long-term decisions,” Loveless added. “You’re not making a one-year, two-year, three-year commitment there. You’re clicking a button, you’re enabling a service, you’re trying it for a little while and if it works, you keep it. If not, you delete it.”
Market trends
Regulation and technology are not the only forces driving change. The post-crisis policy imperative for ultra-low interest rates has, in an indirect way, played a role.
“We are in really the sixth year of the nuclear winter of low interest rates,” Loveless says.
This situation has meant that volatility – the occasional jolt of market turmoil notwithstanding – has generally been low as well, which has placed the onus on firms to find other ways to boost performance. Broader connectivity is one such way.
“If you want to get better execution in foreign exchange, you’re not talking about connecting to five, six, seven places. You’re talking about connecting to 50, 60, 70 places and that’s just a different class of problem,” Loveless says. “You have low vol, you’re going to do some cost savings but really you want to just try to weather the storm”.
For the buy-side, the upside of trading new markets and the downside of having to deal with a more complex IT challenge together create a strong incentive for managed services.
Turnham says provisioning infrastructure can sometimes be “a bit of a complex, long and inhibitive procedure”, particularly for buy-side participants with smaller IT departments.
“As mentioned we are seeing buy-sides deploy IT infrastructure more directly with data centre providers, but at the same time we are seeing the market leverage third party technology vendors and managed services in new geographic markets or additional asset classes.”
He adds: “Long term it is simply going to become too costly and innovatively prohibitive to maintain/provision all you need in-house.”
That has prompted buy-side firms to “multi-source” from the niche providers that best fit the applications and IT workloads the business requires. Echoing the themes that both FIX Flyer and Lucera highlight, Turnham identifies two important reasons for this.
The first is performance. “A cloud services provider can focus on the delivery of its services as its core discipline and therefore is going to provision those services to the highest standards and SLAs, often better than anyone could in-house.”
A second is agility, which he says will become increasingly important in financial services.
“As people go in and out of markets and as electronic trading truly moves across multiple asset classes, you’re going to see market participants want to spin up and shut down new applications and systems, trial new algorithms and enter new geographic markets within very short periods, almost overnight.”
If investment firms have to rely on their own IT departments to deploy technology in an unfamiliar geography, it will often take too long, and they might miss the very market opportunity they wanted to build a business strategy around in the first place. They will not be able to capitalise as quickly as if they were multi-sourcing what they needed from specialist cloud and colocation providers, he says.
Loveless of Lucera adds that relying on shared services offers one more, very important benefit: it can boost technological performance.
“We have this really interesting thing that has happened in the last seven or eight years. Network I/O continues to dramatically outstrip disk I/O. Given a large amount of scale-out in a system, and lots of network connectivity, you’ll be able to read from a system faster than you’ll be able to read from a local disk. If I take your raw data and I split it across 100 servers and then I give you 10 gig cross-connects – which cost basically nothing – you’re going to be able to read that data faster from me than you’re going to be able to read it from yourself.”
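To put rough numbers on that claim, the back-of-the-envelope sketch below compares the two read paths. The throughput figures are illustrative assumptions rather than measured benchmarks: a single local disk sustaining around 500 MB/s of sequential reads, each remote server reading around 200 MB/s from its own shard, and a 10 Gbit/s cross-connect delivering roughly 1.25 GB/s into the consumer.

```python
# Back-of-the-envelope comparison of reading a dataset from one local disk
# versus reading the same data striped across many servers and delivered
# over a 10 Gbit/s cross-connect. All figures are illustrative assumptions.

DATASET_GB = 1_000            # total raw data to read, in gigabytes
LOCAL_DISK_MB_S = 500         # assumed sequential read speed of a single local disk
SERVERS = 100                 # number of servers the data is striped across
PER_SERVER_READ_MB_S = 200    # assumed read speed of each server's own storage
CROSS_CONNECT_GBIT_S = 10     # assumed bandwidth of the consumer's cross-connect

def hours(seconds: float) -> float:
    return seconds / 3600

# Reading everything from one local disk: limited by that single disk.
local_seconds = (DATASET_GB * 1024) / LOCAL_DISK_MB_S

# Reading from 100 servers in parallel: each server reads only 1/100th of the
# data, so the consumer's own network link becomes the bottleneck.
aggregate_server_mb_s = SERVERS * PER_SERVER_READ_MB_S      # 20,000 MB/s available
link_mb_s = CROSS_CONNECT_GBIT_S * 1000 / 8                 # ~1,250 MB/s deliverable
remote_seconds = (DATASET_GB * 1024) / min(aggregate_server_mb_s, link_mb_s)

print(f"single local disk   : {hours(local_seconds):.2f} h")
print(f"striped over network: {hours(remote_seconds):.2f} h")
```

Even with these conservative assumptions, the striped, network-delivered read completes in well under half the time of the single local disk, which is the essence of Loveless’s point.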
Mastering the markets has long been a matter of mastering the networks – both human and technological – that make them up. What is different now is that the technological networks have changed so radically in the past few years. Any firm that has not moved with the change may find itself at a distinct disadvantage.
That may come from the higher costs and diminished flexibility arising from overreliance on sell-side brokers, or it may come from the reduced performance and agility stemming from using outdated systems. But for the firms that embrace their newfound power, a very different picture emerges. It’s a picture based on having more choices and gaining more control.
A view from the buy-side
The Realization Group spoke with one top executive at a tech-oriented quantitative fund to hear his views on how the trading landscape is changing, both for his fund and others.
Connectivity trends
Complete reliance on brokers for trading connectivity solutions is rarer now. Broker-neutral solutions began to become more popular as much as 10 years ago and are now common. “I don’t see that particularly going away,” the executive said. His firm puts all of its brokers on a broker-neutral platform.
OMS/EMS solutions
A lot of funds are still using legacy EMS/OMS systems. “We’ve actually built our own, which in a way is cloud based in that we host part of it in AWS (Amazon Web Services).”
Cloud usage
This fund uses the cloud extensively for research, post-trade analysis, TCA and similar work. “We haven’t shifted any core info there or any trading there, but we’ve essentially used that as the centre of our research infrastructure.” The executive knew of other quant firms increasingly using the cloud, not only start-ups but also established multi-billion-dollar firms. “I definitely see that’s the way.” Cloud providers allowed for more flexibility, particularly as funds took a hybrid approach, as his fund did.
The community
“I find it interesting talking to other COOs who are a lot less tech-focused on how old-school their infrastructures are… There are still people out there who have their own data centres and things like that – which is very, very 1990s.”
What’s needed for greater adoption of new technology?
The fund executive summed it up in one word: education. That applied both to regulators and their potential misconceptions and to investors as they needed to recognise the importance of technology/vendor due diligence.
For more information on the companies mentioned in this article visit:
In this article, Mike O’Hara and Adam Cox of The Realization Group investigate a radical new concept in trading: the emerging market trading hub. By linking far-flung exchanges at a neutral site in a developed market location, a trading hub essentially brings the market to the liquidity rather than the other way around. The hub revolutionises the relationship between venues and the investors who want to access them, offering significant benefits to all parties. Mike and Adam speak with Rob Bath, Vice President for Global Solutions at Digital Realty, the company that is driving the initiative, and with Hirander Misra, CEO of GMEX Technologies, which is supplying technology and business expertise to a swathe of hub participants. They also hear from exchange executives Kristian Schach Møller, CEO at Agricultural Commodity Exchange for Africa, Joseph Kitamirike, CEO of ALTX Africa Group, and Alisher Shernazarov, CEO at Central Asian Stock Exchange. Stuart Turner, CEO of post-trade technology specialist Avenir Technology, a vendor backed by GMEX Technologies and also involved in the project, features in the report as well.
Introduction
One of the greatest challenges faced by new trading venues, particularly in emerging and frontier markets, is how to attract liquidity. At some point, liquidity begets liquidity, but how can a new exchange get the critical mass necessary? Rather than trying to force liquidity towards exchanges, a new business and technological approach offers the chance to do the opposite: to essentially bring those exchanges to where liquidity and trading ecosystems already exist. Emerging market trading hubs, by providing connectivity and access to distant markets from a centralised location, offer new sources of liquidity for venues, new trading opportunities for investors, new sources of business for financial firms and potentially lower costs for all concerned. Whether it means crossing time zones or crossing asset classes, these market trading hubs offer the possibility of breaking down barriers and generating a whole new sphere of trading activity.
The lure of liquidity
The poet Samuel Taylor Coleridge in The Rime of the Ancient Mariner famously wrote about how a group of ill-fated sailors were faced with “water, water, everywhere” but not a drop to drink. It’s a feeling some exchange executives may be all too familiar with. They know the liquidity is out there, but they just can’t access it. Their venues may have extremely attractive assets that investors in other parts of the world would love to trade, if only they could do so without jumping through difficult and costly hoops.
Rob Bath, Vice President for Global Solutions at Digital Realty, is looking to change all of that.
“What we’re driving here is the creation of a space where new and emerging markets together with existing trading ecosystems – the investor community, data vendors, independent software vendors, sell-side, traditional exchanges etc. – are effectively able to meet in a central location,” Bath says.
For some venues, the hub concept could be a lifesaver. “The inability to access required liquidity is the primary impediment to the growth of a number of these emerging market exchange platforms,” Bath says.
“What we’re driving here is the creation of a space where new and emerging markets together with existing trading ecosystems are able to meet in a central location.”
Rob Bath, Vice President for Global Solutions at Digital Realty
But an emerging market hub offers benefits to many other groups. For investors all over the world, it opens up trading possibilities. For an established venue, a hub solves major distribution issues and presents the possibility of lucrative new revenue streams. For the sell side and the vendor community, it creates a huge pool of potential new clients. Whichever way you look at it, an emerging market trading hub is a classic win-win.
There are plenty of precedents where an exchange in one centre lists the products of another. But the emerging market trading hub concept is different in that it is not simply about cross-listing. What the hub does is physically connect the venues, creating what Bath calls a true partnership. “This is providing a complete, cloud-enabled trading and clearing market infrastructure ecosystem,” he says, noting that a hub also differs from existing tie-ups because it is not based on an unequal relationship where a major exchange dictates the terms for a less-established venue.
The first hub, which Digital Realty is developing with exchange and post trade technology company GMEX Technologies, is planned for London and will include a number of new exchanges from Africa and Asia.
Hirander Misra, CEO of GMEX Technologies, adds that multi-venue hubs are planned for other parts of the globe too, including ones in Chicago and Singapore.
What’s different about what Digital Realty and GMEX Technologies are doing, he adds, is that while data centres and connectivity in the developed world are focused on existing markets, participants and vendors, the hub is starting from a totally different perspective. “There wasn’t really a play on having access into the emerging markets or indeed players in those emerging markets getting better access into the developed markets as well,” Misra says. “This is bi-directional in many ways.”
“This is bi-directional, having access into the emerging markets and emerging markets getting better access into the developed markets.”
Hirander Misra, CEO of GMEX Technologies
Bath says that in this regard the idea of using a neutral location is also important. “That neutrality is, in and of itself, very attractive, because the developed market locations in question represent large and established liquidity pools, while also allowing key participants on those local exchanges to gain access to and sell products and services to this broader community.”
In addition, the use of a major neutral location such as London offers market participants and vendors alike the reassurance of a robust infrastructure. “In effect, it is the assurance of stable and secure management of this exchange platform that is very interesting to the institutions providing these services,” says Bath.
The timing of the introduction of the first emerging market trading hub is significant, coming amid increased interest in new and emerging markets. Although there have been plenty of headlines about market unrest, the latest global liquidity report by PricewaterhouseCoopers notes that, over the long term, emerging markets in general have benefited from increased capital flows as a result of quantitative easing in developed economies. The consultancy says some studies showed US QE coinciding with portfolio rebalancing across emerging markets and the US.
It comes as little surprise, then, that for the exchanges in Africa and Asia taking part, the opportunity to connect with investors via such a London-based platform could not come soon enough.
Tech trade
The web of relationships that make up an emerging market trading hub offers more than just the possibility of new streams of liquidity. It also means new ways of doing business and partnerships that are based on the exchange of technology.
For instance, Kristian Schach Møller, CEO of Agricultural Commodity Exchange for Africa (ACE), says there was nothing in terms of solutions from local providers or off-the-shelf technology that could help his Malawi-based exchange develop and grow.
“We needed to have something that was much more scalable,” Møller says. “We reached a limit as to what we could develop ourselves and what insurance, banking and other related collateral management institutions here are willing to offer. And it means that we need to link to the developed world, to developed institutions.”
“Now we’re getting their expertise to make a product that fits into these very unique structures coming out of an emerging market.”
Kristian Schach Møller, CEO of Agricultural Commodity Exchange for Africa
ACE turned to GMEX Technologies and its post-trade software partner Avenir, which supplied a software solution based on developed world trading systems.
“We call it the Formula 1 technology,” Møller says. “Now we’re getting their expertise to make a product that fits into these very unique structures coming out of an emerging market.”
These relationships underpinning the hub make it unique.
“This is different to what has been seen before, where technology companies really just delivered technology into an emerging market project or an exchange, but didn’t achieve a link to create a bigger network,” Møller says.
Misra of GMEX Technologies says this new approach was a natural fit. “We saw that all they had in place was customer-supplier relationships; no-one was actually working with them closely to build up a partnership model where the interests were truly aligned.”
With so many benefits for so many different parties, the question arises as to why an initiative such as an emerging market hub hasn’t happened before. Møller suggests the main obstacle was probably people’s perceptions when it came to markets in some parts of the world. “I honestly think that anybody outside of the region, and Africa in general, perceives Africa as being very risky and probably would not even consider touching it.”
African road trip
A critical part of the emerging market trading hub story involves the infrastructure, and specifically the need to make a variety of post-trade practices work for international investors. Here the solution partly came about from an unlikely event: a road trip across Africa by Stuart Turner, an exchange technologist.
After his stint in Africa and that road trip, Turner joined an exchange consultancy where he specialised in the continent. During one project in Kenya, he wrestled with the problem of how to get more liquidity into the SME sector. “And it was over those years of building up this knowledge of Africa that I realised that part of the problem with Africa is that they can’t afford the technology and the access,” Turner says.
Turner decided what was needed was a company that could provide affordable software for less developed venues. “We set up software which is for clearing houses and CSDs, basically all post-trade software,” he says.
The company, Avenir Technology, has teamed up with GMEX Technologies to make the software available to emerging venues. Post-trade issues are particularly important for emerging markets because of one critical factor: confidence.
“It’s not just the systems but also the way they’re operated,” Turner says, adding that Avenir has developed its systems to be compliant with the standards of the Committee on Payments and Market Infrastructures (CPMI) and IOSCO.
“Sell-side institutions as well as the investors participating in these markets want good systems and processes, and access to information.”
Stuart Turner, Exchange technologist & Founder at Avenir Technology
“You’d be surprised how much focus there is when people decide to trade in new markets,” Turner adds. “Sell-side institutions as well as the investors participating in these markets want good systems and processes, and access to information”.
Avenir relies on a combination of open source technology and the processing-power gains described by Moore’s Law. “I could run just about every African exchange post-trade system on my laptop, seriously, even the larger ones. It’s not a problem,” he says. Turner says it makes for a good fit with GMEX Technologies, since that is the London-based group’s philosophy as well.
“In terms of the technology itself, given Moore’s Law, every few years capacity doubles and the price of technology halves,” Misra says. “We’ve seen this with smartphones and so forth, going back to the bricks of the ’80s. So everything out there that’s being sold for millions and millions of dollars is legacy technology that’s 20 to 25 years old.”
He says GMEX Technologies, when it formed, wanted to start with a clean sheet of paper. “What that means is we can run on 1/10th of the hardware and 1/10th of the data centre footprint with multiple times more capacity than most other systems that horizontally scale. That means you can be more agile and run at much lower costs.”
But Misra says that in addition to being too expensive for some exchanges, technology was too often being deployed standalone, as if on an island. “So that the ships of liquidity couldn’t sail to it, so to speak,” he says. “We found the fact that these standalone tech platforms would be deployed somewhere and that there would be tumbleweed blowing through them as quite bizarre in this electronic age.”
Two-way traffic
ACE is one African venue taking part in the hub. Another is the Ugandan exchange owned by ALTX, which is based in Mauritius and operated by GMEX Technologies from London.
Joseph Kitamirike, a founder of ALTX, says the hub will create two-way traffic, enabling investors outside of Uganda to trade local assets and those inside the country to trade in London. “We have some brilliant assets trading on the markets in Africa which the rest of the world hasn’t come to know about yet,” Kitamirike says.
Kitamirike says the New York Stock Exchange can trade within minutes the volumes that African exchanges see in a year. “We can change that,” he says.
The emerging market trading hub solves one part of the equation. Another part will be to deal with the foreign exchange component for transactions to take place, something that ALTX is working on.
After the Uganda launch, ALTX plans to build other exchanges in Africa that can sit on the same network, ultimately allowing investors across Africa to trade in one seamless venue.
“We have some brilliant assets trading on the markets in Africa which the rest of the world hasn’t come to know about yet. We can change that.”
Joseph Kitamirike, a founder of ALTX
“We think by exposing African securities to global capital and exposing globally available securities to African capital, we will build a network through which liquidity will flow.”
Alisher Shernazarov of Central Asian Stock Exchange (CASE) in Tajikistan says infrastructural issues such as settlement pose one challenge, while another comes from within. In his case, Tajikistan’s domestic market regulation requires reform.
“The legal part is one of the challenges. It takes a lot of time in terms of making some amendments to the current legislation here in Tajikistan,” Shernazarov says. “There are some regulations which have made settlements more complicated. Sometimes, it’s very difficult to execute even small transactions,” he adds.
The good news, however, is that the government recognises the value of unlocking the potential for capital inflows and Shernazarov says the ministry of finance, central bank and market regulator are working with the new exchange to make the necessary changes.
“At a certain point the local markets will become saturated and liquidity issues will become more apparent. That’s where the trading hubs can prove to be vital for emerging markets.”
Alisher Shernazarov of Central Asian Stock Exchange
Shernazarov says funding for enterprising companies in Tajikistan remains expensive and the prospect of new capital can make an enormous difference. “Sometimes it’s not viable for entities to borrow from commercial banks in order to increase their capacity. So this definitely will bring a source of long-term funding for local corporates.”
The real challenge starts when the stock exchange becomes operational, Shernazarov says. “At a certain point the local markets will become saturated, and liquidity issues will become more apparent. That’s where the trading hubs can prove to be vital for emerging markets. Connectivity and access to large capital markets can significantly ease the pressure on small emerging economies. In this aspect partners like GMEX can help us in dealing with liquidity issues, by providing linkage with global hubs such as London.”
The users
While the advantages of an emerging market hub to venues are clear, any lasting change in the way new exchanges operate ultimately needs to benefit another group: the underlying users. Here, Bath says the hub concept offers a compelling argument. “This is a participant rather than an exchange-centric proposition, because it is effectively a one-stop-shop for access to multiple exchanges, liquidity venues, products and participants.”
That in turn makes the proposition even more attractive to sell-side participants. “From the bank’s side, you want effectively to co-locate with this emerging market hub, because you now have a completely new set of customers to sell to that you haven’t historically been able to tap into, and across a broad range of asset classes,” Bath says.
The new approach, Misra adds, essentially brings a lot of people to the party.
“You could be in Tajikistan in Dushanbe and be setting up a stock market from scratch and be wired into that global ecosystem, whereas before you had to look at, ‘How do I get lines into everywhere else? How do I create points of connectivity?’ And there would be a long lead time and you would sit and wait for people to invest in infrastructure. But then you’d have this chicken and egg scenario where they wouldn’t do that until you had liquidity but you couldn’t have liquidity until they did that. So you break that chicken and egg scenario, as well, by doing it this way.”
As emerging market trading hubs take off, Bath expects the concept to expand beyond the geographical sense. “I think what is interesting is the degree to which this addresses interest across a multitude of asset classes, and the potential reach associated with this,” he says. “This offers significantly greater distribution opportunity for these larger exchanges, without the need to set up costly separate exchanges in the emerging markets themselves.”
Or, as Coleridge might have put it: There will be water, water everywhere – and more than enough to drink.
Writing and additional research by Adam Cox, Associate Editor, The Realization Group
For more information on the companies mentioned in this article visit:
- www.aceafrica.org
- www.altxafrica.com
- www.avenir-fmi.com
- www.digitalrealty.co.uk
- www.gmex-group.com
- www.case.com.tj
In this article, Mike O’Hara of The Realization Group asks Ash Gawthorp of The Test People, KPMG’s Daryl Elfield and smartTrade’s David Vincent to assess the testing challenges facing firms looking to deliver banking and financial services via mobile apps.
Introduction
Your smartphone has more computing power than the combined super-computers used by NASA to send a rocket to the moon 50 years ago. Mobile devices such as smartphones and tablets are rendering PCs irrelevant for many. They provide a single gateway – via built-in functions and downloaded apps – to the services, tools and information we need to run our personal – and increasingly our professional – lives.
With so many of us increasingly relying on mobile devices to organise our lives, it is no surprise that banks and other financial services firms should be looking to mobiles as a channel to market with the potential to reconnect with a retail and wholesale customer base whose trust has been severely dented over the past decade. If banks can deliver services that are as quick, reliable, innovative and user-friendly as app-based retail and entertainment service providers, there is an opportunity in the retail space to re-establish themselves as trusted partners. In the wholesale markets, there is perhaps less ground to make up, but there is significant scope for competitive differentiation and market share growth from delivering greater convenience and richer functionality combined with appropriate controls and security.
There are good reasons why banks are not bleeding-edge adopters of new technologies, not least their responsibilities to customers and their highly-regulated status. Know-your-customer and anti-money laundering requirements have intensified over the past 15 years or so, for example, while data protection and related security laws are under constant review as the nature of criminal activity takes new forms in cyberspace.
These rules and responsibilities impose certain restraints on banks and their technology partners when it comes to delivering services via technology, whether in the office, at the branch or on the move. But to exploit new channels successfully, they also need to bear in mind another set of factors to ensure their mobile apps enhance the customer experience, not detract from it further. After all, they will not just be compared with other banks, but with every other organisation that has an app-based presence on a customer’s phone.
Agility required
“Mobile is all about the customer journey. It’s all about making the user experience as satisfying as possible compared to offering a standard set of banking services. The fact that you’ve got a good desktop browser-based service is irrelevant,” says Daryl Elfield, Director of Testing Services at KPMG, who asserts that the demands of the mobile app market are “completely different” in terms of user needs and expectations. “The innovation is in making friendly, accessible, speedy customer journeys without sacrificing security.”
To achieve this, says Elfield, banks require a change of mindset. The creativity and competitiveness of the wider mobile app market has created expectations that latecomers must match. Banks should take no more than three months to get a new app to market, then must have the ability to capture and analyse all the various forms of feedback quickly enough to update the app on a regular basis.
“If you spend four years developing your mobile app, you’ve missed the boat completely,” says Elfield, who suggests that the agility required to compete effectively in the mobile app market is far from second nature in the financial services sector. “Some challenger banks and mid-tier insurance companies already have the necessary development experience. But among some of the larger players, agility is not particularly widespread.”
“If you want to test mobile apps, you need to be able to develop agile testing practices, which will be challenging for a lot of banks.”
Daryl Elfield, Director of Testing Services, KPMG
In addition to the pace of development, a further characteristic of the mobile app market that further cements the need for agility is the diverse range of environments in which new apps must operate. With user needs, network capabilities and device functionality constantly evolving, firms looking to drive business through mobile channels must be set up for continuous change. Critical to this is access to suitable testing facilities and processes. “If you want to test mobile apps, you need to be able to develop agile testing practices, which will be challenging for a lot of banks,” says Elfield.
Some banks make the mistake of assuming that mobile testing is similar to that required for browser-based software. But the multiple variables that must be accommodated when preparing an app for the mobile market add up to a level of cost and complexity that is compelling many to outsource testing to specialist third parties. In Elfield’s opinion, outsourcing mobile app testing is likely to be the most cost-effective option, with the caveat that there is currently a wide range of options on the market. “There are a lot of pitfalls when choosing the correct lab, the correct setup, the correct testing partner,” he warns.
While many outsourced testing service providers offer mobile testing facilities, not all have the necessary range of mobile-specific services, nor do they all have finance sector expertise. Capabilities flagged by Elfield include emulation of mobile devices (in addition to testing on physical ones), automation testing, performance testing, user-experience testing specific to mobile apps, and social media testing. “The only way to differentiate yourself in the mobile testing market is to demonstrate both your ability to handle the specific demands of mobile testing and credible sector-specific knowledge,” he says.
Layers of complexity
How complex can mobile app testing be? A mobile testing ‘lab’ consisting of a group of human testers sitting in a room manually testing an app on a variety of the latest phones is expensive, inefficient and time-consuming. Testing today requires automated testing tools that allow for consistent testing of mobile apps in a variety of conditions, environments and devices. According to Ash Gawthorp, CTO of testing solutions and consulting firm The Test People, it is an ongoing service that must be closely tailored to the needs of individual firms and anticipates the future challenges that changes in technology and user preferences may bring.
In terms of automated testing tools, for example, Gawthorp regards the different levels of functionality required like layers of an onion, peeled back successively to reveal another level of complexity. The core requirement of an automated testing tool is the ability to ‘drive the screen’, i.e. allowing either human testers or automated tools to see what happens in response to use of hard and soft buttons on the device and to ensure the app’s displays work as expected. The second layer of testing focuses on how the functionality of a mobile app performs on a particular device, which can vary considerably given the range of devices and large differences in capability for a given platform, particularly the Android operating system.
The third level of variants that the automated mobile app testing tool must handle relates to the impact on application performance of differing wireless and mobile networks, specifically bandwidth, packet loss and latency, as well as the effect of switching between networks and recovery following loss of connection.
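To illustrate what this third layer involves, the sketch below wraps a request in simulated latency, packet loss and bandwidth limits so that the same functional check can be replayed under different network profiles. It is a minimal, self-contained illustration: the fetch_quote stand-in and the 4G/3G/2G figures are assumptions, and a real lab would normally impose these conditions at the network layer rather than inside application code.

```python
import random
import time

# Minimal, illustrative network-impairment wrapper for a test harness.
# The profiles and the fetch_quote() stand-in are hypothetical; real mobile
# labs typically shape traffic at the network layer (between the device and
# a proxy, for example) rather than inside application code.

class ImpairedNetwork:
    def __init__(self, latency_ms: float, loss_rate: float, bandwidth_kb_s: float):
        self.latency_ms = latency_ms          # one-way delay added per request
        self.loss_rate = loss_rate            # probability a request is dropped
        self.bandwidth_kb_s = bandwidth_kb_s  # payload transfer rate, KB per second

    def request(self, payload_kb: float, call):
        if random.random() < self.loss_rate:
            raise TimeoutError("simulated packet loss")
        transfer_s = payload_kb / self.bandwidth_kb_s        # time to move the payload
        time.sleep(2 * self.latency_ms / 1000 + transfer_s)  # round trip plus transfer
        return call()

def fetch_quote():
    # Hypothetical stand-in for whatever the app under test actually requests.
    return {"symbol": "EURUSD", "bid": 1.0831, "ask": 1.0833}

if __name__ == "__main__":
    profiles = {"4G": ImpairedNetwork(40, 0.001, 1000),
                "3G": ImpairedNetwork(120, 0.01, 125),
                "2G": ImpairedNetwork(400, 0.05, 12)}
    for name, net in profiles.items():
        start = time.time()
        try:
            net.request(payload_kb=50, call=fetch_quote)
            print(f"{name}: quote rendered in {time.time() - start:.2f}s")
        except TimeoutError:
            print(f"{name}: request lost - the app must recover gracefully")
```

Running the same check across all three profiles makes regressions visible early: a screen that renders in a fraction of a second on the simulated 4G link may take several seconds, or fail outright, on the 2G profile.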
Closely related is the fourth layer: power, a finite resource in the mobile world. Automated tools must be able to track how an app uses power and how a reduction in power affects its performance. “If an app is chewing through the battery, users’ perception of an application is going to be quite negative,” observes Gawthorp. On the outer layer, the automated testing tool must be able to assess how the mobile app interacts with the device features that make it truly mobile. The business case for a mobile app might be driven by how it interacts with the microphone, GPS facility or camera, for example. “If you have an app on a device which is using location services, you need to be able to trigger that GPS component under controlled conditions to simulate what happens if you’re in a certain place at a certain time,” Gawthorp explains.
Moreover, to be truly effective, an automated testing tool must be able to interact with a wide variety of mobile apps and devices in both ‘real life’ and simulated conditions. In terms of apps, the automated tool must be able to test web apps that can run on a mobile platform, native apps and hybrid apps. The testing tool must also be able to run on physical and simulated devices – use of emulators and simulators can increase scalability significantly – and to configure the device in various ways, for example to allow installation of various operating systems, and then return it to a known state. Finally, the tool must be able to test the app without requiring a significant change to the build process – this is not such a problem for an organisation writing its own apps, but in a large multi-vendor environment with commercial agreements in place with suppliers, maintaining a separate build purely for test automation can add significant management overhead.
Performance matters
Performance testing sits at the very heart of the mobile app testing process as it focuses on how an app handles the various real-life impediments to its ability to deliver the user experience as intended. It is also one of the most complex areas of testing because client-side performance depends on the interplay between the app, the device and the network. Almost by definition, a mobile app is in use on the move as we switch between networks, whether for business trips or just in the course of our daily working lives. The coding and data usage of an app can make a big difference to how it performs on a device as it moves from a 4G to a 3G or even a 2G network. Testing can identify both the reasons for underperformance and potential solutions. “It might be that a lot of data is being retrieved but not required at that point in time, or indeed at all. Alternatively, performance could be improved if requests are batched up to reduce the number of round trips,” explains Gawthorp.
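The round-trip arithmetic behind Gawthorp’s batching point can be sketched in a few lines. The item count and timings below are hypothetical, but they show why collapsing many small calls into one matters far more on a high-latency mobile link than it ever did on an office LAN.

```python
# Illustrative round-trip arithmetic: fetching 30 small resources one by one
# versus in a single batched request. Counts and timings are hypothetical;
# the point is that per-request latency dominates on mobile networks.

ITEMS = 30
PAYLOAD_PER_ITEM_S = 0.005          # assumed transfer time for each item's data

def total_time(round_trip_s: float, batched: bool) -> float:
    if batched:
        return round_trip_s + ITEMS * PAYLOAD_PER_ITEM_S      # one round trip
    return ITEMS * (round_trip_s + PAYLOAD_PER_ITEM_S)        # one round trip per item

for network, rtt in [("office LAN", 0.002), ("4G", 0.05), ("3G roaming", 0.3)]:
    print(f"{network:12s} unbatched: {total_time(rtt, False):5.2f}s"
          f"   batched: {total_time(rtt, True):5.2f}s")
```

On the simulated LAN the difference is barely noticeable; on the 3G roaming profile the unbatched version takes roughly nine seconds against well under one second for the batched call.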
“Financial markets clients have users located in multiple geographies who expect those prices to come back in a timely fashion.”
Ash Gawthorp, CTO, The Test People
In terms of client-side performance testing, there is increasingly an overlap between performance testing and automation. An automated testing tool that has the functionality to monitor timings as required for detailed performance testing can add considerable value. “By making adjustments to a network characteristics tool, you can add the latency characteristics that would apply if you were accessing the solution from halfway around the world. Financial markets clients have users located in multiple geographies who expect those prices to come back in a timely fashion,” says Gawthorp. Failover and resiliency are also critical parts of the performance testing to identify how the app responds to a temporary loss of network connectivity or power.
Technology vendors specialising in the financial services space have had to respond to the evolving needs of clients to deliver services via new channels such as mobile apps. In the wholesale market, there is increasing competition between banks to enable critical decisions to be made securely and efficiently regardless of location, either by staff or by clients.
From a software development efficiency perspective, a key underpinning of specialist multi-asset trading system provider smartTrade’s approach is the use of HTML5 as the basis for all applications, which allows them to be rendered as required for use on mobile or desktop. The firm uses HTML5 to code the content and then CSS (Cascading Style Sheets) to adapt the app to the specific device. Once the app is written, then comes the true challenge of mobile app delivery. “For me the challenge of writing a mobile application is first, the testing, second, the control,” says David Vincent, CEO of smartTrade, and previously the firm’s CTO for 13 years.
“The challenge of writing a mobile application is first, the testing, second, the control.”
David Vincent, CEO, smartTrade
Fast response to slow data
smartTrade has taken a largely in-house approach – including to performance testing – to the challenge of ensuring that its mobile apps work as effectively as possible in the face of network-related limitations. To minimise the impact of variations between networks on the performance of its mobile apps, the firm has developed a back-end engine that can identify – and respond to – any on-screen problems caused by a device running on a low-bandwidth network. To prevent the mobile app from slowing down, the back-end engine, or presentation server, adapts to the bandwidth limitations, for example by altering the flow of data between app and server. This presentation server also responds to a dropped network connection by ensuring that content is redistributed to the app so the user can continue to use it seamlessly, which is particularly important for trading apps. “When people are placing trades, you need to have a way to reconcile when you log in again,” explains Vincent.
When it comes to testing, smartTrade’s approach has been to develop device emulators rather than testing its mobile apps on multiple physical devices. As the apps run on the emulators, smartTrade deploys tools that control and monitor bandwidth levels, essentially ensuring that the streaming engine adapts appropriately. “We measure round trips and other elements, so that we know exactly the way the application is going to react, but do so within a very secure and controlled environment,” says Vincent.
As well as coping with unexpected network events, such as a lost connection when a user’s train goes into a tunnel, trading apps must also contend with unpredictable market events. A sudden price fall or other market shock could result in a surge of usage as multiple traders log on to check, and very possibly trade out of, their existing positions. Developing apps in HTML5, as opposed to building native apps, and being able to simulate a sudden rush of connections in lab conditions – independent of device – is critical to ensuring smartTrade’s apps can adapt to load changes. “In case of a significant market event,” Vincent explains, “it might be necessary for us to only send the most recent data to the end-user device, which would otherwise be flooded by data.”
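One way to picture “only send the most recent data” is a conflating buffer that holds a single latest value per instrument while a slow device catches up. The sketch below is a generic illustration of that idea rather than smartTrade’s implementation, and the instruments and prices are invented.

```python
from collections import OrderedDict

# Generic price-conflation buffer: if the downstream device cannot keep up,
# intermediate ticks for an instrument are overwritten and only the most
# recent value is delivered. An illustration, not any vendor's code.

class ConflatingBuffer:
    def __init__(self):
        self._latest = OrderedDict()   # instrument -> latest tick, in arrival order

    def publish(self, instrument: str, tick: dict) -> None:
        # Re-inserting moves the instrument to the back; the older tick is discarded.
        self._latest.pop(instrument, None)
        self._latest[instrument] = tick

    def drain(self, max_items: int) -> list:
        # Called at whatever rate the device's connection can sustain.
        out = []
        while self._latest and len(out) < max_items:
            out.append(self._latest.popitem(last=False))
        return out

if __name__ == "__main__":
    buf = ConflatingBuffer()
    for px in (1.0831, 1.0829, 1.0835):            # burst of EURUSD updates
        buf.publish("EURUSD", {"mid": px})
    buf.publish("GBPUSD", {"mid": 1.2710})
    print(buf.drain(max_items=10))                 # only the latest tick per instrument
```

After the burst, draining the buffer returns just the last EURUSD price, so a device on a poor connection is never asked to render ticks that are already stale.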
Investing for growth
As a firm specialising in delivering a variety of solutions to financial markets clients, it is second nature for smartTrade to test the ability of its mobile apps to handle market shocks, but that same combination of industry vertical expertise and mobile-specific app testing capabilities is fundamental to the credibility of any third-party testing service.
For a firm such as The Test People, which provides testing-related services across a range of industry verticals, the challenge is to draw on expertise and capabilities to best serve the needs of the individual client. An increasing range of firms may be looking for a cohesive, end-to-end automation and performance offering for mobile apps, but this may look very different depending on objectives and budget. “It’s important to understand the client well in order to work out how to focus your combined efforts,” says Gawthorp. “They may want to test performance across a wide variety of devices or prefer more in-depth testing on a narrower variety of devices, for example. An outsourced testing services provider needs to find out what ‘acceptable’ is, and then advise the client on where to focus their effort in terms of performance improvement or addressing functionality or usability issues.”
“An outsourced testing services provider needs to advise the client on where to focus their effort in terms of performance improvement.”
Ash Gawthorp, CTO, The Test People
Inevitably, best practice in testing mobile apps must move as quickly as the app market itself, remaining open to changes from any number of sources. Mobile banking is seeing notable growth. A report published in June by the British Bankers’ Association – ‘The Way We Bank Today’ – found that UK banking customers had downloaded banking apps on 22.9 million occasions by the end of March 2015, an increase of 8.2 million in 12 months. It also predicted customers would use mobile devices to check their accounts 895 million times in 2015, compared with 427 million branch transactions.
The BBA report may focus on retail banking, but there are many signs of the potential opportunity represented by the nascent, but fast-growing market in financial services mobile apps. And like all opportunities, the ability to grasp it successfully will be determined by the underlying investment strategy. “Firms must recognise that mobile is different and their traditional testing teams will need support. Stumbling into mobile testing can be an expensive mistake if you don’t get some help along the way,” KPMG’s Elfield warns.
For more information on the companies mentioned in this article visit:
In this article, Mike O’Hara of The Realization Group looks at how the cloud and the use of ‘microservices’ can transform operations for financial firms, bringing greater productivity and agility without sacrificing security. Mike hears from Matt Barrett & Olivier Deheurles of Adaptive Consulting, Jan Machacek & Martin Zapletal of Cake Solutions, Rob Bath of Digital Realty, Matthew Lempriere of Telstra Global Enterprise & Services and Ray Bricknell of Behind Every Cloud. The technology may be complex, but the message is simple: the cloud and use of microservices can bring serious benefits that go well beyond reducing costs.
Introduction
With the advent of the cloud, the financial industry initially thought of two words: security and cost. Could the cloud ever be trusted enough to hold sensitive data? And if security issues could be addressed, just how much money could be saved by taking advantage of the cloud? Often, security concerns overshadowed cost benefits, with good reason, as the financial industry is a prime target for increasingly sophisticated technology attacks. But while security will always be an area of focus, an array of techniques has been developed to prevent data breaches and address those concerns. And while cost reduction is certainly a benefit the cloud can bring, experts say there are far more important advantages to be gained. In fact, they argue, many people miss the point entirely when it comes to the cloud. It’s not about trying to lower your cost base. It’s about making your business more flexible, scalable and responsive than you’d ever thought possible.
“What we’re trying to do is realign people’s expectations and beliefs of what the opportunity of the cloud is.”
Matt Barrett, Director, Adaptive Consulting
Belief systems
For some who are looking to turn the business vision of the cloud into reality, the mission starts with changing hearts and minds. Even the word ‘cloud’ brings with it ideas and associations that are often outdated or based on incorrect assumptions.
“What we’re trying to do is realign people’s expectations and beliefs of what the opportunity of the cloud is,” says Matt Barrett, a Director at Adaptive Consulting. “Often, people at larger investment banks think the cloud is all about reducing costs. But the chief benefit is in terms of technological velocity, which comes from something called ‘induced demand’.
“Induced demand, a recognised economics term, comes about when you reduce the price of something so much that people just end up consuming more of it because it’s so cheap, it becomes almost like water, or cheaper than water in fact,” Barrett says.
“When you induce a huge amount of demand for your cloud computing resources, you find that you’re not necessarily going to save any money in absolute terms. You discover that you’re using a lot more computing resources, so you end up with a situation where you need to rearrange your organisation so that this feast of plenty can massively increase your velocity and your agility.”
“Cloud gives companies the speed of delivery and the speed of reacting to change.”
Jan Machacek, CTO, Cake Solutions
Jan Machacek, CTO at Cake Solutions, agrees. “The point that is missing is that cloud gives companies the speed of delivery and the speed of reacting to change. So if there is a need to really increase the computing power, the cloud is the way to go.”
Experts such as Barrett and Machacek cite a wide range of advantages that the cloud offers, particularly on a distributed system built with a microservice architecture. This is a way of reducing interdependencies among technological functions and it allows for better design, greater efficiency from development teams, more use of automation, increased innovation due to the lowered cost of experimentation, and faster time to market.
Big banks, with their large infrastructures and huge retail operations, are just one group of potential cloud dwellers. The fund industry forms another segment, although to date only a small number of funds have made serious forays into cloud technology. A third segment is trading venues, particularly as changes in market structure create new opportunities for channelling or concentrating liquidity. Regardless of the client type, what is clear is that taking advantage of this technology requires adopting a new mindset. The good news is that for all the questions cloud-sceptics may have, there seem to be good answers.
Private vs public vs hybrid
A big challenge with the cloud concerns how systems should be designed so that sensitive data doesn’t escape and so businesses can use data effectively.
“The opportunity to support a more meaningful base of critical applications in the cloud, leveraging hybrid IaaS as the platform of choice, will probably be the most likely consumption model for financial services.”
Robert Bath, VP, Global Solutions, Digital Realty
“First it must be possible to somehow control the locality of the data,” Machacek says. “Second, if a bank wants to run analytics on the data then it also makes sense to run the analytic jobs as close to where the data sits as possible. I think we’ll see a whole bunch of new engineering solutions to make that easier, because right now it still is a lot of work.”
Echoing many others, Machacek says that if a bank wants to use cloud infrastructure, it will probably use its own data centres because of concerns that the public cloud might be too insecure. Many firms adopt a hybrid approach, using the public cloud for less sensitive information and a private cloud where security is more of a concern.
Rob Bath, Vice President, Global Solutions at Digital Realty, sees hybrid models gaining popularity. “The opportunity to support a more meaningful base of critical applications in the cloud, leveraging hybrid Infrastructure-as-a-Service (IaaS) as the platform of choice, will probably be the most likely consumption model for financial services in the short to medium term. Hybrid IaaS provides the best current alignment with participants’ security and control objectives, which are so dominant a feature of this space, together with a more flexible approach to supporting the significantly larger number of business applications that financial services firms run in contrast to those in other sectors.”
What is key, Bath says, is ensuring seamless transitions from the private environment to leased infrastructure options and then on to instance-based buys. “That stitch effectively needs to be physically in place so that, as and when customers feel it’s appropriate to vary the way they consume IT resources, or regulatory or compliance requirements shift disruptively, or whatever the case may be, the physical opportunity to establish those connections exists, thereby enabling the required agility and separation of the respective operating environments.”
Freeing up development
Whether a hybrid model or a purely private cloud is deployed, firms wanting to get the most out of the technology need to make some big architectural decisions.
This is where microservices come into play.
Microservices allow work to take place in one part of a distributed system regardless of what is happening elsewhere. If one element needs to be shut down or modified, developers working on another element can just get on with the job without worrying about stoppages or synchronisation. The communication between the components of this distributed system can be tied down to a secure protocol involving cryptography, providing a fully secure interface between microservices.
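As a rough illustration of the kind of secured, message-based interface described above, here is a minimal Python sketch in which one hypothetical service signs its output with HMAC-SHA256 and another verifies it before processing. The service roles, payload and shared-key scheme are invented purely for illustration and are not a description of any vendor’s implementation; real deployments would more likely rely on mutual TLS or a service mesh.

```python
# Minimal sketch: two hypothetical microservices exchanging a signed message.
# A shared-secret HMAC scheme stands in for the "secure protocol involving
# cryptography" mentioned above; names and values are illustrative only.
import hmac
import hashlib
import json

SHARED_KEY = b"example-shared-secret"  # assumption: provisioned out of band

def sign(payload: dict) -> dict:
    """A 'pricing' service wraps its output with an HMAC-SHA256 signature."""
    body = json.dumps(payload, sort_keys=True).encode()
    return {"body": body.decode(),
            "sig": hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()}

def verify_and_handle(message: dict) -> dict:
    """An 'order' service verifies the signature before acting on the data."""
    body = message["body"].encode()
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, message["sig"]):
        raise ValueError("tampered or unauthenticated message")
    return json.loads(body)  # safe to process, independently of the sender

if __name__ == "__main__":
    msg = sign({"instrument": "XYZ", "mid": 101.25})
    print(verify_and_handle(msg))  # {'instrument': 'XYZ', 'mid': 101.25}
```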
“People using the cloud and techniques like microservices to de-correlate their teams are able to develop a feature during the day and have this in production at the end of the day.”
Olivier Deheurles, Director, Adaptive Consulting
Olivier Deheurles, a Director at Adaptive Consulting, says this approach can give a huge burst of speed to development. “What you want, in an ideal world, is a development team that is able to iterate very quickly. As soon as they have a feature complete, you would like to be able to start releasing this into production in front of users and getting feedback,” he says.
“People using the cloud and using techniques like microservices to de-correlate their teams are able to do that. They can develop a feature during the day and have this in production at the end of the day.”
The use of microservices, according to Machacek, also gives much greater flexibility in terms of data. “It’s so much easier to now say, I can have part of my system running here, I can have another part of my system running somewhere else, and I can still ensure that they don’t leak information anywhere. So I no longer have to duplicate databases or replicate databases or rely on only my data centre to run things.”
He adds: “The key thing is these are independent self-contained units of functionality that can be deployed independently, that can tolerate other microservices going up and down. So they are resilient.”
Not only do microservices address the issue of interdependencies, but also in a distributed system they can operate on different clouds. “Different cloud services give you different computing power, they cost different amounts of money and you can treat them in a different way,” Machacek says.
He gives an example where a sophisticated company such as Instagram may only pre-purchase a small number of machines and will buy the rest of its computational power on a spot market. So a provider such as Amazon, if it notices it has 10,000 CPUs available with no one running software on them, can sell that on a spot market at a low price, but with the caveat that those CPUs could disappear with a minute’s notice.
“If you didn’t have microservices and if you didn’t have this distributed message-based system, that would be extremely difficult,” he says.
Another benefit is the freedom developers gain in terms of tools. Martin Zapletal, a Software Engineer at Cake Solutions, says: “In the past, large, monolithic applications used a single technology stack, and firms were afraid to bring in new technologies or upgrade to newer versions because it was costly. But when you have microservices, it is much, much simpler to update the technology or choose a different technology stack for a new service, or a different database provider, for example.” All of this results in an increase in velocity, as the right tool for the job can be used.
Change on all fronts
The cloud and microservices approach entails not just new technology, but organisational change.
One such change is the separation of a firm’s development team or a product team from the infrastructure team. “You only let them talk via an API, so the infrastructure team provides APIs that lets the development and product teams spin up new computer resources, shut them down and configure them,” Barrett says.
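To make the idea concrete, the sketch below shows, in Python, the kind of API boundary Barrett describes. The InfrastructureAPI class, its method names and the in-memory backend are all hypothetical, standing in for whatever private or public cloud the infrastructure team actually operates behind that boundary.

```python
# Hypothetical sketch of an infrastructure-team API that product teams call
# to spin up, configure and shut down resources; names are invented.
import uuid

class InfrastructureAPI:
    """Facade owned by the infrastructure team; the backing cloud is hidden."""
    def __init__(self):
        self._instances = {}  # stand-in for a real provisioning backend

    def provision(self, cpu: int, mem_gb: int) -> str:
        instance_id = str(uuid.uuid4())
        self._instances[instance_id] = {"cpu": cpu, "mem_gb": mem_gb,
                                        "state": "running"}
        return instance_id

    def configure(self, instance_id: str, **settings) -> None:
        self._instances[instance_id].update(settings)

    def decommission(self, instance_id: str) -> None:
        self._instances[instance_id]["state"] = "terminated"

# A product team never touches the underlying cloud directly:
infra = InfrastructureAPI()
box = infra.provision(cpu=8, mem_gb=32)
infra.configure(box, role="pricing-service")
infra.decommission(box)
```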
“When you have microservices, it is much simpler to update the technology or choose a different technology stack for a new service or a different database provider.”
Martin Zapletal, Software Engineer, Cake Solutions
Zapletal of Cake Solutions says use of the cloud and microservices can achieve high levels of automation for deployment and configuration management. But to get to that stage, new thinking is required.
“Mindset is one of the biggest issues, because people are used to doing things in a certain way and it’s difficult for them to start thinking differently,” Zapletal says. “It’s not just changing the technologies, it’s changing how the teams work, how they cooperate.”
For instance, deployment and infrastructure management changes are markedly different in a microservices-based architecture versus an old-style system.
“The management of hundreds of small microservices is more difficult than managing one single monolithic application,” Zapletal says. “So firms need to change how they do this and automate deployment to be able to start microservices and manage configurations quickly and reliably in multiple deployment environments to allow rapid iterative development.”
If that sounds daunting, it need not be. “This is actually a solved problem,” he says. “There are plenty of tools and technologies that have made this simple. But, it requires some new knowledge and approaches in deployment automation and DevOps areas that many companies are not used to.”
Deheurles of Adaptive suggests that financial firms take their lead from large web companies that have massive platforms with numerous teams. “The thing is, to be able to continue to go really fast, you need absolutely to make each of those teams independent,” he says.
“Large web companies are constantly putting new features in front of clients – not necessarily all clients, but perhaps 10% of them – with analytics to know if and how a feature is being used or if there are problems in the implementation. Then they decide if they want to push that feature out to everybody or if it’s a feature that nobody has really noticed and is not using, and is not worth supporting.”
Other benefits
The velocity-iteration question is arguably the most significant of the benefits from this technology, but there are numerous others, each of which can make a big difference.
Matthew Lempriere, Head of the Financial Services Market Segment at Telstra Global Enterprise & Services, says cloud technology lets firms better test software or hardware.
“A tailored cloud allows you to have all your cloud-based infrastructure but then have a colo rack that you can place it in. We can place third-party hardware in a rack so that you can try it before you buy it, effectively.”
Sometimes a firm wants to know a product’s limits, so it may want to see if it can handle, say, 50 terabytes. That’s a capacity most firms don’t have and wouldn’t want to buy outright. “I want to do extreme testing to see if I can break something, because until I know I can break it, I won’t be comfortable using it. I don’t know where or when it will break.”
“A tailored cloud allows you to have all your cloud-based infrastructure but then have a colo rack that you can place it in.”
Matthew Lempriere, Head of Financial Services Market Segment, Telstra Global Enterprise and Services
Yet another benefit concerns the ability of the cloud to incorporate new hardware. “What will become important for these grand cloud infrastructures is more exotic hardware, more types of applications,” Machacek of Cake says. “There will be many applications that will need a GPU to run. So far there are very few cloud schedulers that allow you to do that, but keep an eye out for those.”
Machacek also notes that deep neural networks have been around for a long time. “But now we have the computing power to actually make use of them.” At the same time, it can be difficult to buy the very specialised hardware needed. “So again, go to the cloud and run one of your microservices on this specialised hardware. It’s the perfect way to spend money as an experiment.”
Ray Bricknell, Managing Director of Behind Every Cloud, says many in the fund industry have qualms about security and risk but they often haven’t explored the realities of these concerns or the ways to mitigate them. Bricknell, whose company specialises in the hedge fund and asset management space, says vendors haven’t helped the situation.
“The vendor community doesn’t know how to communicate with this sector and articulate that value in a language that it understands. The vast majority of the managed service providers that we work with have no real capability of truly engaging with finance-type clients and articulating the value proposition in a way that makes sense to them. It’s a classic case of IT having trouble talking to the business.”
Bricknell expects to see more use of the public cloud for ‘bursty variable’ workloads in the financial sector.
“The classic example that is beginning to emerge is in the area of risk modelling, essentially high volume number crunching that you might do at sporadic points during the business cycle.”
He says any opportunity to speed up processing of such exercises is highly valuable because understanding risk exposures has become so important since 2008.
Right now the fund industry breaks into two camps when it comes to the cloud: small firms that embrace it and established firms that are hesitant.
“New start-ups are being born in the cloud, but most times into a private cloud rather than a public cloud,” Bricknell says. “Then the rest of the market, quite frankly, is nearly all still using infrastructure that is owned and developed and managed by themselves. This segment is now, however, accelerating toward adoption of third-party managed infrastructure.”
“Part of the challenge is investor perception, and part of the problem is regulatory (meeting BCP requirements, for example). No one is going to put attracting a $1bn investment at risk just to save a few bucks on their costs. I can’t see critical production systems being deployed into the public cloud by these larger hedge funds and asset management firms anytime soon, but they are beginning to explore using the public cloud and the new-age world of DevOps tools to make their application development functions more responsive.”
“New start-ups are being born mostly into a private cloud rather than a public cloud; the rest of the market is nearly all still using infrastructure that is owned, developed and managed by themselves.”
Ray Bricknell, Managing Director, Behind Every Cloud
The mindset question revisited
Many in the financial industry know that they need to change, but there is still resistance, or the old belief that the driver will simply be cost.
“It’s the smaller platforms, like the smaller multi-dealer or retail platforms, that will be the first to go that way I think, because they understand the importance of iteration,” Barrett says.
That is particularly important during periods of market change. As an example, a large number of corporate bond platforms are currently being built due to changes in market structure.
“The winners will be the ones that use the cloud for agility and velocity, who iterate towards market fit for what they’re trying to build, rather than the old way of ‘build it and they will come and we’ll just hope and wait’, because no one actually knows what it is that the market wants right now.”
Meanwhile, Deheurles says the banking industry seems to be stuck in the past.
“Banks are still in a world where UX (user experience) people and business people think they know what their clients want and spend weeks or months deciding what they want to build,” he says.
Large e-commerce platforms outside of finance stopped working this way a long time ago. “Now when they think about a feature, they want to put it in front of the client in a matter of days or a week and actually test and measure to see if it’s used and if it’s driving more business.”
Can the financial industry finally take a leap into the future? For firms such as Adaptive Consulting, it all comes back to getting expectations right, and communicating the opportunities to the right people.
For more information on the companies mentioned in this article visit:
In this article, Mike O’Hara, publisher of The Trading Mesh, asks Brian Ross and Jim Timmins of FIX Flyer, Sassan Danesh of Etrading Software, Robert Bath and Stef Weegels of Digital Realty and Toby Corballis of Bridline to consider the future role of managed services in increasing the efficiency and reducing the cost of the FIX connectivity infrastructures of banks and brokers.
Introduction
For many securities market participants, FIX is part of the furniture. Starting life over two decades ago as a standardised messaging protocol for conveying orders between large asset managers and their brokers, FIX has become ubiquitous. It has continually evolved to enable higher volumes and greater detail to be carried along electronic pipes between trading counterparts in the equities markets. Moreover, FIX has expanded its horizons and those of its users. Under the guidance of the FIX Trading Community, the not-for-profit standards body that maintains and develops the protocol, FIX is now reaching new asset classes and further along the transaction value chain, deep into the post-trade arena.
Such expansion is to be welcomed. A standardised messaging protocol can provide the building blocks upon which service providers can add new value to a global financial markets community in urgent need of workflow efficiencies. But these new frontiers for FIX also offer an opportunity to reassess existing FIX implementations, many of which could benefit from root-and-branch rationalisation, and to consider emerging service models.
Areas of complexity
In facilitating an increasing volume and variety of trading-related communication, FIX has become a victim of its own success, stretching well beyond its initially envisaged limits. “The power of FIX is in its flexibility. But firms have customised more than was necessary, which has led to considerable complexity,” says Jim Timmins, COO of FIX Flyer.
“Once I’ve used today’s technology business rules to establish a new platform, it is much easier to migrate a legacy platform to it.”
Jim Timmins, COO, FIX Flyer
When most firms first implemented their FIX infrastructure, they typically connected to a handful of core counterparts. At that level, it was easy to accommodate minor differences in configuration or customised tags. But as the utility of FIX increased, small differences in rules of engagement (ROEs) rapidly became an unwieldy spider’s web of exceptions, embedded deep in the coding of a firm’s communications infrastructure. A further problem with becoming part of the furniture was declining visibility. Over time, firms’ FIX infrastructures were asked to perform more of the same core tasks, with little thought given to upgrades and improvements. New connections were implemented over the years by a succession of staff, often with little consistency or record-keeping. Because previous ROEs were so customised, it became safer to start from scratch when onboarding a new customer. Legacy connections remained untouched for years.
In extremis, the complexity of a bank or broker’s FIX infrastructure has become a barrier to growth, rather than an enabler. A technical issue very quickly becomes a business problem if a firm cannot onboard a client for another six weeks because of a combination of over-complex connectivity processes and over-burdened staff.
Furthermore, advances in technology have overtaken some of the customisations previously built into brokers’ FIX infrastructures, rendering much of their connectivity framework obsolete, says Timmins. “Today, there are a lot simpler solutions. And if I rebuild today, I can then focus on my new sources of revenue. Once I’ve used today’s technology business rules to establish a new platform, it is much easier to migrate a legacy platform to it.”
For many firms, the broadening of the scope of FIX to new asset classes and workflows is the right time to weigh up the cost considerations of continuing to maintain FIX through internal, sometimes non-specialist, and always over-burdened IT resource, versus alternative, outsourced, approaches.
Expansion as catalyst
Experience suggests new implementations can be a catalyst. Sassan Danesh, managing partner at Etrading Software and co-chair of the global fixed income sub-committee of the FIX Trading Community, has been working on a project which demonstrates the opportunities and challenges of the FIX protocol’s expansion. A key part of the Neptune project is the development of new message standards for sharing information about the corporate bond inventory held by buy- and sell-side institutions. Perhaps surprisingly, Danesh found that fixed income was not a ‘green field’ site for a new streamlined approach to FIX implementation.
“Banks can see the benefit of outsourcing the management of the network to a third party.”
Sassan Danesh, managing partner, Etrading Software
“When establishing new workflows, we created specifications in FIX version 5.0 (introduced in 2006), which were ratified by FIX committees and backported to 4.4 and 4.2. While brokers were willing to implement the new messages using version 5.0, in part because they managed their FIX connectivity internally, some asset managers expressed a preference for version 4.2 or 4.4, mainly because their FIX connectivity relied on a range of third-party technologies, much of which used earlier versions of the protocol,” he says.
In other ways, however, users are taking next FIX-based projects like Neptune as an opportunity to break from the past. Banks were given the choice of using their existing FIX infrastructure to connect directly to clients or communicating bond inventory information via a managed service, the Neptune network. “The consensus was overwhelmingly in favour of the concept of a managed service, a network where costs can be shared, with an external party managing the FIX network. Historically, the tendency would have been to add another layer on top of their existing FIX infrastructure, but banks can see the benefit of outsourcing the management of the network to a third party,” says Danesh.
One problem; many solutions
Faced with a complex, legacy FIX infrastructure, banks and brokers are looking at a range of approaches to reduce costs and improve efficiency, from streamlining connectivity to adopting best practices to outsourcing to third parties.
New approaches to network management can lay the foundations for greater efficiency and flexibility, according to Robert Bath, vice-president, global solutions, Digital Realty, a developer and operator of data centres. Because connectivity between the FIX infrastructures of banks and brokers is currently managed statically (i.e. static assignment of unique TCP/IP endpoint identifiers and static, manually maintained databases), the network is burdened with significant operational complexity and impediments to change. “Improved operational simplicity (via network abstraction and ‘single pane of glass’ monitoring) and provisioning agility (through effective automation and orchestration techniques) can underpin the evolution of connectivity services in support of both real-time execution via FIX and the requirements of new asset classes,” says Bath.
At a granular, software layer, firms can simplify FIX connectivity via increased automation, for example streamlining certification to reduce the effort required to onboard counterparts and to monitor customisations subsequently. Recent progress toward machine-readable ROEs offers the prospect of automated comparisons and alerts to outliers and exceptions, rather than the time-consuming process that has been required to date.
“Improved operational simplicity and provisioning agility can underpin the evolution of connectivity services.”
Robert Bath, VP, Global Solutions, Digital Realty
“The ability to automatically compare ROEs could put us on the road to a long-held goal: an industry-standardised ROE for equities,” says Brian Ross, CEO of FIX Flyer. “Beyond machine-readable ROEs and comparison tools, the industry also is looking to the development of progression testing tools and release management tools. Overall, we’re seeing a more mature interaction between counterparties.”
In the absence of a central register of ROEs, standardisation may be best achieved by encouraging the industry to move to an operating model that minimises the impact of different ROEs through automation and abstraction, agrees Toby Corballis, director of specialist consultancy Bridline. “Work needs to be done at several different levels to reduce the dependence on people in the process,” he says.
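As a hypothetical illustration of the automated ROE comparison described above, the short Python sketch below diffs two machine-readable rules of engagement, represented here as simple tag-to-usage mappings, and flags exceptions. The tags, values and the flattened ROE format are assumptions made purely for illustration; real ROEs are considerably richer documents.

```python
# Illustrative only: compare two machine-readable ROEs (dicts of
# FIX tag -> expected value/usage) and report outliers and exceptions.
def compare_roes(reference: dict, counterparty: dict) -> list:
    """Return human-readable exceptions where the counterparty ROE diverges."""
    exceptions = []
    for tag, expected in reference.items():
        actual = counterparty.get(tag, "<absent>")
        if actual != expected:
            exceptions.append(f"tag {tag}: expected {expected!r}, got {actual!r}")
    for tag in counterparty.keys() - reference.keys():
        exceptions.append(f"tag {tag}: custom tag not in reference ROE")
    return exceptions

# Hypothetical example values, not taken from any real rules of engagement.
reference_roe    = {"21": "1", "40": "1,2", "59": "0,3", "100": "required"}
counterparty_roe = {"21": "1", "40": "1,2,P", "59": "0,3", "9001": "custom"}
for line in compare_roes(reference_roe, counterparty_roe):
    print(line)
```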
FIX on demand
In a world of steeply rising capital costs for banks, notably under Basel III, the capital and operating expenditure of running a commodity service such as FIX connectivity is going to come under increasing downward pressure. “Firms are already looking at infrastructure on demand, software on demand to solve their cost issues. The time is right for vendors to offer FIX on demand,” says Timmins.
“Infrastructure-as-a-service makes a lot of sense for people who want to control budgets.”
Toby Corballis, director, Bridline
The increasing maturity of managed service offerings, such as FIX service bureaux that deliver both tools and expertise via the cloud, lets banks and brokers outsource much of the management of FIX connectivity. Sell-side firms have traditionally been cautious of sourcing services via cloud technology, but information security concerns have largely been addressed by multi-vendor data centre services and the firms they host. “Firms that offer infrastructure-as-a-service or software-as-a-service have been upgrading their systems and infrastructures to pass muster in respect of information security,” says Timmins.
Service bureaux can eliminate the historical duplication of effort involved in managing FIX connectivity by deploying dedicated expertise and robust, road-tested processes. If a service bureau has already connected one buy-side client to a particular broker in line with FIX-recognised best practice, it can clone large parts of that connection for other asset managers. Cloud delivery, of course, means services can be drawn down as and when required.
Stef Weegels, sales director for EMEA at Digital Realty, says the range of hosting choices and related service options offered by FIX service bureaux will be enhanced if located at a data centre with a strong cloud and FIX community. “FIX-based services can be offered as-a-service, with the significant latency benefits of hosting with a cloud provider in a multi-tenant data centre. The multi-tenant data centre is also the ideal venue for a hybrid hosting strategy, with the base load hosted in either dedicated infrastructure or a private cloud solution, and the burst in the public cloud,” he says.
“The multi-tenant data centre is also the ideal venue for a hybrid hosting strategy.”
Stef Weegels, sales director, EMEA, Digital Realty
Taking onboarding as an example, managed service providers can analyse the logs and existing workflows relating to how a firm manages the certification process and subsequent monitoring of traffic across particular connections. By analysing a firm’s ROEs and other aspects of its FIX framework, and comparing them to industry norms, third-party providers can help sell-side clients identify best practice and move toward a more efficient and less complex structure. “With managed services, a bank can decide to put certain dedicated tools in the hands of their internal staff, or go further and outsource a greater share of the management of their FIX connectivity. Banks can get back to focusing on their core business, spending less time on managing technology,” says Ross.
Moving towards a software-defined network provisioning platform can provide an effective decoupling (abstraction) of a bank’s connectivity management from its internal network and server farm management. “This has a number of key benefits, most notably, consistency of application performance, geographic agility and scalability and automatic re-direction of client connections to disaster recovery sites of choice for mission-critical FIX applications,” says Bath.
Although managed services are gaining acceptance for a wider range of uses across the financial market, Bridline’s Corballis suggests it is too early to be certain which models will have the most impact on the FIX connectivity market.
“Infrastructure-as-a-service makes a lot of sense for people who want to control budgets and have the ability to flex up and down depending on need. Platform as a service has been around for a while but offerings are becoming more specialised and as such can serve to create pools of FIX standardisation. I also expect to see an increase in specialist FIX network-as-a-service models to complement the two other models,” he says.
Best practice
As firms seize the rationalisation opportunities arising from new implementations of FIX, the future challenge for the whole industry remains one of maintaining a balance between flexibility and standards, to avoid the complications of the past.
According to Danesh, the FIX Trading Community has focused a lot of effort in recent years on the establishment of best practice, reinforcing recommended methods for using the protocol. “There are always good reasons for customising FIX to a firm’s individual needs, but our sub-committees are working with users, first to forge industry agreement on best practice, and then to support implementation. The cause of standardisation has been helped by the fact that banks no longer see FIX connectivity as a point of differentiation and are willing to seek outside help in implementing changes to – and then managing – their FIX infrastructure,” he says.
The development of new FIX messages under project Neptune to support the automation of voice-brokered, over-the-counter markets such as fixed income and swaps is an example of the FIX Trading Community’s approach. “As workflows become standardised, they become automated and best practice can be established and reinforced. For Neptune, we have documented standardised workflows around inventory distribution and will look to maintain that level of documentation and standardisation as new workflows evolve,” he says.
Danesh accepts that there are no hard and fast rules paving the way from innovative workflow to industry-standard functionality to commodity widget, and thereby identifying the optimal point at which the industry should follow the same rules for a particular task. “It is a matter of establishing consensus, and FIX committees are a very good environment to bring that about,” he says.
The challenge of upgrading a FIX infrastructure grows the longer the task is put off. The number of connections for a major broker means a full upgrade could require a multi-year project plan and a rock-solid business justification.
A key problem that can delay upgrade projects is that many banks and brokers do not have transparency on the cost or efficiency of their FIX connections. If it is hard to establish which connections are most efficient or inefficient, it will be even harder to work out which areas of your FIX infrastructure are most in need of attention. “Many firms do not have the ability to interrogate log files. The sell-side really needs to know who are their best clients, who are their best connections, how can they make certain connections more streamlined. Equally, they need to identify the connections that are losing them money, either because of the ROEs or simply a lack of trading volume,” says Ross.
“Do you really want to pile FX or fixed income trading on top of your existing FIX capabilities?”
Brian Ross, CEO, FIX Flyer
If connectivity is a commodity, managed connectivity services are not. As well as the analysis and expertise they can bring to the onboarding process, service bureaux are competing to reduce banks’ total cost of ownership of their FIX connections by highlighting the costs of licensing, implementing and monitoring. Most have developed the capability to read log files for firms and provide reports on the efficiency of their connections. “Banks are looking to vendors to add value not just in terms of monitoring connectivity but in areas such as managing trade flows, using heuristics to predict changes in the market. The data is increasingly becoming available to enable firms to assess how market events will impact the cost-effectiveness of their FIX infrastructure. The next questions are: ‘Do I need to own those tools? Do I have someone else run those tools for me?’” says Timmins.
Fresh start
In many cases, FIX connectivity remains under the control of banks’ equities teams, because the electronification of trading in that asset class was the process FIX was initially designed to support. Two decades or so later, it may not be efficient for their fixed income colleagues to leverage that existing infrastructure, especially if it has been under-resourced in recent years.
FIX is moving forward. The industry is asking it to meet new needs and the FIX Trading Community is responding. But banks and brokers with outdated FIX infrastructures risk falling behind. As FIX is used for a wider range of workflows – both in the post-trade arena and for trading a wider range of instruments – such firms will face increasingly urgent questions about the viability and cost of their FIX connectivity infrastructure. “Do you really want to pile FX or fixed income trading on top of your existing FIX capabilities or does that become an opportunity to start a fresh implementation to which you can then migrate your existing equities connectivity in due course?” posits Ross.
For more information on the companies mentioned in this article visit:
In this article, Mike O’Hara and Adam Cox of The Realization Group investigate the development of new, ultra-fast wireless communication networks for trading financial markets. Mike speaks with Stéphane Tyč, CEO of network provider McKay Brothers, while Adam talks with Tariq Rashid, a long-time data centre executive, and with Alexandre Laumonier, who has published extensive analysis of the state of play in the microwave industry. Together, they describe a fast-changing, competitive technological scene that is revolutionising the world of trading.
Introduction
Microwave transmission of data has been used in the telecommunications industry for decades. But it is only during the past half-dozen years – when the technology has offered the tantalising prospect of both lower data transit costs and greater speed for trading firms – that financial market participants have begun to take microwave seriously. When prices for the fastest available fibre-optic networks shot up and began eating into trading firms’ margins, forward-thinking technology providers and trading groups began hunting for new solutions. Microwave technology, it turned out, fits the bill. The technology has its drawbacks, most notably a reliance on line of sight when setting up point-to-point communication links. But the advantages more than outweigh the disadvantages. Now, microwave and millimetre wave networks are being built all the time, both by a new breed of network providers and by some of the trading companies themselves. What is more, some people are even starting to think of ways the technology might be employed over previously unthinkable distances, such as across the Atlantic Ocean. It seems that the race to zero has somehow just got a lot faster.
The need for speed
Stéphane Tyč of McKay Brothers has been at the forefront of the microwave network business since it began around 2010. His firm built a pathway from Aurora to New Jersey that today is considered the fastest available. McKay has also built a number of popular microwave links in Europe, connecting different trading ecosystems in London with each other and with Frankfurt. The firm is looking at doing much more.
“You need to be faster than someone else and the amount of time that you need to win by is dictated by the random jitter inherent to the order sending and data publishing on exchange.”
Stéphane Tyč, CEO, McKay Brothers
So what is fuelling all the demand? The simple answer is that speed has always ruled in the markets.
“I’m not sure anything has changed. Fundamentally, you need to be faster than someone else and the amount of time that you need to win by is dictated by the random jitter inherent to the order sending and data publishing on exchanges,” Tyč said.
Different venues have different protocols and different infrastructures, so the degree to which a speed advantage makes a difference often depends on where the trading is taking place.
“So, if when you’re faster by 10 µs you always win, then being faster by 10 µs really counts,” Tyč said. “There are exchanges where, if you’re faster by one µs, you’ll always win, and others where, if you’re faster by 100 µs, you might not win, because of the randomness of the propagation of different orders to the matching engine.”
But while the importance of 10 or 100 microseconds can vary from venue to venue, the importance of speed in general is not in question.
In the early days, microwave was pretty much strictly the province of high frequency trading (HFT) firms. Their trading models depend on speed more than anything else.
Tariq Rashid, a veteran UK colocation industry insider, said that after the initial success with the Chicago area-to-New Jersey routes, trading firms immediately became interested in Europe.
“It went from zero to a lot of interest very quickly because firms were seeing the advantages it had gained in the US,” Rashid said. “The interest ramped up very quickly because people saw the inherent arbitrage advantages that microwave gave them.”
Rashid’s data centre decided early on that it would have to facilitate trading firms and network providers with microwave links. “If we hadn’t offered it into our facility, they would have built a tower alongside the facility and then come in for the last quarter of a mile or so using fibre, thereby creating a level playing field for participants,” he said.
Meanwhile, after an initial period of spectacular growth, HFT firms began to face a daunting array of obstacles, from increased competition to higher costs. Overall revenues and margins in the past five years are thought to have fallen, while the macroeconomic environment – with rock-bottom interest rates and record low volatility levels – has encouraged an extended period of orderly trading that has provided fewer opportunities for HFT firms to make money. Add to that intense regulatory scrutiny in the wake of the Flash Crash, high-profile mishaps, lawsuits and loud complaints from the buy side. One result of all this has been consolidation in the HFT sector.
“A lot of the bigger HFT players have swallowed up some of the smaller players, so there are fewer HFT players out there.”
Tariq Rashid
“A lot of the bigger HFT players have swallowed up some of the smaller players, so there are fewer HFT players out there,” Rashid said.
And yet, demand for microwave networks is only growing. One reason is that it’s not just HFT firms that are seeing advantages from them.
“It’s just progressively growing and it’s spreading beyond the classical HFTs – especially on the market data side. Everyone’s kind of trying it and it’s very interesting because the price point is attractive. The number of market data clients is steadily increasing. Every month more firms are using our data service,” Tyč said.
By market data clients, he means firms that consume the data, though Tyč notes that the ones interested in microwave are only those which colocate at an exchange. “There are a few hundred firms in the world that are our natural clients today, though that number continues to expand. More firms – whose strategies are not latency dependent – are taking advantage of being milliseconds more responsive to changing markets.”
Typically these would be banks and smaller market makers, which, as the McKay Brothers executive notes, are not like HFTs. “The banks, for instance, have tons of extra layers of security. They cannot just shave the last hundred nanoseconds off their connection. But, still, they need to have good data to execute orders for their clients,” he said. These firms need the most up-to-date data so that their orders and trading models are more deterministic and so they know what’s traded and when. “Microwave latency is the low-hanging fruit of competitive improvement for those firms,” notes Tyč.
Artists, cartographers and technologists
Microwave networks rely on tall towers (the taller the better because of the line-of-sight factor) which network builders use as relay sites. Since regenerating a signal requires time, the fewer the relay sites, and the straighter the line on a map, the lower the latency.
Just as is the case with fibre optics, considerable effort goes into finding ways to make as straight a line from point A to point B as possible, and that means identifying locations for repeater towers that will process signals. In other words, building a microwave or millimetre wave network is as much an exercise in artistry and cartography as it is in technology. After the initial proof of concept of microwave’s applicability to financial trading, a large number of networks have sprung up across Europe.
Alexandre Laumonier, an anthropologist who has written extensively about microwave networks, said the main competition now is between the independent providers such as McKay Brothers and others and proprietary trading firms such as Optiver, Jump Trading and Vigilant Global.
“Independent providers need money to build their networks, before selling them to various customers, but trading firms spend their own capital to compete with the providers,” Laumonier said.
He doubts there will be many new entrants into the field given some of the competitive and infrastructural obstacles now in place in Europe. “We could say there is no space for competition anymore – the competition is how you can manage and improve a network by trying to have better routes,” he said.
Better, in other words, means faster.
McKay has current connections that link the LD4 data centre in Slough (one of the world’s largest trading ecosystems) with Frankfurt and with the LHC data centre in the Docklands area of London. The company basically is creating a mesh to link all the major sources of liquidity in London with each other and with Frankfurt.
To give an idea of the speed involved, the round trip between LD4 in Slough and Frankfurt takes just 4.64 milliseconds on the McKay Brothers network. Among other things, this means that synchronisation and timestamps must be extremely precise, or a firm may not know whether an executed order took place first in one location or in the other.
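A back-of-the-envelope calculation shows how close that figure sits to the physical limit, and why clock synchronisation matters. The straight-line Slough-to-Frankfurt distance used below is an assumption for illustration; only the 4.64 ms round trip comes from the paragraph above.

```python
# Rough sanity check on the quoted round-trip time.
C = 299_792_458        # speed of light in a vacuum, m/s (microwave through
                       # air travels at very nearly this speed)
distance_m = 640_000   # assumed geodesic Slough -> Frankfurt, illustration only

one_way_floor_ms = distance_m / C * 1_000
round_trip_floor_ms = 2 * one_way_floor_ms
quoted_round_trip_ms = 4.64

print(f"theoretical round-trip floor: {round_trip_floor_ms:.2f} ms")
print(f"quoted McKay round trip:      {quoted_round_trip_ms:.2f} ms")
# One-way time is ~2.3 ms, so to decide which of two venues saw an event
# first, clocks at both ends must agree to well under a millisecond.
print(f"implied one-way latency:      {quoted_round_trip_ms / 2:.2f} ms")
```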
Further afield, various firms are looking at Zurich, Milan and Stockholm. As Laumonier noted in a popular blog that delved deep into the microwave business: “The U.S. Army, NATO and the RAF erected a lot of towers during the second half of the 20th century, and these towers are now invaded by dishes owned by various trading firms/microwave providers.”
In North America, efforts are underway to build connections between Toronto and trading centres in the United States.
But perhaps the most interesting activity is taking place far from any major trading centre.
The Hibernia project will create a new, faster fibre-optic connection under the Atlantic Ocean, with a handoff at LD4. Rather than reaching the United Kingdom at Land’s End, where an array of microwave networks currently extends, the new trans-Atlantic network will meet land at Brean, not far from Bristol.
While there has been some uncertainty as to whether there will be an opportunity for microwave networks to connect with Hibernia at or near Brean, the expectation is that the networks extending to Land’s End will soon become redundant. It’s an example of how quickly the map can change when it comes to finding the optimum routes from one centre to another.
Cloak and dagger
As can be expected, various firms have been building pathways to Brean and buying up frequencies in the UK. In fact, frequency and tower “squatting” has become a common issue, according to Laumonier. He said some firms ask for licences for a tower they know they won’t use so that competitors can’t use the tower anymore; others install dishes on towers without legal authorisation, or before they received authorisation; and others are authorised to install equipment of one type or another, such as a certain-sized dish, but end up installing different-sized dishes.
“The fact is, there are some discrepancies between the public data available (the legal authorisations) and what you can see on some towers,” Laumonier said. “Some regulators seem to be more vigilant than others. It’s clear that the firms who really respect the law ask for more regulation and control in order to prevent ‘unfair’ competitors from having illegal advantages.”
“The fact is, there are some discrepancies between the legal authorisations and what you can see on some towers. Some regulators seem to be more vigilant than others.”
Alexandre Laumonier
In the UK, regulators make data about who owns what network and what frequencies public. But Britain is an outlier in this respect. Tyč said France publishes some information, but Belgium and Germany do not provide any public data. “I really hope there’s a European directive that makes it all available, that would be very good,” he said.
Questions about transparency extend not just to who owns which networks, but also to how fast they really are. Laumonier said: “Those who publish latencies obviously do it to show that they are faster than competitors. But what do those latencies exactly mean? Are they the latencies between the dishes of two exchanges? Do they include the few microseconds needed by the data to travel between a dish at the top of a data centre and the trading firms’ collocated servers that process the data in the heart of the data centre?”
For example, McKay Brothers notes that between data centres in Aurora and nearby Cermak, where futures are traded, the latency is 0.184 milliseconds if a trading firm’s server is installed on the 2nd floor, but 0.183 milliseconds if the server is on the 9th floor. In other words, what Laumonier calls ‘McLatencies’ can vary by a millionth of a second from one rack-to-rack route to another.
“I’m not sure the other public latencies published by microwave competitors are so precise. That said, each microsecond counts now,” he said.
Ultimately, only the trading firms themselves really know what latencies they are actually dealing with, whether those stem from the route provided or from factors such as which floor their servers are on.
There are a variety of factors that can affect latency, but generally speaking the best route will be based on how close a microwave network can get to providing a straight line between two points. Laumonier said McKay data shows that its Illinois-New Jersey route is only three kilometres more than the 1,180 kilometre length of a straight line between the two points.
“This is impressive as McKay Brothers was not the first firm to build such a network. Probably three or four trading firms had already built their own proprietary networks before McKay, but it seems that they didn’t really take as much care with the design of the paths. They only wanted to be faster than fibre-optic, and they were so. But McKay decided to design the shortest, and fastest, network,” he said.
Laumonier noted that the US McKay microwave network needed only 22 segments to join the two areas, about two-thirds the number used by other competitors. That saved money because fewer dishes and cables were needed. “Achieving the shortest microwave network between two areas is a form of art, and an economical challenge too,” Laumonier said.
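A quick calculation, sketched below, shows what those three extra kilometres are worth in time. Per-hop regeneration delays are not given in the article, so only the path excess is priced here.

```python
# Arithmetic on the figures Laumonier cites: a route only 3 km longer
# than the 1,180 km geodesic between the Illinois and New Jersey sites.
C = 299_792_458                      # m/s; propagation through air ~ vacuum
geodesic_km, route_km = 1_180, 1_183

excess_one_way_us = (route_km - geodesic_km) * 1_000 / C * 1e6
print(f"one-way penalty of the 3 km detour: {excess_one_way_us:.1f} microseconds")
# ~10 us each way -- already meaningful on venues where, as Tyc notes,
# a 10 us edge can decide who wins.
```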
The dizzying array of networks criss-crossing Europe that Laumonier has identified is testimony to how popular microwave and millimetre wave have become. Linking cities and venues across hundreds of kilometres via this technology has become relatively straightforward. But what about trying to go much, much further? Could wireless communication, for instance, ever take place across the Atlantic Ocean?
Even if the technology could be harnessed to allow a signal to travel such vast distances, an immediate problem comes up: the curvature of the earth. After a certain point, the shape of the planet eliminates line of sight. At the same time, building towers across the ocean is not thought to be realistic.
There has been talk of specially designed balloons that could relay signals. Google, with its Loon project, is currently exploring the use of balloons for making the internet accessible in all parts of the globe.
Another intriguing technological answer may lie in what is called tropospheric scatter, or troposcatter. This technique uses the troposphere, the lowest layer of the earth’s atmosphere, like a kind of backboard. A signal is beamed up into the air at an angle; part of it is scattered by the troposphere and bounces back down to a dish as much as a thousand kilometres away.
For shorter distances, Laumonier does not expect a better technological solution than microwave, even considering new methods such as laser technology (which is hampered by its vulnerability to fog).
“The next challenge seems to be wireless networks between continents, i.e. between London and New Jersey for instance,” Laumonier said. He knows of one firm that is currently developing technology inspired by the old troposcatter method.
What he finds interesting is how the most modern of firms are attempting to harness technology that is actually quite old. “It is fascinating to see how the trading world – HFT or not HFT – is trying to encircle the globe with all these microwave technologies,” Laumonier said. “They are old technologies now used in a very modern way.”
Rashid is in no way surprised by this turn of events.
“Throughout history, people have been using technology to gain competitive advantage, lower latency tools to help transport information from one destination to another in order to make better trading decisions. Whether microwave or millimetre wave is the last, I suspect not,” Rashid said. “I’m sure that there will be other technologies that will be coming along in the future.”
Tech options
In addition to microwave technology, there are two other forms of wireless communication that are being used to trim milliseconds: millimetre wave and laser technology. All three go at the speed of light. But there are different technical factors that affect each. What are some of the pros and cons?
Microwave: This occupies the radio spectrum between 1 and 30 GHz. It can carry less data than fibre-optic links but it is far faster. When light travels through glass, as it does in a fibre-optic cable, it is estimated to be around 33% slower than when travelling through air. This is because glass has a higher refractive index. But signals need to be repeated when travelling via microwave. Tyč says microwave can go over 100 kilometres. (A rough propagation-delay comparison is sketched after this list.)
Millimetre wave: This travels at the same speed as microwave but offers more bandwidth. It occupies the spectrum between 30 and 300 GHz. One drawback is that the distance allowed between repeated signals is much smaller – Tyč says about 10 kilometres. For the network providers catering to this demanding set of clients, both microwave and millimetre wave are important. Heavy rain or flying birds are said to be able to disrupt signals for both communication methods.
Laser: This technology currently offers the highest bandwidth and is just being introduced for trading networks. It was developed in the 1990s for the purpose of gathering images from outer space and was later adapted for communication in military jets. Fog will completely stop laser communication, and rain will also attenuate it. In all cases of wireless communication, network providers build in fall-backs that allow switching to other methods in the event of disruption.
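The rough sketch below compares propagation delay through air and through fibre over an illustrative 1,000 km path. The fibre refractive index of roughly 1.5 is an assumption consistent with the 33% figure cited above; repeater delays and real route geometry are ignored.

```python
# Rough air-versus-glass propagation comparison over an illustrative path.
C = 299_792_458                 # m/s in a vacuum (air is very close)
N_FIBRE = 1.5                   # assumed refractive index of optical fibre
distance_m = 1_000_000          # 1,000 km, illustration only

air_ms = distance_m / C * 1_000
fibre_ms = distance_m / (C / N_FIBRE) * 1_000
print(f"air / microwave: {air_ms:.2f} ms")    # ~3.34 ms
print(f"fibre-optic:     {fibre_ms:.2f} ms")  # ~5.00 ms, ~50% more time in glass
```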
Images from Laumonier
A map Laumonier designed (below) shows some of the actual and attempted microwave pathways between the UK and continental Europe.
The illustrations below, from Laumonier, show the impact Hibernia will have for trading networks. The top image shows the south of England and the numerous networks stretching to Land’s End to link up with trans-Atlantic fibre optic networks; the bottom image shows how many of those microwave connections will be redundant once Hibernia is operational. The question is whether Hibernia will allow microwave links at Brean.
For more information on McKay Brothers visit www.mckay-brothers.com
In this article, Mike O’Hara, publisher of The Trading Mesh, talks to Hirander Misra of GMEX, Luke Hickmore of Aberdeen Asset Management, Nomura’s Lee McCormack and Byron Baldwin of Eurex Clearing, about the challenges – and alternatives – facing asset managers as they contemplate central clearing of OTC derivatives in Europe for the first time.
Introduction
Buy-side users of OTC derivatives face many uncertainties as they prepare for mandatory central clearing in Europe, a requirement that finally comes into force next year but which stems from the G20’s 2009 pledge to reduce systemic risk in the market. Initially, the European Market Infrastructure Regulation (EMIR) demands that major clearing brokers centrally clear a select group of highly liquid interest rate derivatives. But eventually all counterparties must make arrangements to adopt central clearing if they want to carry on using any standardised OTC derivative.
For asset managers – many of which fall into EMIR’s ‘Category 2’ basket of market participants, who must start centrally clearing derivatives six months after clearing brokers migrate – interest rate swaps (IRSs) are a core risk management tool for bond portfolios, also used for hedging very specific client liabilities as part of liability-driven investment solutions. Ahead of EMIR’s deadlines, asset managers have been assessing the capabilities of clearing brokers, getting to grips with the collateral implications of margin calls by central counterparties (CCPs), selecting account structures to protect clients’ assets and liaising with end-clients such as pension funds to inform them of the cost and risk aspects of the new clearing requirements.
On top of these complex and challenging tasks, asset managers – along with other users of OTC derivatives – must also come to terms with ‘frontloading’, a requirement unique to Europe’s approach to migrating from bilateral to central clearing. Because derivatives contracts expire over a variety of maturities, the migration process could lead to some instruments being centrally cleared and others bilaterally, thereby creating an uneven playing field for market participants.
To ensure that the European market moves swiftly to a centrally cleared environment, EMIR includes the frontloading obligation, which requires bilateral trades entered into before central clearing is introduced, to be centrally cleared once the new rules are in force. The rule has proved controversial and has been subject to a series of changes and clarifications over the last 12 months or so. In short, the frontloading period has shrunk, the range of exempt counterparties has increased, and a threshold has been introduced whereby only non-clearing member financial institutions with more than a certain level of derivatives notional outstanding must comply with the requirement.
Nevertheless, there will be a seven-month period during which ‘Category 2’ asset managers know that any bilateral agreement to enter into an IRS is very likely to result in a central clearing obligation, if the contract has not expired by the time the EMIR clearing mandate comes into force.
Buy-side clearing challenges
For the vast majority of asset managers that have not previously centrally cleared OTC derivatives transactions, the initial response to the incoming EMIR clearing mandate has been to select a clearing broker and a CCP through which to clear. Some firms that already used exchange-traded derivatives turned initially to their futures clearing brokers, but the central clearing of OTC instruments is such new, uncharted territory that many asset managers have found themselves looking for brokers that could demonstrate capabilities and expertise across the OTC and exchange-traded space and across asset classes.
The advice of clearing brokers is critical to another of the important decisions facing asset managers, that of selecting appropriate account structures. EMIR specifies that as well as existing omnibus account structures that hold the assets of multiple clients of a clearing member, CCPs must offer individually segregated accounts. These are designed to offer maximum protection to asset managers’ clients, such as pension funds whose assets are posted as collateral, thereby funding initial and variation margin payments in support of centrally cleared OTC derivatives transactions.
From a frontloading perspective, one of the first things asset managers need to do is establish whether they trade sufficient volumes of OTC derivatives to be categorised as Category 2 or Category 3 market participants, the latter benefitting from a longer phase-in period. Although Category 2 and 3 firms have different clearing obligation timelines, the frontloading obligation only applies to Category 2 firms.
“To define yourself as Category 2 or 3, you need to calculate your OTC derivative positions – at an individual fund level and not at a group level – over a rolling three-month period to determine if you are over the EUR 8 billion notional activity that would determine the fund being regarded as Category 2,” explains Lee McCormack, Clearing Business Development Manager at Nomura. “You also need to work out who you’re trading with, whether they’re going to be classified as Category 2 or 3, and whether the frontloading obligation applies.”
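As a rough illustration of the classification test McCormack describes, the sketch below assumes the check is a simple average of month-end aggregate gross notional, calculated per fund over a three-month window and compared against the EUR 8 billion threshold; the precise EMIR calculation methodology is more involved and should be taken from the regulation itself.

```python
# Simplified sketch of the Category 2 / Category 3 test described above.
# Assumption (not from the regulation text): the check is an average of
# month-end aggregate gross notional per fund over a three-month window.

EUR_THRESHOLD = 8_000_000_000  # the EUR 8 billion notional figure cited in the article

def classify_fund(month_end_gross_notional_eur: list[float]) -> str:
    """Classify a single fund (not the group) as Category 2 or Category 3."""
    if len(month_end_gross_notional_eur) != 3:
        raise ValueError("expected three month-end observations")
    average = sum(month_end_gross_notional_eur) / 3
    return "Category 2" if average > EUR_THRESHOLD else "Category 3"

# A fund averaging above EUR 8bn of outstanding OTC notional is Category 2
print(classify_fund([9.1e9, 8.4e9, 7.9e9]))   # -> Category 2
print(classify_fund([2.0e9, 2.3e9, 1.8e9]))   # -> Category 3
```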
Moreover, buy-side firms need to consider the operational and valuation implications of having OTC derivative transactions on their books that fall under the frontloading obligation. Trading a swap on the understanding that it will eventually go for central clearing may have an impact on credit support annexes (CSAs) and discount valuations, with implications for pricing too.
“It’s not about transaction or usage charges. It’s about the long-term performance brake that placing collateral with a CCP for initial and variation margin can put on the portfolio.”
Luke Hickmore, Senior Investment Manager, Aberdeen Asset Management
For Luke Hickmore, Senior Investment Manager at Aberdeen Asset Management, uncertainty about clearing costs is the primary but not the only concern over frontloading. “There will be a period during which we’re holding the contract but we won’t know what the clearing costs are going to be at the end of that period. How that affects the value of your contract is going to be really important and needs addressing as soon as possible. Buy-side firms and their end-clients will have to look closely at the existing CSAs they have in place with their counterparties as well as the documentation provided by clearing houses. For us, it’s about taking a project management approach to getting over these complications,” he says.
The number of parties potentially involved and the complexity of the issues raised by frontloading means prompt action is required by asset managers, regardless of the scope for further slippage of regulators’ timelines. “Buy-side firms should be working with their clearing members now to get their trading limits and initial margin limits in place so that they have a lot more certainty that when they trade that product, they will be able to put it into clearing simply,” recommends McCormack.
The earlier the issue is addressed, the better chance asset managers give themselves of working through all of the implications from the front to the back office. “Buy-side firms need to be in a position to track, monitor and report transactions that will need to be frontloaded and ensure that those positions are then factored in as far as central clearing is concerned. It is not necessarily that difficult, but there is a great deal of operational work to adapt systems for adequate tracking and reporting, as well as the work required in terms of interfacing with CCPs in preparation for central clearing of IRS and other OTC derivatives,” says Hirander Misra, CEO of GMEX Group.
From an operational perspective, Aberdeen’s Hickmore cites regulatory uncertainty as causing problems for the buy-side at a time when resources are stretched by a need to deal with a wide range of reforms and rule-changes in parallel. Europe’s central clearing rules have been consulted on, re-drafted and delayed on several occasions and it is highly likely that asset managers will not now have to start clearing interest rate swaps until Q3 2016. That might give extra time to prepare, but the stop-start nature of implementation projects is far from ideal.
“A lot of it has been done, but has now been put on ice. We saw our project stop at the end of last year when the time to clearing was getting longer. Since then, we’ve revisited the project to ensure our cost assumptions and concerns over operational complications are still valid. That’s not easy and it requires resource,” says Hickmore.
Alternative approaches
As noted earlier, a key objective of the emerging post-crisis regulatory environment is to reduce systemic risk in the OTC derivatives market. In part, this means incentivising market participants to choose the most highly regulated and operationally robust instruments. For example, the margin requirements for non-standardised OTC derivatives are based on a 10-day value at risk (VaR) treatment, while margins for plain vanilla, centrally cleared OTC derivatives are calculated on a five-day VaR basis, and listed derivatives attract a two-day VaR treatment, making the latter potentially the cheapest to fund over time, provided they offer the same level of protection.
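The practical effect of these different margin periods of risk can be seen with a deliberately simplified square-root-of-time scaling of a one-day VaR figure. The sketch below assumes normally distributed daily moves and a hypothetical one-day VaR of EUR 1 million; real bilateral and CCP margin models are considerably more sophisticated, so the numbers are illustrative only.

```python
# Illustrative only: initial margin scaled by the square root of the margin
# period of risk, assuming i.i.d. daily returns. The 10-, 5- and 2-day
# horizons are those cited in the article; the EUR 1m one-day VaR is hypothetical.
import math

def scaled_margin(one_day_var: float, holding_days: int) -> float:
    return one_day_var * math.sqrt(holding_days)

one_day_var = 1_000_000  # hypothetical 1-day VaR of a portfolio, in EUR

for label, days in [("non-standardised OTC (10-day)", 10),
                    ("cleared vanilla OTC (5-day)", 5),
                    ("listed derivative (2-day)", 2)]:
    print(f"{label:<30} margin approx. EUR {scaled_margin(one_day_var, days):,.0f}")
```

On this simplified basis, the listed contract requires roughly 37 per cent less margin than the cleared OTC equivalent and around 55 per cent less than the non-standardised bilateral case, which helps explain the buy-side interest in listed alternatives discussed below.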
Aberdeen’s Hickmore views higher margin requirements for centrally cleared OTC derivatives compared with the historical cost of bilateral trades as a potential performance drag on his portfolio. “It’s not about transaction or usage charges. It’s about the long-term performance brake that placing collateral with a CCP for initial and variation margin can put on the portfolio. It’s hard to quantify, but asset managers are going to have to get on top of it,” he observes.
“Our clients are looking at exchange-traded alternatives with keen interest.”
Lee McCormack, Clearing Business Development Manager, Nomura
The rising cost of swaps and other OTC derivatives instruments, not to mention the multiple uncertainties over future clearing costs, as exemplified by frontloading obligations, has sparked a number of innovations from venue operators that have caught the eye of buy-side firms.
“The overall weight of the regulations and the costs of clearing trades centrally versus bilateral transactions mean that our clients are looking at exchange-traded alternatives with keen interest,” says McCormack.
In the US, swap futures listed by the CME Group and Eris Exchange have gained a foothold, while in Europe new exchange-based products are also being introduced ahead of EMIR’s central clearing mandate. One of these is GMEX’s Constant Maturity Future, the value of which is based on an underlying proprietary index designed to replicate the economic effect of a traditional IRS in an exchange-traded environment.
“Buy-side users of OTC IRS are facing a capital shortfall as these instruments are forced into central clearing, and are looking for cheaper alternatives. With a Constant Maturity Future, you get the effect of an IRS but at a lower cost of margin and funding, due to the two-day rather than five-day VaR treatment,” explains GMEX’s Misra.
“The GMEX Constant Maturity Future closely mimics the underlying IRS market, the difference being it’s a two-day VaR product as opposed to a five-day VaR product, so it’s substantially cheaper on the cost of margin and the cost of funding,” continues Misra. “With an IRS-type framework but exchange-traded, that is good for the buy side because they can look at moving some of their positions to contracts like this. Equally, it also ensures that they can use their capital in a much more efficient manner.”
Aberdeen is actively looking at the use of exchange-traded alternatives to centrally cleared OTC derivatives and sees tools such as the GMEX Constant Maturity Future as having operational benefits. “For us, certainly in a credit world, it’s a good instrument, because it doesn’t have to be rolled all the time,” says Hickmore. “This means you can have a portfolio fully hedged off against your risks all the way across the yield curve on an ongoing basis. The on-exchange nature of such products also makes them operationally simpler.”
“Buy-side users of OTC IRS are facing a capital shortfall as these instruments are forced into central clearing.”
Hirander Misra, CEO, GMEX Group
But further innovation is required if asset managers are to find exchange-traded alternatives to all their hedging needs. “We’ll certainly be looking to do more exchange-traded derivatives, but many of our clients need more custom-built interest rate swaps, which will have to be centrally cleared,” says Hickmore.
Developments in the exchange-traded market may not be sufficient just yet to replace the precision that tailored OTC derivatives can deliver to individual counterparts, but the widening range of instruments offered by trading venues in response to the central clearing mandate is drawing in new market participants. Niche money managers that might have balked at the complexity of the OTC market are exploring the new competitive landscape with vigour, says Nomura’s McCormack.
“Smaller clients that have never had access to the OTC markets are seeing this as a great opportunity to get into trading different types of products,” he observes.
Supporting roles
Despite innovations such as swap futures, some buy-side market participants will continue to want to use familiar hedging instruments, at least until the new regulatory and competitive landscape takes a firmer shape. As such, they are looking to their clearing brokers to provide execution services, access to clearing and expertise, with the latter perhaps being the most important factor for buy-side firms that have never previously dealt directly with a CCP.
“Both clearing brokers and clearing houses need to continue their efforts to raise awareness among buy-side firms of the implications of central clearing and the opportunities to maximise capital efficiencies, for example by identifying and pursuing margin offsets across similar product sets,” says Misra.
Clearing houses have had to embrace an entirely new role in interfacing directly with investment management firms, having previously dealt only with clearing members. According to Byron Baldwin, Senior Vice-President, Eurex Clearing, they are already working closely with the buy-side, notably by simulating currently unfamiliar processes in preparation for central clearing, such as posting margin, on a daily basis.
“It’s a matter of testing the pipes between the various platforms, testing the information flow, and understanding the margin calculation process.”
Byron Baldwin, Senior Vice-President, Eurex Clearing
“In our simulation programme, we take trades from execution, through to clearing and then generate the reports they would receive. It’s a matter of testing the pipes between the various platforms, testing the information flow, seeing the confirmation of trades, and understanding the margin calculation process. A lot of buy-side firms are sending us their portfolios to help them assess the margin implications of the positions within those,” he says.
For CCPs, central clearing is an opportunity to generate new relationships and revenues, but it requires a number of tweaks and adjustments to existing services and operations as well as the development of new ones. Eurex Clearing, for example, already accepts a wide range of asset types as collateral for margin payments and offers cross-margining across listed and OTC products.
In 2014, Eurex introduced new services – Direct Collateral Transfer and Collateral Tagging – to help buy-side firms to tackle new challenges thrown up by central clearing of OTC derivatives, such as transit risk, i.e. the risk that collateral directed by an asset manager to an individual segregated account resides with a clearing member at the point of default and thus does not reach the CCP. “By introducing Direct Collateral Transfer, we eradicated transit risk. With Collateral Tagging, a big fund manager with 100-plus segregated accounts can achieve operational benefits by having just one fully segregated account with collateral tags on a per-fund basis,” explains Baldwin.
Although the services required to handle the shift to central clearing are gradually falling into place, many challenges remain. Larger buy-side firms typically have a greater capacity to absorb the implications of regulatory change than their smaller counterparts. Their obligations under reforms such as EMIR are greater too of course, but smaller firms must also comply, often needing more input from their sell-side counterparts to do so.
“There is a level of clients which will have had lots of attention from their dealers and from their CCPs, but also there are a lot of smaller clients who have not had the time and attention,” notes McCormack. “It’s also important to get the message out to them, helping the clients understand their obligations and how they can prepare for them.”
In this article, Mike O’Hara, publisher of The Trading Mesh talks to Olivier Deheurles of Adaptive Consulting, Ash Gawthorp of The Test People, Eddie McDaid of Software AG and Ofer Deshe of Tobias & Tobias, about the current state of front-end user interface (UI) technology in trading platforms and investigates how firms can future-proof their investment in UI development.
Introduction
In recent years, financial markets professionals have seen a massive growth in the number and variety of trading-related applications accessible not just from their desktop, but also via the web and on mobile and tablet devices. Anyone attending an industry trade show such as TradeTech or FIA Expo these days will encounter a multitude of such trading front ends, running on a variety of platforms.
For the firms that develop these applications – whether banks or specialist ISVs – the decision around which technology to use for the front-end user interface can be critical to the success or failure of the product. Getting the decision right however, is easier said than done. UI platforms come and go at an alarming rate; what is flavour of the month one year can be almost obsolete the next.
So how can firms ensure that the front end they build today will continue to satisfy the requirements of their clients and end users tomorrow? In short, how do they navigate their way through what has become something of a UI technology mess?
A plethora of UI platforms
Looking across the spectrum of end-user platforms, there are many possibilities to consider. From a hardware and OS perspective alone, clients might be using PCs or Macs running various versions of Windows, Linux or Mac OS. They might also be using a variety of mobile and tablet devices that could be running anything from Apple’s iOS to Google’s Android, Blackberry OS or Windows 8, for example.
Factor in the choice of development platform – HTML5; browser plugins such as Flex or Silverlight; desktop environments such as WPF; native mobile apps and so on – and things start to become increasingly complicated, especially as new versions of these platforms are constantly being released.
Add to that the fact that the back-end systems that drive these end-user applications – particularly those operated by banks – are often legacy systems written in entirely different languages with their own communications protocols, and it all just adds to the confusion.
Given this fragmented UI landscape, firms developing trading applications need to be clear on who their target end users are and what platforms they are currently using, as Ash Gawthorp, Technical Director of The Test People, a leading performance engineering and testing solutions firm, explains.
“In the consumer world, web apps are designed for the latest versions of Chrome, IE, Safari and Firefox because those are the big ones in terms of usage stats”, he says. “But the elephant in the room in the corporate world, particularly in banking, is that the browser used almost to the exclusion of all others is Internet Explorer, and often older versions such as IE8. So if you’re creating an HTML5 app for example, a lot of the features and functionality just don’t exist on that platform”.
Olivier Deheurles, Co-Founder and Director at Adaptive Consulting, a software development and integration firm that works with a number of global tier one and two investment banks, agrees that the banking world may not be quite ready for mainstream adoption of HTML5-based apps just yet, even though the desire might be there.
“With Silverlight and Flex dying because they are not going to be supported for much longer, everybody is looking at HTML5, especially in the retail and consumer market”, he says. “But the user base of banks is very, very different from the user base of the likes of Google and Facebook. And the fact is that very few of the elements of HTML5 that are useful for building trading apps are implemented in IE8 and IE9. It’s not that HTML5 is not a solution, it’s a good and relevant one, but there are challenges. It’s not as simple as people might think”.
If HTML5 isn’t the answer for investment banking and trading applications (yet), what is?
“The user base of banks is very, very different from the user base of the likes of Google and Facebook. And the fact is that very few of the elements of HTML5 that are useful for building trading apps are implemented in IE8 and IE9.”
Olivier Deheurles, Adaptive Consulting
Performance
As is so often the case in these situations, the answer depends. Developers of front-end applications need to have a clearly defined set of criteria to help them choose the appropriate UI platform(s) to focus on.
Another key factor to consider when developing trading apps is performance. Regardless of how rich an application’s functionality is, if it is too slow and unresponsive, people just stop using it, so there are various factors that need to be taken into consideration around performance.
Deploying to a global client base, for example, can create problems around latency due to the physical distance of the app’s users from the web servers. Another factor that can affect performance when building new UIs on top of legacy applications is that those legacy systems are generally not designed to minimise the number of requests back and forth when delivering content out to the browser.
“When you’re deploying to a global client base it’s very easy to build a web app, but people often don’t take care of the problems that can create; they don’t think about latency and the fact that if you’re rolling the app out globally, then the distance away from your web servers may be a problem in itself. Another challenge is where you’re building new UIs on top of legacy applications, which probably haven’t been designed to minimise the number of requests back and forth when delivering content back out to the browser”, he says.
How to choose?
There are various other criteria that need to be considered when choosing a UI platform, says The Test People’s Gawthorp. One is installation friction. “If you’ve got a fat client application, then the upgrade process can be difficult, whereas if it’s in a browser it’s a lot easier. But even in a browser, you need to consider which plug-ins the app requires, because an HTML5 app for example requires no additional software to be installed in the browser, whereas Flex or Silverlight apps require plug-ins that need to be downloaded, installed and kept up to date all within the organisation’s standard release cycle”.
Availability of development resources is another important point, says Gawthorp. “Web developers are far easier to find than good Flex developers for example, so the availability of those skill sets and the price that you pay for them is a key consideration”, he says.
Another factor to consider is longevity. Given that UI technology moves so fast, developers need to have an idea of how long the front-end UI platform is likely to be around for. Other criteria might be how easy or difficult it is to test and release the app on the chosen platform, the total development costs and even the ‘stickiness’ of a desktop app versus a web-based app. Whereas a fat client sitting on a desktop is always there and can be heavily branded, anything running in a web browser disappears as soon as the browser is closed.
Adaptive’s Deheurles explains how issues around workflow are particularly important when developing trading applications for mobile. “The workflows and functionality that you expose on different UIs are completely different”, he says. “A desktop application has a very rich experience with lots of functionality and supports expert users. On mobile, we see people building more specific workflow-driven approaches, which perhaps integrate with both desktop and mobile clients. It’s critical, in this case, to have a shared backend and API so you can reduce costs drastically and consume the same functionality on different platforms but expose it differently on those devices.”
Future-proofing
Given the challenges, the range of choices regarding UI technologies and the fact that the trading application space is so competitive, what are some of the best practices that firms can adopt in order to ensure they future-proof their investments in UI development?
“By keeping as much common platform development on the back-end as possible you can reduce complexity on the front-end; making everything event-driven and reactive is a great way to achieve this.”
Eddie McDaid, Software AG
The answer may be somewhat counter-intuitive in that it involves investing more heavily on the server side – rather than the UI itself – to enable multiple front-end technologies to work with a common infrastructure.
“Having to redevelop business logic in each UI can sap precious development resources”, says Eddie McDaid, Head of Product Management for Streaming Analytics and Big Data at Software AG. “By keeping as much common platform development on the back-end as possible you can reduce complexity on the front-end; making everything event-driven and reactive is a great way to achieve this.”
Gawthorp agrees. “The choice of the right enterprise messaging technology is crucial, because the right one will provide bindings and APIs for all the different UI technologies enabling reuse of the same backend services – both for today and for new client technologies in the future”, he says.
These are the principles that the Adaptive Consulting team have been following for some time when working with investment banks. “Realistically, technology on the back end is always around for much longer than you’d expect, maybe 15, 20 years”, says Deheurles, “whereas UI technologies move much more quickly and also design fashions change, so you need to keep things nimble and really keep the costs down at the front end. The back end services have to survive multiple generations of UIs, so they should be very highly engineered and provide very strong APIs to support multiple UIs and UI technologies.”
“The choice of the right enterprise messaging technology is crucial, because the right one will provide bindings and APIs for all the different UI technologies enabling reuse of the same backend services – both for today and for new client technologies in the future.”
Ash Gawthorp, The Test People
A real world example
Adaptive’s Deheurles gives an example of how investing more heavily in the back end than the front end can work in practice. “Say you are streaming prices to an FX trading application and you need to provide different prices to different groups of clients. The problem with just pushing out core prices and applying logic and calculations in the UI – which is certainly efficient from a server-side infrastructure perspective because you are mutualising lots of stuff and streaming the same thing to everybody – is that if you want to build the app on another UI platform to do the same thing, you’ll need to port all of the functionality and implement it all over again. So a better approach, even if it’s more expensive up front, is to do that kind of complex business logic on the server side and have the UI free of all this logic. That way you can still build a rich user experience but the UI is a lot thinner and doing a lot less”.
The kind of functionality that Deheurles describes can be achieved through the implementation of flexible messaging fabrics that integrate with technologies such as complex event processing (CEP) engines on the back end and multiple UI platforms on the front end, according to Eddie McDaid of Software AG. “By combining the CEP / streaming analytics engine and the messaging system, you can control pricing subscriptions from the back end rather than the UI”, he says. “The pricing engine on the server side can contain the logic enabling it to become the orchestrator of distribution, and the benefit is that it simplifies the application logic on the remote application. The more logic you have where it’s easy for you to manage and control it – i.e. on the server – the better, because not only do you get more simplicity, you also get better performance”.
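To make the principle concrete, the following is a minimal sketch, assuming hypothetical client tiers, spread figures and function names rather than any vendor’s actual API: the per-client pricing logic lives entirely on the server, so every front end, whatever the platform, simply renders the quotes it is sent.

```python
# Minimal sketch of the "thin UI, rich server" idea described above. The tier
# names and spread figures are illustrative assumptions, not a real system.

CORE_PRICES = {"EURUSD": (1.08010, 1.08020)}          # bid/ask from the pricing engine
TIER_SPREAD_BPS = {"platinum": 0.2, "gold": 0.5, "standard": 1.0}

def tiered_quote(symbol: str, tier: str) -> dict:
    """Apply client-group pricing on the server before publishing."""
    bid, ask = CORE_PRICES[symbol]
    mid = (bid + ask) / 2
    half_spread = mid * TIER_SPREAD_BPS[tier] / 10_000 / 2
    return {"symbol": symbol,
            "bid": round(mid - half_spread, 5),
            "ask": round(mid + half_spread, 5),
            "tier": tier}

# Any UI - desktop, web or mobile - just subscribes and displays the result:
print(tiered_quote("EURUSD", "platinum"))
print(tiered_quote("EURUSD", "standard"))
```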
Know your users
Possibly one of the most important things to keep in mind when navigating through the UI tech mess, is that – regardless of the technology components – for any UI to be successful it has to address the needs of the people who are using it. That means that the UI designers need to understand the total environment of those users, according to Ofer Deshe, Managing Director of Tobias & Tobias, a firm specialising in human-centred system design.
“The key element of the UI is the user experience”, says Deshe. “In order to get the UI itself right, you require a deep understanding of the people who will be using it, not only in terms of their business requirements, but also in terms of their psychological, cognitive and behavioural characteristics; their work flow; their mental models; how they collaborate; how they work, and so on. If you can gain a deep understanding of all that, you can choose the right UI elements and information architecture that will make the user experience truly successful”, he says.
“The key element of the UI is the user experience. So in order to get the UI itself right, you require a deep understanding of the people who will be using it.”
Ofer Deshe, Tobias & Tobias
Deshe, whose background is in Psychology and Cognitive Science, creates models to discover how people working in a capital markets environment retrieve and process information in order to make decisions. He explains how Tobias & Tobias works closely with Adaptive Consulting on design projects such as those for single-dealer platforms.
“We conduct a contextual enquiry, which is where we sit and observe people in their natural environment to understand how they think”, he says. “We look at how they work and what they do in the context of their environment so we can create prototypes. And Adaptive are able to bring these to life very quickly so they can be put to the test”.
Conclusion
It is clear that there are many factors that need to be considered when developing front-end user interfaces for trading applications, from both a user experience and a technology perspective. As Ofer Deshe summarises, getting the user experience right is good for business.
“Get it right first time and you will see faster – and greater – return on investment, high rates of user adoption and user satisfaction”, he says. “Get the user experience wrong and you will suffer from slow user uptake, poor and inefficient user interaction, higher rates of human errors, reduced revenues, higher training costs, and unnecessary increase in post-implementation change requests due to uncovered user requirements and usability defects. Even more importantly, your brand is not simply a logo, or a standard set of visual treatments. Your brand is also the way people experience your software products and digital services. Poor UI leads to brand failure. Great UIs strengthen brands and enable new opportunities”.
There are also some solid principles that should be followed from a technology perspective. Adaptive’s Deheurles sums these up.
“On the front end itself, know your users and focus on the platforms they are likely to use. Be aware of the fact that the lifespans of these UI platforms are short, so don’t invest too heavily in any one platform. Instead, invest more on the back end and messaging technology that will enable your server side infrastructure to continue to operate with a constantly changing landscape of UI platforms”.
By following these principles, firms can minimise the significant costs and problems of constantly re-developing trading apps for every new UI platform that comes along and navigate a clear path through the ‘UI tech mess’.
In this article, Mike O’Hara, publisher of The Trading Mesh talks to Ash Gawthorp of The Test People, Matt Barrett and Olivier Deheurles of Adaptive Consulting to find out just how critical it can be for companies to take a prudent and careful approach when setting non-functional requirements for new technology.
Introduction
Non-functional requirements can make the difference between game-changing technology and expensive flops. A non-functional requirement doesn’t determine whether the product will work, but it can determine whether the product will work well. In one widely cited study, German and Japanese engineering researchers noted the failure of the Mercedes-Benz A-Class to pass the ‘Moose Test’ as an example of what can happen when NFRs are not adequately considered [1]. The Moose Test, which has been around for decades, involves a vehicle making quick turns to avoid an obstacle, as if a car suddenly had to avoid a moose. German auto engineering has a reputation for excellence, but in this instance, which took place in the late 1990s, the Mercedes overturned when put to the test. So how can businesses make sure their technology will pass whatever equivalent Moose Tests they’ll face? The answer appears to have as much to do with mind-set as with technological expertise.
Holding the line
During those critical moments when requirements are being defined, Ash Gawthorp knows that businesses need to have one eye on the now and one on the horizon. Setting and defining non-functional requirements at the start of a project often doesn’t give you immediate value but pays dividends a little further down the line as the software matures.
“Setting non-functional requirements is all about considering the characteristics that don’t immediately spring to mind from the product point of view, but must be defined and embedded in the approach from the start to avoid them biting you further down the line,” said Gawthorp, Technical Director at consulting firm The Test People.
In the case of functional requirements, it’s all about the present. As Gawthorp said, something either works or it doesn’t, and most people will be able to spot the difference. But with NFRs, it’s not always clear how important a requirement will become later on. “It’s the characteristics and features that are either invisible to you as a user, or only impact the user experience under specific scenarios – hardware failure or high concurrent user load, for example,” he said. “And if you don’t invest time and effort into it, then the risk of something going dramatically wrong increases significantly over time.”
All businesses are under pressure to get new features and functionality out of the door in their desire to get ahead of the competition. This, Gawthorp said, often comes at the cost of good engineering practices, resulting in technical debt. “If you are under pressure to get new features out of the door faster and faster, then often a few corners have to be cut… if you don’t invest the time and effort to tidy up and refactor, you end up building this ever-increasing debt of technology that has to be addressed at some point.”
His firm sees this behaviour frequently. It requires strong leadership in IT to push back on the business, make it aware of the risk of technical debt building up, and ring-fence budget to ‘shore up’ any corners cut to meet feature requirements. Failure to do so puts the entire platform at risk of catastrophic failure. “It’s essential for IT to ‘hold the line’; they have a responsibility to ensure the platform continues to adhere to NFRs, and where it starts failing, time must be spent to remedy the problem early, not ignore it and fold under the pressure of ever-increasing feature requests.”
Many people believe that by thinking about performance and scalability, they’re doing enough to future-proof their efforts. Performance and scalability definitely matter, but there may be a dozen or more non-functional requirements that can be equally important. Response time and throughput were a couple of examples he gave.
“If you are under pressure to get new features out of the door faster and faster, then often a few corners have to be cut… if you don’t invest the time and effort to tidy up and refactor, you end up building this ever-increasing debt of technology that has to be addressed at some point.”
Ash Gawthorp, The Test People
Setting good NFRs also means having the ability to test and monitor them. This is where technologists can help a business make better decisions. Say a business owner wants to add a particular feature to a product or system. With the right NFRs in place – and the means of measuring them – product developers are in a position to explain what impact the new feature will have on performance.
Sometimes, the impact of a single change is negligible but a series of small tweaks can collectively take a toll over time. “Often what causes failure in systems is not one huge single change or event when people can say, ‘Okay, we’ve done this and it’s suddenly fallen off the edge of a cliff’,” Gawthorp said. This can be especially true when companies launch a system that fails to perform in the way intended or to do everything the architecture was designed for. “As time goes on, more features and functionality are added; without investing the time to address tech debt, the purity of the design degrades, resulting in poorer performance and longer times to add new features, with greater risk,” he added.
Another key factor: money. Too often commercial executives view technology as a matter of cost rather than revenue. “At the other end of the scale is where an organisation sees that it’s not just setting NFRs, but ensuring that a system adheres to them as a real profit centre,” Gawthorp said. If business owners consider the amount of money they make as proportional to the number of users they have on a system, adding users and delivering a good service with the same cost or infrastructure becomes a major windfall.
The cost-revenue prism is just one aspect where a company’s mind-set is important. In the context of NFRs, the wider nature of the business-IT relationship becomes paramount. How business owners communicate and work with developers can make all the difference.
The negotiation
“There is a gap, as there always is, in communication between business and IT,” said Matt Barrett, a director at technology consultancy Adaptive. “A lot of the time when we get requirements, we get them without context.”
For instance, a company may say a system needs to process an event in a certain amount of time or handle a specific load. But it will neglect to give the business context, so IT will not have any understanding of how that requirement may change over time. Barrett’s advice: spend time on understanding context.
“If we just all shared the same context about the use of the application and the business case for it, the non-functional requirements would be obvious.”
He suggested that IT technologists’ reluctance to engage properly with the business often leads to an unhelpful insistence that they be given concrete numbers for NFRs. “It would be much better to just understand the business case and then it would be apparent what the non-functional requirement was,” Barrett said.
Gawthorp of The Test People describes the early stages of a project in terms of a negotiation.
“The negotiation implies two teams set up across the table from each other, seeing who will give up the least,” he said. But it doesn’t have to be adversarial. It should just be an open discussion where both sides learn a great deal. “When it works well, the business is able to explain to technology why it is that they feel this pain and why it is that something needs to be a certain way. Conversely, IT is able to explain to the business why something is so much harder or riskier to do in one instance when it may be trivial to do in another way,” he said.
To make that negotiation successful requires communication and that in turn depends on at least two things: a common language and trust.
“There is a gap, as there always is, in communication between business & IT. A lot of the time when we get requirements, we get them without context.”
Matt Barrett, Adaptive Consulting
Negotiations sometimes don’t get off the ground because IT developers are not aware of NFRs in the first place. Suddenly, a business owner wants to know why a function doesn’t take place quickly enough and frustrated developers respond that no one ever specified that time requirement.
“Part of the challenge is that common language,” Gawthorp said.
The ability to speak the same language can depend on who is at the table. For instance, he said development and test teams often are more involved with the business because they spend more time understanding business requirements and turning them into software; further downstream, teams such as deployment, infrastructure or networking tend to have less involvement with the business, although the areas they manage can affect important product or service features such as resiliency, robustness and security.
“The conversation needs to happen where IT can explain to the business what they can do and can’t do, what’s easy and what’s hard for them, and to see how everybody can move forward,” Gawthorp said.
The language gap may be so pronounced that, at times, one side of the table is not even clear whether the other side wants something in the first place. Many applications fail to fulfil NFRs simply because the business never communicated them to the developers. And for product managers, it is not always clear whether they should be the ones initiating the discussion about NFRs.
A matter of trust
Beyond the common language issue, there is a more fundamental factor: trust. Both sides need to be transparent about how important different requirements are. That’s where face-to-face discussion becomes paramount. If a business owner starts out saying that a certain requirement is ‘non-negotiable’ only to become more flexible when the development team says it can’t be done, it reflects a lack of honesty in the conversation.
Sometimes requirements are just handed down from on high. In such cases, if they are onerous, IT staff may not take them seriously or they may simply say the requirements are just not possible.
The main word that Gawthorp stresses is collaboration. He imagines a diagram with business owners on the left hand side and various technologists, each of whom has a part to play in the product pipeline, on the right. “There are often small changes that each of those teams can make, that will make the lives of the upstream and downstream teams far, far easier, but frequently don’t happen,” he said.
Sometimes that happens because teams work in silos. Other times, the common language factor is at play. “Really a conversation needs to go on around what is realistic and what is achievable,” he said.
Genuine collaboration between IT and business is actually a relatively recent phenomenon.
“It’s only recently in the last 10 years or so where IT and technology has been able to open the box a little bit and let the business have a look inside,” Gawthorp said.
Olivier Deheurles, a co-founder and director at Adaptive, said it can be important to have at least one technical person working within the business team and helping to shape NFRs.
When a dialogue is working well, a business owner will be able to be more transparent when discussing the importance of certain requirements, or even the lack of immediate clarity about how precise they need to be. At the same time, IT staff will feel they can be honest about what they can and can’t deliver. That fosters the conditions for practical solutions such as having a fast feedback loop.
“It comes down to being agile,” Deheurles said. “It is not about building massive specifications up front and knowing everything.”
The irresistible force paradox
To illustrate what collaboration doesn’t look like, Gawthorp described a scenario where IT asks a business owner for an NFR on a log-in time. “IT says, for instance, ‘What is an acceptable time for a user to log into this system?’ and the business’s response is, ‘As fast as possible’.”
Alternatively, if a business owner suggests a time that would be prohibitively expensive to achieve and is therefore unrealistic, it can kill a conversation stone dead. In those scenarios, IT will often just go away and do whatever it thinks best, knowing it cannot achieve the stated target.
“Frequently an NFR will be provided with little or no investigation into the business benefit realised. Say ‘ninety percent of logins must take under five seconds’: five seconds may require a huge engineering effort, but eight seconds is easily achievable, so questioning the business value is important to avoid wasted effort,” Gawthorp said.
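Taking Gawthorp’s example NFR at face value, the sketch below shows one way such a requirement could be expressed as a measurable check rather than a sentence in a specification document. The timings are randomly generated stand-ins for data that would, in practice, come from monitoring a real system.

```python
# Hypothetical harness for the example NFR quoted above: "ninety percent of
# logins must take under five seconds". The login timings are simulated.
import random

def percentile(samples: list[float], pct: float) -> float:
    ordered = sorted(samples)
    index = min(len(ordered) - 1, round(pct / 100 * (len(ordered) - 1)))
    return ordered[index]

# Stand-in for login durations gathered from production monitoring
login_times_s = [random.uniform(1.0, 9.0) for _ in range(1000)]

p90 = percentile(login_times_s, 90)
print(f"90th percentile login time: {p90:.2f}s")
print("NFR met" if p90 < 5.0 else "NFR not met")
```

Measured this way, the conversation Gawthorp describes can move from “as fast as possible” to whether the observed percentile justifies further engineering effort.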
“It comes down to being agile. It is not about building massive specifications upfront and knowing everything.”
Olivier Deheurles, Adaptive Consulting
Despite the precision required to trade successfully in today’s markets, pinning down hard numbers can actually be problematic. In low-latency systems for example, it can be very difficult to specify what is the appropriate NFR to be targeting in terms of what is realistic and what is gold-plating.
In the basic example of the login time that Gawthorp gave, it may come down to whether to invest in lowering the overall time, or in having a slightly longer login but eliminating outliers where users are kept waiting for long periods of time.
“It’s almost a bartering system essentially, a give and take between what the business is looking for and what is realistically achievable and understanding those points.” He adds: “It’s really a question of understanding where the pain is and where to invest.”
This can involve weighting certain NFRs differently. But even if companies can have those honest, transparent conversations about trade-offs, it is still not a guarantee of success. They also need to have them early on in the process. Architectural decisions at the beginning of a project can have huge implications later in terms of scalability or any number of other NFR factors.
After all, it is rarely the case that there is one clear-cut way to meet a goal. Barrett said: “There are many different ways to achieve something when you are building these platforms but they all have trade-offs and the non-functional requirements are what guide you to make the right trade-offs.”
It’s not all about the numbers
Sometimes, getting the NFRs right can be more about feeling than numbers.
“It doesn’t always require specifying an exact number,” Barrett of Adaptive said.
A company need not create an iron-clad service level agreement and spend time and energy forcing a provider to adhere to it strictly. A more sensible approach could involve just building the metrics and looking at the trends over time. “Don’t say the number. Say that you want to be able to observe the property of the system,” Barrett said.
He describes the ideal scenario, which involves not only collaboration, but also flexibility and context. “I think a successful conversation around non-functional requirements would lead to a development environment and a team situation where you could go up to any one developer, in an ideal world, and give them alternatives for an implementation and they would know which one was right, because they understand the non-functional requirements.”
Conversely, when NFRs are not widely understood, the IT decision-making process can become chaotic. In such an environment, when faced with a requirement for a new feature, every developer on a team may have a different take on the best way forward.
“Ten or fifteen different ideas come up and are all readily discussed and they have very different or wildly different complexity profiles, throughput characteristics or other second order effects,” Barrett said. “And there is no way to decide, because no one understands what the non-functional requirements are. No one has got a clue. So everything is done ad hoc, inconsistently.”
Barrett said that in any given project, there will always be those who understand the importance of non-functional requirements, but they are not necessarily the people who make decisions.
Still, he believes the level of understanding is improving.
“I think it is getting better. People have seen these projects go wrong enough times that now they understand that it’s probably due to non-functional requirements where most of the disappointment lies. Because I think that, actually, within the financial services industry, we are not that bad at delivering software. It may take a long time and it may be expensive, but when you get to the end of it, it tends to do what it was supposed to do. The problem is that perhaps all the things it was supposed to do weren’t specified – and that’s where the pain points come from.”
NFR pitfalls – A top ten list
Creating the conditions for business-boosting NFRs is not just about what business owners and technologists do. Often it is about what not to do. Based on the views from our experts in the field, we’ve created a top ten list of pitfalls to avoid.
• Adopting a hierarchical rather than collaborative approach to setting NFRs
• Considering NFRs late in the process (eg: after architecture has been decided)
• Focusing on time to market above all else
• Demanding only concrete NFR numbers rather than business context
• Setting unrealistic or prohibitively expensive targets
• Not allowing for honest back-and-forth discussion between IT and business
• Calling for open-ended NFRs such as ‘as fast as possible’
• ‘Gold-plating’ a feature unnecessarily, at the expense of other factors
• Not taking due account of a business model (eg: where is the pain?)
• Focusing exclusively on the target rather than the metrics and the trends
In this article, Mike O’Hara, publisher of The Trading Mesh, talks to Paul Reynolds of Bondcube, Peter Fredriksson of Baymarkets, Stu Taylor of Algomi and Andrew Bowley of Nomura to discuss ways the financial industry can work with the buy-side to address some of the structural problems that have led to chronic illiquidity in the fixed income market. Also, from the buy-side perspective, Gianluca Minieri of Pioneer Investments spoke with Mike about the issues buy-side firms face in today’s market.
Introduction
The fixed income market has undergone a dramatic transformation in the years since the global financial crisis. Banks have shrunk their balance sheets massively and stopped warehousing large holdings of bonds, a trend which has prompted a sharp decline in secondary market trading. In the meantime, the overall size of the market has grown by 50%, and recent reports suggest a supply shortfall of as much as $400bn remains. Meanwhile, attempts to emulate equities and create fixed income central limit order books have yet to gain traction. Bondholders are reluctant to post orders via Requests for Quotes (RFQ) or fully lit Central Limit Order Books (CLOBs), liquidity remains fragmented and both the buy side and sell side struggle with how to make the market work to everybody’s benefit. The solution, some firms say, can be found in two words: innovation and collaboration.
“We are now in a situation where RFQ works very well for liquid bonds and small orders, but RFQ does not serve large and illiquid orders due to too much market risk.”
Paul Reynolds, Bondcube
When the hurricane hit
Paul Reynolds, CEO of trading platform Bondcube, says the extreme volatility in all asset classes seen in October 2014 offers an important lesson. Whereas before, many participants believed the problems holding back secondary market trading were restricted to the less liquid corporate debt market, during this bout of market instability the situation became so dysfunctional that even US Treasury bonds stopped trading.
“There was the living proof that, actually, this is a major market malfunction. Everyone needs to get round the table now, not just banks, not just buy sides, not just technology, but regulators as well. It has got to be a four-cornered discussion.”
As it currently stands, the market is so fragile it can buckle under pressure and at times fail to accommodate even the most basic needs. This is not for lack of new ideas. A number of new trading platforms have come on stream in recent years. Some, like Bondcube, aim to create ‘all-to-all’ markets where the buy-side can post orders and trade with each other. Other firms, such as trading systems vendor Baymarkets, have sought to preserve some of the best features of the traditional OTC model while addressing structural issues that have held back secondary market activity.
Bondcube and Baymarkets are two examples of firms that are offering modern solutions to long-running problems. Bondcube, in bringing buy-side players together, is using some of the successful concepts employed by social media companies to offer investors a way to navigate the market. Baymarkets, in opting for a hybrid model where the sell-side retains a central role, also focuses on information flows and aims to provide enough information to trade without fully emulating a lit market such as equities.
The temptation to make fixed income markets look and act like equity markets has been strong. If only someone could build a venue where everyone felt comfortable posting orders and could enjoy a wealth of information via a Central Limit Order Book (CLOB) and various data providers. But there are good reasons why large fixed income investors are wary, says Peter Fredriksson, CEO of trading platform vendor Baymarkets.
“The values are so large that investors fear tipping their hand the moment they post a price. They don’t want to tell anybody where they are, what they want to do and how much,” Fredriksson says.
The answer to many structural issues, according to Reynolds, starts with technology but it doesn’t end there. “Technology is extremely good at networking people and collecting data and making data available. We have got to turn this market into something that looks a bit more like Amazon or eBay.”
Who knows what
One of the problems is figuring out how to create a method to give the right people the right information. For instance, imagine a firm is looking to unload a large amount of bonds. For a sell-side firm, that information becomes valuable as it can have an effect on the price of future sales. But another buy-side firm just wants to know the latest price and volume for valuation purposes. The latest quote may be available and is useful, but it is potentially not as useful as the latest dealt price and volume.
The US market recognised this with the development of TRACE (Trade Reporting and Compliance Engine) in July 2002. All broker/dealers who are FINRA member firms have an obligation to report transactions in corporate bonds to TRACE under an SEC approved set of rules. That means that the US corporate debt market is more transparent, Reynolds says.
Stu Taylor, founder of specialist fixed income data provider Algomi, says a buy-side firm typically has little information to go by when it seeks to execute a transaction. “If they want to do it in any kind of size, then who do they go to? How do they get a two-sided quote rather than a one-sided quote? How do they know who might have the other side of the trade that they want to do?”
“Nowadays, the best of the online businesses – the Googles, the Facebooks, the LinkedIns of this world – essentially put the individual in the middle of the universe and they surround them with information and data and let them draw their own natural connections.”
Stu Taylor, Algomi
Reynolds says investors are coming up with interesting ideas to gain new information. One Bondcube client is looking to scrape any chat, voice conversations or email in real time.
“The very nature of fixed income, with all these hundreds of thousands of securities that never trade, is that it has got to be data driven. And we have the technology. It has existed for years. Why don’t we use it? It needs to be done in a collective way,” Reynolds says.
Information flows are paramount, adds Taylor. “What you’ve really got to do to solve the problem is to get the right information to the right people at the right stage in the transaction, whether that’s pre-trade, during the trade or post-trade. You’ve got to also join the dots and connect people together. We think that’s a natural extension of what the sales role is today. We think it’s a natural extension of the client.”
Bondcube works by having users submit indications of interest (IoIs). Matching of IoIs is performed darkly: everyone may be asked whether they are a match, but the only person made aware of a match is the one on the other side of the IoI. Buyer and seller then work out size and price, with the execution price falling between the bid and offer.
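A heavily simplified sketch of that dark matching behaviour is shown below; the structures and names are illustrative assumptions rather than Bondcube’s actual implementation.

```python
# Illustrative sketch of dark IoI matching: nothing is broadcast to the wider
# market, and only the owner of the resting interest on the other side of a
# match is notified. Not Bondcube's actual matching logic.
from dataclasses import dataclass

@dataclass
class IoI:
    owner: str
    isin: str
    side: str          # "buy" or "sell"

resting = []           # IoIs waiting for the other side to appear

def submit(ioi):
    """Return the counterparty to notify if a match exists, else rest the IoI."""
    for other in resting:
        if other.isin == ioi.isin and other.side != ioi.side:
            resting.remove(other)
            # Only these two parties go on to negotiate size and a price
            # between the bid and offer.
            return other.owner
    resting.append(ioi)
    return None

submit(IoI("Fund A", "XS0000000000", "sell"))
print(submit(IoI("Fund B", "XS0000000000", "buy")))   # -> Fund A
```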
To have those conversations, participants still need a sense of where the market is and where it has been. Bondcube helps address the problem by allowing investors to see historical volume for each bond. For less liquid bonds that rarely trade, participants can see old orders that have not been filled and rejuvenate them.
The model borrows from social media firms by essentially putting each market participant at the centre of his or her network. Trades are not just about putting prices on screens and matching buyers and sellers.
“Nowadays, the best of the online businesses – the Googles, the Facebooks, the LinkedIns of this world – essentially put the individual in the middle of the universe and they surround them with information and data and let them draw their own natural connections,” Taylor of Algomi says.
Different methods
There are three main ways that bonds get traded. The first is the interdealer market, which excludes the buy side. The second is dealer-to-client, in which interdealer traders cannot participate. Finally, there is all-to-all, which is the model that Bondcube has offered.
What has kept the fixed income market going is the constant stream of new debt. Primary market volumes have reached record levels as borrowers seek to take advantage of historically low interest rates.
“You have got a new issue spigot that is pouring new bonds into the market every day. It only stops when a hurricane blows through town. As soon as the hurricane is gone, it is pumping away again. The market gets bigger, and bigger, and bigger, and yet, its infrastructure is crumbling around it,” Reynolds says.
In those periods when new issuance dries up, the market, as it currently stands, is in trouble. But it’s not all doom and gloom.
Andrew Bowley, head of market structure strategy for EMEA at Nomura, says there is a wealth of development taking place, with new, and potentially disruptive, ideas being floated to address the conundrum the fixed income space finds itself in. “And if there’s any time that is a good time to bring in new, alternative platforms, then with the structural issues and with the regulatory change coming, now is probably the best time to do so,” he says.
“If there’s any time that is a good time to bring in new, alternative platforms, then with the structural issues and with the regulatory change coming, now is probably the best time to do so.”
Andrew Bowley, Nomura
Despite some stark differences between fixed income and equities, Bowley says the latter still offers some pointers for the debt market. “It’s around things like best execution. It’s about the commercial construct and relationships and things that perhaps haven’t been considered in depth before in fixed income,” he says.
But to create a market where the concept of best execution is even possible, more information is necessary.
Taylor says the liquid part of the bond market is not the problem. There are a good 500-1,000 issues that are regularly traded, where various bond trading platforms and voice-broking solutions meet investors’ needs.
In that respect, Taylor says that rather than reinventing the wheel and trying to create a totally electronic market, a hybrid-voice model makes sense. “Quite frankly, it’s how the trades are happening today. It’s just, can we make that process work better?”
For Baymarkets, the way to make the process better has been to create what it calls a “twilight” pool, which is somewhere in between a lit market and a dark pool. Participants can see some aspects of the market, but not all the information in a traditional lit market.
“It gives a flavour of where the market is. If there’s a reference price somewhere or a mid-price somewhere and then there are indications that there are activities in the market, then at least people know the rough price level and that there is activity around that level,” Fredriksson says.
Taylor says Baymarkets is a good example of a hybrid model. “The whole point is designed around a broking concept and it’s designed to complement electronic with running voice process for certain trades,” he says.
Another idea is to try to concentrate liquidity at set times with secondary market auctions, based on sectors or geographies.
“Instead of having a CLOB or an auction running all day, typically you run auctions for 15 minutes only for a sector,” Fredriksson says. “You focus liquidity to a much smaller time during the day.”
Such auctions allow investors to find counterparts in a much shorter time. Taylor says bespoke auctions have a good deal of appeal. “You’re combining what is a partly electronic market and extending it by allowing the sales people to take control, invite selected clients into a negotiation process while still giving the benefits of matching technology but without completely taking the sales person out,” he says.
“Instant transparency is probably what the buy-side doesn’t want. They don’t want to tell everybody about their intentions.”
Peter Fredriksson, Baymarkets
Pitfalls to avoid
Providing transparency in the way that other markets do is not the answer, according to Fredriksson.
“Instant transparency is probably what the buy-side doesn’t want. They don’t want to tell everybody about their intentions,” he says. “It’s good to have a price level defined, but I doubt that the buy-side wants too much transparency beyond that.”
Fredriksson says one route to consider is the idea of ‘work-ups’, where traders establish a price level and then keep trading until one of them is done. “Then the remaining interest goes out to everybody else in the market. There’s your reference data,” he says.
“That’s one way to attract liquidity because once the price level is established other people may want to join in.” Baymarkets provides this feature and Fredriksson says it is similar to the RFQ process.
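As a toy sketch of the work-up idea, purely for illustration (the function names are invented and this is not Baymarkets' implementation): two counterparties trade at an agreed level until one side is done, and any residual interest is then broadcast to the wider market as a reference point.

```python
# Hypothetical illustration of a 'work-up': trade at the agreed level until one
# side is done, then broadcast the leftover interest to the rest of the market.
def work_up(price: float, buyer_size: int, seller_size: int) -> int:
    traded = min(buyer_size, seller_size)
    residual = abs(buyer_size - seller_size)
    if residual:
        side = "buy" if buyer_size > seller_size else "sell"
        broadcast(side, residual, price)   # this becomes the market's reference data
    return traded

def broadcast(side: str, size: int, price: float) -> None:
    print(f"Market-wide invitation: {side} {size} at reference level {price}")
```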
Another concern: too much disintermediation. Eliminating the ‘middle man’ is often thought of as a good thing, but the bond market needs some mediation. “We think that any solution that’s going to work needs to be done with the support of the major dealers, not fighting against them,” Taylor says. For its part, Bondcube has welcomed dealers onto its platform to act as the intermediary between buy-sides.
Finally, the sell side, buy side and vendor community have to recognise that the fixed income market has its own set of requirements that mean the equity market model is of limited value.
“You can’t just apply the equities structure and the equities type technology to the fixed income market because they’re two very, very different beasts,” Bowley of Nomura says. “What is really going to be required is some really interesting innovation as to how the technology can be brought in to apply to the way the fixed income market operates.”
The idea that fixed income markets can migrate en masse to a CLOB-style structure is widely rejected, but central limit order books could have a role to play. More likely in the market’s evolution is a system based on participants’ roles, such that agency- and principal-based businesses can operate side by side.
“It’s less about the platforms and more about the infrastructure and the environment. To me the more interesting part of the landscape is actually the desktop that we have here and the information available to our traders and our sales people,” Bowley says.
People may think of fixed income as an RFQ market, but transactions within a buy-side firm often come through to the buy-side dealing desk as order flow. “As the buy side technology landscape evolves, that order handling of the buy side would actually be easier to use and follow through in an order-handling framework,” the Nomura executive says.
“So the combination of various structures in the RFQ world and the interaction of the RFQ world and the order handling world are going to be the trickiest dynamic for people to try and get their heads round and link up.”
Regulators welcome
Financial markets in the post-crisis era have generally been wary of regulatory reform. While many have accepted that stricter oversight was inevitable after such a jarring period of market upheaval, the costs and disruption from so many new rules and requirements have been a running concern for much of the market.
But the fixed income world may be different. Its structural problems, and the disincentives for some parts of the market to change the way they do business, means that this is one area where an external nudge could be welcome.
As it is now, much of the data that investors want and need to conduct transactions gets hoarded.
“Unless we begin to address this fundamental infrastructure, we are never going to get there,” Reynolds says.
Regulators, he says, should offer a quid pro quo, where the sell side is encouraged to disclose more information in exchange for greater latitude in other areas. The message would be something like this: “You start producing data that is available like it is in other asset classes. That will help people understand where bonds are trading. We will give you a little bit of bandwidth on what you can do.”
MiFID II is starting to drive people to think about where they want to position their franchise in the landscape, Bowley says. Regulatory pressure will be an important dynamic in areas such as where best execution obligations may sit in the future.
As order books do develop, or even if the dominant model remains RFQ-based, buy-side firms will still need to decide which and how many platforms they will want to be part of. In some cases, that means exchange membership, which brings with it oversight, regulation, supervision and due diligence. Bowley says Euronext offers a good example of buy-side participation in an order book-led market. Buy-side firms are typically intermediated by a sell side firm that sponsors their access to the exchange.
“MiFID II is not only defining these structures more clearly but is also increasing the due diligence that an exchange has to put on its members, for instance. So I think the dynamic between some of the platforms and the buy-side is going to have to evolve as a result of the status of these platforms,” Bowley says.
Numerous question marks remain from all four corners of the fixed income world – the sell-side, the buy-side, vendors and regulators. Will regulators push for change? Can the sell-side work side by side with market innovators to alter traditional trading practices? Is the buy-side ready to do business in a new way? The answers to many of these questions will take time. But what is clear is that the fixed income market is undergoing a major transition, one that holds out the promise of a fairer, more effective trading environment.
The buy-side perspective:
The view from Pioneer Investments
Gianluca Minieri is Global Head of Trading at Pioneer Investments, one of the world’s biggest investment firms. The group has nearly 200 billion Euros under management, 60% of which is invested in fixed income. Gianluca spoke to Mike O’Hara about the issues buy-side firms face in today’s market. Below are some of his thoughts.
On ‘equitisation’ of fixed income markets:
Policy makers and regulators have to understand there are differences between the fixed income and equity markets but also there are a lot of differences that exist within the fixed income space itself.
The structural problem is that you have a lot of markets acting within the same market. This issue came about when the regulators started to think about how they could impose the same level of transparency that is currently being enforced on the equity market to the fixed income market. It’s like trying to cure two different illnesses with the same medicine.
When I consider the equity markets of today, I see a market that is fragmented, where liquidity is difficult to find and where there is open space for speculation. I don’t see a market that can be taken as a benchmark, or as an example of an efficient market. On one hand, you have a market that is traded on exchanges, on the other hand one that is still predominantly traded over the counter.
On transparency:
I have been part of a group that has worked with the European Commission. We have said on many occasions to policy makers that we are not against transparency. We are in favour of it, where it is strictly correlated with liquidity. We also support the development of certain electronic order books, for example for very liquid instruments, where the platform could operate in a similar fashion to an equity-style exchange-driven order book, which actually would help to take the noise out of the market and increase transparency. In the more illiquid markets, this challenge cannot be solved through an electronic platform, as what is lacking there is liquidity itself.
On technology, liquidity and standards:
Technology can help in two ways. On the most liquid part of the market, technology can help develop an electronic platform similar to an equity-style order book. On the most illiquid side, technology needs to be utilised to try to minimise the protocol differences that exist across the many venues. At my last count, we have approximately 35 different venues – in the fixed income space alone – where you can execute your trade. Each wants to have its own space in terms of liquidity, its own connectivity standard and its own protocol language.
It’s a nightmare, as the burden is placed on us to look for liquidity. Every time our traders need to buy and sell a bond they have to look into 35 different venues and spread their trading intentions to the wider community. The reality is that the market-making model is now completely useless in fixed income. Why? 97% of the inventory is actually held by the buy-side. Every time we want to buy or sell a bond, we have to find another buy-side firm that has the opposite trading intention to ours. The result is that market makers are only needed as an intermediary between two buy-sides.
Don’t get me wrong, you need an all-to-all electronic platform that can aggregate liquidity. As a buy-side firm we have been very proactive in supporting a number of initiatives that are aimed precisely at standardisation, at establishing a credit market network on a utility basis. The key objective would be to create a network that can act as a carrier and assist buy-side and sell-side alike, without the problem of having to look into different venues and speak different languages.
On Bondcube, Algomi and other projects:
They are aimed at providing participants with the possibility of accessing many different pools of liquidity at the same time, through a single user interface as well as integrating access, inventories, IoIs, RFQs all in the same place. How do you do this? You do it by commoditising the network connectivity.
We are seeing a change in the way that the sell-side is relating to us on the buy-side. They realise what they have to do, if they don’t want to miss future opportunities.
For more information on the companies mentioned in this article visit:
- www.algomi.com
- www.baymarkets.com
- www.bondcube.com – website no longer available
- www.nomura.com
- www.pioneerinvestments.co.uk
- www.thetradingmesh.com
In this article, Mike O’Hara, publisher of The Trading Mesh – in conversation with Hirander Misra, Andrew Chart and Philip Simons – looks at how the new Constant Maturity Swap future from GMEX aims to help firms continue to hedge their interest rate exposures cost effectively in the post-G20 landscape.
Introduction
The reforms instigated by the G20 in the wake of the Global Financial Crisis have resulted in a number of structural changes to the world’s interest rate derivatives markets, changes which are now starting to have a significant impact on market participants.
The G20’s stated objectives to reduce systemic risk and increase transparency across global financial markets were clear, in that all OTC derivatives contracts should be reported to trade repositories (TRs); all standardised contracts should be traded on electronic trading platforms where appropriate, and cleared through central counterparties (CCPs); and non-centrally cleared contracts should be subject to higher capital requirements.
It remains to be seen how successful these initiatives will be in the long term. However, it is clear that in the short term at least, the increased capital and margin requirements have placed a greater strain on the financial resources of many firms active in this space. Likewise, operational changes are making it more difficult for firms to accurately hedge their interest rate exposures. Buy-side firms in particular are facing a range of new challenges around duration hedging.
“Clients face multiple challenges with moving their OTC derivatives into a CCP environment – most notably their collateral arrangements.”
Will Davies, Head of Institutional & PTG Sales UK, Societe Generale Newedge
Increased swap costs
Historically, OTC interest rate swaps (IRSs) have been widely used by the buy-side to hedge their interest exposures. However, in this new environment, it is becoming much more expensive for firms to continue duration hedging using swaps.
“Clients face multiple challenges with moving their OTC derivatives into a CCP environment – most notably their collateral arrangements,” says Will Davies, Head of Institutional & PTG Sales UK, at Societe Generale Newedge.
“Whether it’s the erosion of CSA1 thresholds creating daily movements, providing multi-currency cash movements where they previously used securities bilaterally, or the amount of Initial Margin, the question remains: where do they find all that liquid, eligible collateral? This could be a cash flow they’ve never funded before, or assets of a quality that they haven’t had to supply. The Treasury and liquidity challenges could be significant depending on how they are invested or how readily they can access cash,” says Davies.
With standardised swaps being subject to 5-day VaR and non-standardised swaps requiring 10-day VaR, those funding costs will be magnified when compared to listed derivatives or similar products using 2-day VaR treatment. Davies and his colleagues at Newedge refer to this situation as ‘margin discrimination’.
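A back-of-the-envelope illustration of that ‘margin discrimination’, assuming the common simplification that initial margin scales with the square root of the margin period of risk (individual CCP methodologies will differ):

```latex
\[
\mathrm{IM}_{T\text{-day}} \approx \mathrm{IM}_{1\text{-day}}\sqrt{T}
\;\;\Rightarrow\;\;
\frac{\mathrm{IM}_{5\text{-day}}}{\mathrm{IM}_{2\text{-day}}} = \sqrt{\tfrac{5}{2}} \approx 1.6,
\qquad
\frac{\mathrm{IM}_{10\text{-day}}}{\mathrm{IM}_{2\text{-day}}} = \sqrt{\tfrac{10}{2}} \approx 2.2 .
\]
```

On that simplified basis, a cleared standardised swap could tie up roughly 60% more initial margin than an equivalent listed future, and a non-standardised bilateral swap more than twice as much.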
“With Basel III and CRD IV provisions, OTC instruments are likely to weigh heavier from a capital requirements perspective”, says Davies.
“Firms will have to increase capital and liquidity provisions to cover these transactions. Some won’t be able to leverage up as easily as they could because the new capital/position ratios require them to put more into their capital reserves. Combined with the collateral challenges, this suggests a serious cost implication of staying in OTC for some clients,” he says.
The net result is that interest rate swaps are becoming prohibitively expensive to the buy-side. More and more funds are now being directed by their investment committees to pull out of the swaps market and to find alternative hedging mechanisms. But this is easier said than done.
“The CMF gives you the closest approximation a futures contract can to the way in which the OTC interest rate swap market moves and is traded on a daily basis.”
Hirander Misra, CEO of GMEX
Challenges with swap futures
One of the problems facing the market is that there are very few viable alternatives to interest rate swaps for managing duration hedging, although a number of exchanges – including NYSE Euronext, CME and Eris Exchange – now offer various flavours of swap futures.
“From a buy-side perspective the products offered by those exchanges have a number of perceived disadvantages when compared with the swaps market, based on feedback market users have provided to us”, says Hirander Misra, CEO of Global Markets Exchange (GMEX) Group, which, subject to FCA approval, will operate an exchange in London.
“Certain sections of the buy-side community are telling us that existing swap futures just aren’t suitable for them to manage their duration hedging, because they don’t provide a like-for-like hedge”, he explains.
“Of course, there’s no such thing as a perfect hedge but with current quarterly rolling swap futures, you don’t get the granularity of duration hedging you get with IRSs. This makes managing the deltas extremely difficult because only certain points along the curve can be used. And as these swap futures expire every quarter, hedging longer term exposures means that the contracts must be rolled each time they reach maturity. Every roll leads to more transactional costs, which add up and eat into the value of the portfolio, particularly when done multiple times over the life of a hedge”, continues Misra.
“Also, certain swap futures are or will be physically deliverable. So if a buy-side firm actually goes to delivery, they are faced again with the associated capital requirements and 5-day VaR of maintaining a swap position.”
According to Misra, this is why, to date, no existing swap futures contracts have yet managed to build a critical mass of liquidity relative to the volumes seen in the OTC IRS market.
The constant maturity approach
In order to address all of these challenges, GMEX recently announced the launch of its Constant Maturity Future (CMF).
The CMF is a new breed of swap futures contract linked to GMEX’s proprietary IRSIA index, which is calculated in real time using tradable swap prices from the interbank market. By accurately tracking every point on the yield curve in this way, retaining its maturity throughout the lifetime of the trade and being traded on the rate, the CMF offers duration hedging that is much more closely aligned with an IRS than other swap futures contracts with set durations and expiry dates. This is the key for the buy-side, according to GMEX’s Misra. “The CMF gives you the closest approximation a futures contract can to the way in which the OTC interest rate swap market moves and is traded on a daily basis”, he says.
“Additionally, for example, if you want to hedge a 30-year Gilt issue that rolls down to maturity, given the CMF offers every annual maturity from 2-30 years you can gain a very granular hedge by periodically rolling the appropriate number of 30-year CMF contracts down the curve to 29-year CMF contracts. Rather than rolling quarterly, this can become a simple middle-office, daily or periodic hedging tool. The advantage is that there is no quarterly brick wall by which point you have to roll”, adds Misra.
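The mechanics can be illustrated with a simple DV01-ratio calculation of the kind a middle office might run. The contract size and sensitivity figures below are purely hypothetical and are not GMEX specifications.

```python
# Illustrative hedge-ratio arithmetic only; the DV01 figures are assumptions.
def contracts_needed(bond_notional: float, bond_dv01_per_mm: float,
                     cmf_dv01_per_contract: float) -> float:
    """Number of futures contracts whose rate sensitivity offsets the bond."""
    bond_dv01 = bond_notional / 1_000_000 * bond_dv01_per_mm
    return bond_dv01 / cmf_dv01_per_contract

# e.g. GBP 50m of a 30-year Gilt with a DV01 of ~2,000 per GBP 1m notional,
# hedged with a 30-year constant maturity future assumed to have a DV01 of
# ~1,900 per contract:
print(round(contracts_needed(50_000_000, 2_000, 1_900)))   # roughly 53 contracts
# As the bond rolls down the curve, the same calculation is re-run against the
# 29-year contract and the position is adjusted, daily or periodically.
```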
As a listed futures contract, the CMF comes with all the advantages that futures offer over swaps in terms of cheaper margin (2-day VaR as opposed to 5-day); electronic trading capability and accessibility; clearing through a central counterparty; and reporting via a central trade repository.
And with no quarterly roll and no deliverable element, the disadvantages typically associated with other swap futures are removed.
Diversity of market participants
In order to create liquidity in any market, a diverse group of participants – including both makers and takers – is required.
“We’ve thoroughly researched the market and it’s clear that anyone who hedges interest rates needs a product like this” says GMEX’s Misra.
“The buy-side need it for their duration hedging; the sell-side also have IRS exposures that they need to hedge more cheaply; all the banks are capital constrained and have fixed income exposures that they need to hedge; futures players like it because it’s a standardized IRS futures product that will see natural buy-side flow; electronic market-makers and proprietary traders like it because it gives them opportunities to arbitrage the CMF against other interest rate instruments; corporates with sophisticated treasury and hedging requirements and even insurance companies who currently run naked exposures because they’ve assessed the alternatives and deemed it cheaper to take one-off hits than run expensive hedges”.
Clearing
The IRSIA CMF will be centrally cleared by Eurex Clearing (subject to final agreement at the time of writing).
“With the introduction of the new Basel III capital rules, the cost of clearing is now determining not only which instruments are used for hedging but where they are cleared”, says Philip Simons, Head of Sales and Relationship Management at Eurex Clearing.
“Market participants will inevitably use the best tools available that manage the risk. This will include OTC IRS, traditional futures and options as well as new instruments such as GMEX’s IRSIA CMF”.
According to Simons, what will be crucial is the ability to clear all instruments at the same CCP with appropriate cross-margin benefits. This will not only reduce the cost of funding but – more significantly – reduce the cost of capital, through a combination of maximising netting benefits for exposure at default, having an efficient default fund and minimising the funding costs.
“With the introduction of the new Basel III capital rules, the cost of clearing is now determining not only which instruments are used for hedging but where they are cleared.”
Philip Simons, Head of Sales & Relationship Management at Eurex Clearing
“The higher the risks, the higher the costs of capital, as reflected through higher initial margin and higher default fund contributions, which will inevitably be passed on to the end client”, says Simons.
“Capital and operational efficiency will drive liquidity in the future”.
Operational considerations
The IRSIA CMF will be listed on the Eurex Exchange, which is also the execution venue for the product. GMEX will operate a Central Limit Order Book via its own proprietary matching technology for the matching of orders. Matched orders will be reported to the Eurex Exchange and will be subject to its affirmation and confirmation process. In addition, GMEX will offer Request for Quote functionality, and a facility to report negotiated trades will also be available.
GMEX will offer access to the market via its own trading screens as well as third-party vendor products. Many firms may prefer to trade through screens provided by ISVs such as Fidessa and Trading Technologies, many of which offer functionality for trading spreads or running other cross-instrument or cross-market strategies. For direct electronic access, GMEX provides a well-documented API, which is available in both FIX and binary format.
Execution and prime service brokers such as Newedge will offer DMA and potentially Sponsored Access, as well as value-added services such as cross-product margining and linked margin financing of correlated portfolios.
In this article, Mike O’Hara, publisher of The Trading Mesh – talks to Matt Barrett of Adaptive Consulting and Eddie McDaid and Tony Foreman of Software AG, about how sales productivity at global investment banks can be significantly improved by bringing together four “pillars”: management information systems; straight-through processing; control; and pro-active outreach; and how this can be achieved through a combination of business intelligence and technology innovation.
Introduction
Since the global financial crisis – and particularly as markets have moved towards greater electronification – many investment banks have reduced the headcounts on their sales desks and directed their technology investments more towards low-touch electronic, as opposed to high-touch, voice-based trading.
However, despite this trend, many markets – particularly rates and fixed income – are still predominantly voice-traded and are likely to remain so in the short to medium term. This raises a question. How can banks leverage the technology investments they may have made in the electronic realm, to improve sales productivity in these voice-traded and high-touch environments?
According to Matt Barrett, Director of Adaptive Consulting, a software development and integration firm that works with a number of global tier 1 & tier 2 investment banks, the answer lies in bringing greater automation to four key areas, what he refers to as the “four pillars of sales productivity”: MIS/analytics; straight-through processing (STP); control; and the monitoring and reporting that enables pro-active outreach to clients.
“It is important to capture every negotiation, because that’s what will build up the real intelligence.”
Matt Barrett, Director at Adaptive Consulting
Pillar one: MIS / Analytics
One of the problems with voice-based markets is that workflow tends to be very ad-hoc, so capturing data in an electronic format to facilitate any kind of MIS is not easy. And, as Matt Barrett explains, in order to achieve meaningful analytics, not only does data need to be captured as early as possible, it also needs to include all client interactions, not just those that have resulted in a trade.
“It is important to capture every negotiation, because that’s what will build up the real intelligence”, he says.
“Today, if a client calls and does a trade, obviously you will manually book that trade onto the system. Generally what isn’t captured is where a client calls, you give him a price and he doesn’t trade or trades away, and the various reasons for that. For example if you were in competition with two other dealers and your price wasn’t good enough. Clients sometimes give you this kind of useful information, so if you want rich MIS and analytics, you need to be able to capture it somehow.”
Sales people in voice-traded markets are notoriously reluctant to enter more than the bare minimum of information onto trading systems, so one of the challenges here is how to ensure such data gets captured. The key is in user interfaces that enable sales people to immediately see the value, according to Barrett.
“If the stick is management telling sales people that they need to enter this data, the carrot on the other side is the sales people seeing the results in rich analytics in real-time, which can really add value”, he says.
“Ideally you want a user interface where, if a negotiation results in a miss, the next time the client calls, the sales person sees that previous miss, even if it’s just 30 seconds or a minute later. Not only that, but they would see that client’s interactions with the bank across the entire desk or across an entire asset class, so that trends can be visualized, understood and acted upon”.
What technology components are necessary for this?
“You need things like real-time multi-protocol messaging and a strong CEP (complex event processing) engine, which bring different benefits but are both necessary. The CEP engine is able to pull streams of data from a number of heterogeneous sources, and monitor all those flows for specific conditions that have been set. All of this analysis takes place in real time, which is incredibly powerful. You need push-based messaging so that you can make your real-time analysis shine. Being able to send an alert to an individual based on a relevant confluence of events, originating from different sources and occurring in a specific window in time, is an incredibly powerful tool, and we find clients get addicted to the functionality once they are exposed to it. The benefit of course is that these tools are arming you with the necessary business intelligence to be able to increase your hit ratios, if used correctly”, says Barrett.
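As a rough, vendor-neutral illustration of the kind of rule such an engine evaluates, the sketch below alerts a sales person when a client who was recently quoted but did not trade calls back within a short window. The event fields and the push_alert function are invented for the example and do not represent any particular CEP product’s API.

```python
# Toy event-pattern rule: a 'miss' followed by a fresh inquiry from the same
# client within a ten-minute window triggers a push alert to the sales person.
import time
from collections import deque

WINDOW_SECONDS = 600
recent_misses: dict[str, deque] = {}

def on_event(event: dict) -> None:
    client, now = event["client"], event.get("ts", time.time())
    misses = recent_misses.setdefault(client, deque())
    # Drop misses that have aged out of the window.
    while misses and now - misses[0] > WINDOW_SECONDS:
        misses.popleft()
    if event["type"] == "miss":
        misses.append(now)
    elif event["type"] == "inquiry" and misses:
        push_alert(event["salesperson"],
                   f"{client} is back on the line: {len(misses)} miss(es) "
                   f"in the last {WINDOW_SECONDS // 60} minutes.")

def push_alert(user: str, message: str) -> None:
    print(f"[ALERT to {user}] {message}")
```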
“STP is all about enriching trade data with specific information so it can flow through the various systems automatically.”
Matt Barrett, Director at Adaptive Consulting
Pillar two: STP
If MIS is all about how to enhance hit ratios and therefore increase revenue and profitability, STP is more about lessening the number of manual interventions on trade flow in order to reduce costs and lower operational risk, factors which can also help improve sales productivity.
But how realistic is true STP in a voice-traded environment? What are the challenges around having trade information flow seamlessly through the various systems that need to read, enrich and report on that data? And how can those challenges be addressed?
“The traditional problems here are around the impedance mismatch between the bank’s different messaging and data store platforms, some of which may not be exposed electronically”, explains Barrett.
“If you think of all the sources this information needs to be enriched from, many of them within different organizational units built on different technology platforms, when you build one of these message flows, you’re forced into a vast array of different integration exercises”.
An alternative implementation is to flip the traditional data-lookup pattern on its head, and have different data sources contribute the required data to the CEP. This keeps an up-to-date cache of all the slowly changing data needed for STP enrichment, meaning it can be accessed quickly and easily when it is required.
“STP is all about enriching trade data with specific information so it can flow through the various systems automatically”, he says.
“Enriching the client ID with details of the book or the clearing instructions, for example. The steps may be different for interest rate swaps, or for FX, or for bonds, but at the end of the day a workflow needs to happen. With a powerful CEP engine and the right sources of data, you can implement a generic workflow giving you one STP solution for a specific product, which can then be duplicated for other products”.
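A minimal sketch of that inverted pattern, with hypothetical field names: reference-data sources push updates into an in-memory cache sitting alongside the event engine, and the same generic enrichment step is reused product by product.

```python
# Illustrative only: upstream systems push reference-data changes into a cache,
# so the STP flow enriches trades without querying each system synchronously.
reference_cache: dict[str, dict] = {}

def on_reference_update(source: str, client_id: str, data: dict) -> None:
    """Called whenever an upstream system publishes a change
    (book mappings, clearing instructions, settlement details, ...)."""
    reference_cache.setdefault(client_id, {}).update(data)

def enrich_trade(trade: dict) -> dict:
    """Generic enrichment step, reusable across products."""
    ref = reference_cache.get(trade["client_id"], {})
    return {**trade,
            "book": ref.get("book"),
            "clearing_instructions": ref.get("clearing_instructions")}

on_reference_update("crm", "CLIENT-42", {"book": "EMEA-RATES"})
on_reference_update("ops", "CLIENT-42", {"clearing_instructions": "CCP-OSA"})
print(enrich_trade({"client_id": "CLIENT-42", "product": "IRS", "notional": 25_000_000}))
```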
Pillar three: Control
While complex event processing and universal messaging technologies can help facilitate the first two pillars, an additional technology – in-memory data management – comes into play in the area of Control, the third pillar of sales productivity, as Tony Foreman, Financial Services Sales at Software AG, explains.
“In-memory data management coupled with complex event processing can enable systems that give the sales person a real-time view of all their customers’ activity across all asset classes, highlighting anything that seems abnormal.”
Tony Foreman, Financial Services Sales at Software AG
“From a control perspective, in a typical investment bank, there is more and more data that needs to be stored and retrieved extremely quickly”, he says.
“Looking at market surveillance for example, there is an evolution going on and there has been a growing requirement beyond standard deterministic alerting toward the spotting of the abnormal. There are more data requirements than simply factoring in market and trade data and standard out-of-the-box alerting to things like front running, spoofing, insider dealing, etc. Identifying and responding to abnormal behaviour has many aspects, particularly as the concept of normality can be quite fluid in this environment. For example a trader or a client might be suddenly making – or losing – money in a particular asset class and the bank needs to be alerted to that. Whereas if it’s a gradual change it might be considered normal”.
Analyzing these factors in real time can take an incredible amount of memory, according to Foreman, which is where fast in-memory data management comes in.
“In-memory data management coupled with complex event processing can enable systems that give the sales person a real-time view of all their customers’ activity across all asset classes, highlighting anything that seems abnormal”, he says.
“The two technologies are complementary, because a lot of the stuff coming through the CEP engine has to be persisted in memory in order for it to be retrieved, checked and analyzed quickly”.
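One very simple way to express ‘spotting the abnormal’ is to compare each new observation with the client’s own recent history, as in the toy sketch below. The threshold and fields are illustrative and are not a description of Software AG’s products.

```python
# Toy surveillance rule: flag a sudden jump in a client's daily P&L relative to
# its own recent history; gradual drift shifts the baseline and is not flagged.
import statistics
from collections import deque

history: deque = deque(maxlen=30)   # last 30 daily P&L observations

def on_daily_pnl(client: str, pnl: float, z_threshold: float = 4.0) -> None:
    if len(history) >= 10:
        mu = statistics.mean(history)
        sigma = statistics.pstdev(history) or 1.0
        z = (pnl - mu) / sigma
        if abs(z) > z_threshold:
            print(f"[SURVEILLANCE] {client}: daily P&L {pnl:,.0f} is {z:.1f} "
                  "standard deviations from its recent norm")
    history.append(pnl)
```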
Adaptive’s Matt Barrett adds his take on this.
“There’s another aspect of control that can ease a regulation problem, where the risk to the bank is no longer easily calculated just by the size of the position, it also has to take into account the specific assets and equity that need to be held based on where the client is planning to trade”, he says.
“If a client is doing trades out of Japan versus trades out of North America, there’s a difference there in the amount of risk that the bank is exposed to because of the underlying regulatory conditions in each location. This all comes back to the need for the sales people and the traders to have an expert system on their desk that collects information from lots of different places to give them an accurate picture of what needs to happen, both before and after they make the trade”.
According to Barrett, the challenge around the regulatory and control aspect is how to source, integrate and present the data in a constantly changing environment.
“It’s not solely a technological problem”, he says.
“We know how to build, deploy and run these systems. The problem is in sourcing the data, some of which may not exist in an electronic form in those banks that think of regulation as a static thing. They talk to their regulation departments once every six months to figure out whether the way they’re operating today is okay. But with regulatory changes being introduced monthly or weekly, you need to electronify the ability to change and respond”.
Pillar four: Pro-active outreach
Bringing all of this together, where the value can really be delivered to increase sales productivity is in the area of ‘pro-active outreach’, where real-time business intelligence is incorporated into the sales people’s workflow.
Eddie McDaid, Head of Product Management for Streaming Analytics and Big Data at Software AG, discusses the idea.
“In the world of capital markets, what’s interesting is the concept of analyzing data in flight, this idea of bringing together the streaming analytics with in-built distribution orchestration”, he says.
“The combination of CEP, fast messaging and in-memory data storage enables banks to be smarter about how they distribute prices, who they distribute prices to, what margins they apply to those prices and what kind of heuristics they use to make those decisions. In the old days they might have asked how much brokerage they had taken from a particular customer today, last week, last year, whatever. But now you’re talking about a much more rich, much more granular set of data. So not just how much business a client might be doing, but also perhaps some predictive analytics modeling, gauging whether or not someone is going to trade. Whether they need that little nudge over the line, or whether they were going to trade anyway”.
“In the world of capital markets, what’s interesting is the concept of analyzing data in flight, this idea of bringing together the streaming analytics with in-built distribution orchestration.”
Eddie McDaid, Head of Product Management for Streaming Analytics & Big Data at Software AG
Predictive, heuristic analysis that pre-empts the client’s likely moves and informs the sales person what the client is likely to do next, based upon multiple input streams – including previous and current patterns of behaviour – can obviously provide a competitive edge. But is this a holy grail?
“It’s about democratizing the flow of data, aggregating different data sources via the CEP engine and then setting up rules to warn when specific events happen, so that appropriate action can be taken”, says Adaptive’s Matt Barrett.
“The real-time nature of today’s markets means that sales people should be able to look at all previous negotiations that have occurred with a client and feed that data back into their pricing models to drive more successful hit ratios”.
Conclusion
One thing that Barrett is keen to point out is that introducing the four pillars of sales productivity does not have to be an all-or-nothing exercise. They can be introduced on a step-by-step basis. The technology that facilitates the four pillars – CEP engines, universal messaging and in-memory data management – can be introduced and layered over existing systems.
“You could go away for two years, build something, put it into production and then discover that things have moved on so you’re not getting any real value from it”, he says.
“Whereas if you drip-feed these things in – in a more tactical way – you’re likely to see much better take-up and usage. In reality, you’re never at your strategic, target architecture. But as you get to use all these different components of the four pillars that we’ve talked about, you will see improvements in your bottom line, depending on what you’re looking to do, whatever key indicator you were looking at when you kicked off the project”.
The key takeaway seems to be that these four pillars of sales productivity can certainly be achieved by making best use of the resources and assets that already exist in the bank. Assets such as high touch sales people, existing technology components that may have been introduced for low-touch electronic trading and – importantly – interactions with clients that are not currently being captured or analyzed.
In this article, Mike O’Hara, CEO of HFT Review – in conversation with Arnaud Derasse, CEO of Enyx, Stephane Tyc, Co-Founder of Quincy Data and Ron Huizen, Vice President of Systems & Solutions at BittWare – looks at how the combination of microwave and FPGA technology is changing the landscape of market data distribution and consumption, particularly in high performance, low-latency trading environments.
In April 2013, Quincy Data, a US-based provider of ultra-low latency market data services, announced that they could deliver CME futures data directly into NASDAQ’s primary data centre in Carteret, NJ, rack-to-rack, within 4.16 milliseconds.
Considering the 734-mile distance between the two data centres – CME’s is located in Aurora, IL – this is an impressive achievement given that the absolute theoretical minimum latency (based on the speed of light in a vacuum) is just 3.95 milliseconds. Since the April announcement, Quincy has lowered its latency further – to 4.09 ms – and the company has plans to provide even faster delivery very soon.
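The physics behind those figures is straightforward to check:

```latex
\[
t_{\min} \;=\; \frac{d}{c} \;=\; \frac{734 \times 1{,}609\ \text{m}}{2.998 \times 10^{8}\ \text{m/s}}
\;\approx\; 3.9\ \text{ms}
\]
```

So a delivered rack-to-rack latency of 4.09 ms to 4.16 ms sits within a few percent of the physical limit; light in optical fibre travels at only around two-thirds of c, which is one reason wireless routes win on long-haul paths.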
“It’s better to be fast 99% of the time than slow 99.999% of the time.”
Quincy Data mantra
Until relatively recently, distributing and processing market data at this kind of speed was unheard of. However, with the combination of two technologies, microwave and FPGA, it is now possible to distribute and process market data faster than ever before. Quincy’s service makes use of both of these technologies, being powered by the McKay Brothers Microwave Network and using FPGA feed handlers by Enyx and Novasparks within the delivery chain.
Supply and demand
In the high frequency trading (HFT) space, it is probably fair to say that there has been a great deal of hype surrounding FPGA and microwave technology in the last couple of years. But the latency figures speak for themselves and there is no shortage of demand where speed is concerned.
“If you have the best microwave link, there is never enough supply to satisfy the demand, so the take-up of this service has been incredibly fast,” says Quincy Data’s co-founder Stephane Tyc.
“There is very little bandwidth and a lot of data worldwide. Distributing the fastest data across many continents is very hard. Still”, he adds, “this is a very competitive space for vendors”.
There are some key differences to consider between short-haul and longer-haul microwave routes, according to Tyc.
“Intra-urban links are much easier to build than links over long distances, so the supply is much higher and therefore the latency difference between alternative links is likely to be small. But for the longer-haul networks between cities, the difference in latencies is much greater. There are big differences in reliability too”, he says.
Reliability and bandwidth
Reliability is something that service providers like Quincy and their microwave partners McKay Brothers are constantly striving to improve, although microwave and millimetre wave links are unlikely ever to match the reliability of fibre networks, not least because of potential interruptions such as adverse weather conditions. But Quincy’s mantra is that it is better to be fast 99% of the time than slow 99.999% of the time.
As microwave links are generally only available in the tens or hundreds of Mbps, another issue to consider is the limited bandwidth that wireless offers compared to fibre. But Tyc believes that this is less of a problem than it may have been in the past.
“The bandwidth constraints have a very interesting impact on the technology teams at the firms taking the service”, he says. “In the past, bandwidth was assumed to be almost infinite, so developers didn’t really worry about how packets were formed on the wire, they left all of that stuff to network engineers and – at most companies – the two live in different worlds. But now, software engineers working with market data can’t afford to ignore how packets are transmitted, so there are some very interesting rapprochements between network engineers and developers”.
“If you have the best microwave link, there is never enough supply to satisfy the demand, so the take-up of this service has been incredibly fast.”
Stephane Tyc, Quincy Data’s co-founder
Fair bandwidth management and compression via FPGA
Having such limited bandwidth means that service providers need to be able to allocate and share that bandwidth fairly across their paying customers, a function that is well suited to an FPGA-based approach, as Arnaud Derasse, CEO of Enyx, a provider of ultra-low latency solutions based around FPGA technology, explains.
“If a telco only has something like 100Mb to share across more than ten customers, then their customers will obviously insist on the fairness of the sharing allocation. So the telco needs to be able to demonstrate that when they send a packet, that packet will go through the link and will be processed fairly regarding the other customers. They don’t want to have one customer sending huge packets and monopolising the link, for example. That’s why they need a system like ours to do the necessary segmentation and provide fair bandwidth management over that link”, he says.
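In software terms, the fairness requirement can be illustrated with a simple segmentation-and-interleaving sketch. The real solution runs in FPGA hardware, so the details below are invented purely to show the idea.

```python
# Toy illustration of fair bandwidth sharing: segment each customer's message
# into fixed-size chunks and interleave them round-robin, so one customer's
# large message cannot monopolise the link.
from itertools import zip_longest

def segment(payload: bytes, mtu: int = 64) -> list:
    return [payload[i:i + mtu] for i in range(0, len(payload), mtu)]

def fair_interleave(messages: dict, mtu: int = 64) -> list:
    queues = {cust: segment(data, mtu) for cust, data in messages.items()}
    wire = []
    for chunks in zip_longest(*queues.values()):
        for cust, chunk in zip(queues.keys(), chunks):
            if chunk is not None:
                wire.append((cust, chunk))
    return wire

# A 10 KB burst from customer A is interleaved with B's small update rather
# than being sent first in its entirety.
order = fair_interleave({"A": b"x" * 10_240, "B": b"y" * 128})
print(order[:4])
```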
Another area where FPGAs can help firms make best use of limited bandwidth is around data compression.
“Because service providers are trying to send more and more data – on both fibre and microwave – they need good compression”, says Derasse. “But it has to be lossless compression, and for high performance trading it also has to be fast. At Enyx, we use specific streaming algorithms implemented in the FPGA hardware, which allow us to deliver compression in one microsecond and decompression in one microsecond at the other side”.
Other FPGA applications
FPGA technology has become more and more widely adopted amongst the low-latency trading community for a variety of purposes, including market data filtering, feed handling and pre-trade risk checks on order flow. All of these are a natural fit for FPGAs, with their ability to directly connect to the network, run at full line rate and act essentially as an Intelligent NIC. FPGA specialist BittWare works with a wide range of customers, including proprietary trading houses, hedge funds and banks, which use the firm’s FPGA boards for all of these kinds of applications and more.
“Latency is still a major driving force to move from software-based systems to FPGA for these applications”, says BittWare’s Vice President of Systems & Solutions Ron Huizen. “But network density is also emerging as a key factor in favour of FPGAs. In other words you can handle a lot more network feeds in the same rack space using FPGAs”.
FPGAs are also being explored as CPU accelerators for back-end analytics, not least because of their small power consumption, as Huizen explains:
“FPGAs are extremely powerful processing engines, and can outperform CPUs and GPUs while using a fraction of the power. For small shops, the monthly power bill may not be a concern, but for companies running huge server farms, power is a non-trivial operating cost, and can quickly dwarf the cost of the equipment itself”.
“The telco needs to be able to demonstrate that when they send a packet, that packet will go through the link and will be processed fairly regarding the other customers.”
Arnaud Derasse, CEO of Enyx
FPGA implementation
Working with such a wide range of customers, Huizen sees a variety of ways they implement functionality on FPGAs.
“How they proceed depends both on how familiar they are with FPGAs, and how complex of a system they want to field”, says Huizen. “For customers not familiar with FPGA, making use of appliances or solution providers is a natural way to get the advantages of FPGA technology without having to ramp up a development team and undergo the implementation time, effort, and risk. And solution providers like Enyx can provide the full FPGA implementation on hardware of the customer’s choosing, (preferably of course a BittWare board!)”
Even for customers who have FPGA expertise in-house however, there are limits to what they are prepared to do themselves, as Huizen points out.
“One thing we seldom see in the financial market is the customer designing their own FPGA boards”, he says. “And there are some critical pieces of Intellectual Property (IP) like TCP/IP offload engines, which almost everyone purchases, as the development effort to roll your own is just too much”.
“Network density is also emerging as a key factor in favour of FPGAs. In other words, you can handle a lot more network feeds in the same rack space using FPGAs.”
Ron Huizen, BittWare’s Vice President of Systems & Solutions
Future uses
With FPGAs becoming widely used to consume and process market data today, the last word goes to Stephane Tyc, who believes that the logical next step is full order handling via FPGA.
“We’re at the point with order handling now where we were with market data a few years ago when market data was starting to become mature in FPGA, but it wasn’t widely adopted”, he says. “The question is not so much about the maturity of the product, it’s more about whether it has a fast adoption rate or not, because people have to understand the technology and become familiar and at ease with it. With order handling via FPGA, it’s not something that’s going to be widely adopted overnight, but the technology is certainly there. That’s where the best prop trading firms already are and that is the thing that will be open to the wider market in the future”.
In this article, Mike O’Hara, publisher of The Trading Mesh – is in conversation with Konstantin Kudryashov, Liz Keogh, David Evans, Alan Parkinson, Chris Matts and Dan North.
Introduction
The Banking and Financial Markets industry is a dynamic and ever evolving landscape, which in recent years has become increasingly reliant upon technology. And as more and more processes within banking and finance become automated, that dependency is becoming ever greater.
This is particularly the case in trading and exchange operations where – in order to keep up with changing market conditions, a rapidly evolving regulatory landscape and increased competition – practitioner firms are constantly required to either deploy new systems or enhance their existing ones, all at substantial expense.
Given this rapid pace of change, the task of identifying project requirements and translating those requirements into clear system specifications poses a significant challenge to all concerned, leading to what some in the industry are now calling a “requirements crisis”.
Project failures
It is an unfortunate fact of life that many IT projects end up costing a great deal more money and delivering much less value than planned. In fact, in 2013, only 39% of IT projects were delivered on time, on budget, and with the required features and functions1.
So it is clear that many projects fail, but why?
“Many software project failures, regardless of the industry, are caused by people focusing too much on scope and functionality rather than the added value the project needs to deliver.”
Konstantin Kudryashov, BDD Practice Manager at Inviqa
According to Konstantin Kudryashov, BDD Practice Manager at leading open source development firm Inviqa, one of the reasons is that people tend to focus on the wrong things when putting requirements together.
“Many software project failures, regardless of the industry, are caused by people focusing too much on scope and functionality rather than the added value the project needs to deliver”.
He refers to a recent survey on business technology by McKinsey2, which reports a growing dissatisfaction with IT’s ability to facilitate and meet business objectives. “IT has become less effective at enabling business goals”, he says.
“In any business, the first thing an investor looks at is how the business will provide a return on investment. But we’re in this weird state in IT where people spend all their time talking about features and functionality, which in some cases don’t deliver any value at all”.
Another common reason for failure is miscommunication, particularly in more complex projects where requirements are often communicated between business people and developers through a series of ‘Chinese Whispers’, resulting in much of the context being lost in translation. Research shows that 54% of defects in delivered software are due to incorrect, misunderstood or badly specified requirements as opposed to implementation errors and program bugs3.
So what can be done to address these challenges?
Behaviour-Driven Development
An increasing number of firms in the finance sector are now turning to Behaviour-Driven Development (BDD) as a way to ensure that not only are the correct requirements identified, specified and communicated, but that projects do indeed deliver real value to sponsors through a truly collaborative process.
BDD, a concept created by Dan North in 2003, is a software development process that grew out of TDD (Test-Driven Development) as a way of enabling business analysts and software developers to better communicate and collaborate via shared tools and shared processes.
Like many of the best ideas, BDD is very simple in concept but very powerful in terms of what it can achieve. At its core is the ‘Given-When-Then’ format, which allows a requirement to be specified based upon examples of its desired business behaviour. This simple approach enables business analysts and those with domain knowledge to specify, in a framework immediately understood by developers and testers, what should happen when a particular event occurs under a given set of circumstances.
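As a concrete illustration, here is a minimal sketch of how a Given-When-Then scenario maps onto automated step definitions, using the Python behave library as one example of the Cucumber-family tools discussed later. The pre-trade credit-check scenario itself is invented for the example.

```python
# A minimal sketch of the Given-When-Then format in practice, using 'behave'.
# The scenario (written in the feature file) might read:
#
#   Scenario: Order is rejected when it would breach the counterparty limit
#     Given counterparty "ACME" has a credit limit of 10000000
#     And existing exposure to "ACME" is 9500000
#     When a new order for 1000000 against "ACME" is submitted
#     Then the order is rejected
from behave import given, when, then

@given('counterparty "{name}" has a credit limit of {limit:d}')
def step_set_limit(context, name, limit):
    context.limits = {name: limit}
    context.exposure = {name: 0}

@given('existing exposure to "{name}" is {amount:d}')
def step_set_exposure(context, name, amount):
    context.exposure[name] = amount

@when('a new order for {notional:d} against "{name}" is submitted')
def step_submit_order(context, notional, name):
    context.accepted = context.exposure[name] + notional <= context.limits[name]

@then('the order is rejected')
def step_check_rejected(context):
    assert context.accepted is False
```

The same scenario text is readable by a business analyst, executable as an acceptance test, and can later serve as living documentation of the behaviour.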
“BDD really improves communication”, says Kudryashov.
“It helps every single person in the delivery team to understand why they are doing what they are doing and why they are delivering a specific piece of functionality”, he adds.
Clarifying uncertainty
Independent consultant Liz Keogh, a BDD expert who has been working with the framework since 2004 and who regularly blogs on the topic, explains how BDD can help remove uncertainty from a project.
“BDD uses examples in conversation to illustrate behaviour and is therefore very useful in helping clarify requirements”, she says.
“And BDD is also really useful in helping people appreciate where uncertainty is and in finding a different way of addressing that uncertainty”, she adds.
“The teams that are most successful at this are the ones who get feedback on the most uncertain aspects of their work before they automate the examples”.
“BDD uses examples in conversation to illustrate behaviour and is therefore very useful in helping clarify requirements.”
Liz Keogh, BDD expert & blogger
How might this work in practice?
“Say a new regulatory requirement comes along and people start discussing what that requirement means and what the relevant applications should do. They’ll frequently end up getting into arguments around what the outcome of a particular example ought to be. You can tell from the uncertainty in that argument that this is something new that doesn’t lend itself well to analysis. So the right approach in this sort of situation is to try something out, to experiment and to get feedback. Talking through examples can help you to spot that”, says Keogh.
“If there is a high level of domain expertise, if there is a person who is very familiar with the regulation you’re implementing, then the BDD approach is great for drawing out that expertise and learning from that person. And conversely if it is something very new and there is no real expertise and people are just guessing, BDD can be used to identify and clarify that uncertainty”.
Continuous delivery
Delivering value in small iterative steps and being able to prove things work before implementing them at full scale is a key aspect of BDD, according to Kudryashov.
“In six months the business environment could be drastically different from when the project was first started”, he says.
“Taking this into account, you need to treat the specific software as constantly evolving, because the features that deliver value today might not deliver value in the future. So you need to constantly deliver small, incremental improvements and measure the impact on the business. That’s the essence of BDD. Basically it drives your company towards your business goals in small steps, constantly measuring and making sure that you’re going in the right direction”.
Keogh agrees that the ongoing, continuous nature of BDD is an important factor when it comes to delivering value.
“If your requirement is to ensure that you have some limit on the amount of risk associated with a particular counterparty for instance, knowing that that’s what you’re trying to do with the underlying functionality and having that vision very clear in everyone’s heads helps you, after you’ve made all of the functionality work, actually check on an ongoing basis that you’re really meeting that vision and those capabilities” she says.
“The first thing BDD gives a team is a clear mandate to use examples as first-class objects in the software development process.”
David Evans, Independent consultant
The power of examples
The key to the success of BDD lies in the power of using concrete examples when discussing requirements, according to David Evans, an independent consultant specialising in agile quality.
“The first thing BDD gives a team is a clear mandate to use examples as first-class objects in the software development process” says Evans.
“Our brains naturally latch onto concrete examples rather than generalisations – we prefer to discuss specifics when getting to grips with new or complex ideas. Yet traditional analysis approaches tend to dismiss examples as little more than a means to an end. Unfortunately, that means potentially useful examples tend to be discarded and reinvented multiple times in the analysis, implementation and testing of a feature. BDD can short-circuit that by allowing discussions to centre on a small set of key examples that are meaningful to all.”
Evans agrees with Liz Keogh that discussing the examples is the most important part of BDD, whereas many teams focus only on the automation of those examples as test scenarios.
“BDD is first and foremost about improving communication between those responsible for what a system must do and those responsible for how it will do it,” he says. “Most of your quality assurance comes from preventing misunderstandings, hidden assumptions and undiscovered exceptions leaking into the code. The examples have immediate value as discussion points. Later they will have additional value as automated acceptance tests. Ultimately they will also have ongoing long-term value as documentation of that feature’s behaviour.”
“If teams successfully focus on the requirements and communications aspects of BDD first, the tools add more value.”
Alan Parkinson, CEO of Hindsight Software
More than a Test Automation Toolkit
Alan Parkinson, CEO of the specialist BDD firm Hindsight Software, is keen to point out that BDD is much more than a set of tools.
“Many people only equate BDD with test automation and immediately think of tools like Cucumber, Behat, SpecFlow etc. So they adopt BDD for test automation and don’t discover the requirements benefits until much later, if at all”, he says.
According to Parkinson, the consequence of this can be profound.
“Because the business stakeholders haven’t been involved in writing the valuable Given-When-Then statements that express the business requirements, the statements end up looking like test scripts. As a result, they don’t add any value. If teams successfully focus on the requirements and communications aspects of BDD first, the tools add more value”, he says.
Parkinson points out that when legacy systems have to be maintained across organisations following mergers and acquisitions within the financial sector, the result is often ‘duct-taped’ integration between systems.
“One problem that firms face, when dealing with legacy systems or integrating systems due to an acquisition, is the lack of documentation”, he says.
“This poses a significant problem when the system needs updating, say for a new regulatory requirement”. An example of this is the recent introduction of the seven-day bank account switching obligations, which has been problematic for a number of banks.
“BDD is really useful in documenting the current understanding of an existing system. And then automation tools like Cucumber and SpecFlow can be used to check if this understanding is correct. This creates living documentation for the existing systems, where you only have to run the automated checks with the BDD scenarios to discover if the requirements are being met. As and when you need to modify or replace the legacy system, you can use that living documentation to confirm that no regressions have taken place and that new functionality has been successfully implemented”, he says.
“Business people can very easily understand a structure like Given-When-Then, because it’s very straightforward, semantically clear English.”
Chris Matts, original BDD core team member
Specifications in plain English
One of the members of the original BDD core team and co-creator of the ‘Given-When-Then’ format is Chris Matts, also known for his innovative work on Feature Injection and Real Options.
“Business people can very easily understand a structure like Given-When-Then, because it’s very straightforward, semantically clear English”, says Matts.
“So rather than having a business analyst or subject matter expert come up with an example that’s abstracted into something that only they understand, the BDD approach forces them to express things in such a way that the developer doesn’t have to do any translation,” he says.
According to Matts, BDD gives business people the ability to frame their knowledge and domain expertise into a series of examples that developers and testers can understand.
“Traditional business analysis is all about extracting and creating abstract models that are passed to the developers for implementation. With BDD, any businessperson who understands a requirement is able to describe and explain the context in the Given-When-Then format. This makes it very easy for the business users to express requirements, because they’re talking about examples rather than abstract models. And it becomes additive as well, meaning the examples can be expanded and built upon all the time, which is particularly useful in finance because there are so many intricacies”, says Matts.
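To make the format concrete, here is a purely hypothetical scenario of the kind Matts describes, written in plain Given-When-Then English and wired to step definitions in Java using the Cucumber bindings mentioned earlier. The counterparty limit rule, the figures and the class names are invented for illustration and do not come from any of the firms interviewed; a minimal stand-in ‘system under test’ is included only so the sketch is self-contained.

// A hypothetical scenario, as a business user might write it (illustrative only):
//
//   Scenario: Order is rejected when it would breach the counterparty limit
//     Given a counterparty limit of 1000000 USD for "Bank A"
//     And existing exposure of 900000 USD to "Bank A"
//     When a trader submits an order worth 200000 USD against "Bank A"
//     Then the order is rejected with reason "counterparty limit exceeded"

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertFalse;

public class CounterpartyLimitSteps {

    // Minimal stand-ins so the example compiles; a real system under test would replace these.
    record OrderResult(boolean accepted, String reason) {}

    static class RiskEngine {
        private final java.util.Map<String, Integer> limits = new java.util.HashMap<>();
        private final java.util.Map<String, Integer> exposures = new java.util.HashMap<>();
        void setLimit(String counterparty, int limit) { limits.put(counterparty, limit); }
        void setExposure(String counterparty, int exposure) { exposures.put(counterparty, exposure); }
        OrderResult submit(String counterparty, int notional) {
            boolean breach = exposures.getOrDefault(counterparty, 0) + notional > limits.getOrDefault(counterparty, 0);
            return breach ? new OrderResult(false, "counterparty limit exceeded") : new OrderResult(true, "accepted");
        }
    }

    private final RiskEngine engine = new RiskEngine();
    private OrderResult result;

    @Given("a counterparty limit of {int} USD for {string}")
    public void aCounterpartyLimitFor(int limit, String counterparty) {
        engine.setLimit(counterparty, limit);
    }

    @Given("existing exposure of {int} USD to {string}")
    public void existingExposureTo(int exposure, String counterparty) {
        engine.setExposure(counterparty, exposure);
    }

    @When("a trader submits an order worth {int} USD against {string}")
    public void aTraderSubmitsAnOrder(int notional, String counterparty) {
        result = engine.submit(counterparty, notional);
    }

    @Then("the order is rejected with reason {string}")
    public void theOrderIsRejectedWithReason(String reason) {
        assertFalse(result.accepted());
        assertEquals(reason, result.reason());
    }
}

Because the scenario text stays in plain English, the business user who knows the rule can read and challenge it directly, while the step definitions remain the developer’s concern.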
Feature injection
Working with examples is all very well, but how can you ensure you are working with the right examples? The answer lies in an evolution of BDD created by Matts called Feature Injection, which consists of three steps, as he explains.
“The first step is identifying where is the value. The value in any IT system is generally in the output, but that’s not normally the way the value is expressed. So with regulatory requirements, you would first identify what output you want. What are the reports that you’re going to give to the regulators and what sort of data will be on those reports?
“Based on that, you work out what data you need in the system and what calculations you need to do on that data. Also what needs to be pushed upstream into trading and risk management systems for example. That’s the second step, identifying what inputs you need and what processes need to be run.
“The third step is actually identifying the examples. That’s where you need the business domain knowledge, and this is where BDD really fits nicely because it helps you specify all the different concrete examples. When you do that, you can go to the whole organisation and ask if anyone can think of an example that is going to break the set of rules you’ve defined, which means you can now start working on exceptions, whereas previously you would try to work on the general case and maybe ignore exceptions”.
According to Matts, this can really speed up the development cycle, from initial requirements through to the delivery of some kind of minimum viable product or the first iteration of a working system, because it makes it very easy for developers to code up examples that have been clearly defined using the Given-When-Then template.
BDD on the trading desk?
Dan North, the original inventor of BDD, has followed an interesting path since originally coming up with the concept when working at ThoughtWorks in 2003. Since then, he has spent a couple of years working on ultra-low latency trading systems at a proprietary high frequency trading firm and is currently working with a large US investment bank.
How do the BDD principles translate to a high frequency trading desk?
“At ThoughtWorks, I had a reasonable sized team of business analysts, programmers, testers, project managers, team leaders, etc. There would be an array of various stakeholders and we’d be delivering in that context where there were typically a lot of corporate constraints”, explains North.
“Whereas at the HFT firm, I was working with very small, tightly integrated teams where the developers are literally on the trading desk with the traders. The speed at which you can iterate on software in that kind of environment is remarkable. I was used to operating on a timescale of a few weeks or more for a release, but now I was delivering significant amounts of functionality in days or less.
“Part of this was because the team members were very experienced but part of it was because of the immediacy of valuable feedback from the traders. I’d be sitting next to traders who would explain what they were doing and why, and more importantly they could explain the rules by which they operated. We would go off and build that, come back, they would test the results against the rules and that would be the entire development cycle, of course within the constraints of the usual rigorous risk checks. We dispensed with anything we deemed unnecessary in terms of writing down scenarios and other typical ‘agile’ artefacts. And there was no ambiguity because we had the trader as our feedback mechanism”, says North.
Complexity and scale
Is there a level of complexity at which BDD breaks down?
“The level of abstraction changes as the system becomes more complex, but it’s fractal”, says North.
“You can describe at a detailed level within code what the code should do, you can describe at an application level what this particular system should do and you can describe at a portfolio level what a suite of applications should do. For each of those things you’re likely to use slightly different vocabulary because you’re in different domains. One domain might be risk management, one level below might be netting and settlement, the level below that might be getting data in and out of databases.
“The mechanism doesn’t change. It’s description by example, that’s the core part. At the lowest level you’re coding by example, at the middle level you’re doing analysis by example, and at the top level you’re almost doing governance by example. But it’s always doing things by describing observable behaviour. What changes is the tooling at each level. For example, Cucumber is a really good tool for the middle level, but at the level below that you’d probably just use code to describe other code. You can still write Given-When-Then structured code using any open-source testing framework, you don’t need a dedicated tool for that.
“Likewise at the higher level, those guys aren’t interested in something as fiddly as Cucumber. But you still want them to describe, by example, what they want across their portfolio”, explains North.
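As a sketch of the lowest level North describes, the same Given-When-Then structure can be written directly in an ordinary unit test, with no BDD-specific tooling at all. The netting example below is hypothetical and exists only to show the shape of such a test.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// A plain JUnit test in Given-When-Then form; no Cucumber or other BDD framework
// is involved, only the naming and comments carry the structure.
class NettingTest {

    @Test
    void givenOffsettingTrades_whenNetted_thenOnlyTheResidualRemains() {
        // given: two offsetting trades against the same counterparty (hypothetical domain)
        Netting netting = new Netting();
        netting.add("Bank A", 5_000_000);
        netting.add("Bank A", -3_000_000);

        // when: the positions are netted
        long residual = netting.net("Bank A");

        // then: only the residual exposure remains
        assertEquals(2_000_000, residual);
    }

    // Minimal stand-in so the example is self-contained.
    static class Netting {
        private final java.util.Map<String, Long> positions = new java.util.HashMap<>();
        void add(String counterparty, long amount) { positions.merge(counterparty, amount, Long::sum); }
        long net(String counterparty) { return positions.getOrDefault(counterparty, 0L); }
    }
}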
BDD beyond software
Although BDD was developed as an approach for software development, the concept is bringing value to other areas, according to Liz Keogh.
“I’m increasingly finding that people are starting to apply the philosophy of BDD to areas outside of software”, she says.
“ The BDD approach to analysis and identifying a process in terms of its observable behaviour is something that can be used outside of software and could have enormous impact on value stream mapping and that kind of thing.”
Dan North, Original inventor of BDD
“Particularly where people are using examples to clarify transformation requirements. This might be when a firm is adopting Agile for the first time, actually getting examples from people and getting ideas of what that means to them and what might happen in their environment and their context”.
North agrees with this.
“The BDD approach to analysis and identifying a process in terms of its observable behaviour is something that can be used outside of software and could have enormous impact on value stream mapping and that kind of thing”, he says.
“In trading, from pre-trade through to post-trade, settlement, analysis and so on, describing all of that in terms of scenarios and examples is an enormous clarifying exercise and could help in a number of areas, like regulatory & compliance, market surveillance, or even bringing new traders up to speed”.
It is clear that BDD is an extremely powerful mechanism and is likely to become more and more widely adopted, not just within the financial markets but also across many other industries.
“BDD changes the way that everyone looks at software”, says Konstantin Kudryashov.
“At Inviqa, when we’ve introduced BDD to organisations, from start-ups to FTSE 250 firms, the overwhelming response from everyone who gets involved, both technical and non-technical, is the same – that it’s so logical, and so obvious. Developers who get into BDD generally say that they wouldn’t be able to go back and look at software engineering in any other way. It changes their perception”.
Chris Matts agrees.
“Anyone who wants to develop software properly will eventually be using BDD because it’s such an obvious way to do things. Instead of creating a model, you create examples and you drive your development from those examples. Sadly, I don’t foresee a day where everyone does things this way because people are entrenched in the old ways. Whenever you’ve got a legacy system, people always use it as an excuse not to do this kind of thing. And getting software developers to do the same thing is a bit like herding cats!”, he says.
Are the financial markets facing a requirements crisis? Possibly.
Does the answer lie in Behaviour-Driven Development? Probably.
In this article, Mike O’Hara, publisher of The Trading Mesh – talks to Ash Gawthorp of The Test People, Matt Barrett of Adaptive Consulting, Andy Phillips of LMAX Exchange and Dan Marcus of ParFX about how technology challenges – particularly around performance – are being addressed as both the retail and the professional FX trading sectors adopt new trading platforms.
Introduction
The Foreign Exchange (FX) market is the most heavily traded market in the world, averaging $5.3 trillion per day in 2013. Although most of this takes place in the Institutional and Interbank markets, a significant portion – $185 billion according to industry estimates – is traded by individuals and professional investors.
In both the retail and the wholesale FX sectors, user demands are changing as technology plays an ever greater role. From a retail perspective, traders are making more and more use of mobile and hand-held devices. And in the wholesale and Interbank market where electronification and the growth of high-speed algorithmic trading has had a significant impact, a number of market operators are now completely re-assessing their technology strategies and business models.
Performance of mobile trading apps
From a retail end-user perspective, there is no shortage of FX trading platforms that can now be accessed via mobile apps. Today, most retail FX brokers and trading venues offer some flavour of trading app, many of which include news, charts, position keeping, price updates and order entry/execution functionality. Often, however, trying to trade on these apps can be a frustrating experience, as Ash Gawthorp, Technical Director of performance engineering specialist firm The Test People, explains.
“Applications are often not written with performance on mobile networks in mind”, he says.
“When it comes to mobile and desktop applications, the performance aspect of the trading app is not considered early enough in the development life cycle.”
Ash Gawthorp, Technical Director at The Test People
“Developers often have high-spec desktop machines with fast network connections into test systems, which positively skew their perception of actual end user performance. What works well on 1Gb Ethernet on a quad core workstation will not work anywhere near as well on a mobile data connection from half way around the world”.
In Gawthorp’s experience, when it comes to mobile and desktop applications, the performance aspect of the trading app is not considered early enough in the development life cycle.
“Non-functional requirements and performance targets should be baked in from the start, with regular architectural reviews undertaken and reviewed amongst peers to discuss messaging, UI design and reconnection strategy, mindful of the capabilities of the device and the network it is targeted for”, he says.
“Everyone has a responsibility for performance; it often either can’t or shouldn’t be ‘fixed’ by just one team, it requires collaboration between teams involved in architecture, development, test, infrastructure operations, networks and deployment. Everyone has a part to play and each team should understand the other team’s challenges and requirements, as often a performance challenge encountered in one team can be fixed or removed entirely by a team further upstream or downstream in the process”, he continues.
Replicating the end-user experience
Gawthorp believes that platform providers need to take a different approach in order to gain a truer picture of the end-user experience, particularly where trading apps are rolled out internationally.
“They need to perform load testing in the cloud from locations around the world, providing a picture of how price latency increases with user load levels by location, and additionally perform client-driven performance tests on the actual client application rather than at a protocol level, so they are able to see the errors as observed by a user of the application. That is essentially the approach The Test People takes in order to get meaningful results”, he says.
One interesting observation is that mobile data networks aren’t just for mobile devices. For users in the Far East, wired broadband is frequently unavailable or prohibitively expensive, so users adopt mobile data services for desktop internet connectivity. This creates a performance double-whammy: increased latency back to the client data centre, which is often still in London or NY/NJ, and increased latency from the user’s ISP to the handset. To worsen the picture, data usage on mobile data contracts is often capped, and an inefficient, bloated messaging protocol will further increase latency and chew through the client’s data usage limits with every price update.
“We can then re-use these tests in a continuous framework in the live environment, monitoring key metrics such as price latency, login times and time to download the assets required for the application. This provides rich insights into how the system performs continuously, building visibility into patterns of poor performance and also providing the support teams with an early heads up of any issues”.
“All too often, client-side performance and resiliency tests are performed as a one-off activity, or not at all, alongside server-side scalability and load testing. Such testing often shows only that something worked and there wasn’t a problem over a very short sample of data, often in a production-like environment but – crucially – not in production over an elongated period of time”.
“Connections dropping is a big problem. On a mobile device in the real world it can happen all the time. So you need to be using the right middleware to communicate between your device and the server.”
Matt Barrett, Director at Adaptive Consulting
Dealing with dropped connections
Matt Barrett, Director at Adaptive Consulting, a firm specialising in front-office trading systems development, explains some of the additional technology challenges associated with mobile FX trading apps.
“Connections dropping is a big problem. On a mobile device in the real world it can happen all the time. So you need to be using the right middleware to communicate between your device and the server”, he says.
“You can’t be polling for example, because if you drop the connection with the server, the costs associated with re-establishing the connection can be massive. A naïve platform implementation will, on connection, re-acquire all the data it needs to have synchronized between the client and the server. That’s far too expensive to do on a mobile device, when it might be happening almost constantly as you move around. So the client apps have to be smarter and you have to take a slightly more sophisticated approach, only sending what’s changed rather than constantly informing about the current state of the world, which puts more pressure on your server and your infrastructure”, says Barrett.
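What Barrett describes can be sketched in a few lines: the client remembers the last update it applied and, when the connection comes back, asks only for what has changed since then rather than re-acquiring the full state. The interface and names below are illustrative assumptions, not any particular platform’s API.

import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative only: the client tracks the last update it applied and, after a dropped
// connection, requests deltas from that point instead of a full snapshot.
class PriceCache {

    record PriceUpdate(long sequence, String symbol, double price) {}

    interface PriceService {
        // Hypothetical server call returning every update after a given sequence number.
        List<PriceUpdate> updatesSince(long sequence);
    }

    private final Map<String, Double> prices = new ConcurrentHashMap<>();
    private long lastApplied = 0;

    void apply(PriceUpdate update) {
        prices.put(update.symbol(), update.price());
        lastApplied = Math.max(lastApplied, update.sequence());
    }

    // Called after the connection is re-established: only the missed deltas travel over
    // the (possibly slow, metered) mobile link, not the whole state of the world.
    void resynchronise(PriceService service) {
        for (PriceUpdate update : service.updatesSince(lastApplied)) {
            apply(update);
        }
    }
}

On a metered mobile connection that may reconnect many times an hour, the difference between replaying a handful of deltas and re-downloading the entire state is precisely the cost Barrett is warning about.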
The introduction of multi-path TCP will go a long way towards addressing this challenge, according to Barrett.
“When fully deployed, this will allow a single TCP connection to move between different network interfaces (wifi, cellular) without the application using the connection noticing the difference. For FX trading apps on mobile devices this is obviously great because a user could start their session on wifi in the office, move out of range and have it seamlessly hop across to 4G, then move back to wifi when the user is back within range, all without any TCP reconnection and the associated re-synchronisation”, explains Barrett.
Cross-platform development
Given that there are so many different devices now available for running FX trading apps, another key point to consider is the development effort on multiple client platforms, according to Barrett.
“There are recent advances in technology that allow the sharing of some level of code, which means that costs can also be shared across development for multiple platforms, whether that’s thick client desktop development or iOS or Android mobile development”, he says.
“Obviously the desktop is going to be significantly richer in functionality than the hand-held device. So reusing the backend services and the first tier of the client layer is something you can save a lot of money by doing, you just need to be aware that the user experience is going to differ, which isn’t necessarily a bad thing. You shouldn’t try to force ‘one size fits all’ across all your client platforms”, he adds.
Satisfying the Professional and Institutional traders
In the Institutional (Funds, Asset Managers) and Interbank markets, where message traffic has increased exponentially in the last few years with the growth of high-speed electronic trading, trading venues face further technology challenges.
“We face the same sorts of issues that any large internet site faces, but we also have to try to make everything run as quickly as possible, in a few milliseconds.”
Andy Phillips, Director of Technical Operations at LMAX Exchange
This is particularly the case for execution venues catering to clients from multiple market segments.
LMAX Exchange, the first regulated MTF (Multilateral Trading Facility) for FX, is uniquely positioned in the industry as it is the only venue to service clients from all three market segments: Professional, Institutional and Interbank.
“We face the same sorts of issues that any large internet site faces”, says Andy Phillips, Director of Technical Operations at LMAX Exchange.
“But we also have to try to make everything run as quickly as possible, in a few milliseconds. So we find ourselves in the position of having all of the challenges of a leading-edge exchange in terms of latency and quality of execution and thousands of orders per second flowing through our system, as well as trying to provide good quality of service for individual traders accessing us over the Internet”.
Adopting an agile approach
To meet this challenge, LMAX Exchange has built its trade matching infrastructure around the ground-breaking ‘Disruptor’ concurrent programming framework, which was discussed in detail on The Trading Mesh back in August 2011 and which will no doubt be familiar to many technologists in the Java open source community. Disruptor, however, is just one of several projects that have been open-sourced by LMAX Exchange over the years.
The development team use continuous integration and continuous delivery to release new software every two weeks. And the Agile process used for development is not just specific to building software. According to Phillips, it is used across the organization.
“We use the same Agile Kanban-type methods for running the operations teams, the network teams and so on, which means we’re all speaking a common language, because there are occasions when you need to have an integrated approach across all departments”, he says.
Dealing with growth in message volumes
Given the fact that end-users might be using a range of applications and different interfaces to access the market, how do venues ensure that the price users see on their screens or within their applications is the true, latest, current price in the market and that they are able to deal on those prices?
“That’s a problem of any fast-moving market”, says Phillips.
“At peak times, we can be processing 40,000 updates per second. So the sheer volume of price updates means that some customers are not able to deal on them because the market is moving too quickly. They might get the market data, make a decision to trade and send the order, but by the time the order reaches us, the market has moved. For people who are on latent connections, or who quite frankly can’t deal with the volume of market data that we throw at them (which can be a function of latency and packet loss) as well as the speed of their system, what looks to them like the slippage and rejects they would get trading on last look liquidity are in fact just a consequence of the fact that it’s a fast-moving market”.
The MTF infrastructure has tools to monitor how much data clients can consume and has the ability to send out less data to certain clients and tune things, according to Phillips.
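One common way of matching the outgoing data rate to what a client can safely consume is conflation: when a client falls behind, updates for the same instrument are collapsed so that it always receives the latest price rather than an ever-growing backlog. The sketch below illustrates the general technique only and is not a description of LMAX Exchange’s own implementation.

import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative market data conflation: if a client cannot keep up, newer quotes for an
// instrument replace unsent ones, so the client always sees the most recent price.
class ConflatingQueue {

    record Quote(String symbol, double bid, double ask) {}

    private final Map<String, Quote> pending = new LinkedHashMap<>();

    // Called by the market data publisher for every update.
    synchronized void offer(Quote quote) {
        pending.put(quote.symbol(), quote);   // a newer quote replaces any unsent one
    }

    // Called whenever the client's connection can accept another message.
    synchronized Quote poll() {
        var iterator = pending.entrySet().iterator();
        if (!iterator.hasNext()) return null;
        Quote next = iterator.next().getValue();
        iterator.remove();
        return next;
    }
}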
“It’s the exchange execution experience that is the big differentiator for us. We’re finding that a lot of customers who come to us stay with us. There’s a real ‘stickability’ from customers because they haven’t experienced elsewhere the kind of execution that we can give them once we’ve matched the data rate we send to the rate they can consume safely”, he says.
Can randomization prevent ‘gaming’?
With this kind of growth in message volumes, some FX trading platforms are now introducing artificial delays into their matching processes, in a bid to ‘even the playing field’ and remove opportunities for high-speed gaming.
“We’ve got a pilot scheme starting with a number of buy side participants which include some who use high frequency trading strategies, they’re all really interested in this model”.
Dan Marcus, CEO of ParFX
Last year EBS introduced a process whereby incoming orders are batched and assigned a random delay of between one and three milliseconds before being matched. Thomson Reuters recently announced proposals along similar lines.
Another venue adopting a similar approach is the recently launched ParFX, which applies a random pause to every order entering its system.
Dan Marcus, CEO of ParFX, explains how this works.
“Every single message that comes into ParFX – whether that’s an order submission, cancel or replace – is subject to a randomized pause for any period between twenty and eighty milliseconds. We use this period because it’s meaningful enough to nullify anyone who is aiming to game, but meaningless enough for anyone who wants to actually trade spot FX for hedging purposes.”
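The pause Marcus describes is simple to sketch: every inbound message is held back by a uniformly random delay of between twenty and eighty milliseconds before it reaches the matching engine. The class and wiring below are illustrative assumptions only, not ParFX’s implementation.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;

// Illustrative sketch of a randomised pause applied to every inbound message
// (order submission, cancel or replace) before it is passed to the matching engine.
class RandomisedGateway {

    private static final long MIN_DELAY_MS = 20;
    private static final long MAX_DELAY_MS = 80;

    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    void onMessage(Runnable matchingEngineAction) {
        long delay = ThreadLocalRandom.current().nextLong(MIN_DELAY_MS, MAX_DELAY_MS + 1);
        scheduler.schedule(matchingEngineAction, delay, TimeUnit.MILLISECONDS);
    }
}

A delay in this range is long enough to make a pure speed advantage worthless, yet short enough to be irrelevant to a participant hedging genuine spot FX exposure, which is exactly the trade-off Marcus describes.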
According to Marcus, this is not necessarily an anti-HFT measure, but it does bring the focus back from pure speed-based to value-based plays.
“We’ve got a pilot scheme starting with a number of buyside participants which include some who use high frequency trading strategies, they’re all really interested in this model”, he says.
“Does that mean they’re going to try to game us? Not necessarily. Generally they’ll have a bunch of strategies, only some of which are based on speed. Others are based on relative value, which are perfect for our platform as they can operate in a benign environment that rewards intelligence.”
Many would assume that introducing an artificial delay would lead to wider spreads, but Marcus says this has not happened.
“We thought that might be the case as the vision is focussed on firmness of liquidity, i.e. what you see is not a mirage by virtue of disruptive HFT activity or duplicated on multiple platforms. But due to the high quality nature of the participants and the increasing network effects as we onboard we’ve seen tight spreads as well as excellent depth”, he says.
IRS technology
The technology behind the ParFX trading platform is based upon Trad-X, Tradition’s interest rate swaps (IRS) platform, which is designed to handle very large volumes of orders.
“We produce well over a billion messages a day on that platform, at around 80,000 messages per second. The high message traffic is due to the implied engine used on our CLOB which is required as IRSs are relative value products”, says Marcus.
“IRSs are a bit confusing in that they are low latency in relation to message traffic but traded on a comparatively low volume basis with large size. We took that technology and developed it as the basis for our FX platform, so we had a huge amount of headroom for message traffic. Where it got quite complicated was the messaging around the randomizer, because it generates more messages, i.e. message going into the randomizer, message going out, message hitting the platform. That was a challenge”.
Conclusion
For the individual FX trader accessing the markets via a smartphone or the multinational bank executing billions of dollars in FX transactions every day, performance is crucial. Every single participant in the market needs accurate prices and quality executions with minimal slippage.
The real challenge is how to achieve optimum performance at every level throughout design, development, testing and deployment. And the interesting thing here is that it is not necessarily all about trying to make everything go faster, as can be witnessed by the ParFX approach (and that of other major FX platforms) of slowing the market down with the aim of improving execution performance for the majority of participants, or LMAX Exchange’s approach of tuning the levels of data sent to certain clients.
In this article, Mike O’Hara, publisher of The Trading Mesh – in conversation with Jonas Bonér, Greg Young, Martin Thompson and Jan Macháček – investigates the core principles behind the recently published The Reactive Manifesto, and its relevance to system design in today’s Financial Markets.
Anyone who has been closely involved in designing and building high-performance trading applications within the last ten years or so, will no doubt be familiar with the principles of asynchronous, concurrent, distributed programming, and the need for incorporating scalability and fault tolerance into their designs. Such principles form the basis of reactive applications and reactive frameworks, and successful trading systems have always needed to follow this approach in order to be able to rapidly react to events, react to load, react to errors or failures, and react to users.
It is these four principles of system design – event-driven processing, scalability, resilience and responsiveness – that are at the core of The Reactive Manifesto, a document authored last year by an eminent group of technologists, including Jonas Bonér, Erik Meijer, Martin Odersky, Greg Young, Martin Thompson, Roland Kuhn, James Ward and Guillaume Bort.
The principles outlined in The Reactive Manifesto are of course not new. They have been used in a number of industries dealing with low latency, high throughput applications for some time. In fact, the Erlang programming language developed by Ericsson in the mid-1980s was specifically designed to be event-driven, scalable and resilient.
“ The important point is that you can build reactive applications in any language, using almost any tool. It’s more about adhering to the core principles and thinking about application design and application architecture.”
Jonas Bonér, CTO of Typesafe
So why is there a need for a new manifesto to describe these traits? Jonas Bonér, CTO of Typesafe, inventor of the Akka framework and the driving force behind the Reactive Manifesto, explains.
“Even though these are old principles, the need for the manifesto is there”, he says.
“We live in a world now that requires this approach for almost any application, particularly if it is to be put up on the Internet; if it needs to drive mobile devices; if it is to make the most of multiple processing cores; or if it needs to fit cloud computing infrastructures. The need to go reactive is even greater when the application needs to provide sub-second (or even sub-millisecond!) response times.
“But there is no single tool or single language that will do all of the jobs”, continues Bonér.
“At least not in a moderately complex or advanced application. As a result, a lot of sub communities have built up around certain tools and languages and they’ve developed their own strategies for dealing with specific challenges. We wanted to look at the whole landscape across all of these communities to see how people are really solving these problems, to try to distill the core principles behind all of these languages, features, tools and frameworks. That’s how we ended up with these four key traits of The Reactive Manifesto: event-driven, scalable, responsive and resilient. And the important point is that you can build reactive applications in any language, using almost any tool. It’s more about adhering to the core principles and thinking about application design and application architecture”, says Bonér.
The first version of The Reactive Manifesto was published in June 2013. Since then it has undergone various iterations and is now starting to be embraced particularly warmly by the .NET, Java, JavaScript, Haskell and Erlang communities. The Reactive Manifesto provides a common vocabulary to describe the widely differing terminology that each language and runtime brings.
But how does it apply specifically to the financial markets? Independent consultant and serial entrepreneur Greg Young, who coined the term “CQRS” (Command Query Responsibility Segregation), and has a background in designing and building gambling and algorithmic trading systems based around reactive principles, throws some light on this.
“I’d be willing to bet that in any financial institution the accounting and the back office systems are not written in a reactive style, even though they should be.”
Greg Young
“What’s interesting to me about The Reactive Manifesto in the finance space is that most people who’ve been building trading applications would look at it and see it as being self-apparent, because they’ve been doing things this way for decades”, says Young.
“Taking The Reactive Manifesto and bringing it to algorithmic traders is preaching to the converted. But there are a lot of other systems in financial organisations that absolutely need to realise what we’re talking about when we talk about reactive. I’d be willing to bet that in any financial institution the accounting and the back office systems are not written in a reactive style, even though they should be”.
How can more mainstream applications in banks and finance houses benefit from the reactive approach?
“In any financial organisation, everything can be seen as a series of facts that happen at points in time. Regardless of whether it’s a trade occurring, a position being cleared or a settlement failing, all of those things are facts that are happening at points in time. And they all fit very well into the reactive model”.
And in response to possible difficulties, Young adds:
“With back office settlements for example, you could have a bunch of events that sit in a batch-processed queue. And if you look at a large batch operation in the back office of a bank, adjusting that process for near real-time is inherently more complex than writing a stored procedure that gets run on a scheduler at midnight.
“But in order to completely re-design some of these applications, and to move from a synchronous approach to an asynchronous, parallelized-type design, you need what I would call higher-minded developers”, says Young.
“And then you will start finding all the same benefits of why people are building trading systems this way. How much cooler would it be if you could actually trigger operations in near real-time as opposed to having users running reports at the end of the day and looking for discrepancies? Particularly in areas like risk and compliance, especially counterparty risk”.
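Young’s observation that everything in a financial organisation is a series of facts occurring at points in time maps naturally onto an event-driven design. The sketch below, using only the JDK’s Flow API and invented event types, shows back-office facts modelled as immutable events and reacted to as they arrive rather than discovered by an end-of-day report.

import java.time.Instant;
import java.util.concurrent.SubmissionPublisher;

// Illustrative only: back-office facts modelled as immutable events and handled as they occur.
class BackOfficeEvents {

    sealed interface Event permits TradeBooked, SettlementFailed {}
    record TradeBooked(String tradeId, Instant at) implements Event {}
    record SettlementFailed(String tradeId, String reason, Instant at) implements Event {}

    public static void main(String[] args) {
        SubmissionPublisher<Event> events = new SubmissionPublisher<>();

        // React to each fact as it happens, e.g. flag a failed settlement immediately.
        var done = events.consume(event -> {
            if (event instanceof SettlementFailed failed) {
                System.out.println("ALERT: settlement failed for " + failed.tradeId()
                        + " (" + failed.reason() + ")");
            }
        });

        events.submit(new TradeBooked("T-1001", Instant.now()));
        events.submit(new SettlementFailed("T-1001", "missing settlement instructions", Instant.now()));

        events.close();   // no more facts in this example
        done.join();      // wait for the asynchronous consumer to drain
    }
}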
Jan Macháček, CTO of Cake Solutions, an enterprise software solutions provider that specialises in reactive application design, explains that it is not just the wholesale and investment banking sector that can benefit from The Reactive Manifesto approach; there are many advantages to be gained in retail banking too.
“Reactive applications play a significant role in retail banking, particularly in mobile”, he says.
“The customers expect information delivered to their mobiles quickly & reliably. Whenever a mobile banking or mobile payment system fails, the damage to the bank’s brand can be far-reaching. And so, the systems that drive these front-line applications have to follow the reactive principles.
“The pressure to adopt the reactive programming style grows even bigger when one considers the possibilities and challenges of new currencies and new patterns of customer expectations”, continues Macháček.
“ Understanding and applying The Reactive Manifesto to retail applications allows the banks to offer new and innovative services.”
Jan Macháček, CTO of Cake Solutions
“The underlying principle is again responsiveness & reliability for the consumers, which means these systems need to be scalable and event-driven.
“Understanding and applying The Reactive Manifesto to retail applications allows the banks to offer new and innovative services, particularly in the AML and KYC arenas. Identifying and knowing your customer needs to be seamless: no one wants to type passwords and memorable words on the go, and yet the banks need to be certain that the customer’s money is safe. One can think of these challenges as events—pieces of a puzzle—that must fit together perfectly; these events must be processed in a timely manner by a system that is resilient and that can scale up and down to cope with the demand”.
So what are the key challenges to wider adoption of The Reactive Manifesto principles across the financial industry?
High-performance & low-latency computing specialist Martin Thompson, who designed the Disruptor open-source concurrent programming framework and is another co-author of The Reactive Manifesto, shares his insight on this topic.
“There is an obsession in our industry with synchronous design, which fundamentally limits so much of what we do”, says Thompson.
“But getting people out of that is an interesting and very tricky challenge, because it becomes so instilled and indoctrinated in people. Synchronous designs tend to be quite quick to get the first iteration of something up, but then they don’t cope and they don’t scale beyond that point. For example, if you get error conditions, synchronous designs tend to get a lot messier in the code, and then the code gets more coupled, it gets more complex as you deal with error conditions and side effects. As you scale up the development cycle, things get slower and slower, from both the delivery and the system performance perspective. Whereas with asynchronous system design, you start off with it being a bit harder because you’ve got a few more things to contend with initially, but once you’re over the initial curve, you get a nice decoupled design, you’ve got the ability to handle failure and the discipline that comes behind it. Once you’ve got the basic fundamental building blocks in place, your delivery cycles get faster and you can deal with failure and with performance issues better. But there’s a greater activation energy needed to begin with”, he says.
“It’s like many things. We seem to live in this instant gratification society, where people take the easy option to begin with, but it’s actually the harder option in the long term”.
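A small illustration of the contrast Thompson draws, using only the standard library: the synchronous version blocks the calling thread on each dependency in turn, while the asynchronous version runs the same lookups concurrently and handles failure in one place, as a value. The ‘services’ here are placeholders standing in for anything slow.

import java.util.concurrent.CompletableFuture;

// Contrast between a blocking, synchronous call chain and an asynchronous, composed one.
// The "services" are placeholders for anything slow: a pricing service, a risk check, a database.
class SyncVersusAsync {

    static double fetchPrice(String symbol)  { return 1.2345; }     // pretend remote call
    static double fetchLimit(String account) { return 1_000_000; }  // pretend remote call

    // Synchronous: each call blocks the calling thread, one after the other.
    static boolean checkSync(String symbol, String account, double quantity) {
        double price = fetchPrice(symbol);     // blocks
        double limit = fetchLimit(account);    // blocks
        return price * quantity <= limit;
    }

    // Asynchronous: the two lookups run concurrently, the combination is declared up front,
    // and a failure in either branch is handled in a single place as a value.
    static CompletableFuture<Boolean> checkAsync(String symbol, String account, double quantity) {
        CompletableFuture<Double> price = CompletableFuture.supplyAsync(() -> fetchPrice(symbol));
        CompletableFuture<Double> limit = CompletableFuture.supplyAsync(() -> fetchLimit(account));
        return price.thenCombine(limit, (p, l) -> p * quantity <= l)
                    .exceptionally(error -> false);   // e.g. fail the check if a lookup fails
    }

    public static void main(String[] args) {
        System.out.println(checkSync("EURUSD", "ACC-1", 100_000));
        System.out.println(checkAsync("EURUSD", "ACC-1", 100_000).join());
    }
}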
Does that mean that the reactive approach applies to how you run your teams and manage your development projects as well as how you design your code? It certainly has interesting effects, according to Thompson. He refers to a theory known as Conway’s Law, developed by computer scientist Melvin Conway in the late 1960s, which states that any system takes on the shape of the organisation that built it.
“Whether you’re building systems in software, or building organisational systems to build that software, or building systems that interact with the business in some way, it’s all the same, you’ve got to look at the system holistically and work out how to make it more efficient and more effective in how it works”, he says.
“It’s interesting that the more mainstream development could benefit hugely from The Reactive Manifesto, but it’s almost not in their interest because they don’t have the extreme requirements that bring a lot of these issues to the surface, and they tend to employ developers who are driven by what looks good on their CV. And unfortunately as a result, with Conway’s law coming into effect, we get systems designed from CV-driven development”.
Bonér agrees with this analysis.
“With Conway’s Law, the application tends to take on the shape of the company’s organizational structure, so if you have a distributed team or a large team that’s split up, then you need to have really good protocols so that those components can talk to each other. And that’s also something that having truly reactive, event-driven applications enables”, says Bonér.
With over 4,000 signatories since it was published, The Reactive Manifesto is certainly building momentum and starting to gather a following. But where does it go from here?
“The Reactive Manifesto should be – and is – a living document”, says Bonér.
“We want it to evolve over time; we want people to contribute to it, and that’s why we have it on GitHub, where people can create pull requests. That’s already happening from other communities, like the Erlang people for example.
“The idea of having people sign the manifesto is to try to create a passionate community. By signing it and adding a ribbon to your website, you are showing that you actually take a stand, that you believe in this way of doing things and believe that this is the future, and that you seek change. The more people who are into it, the easier it is to create this movement. That’s how humans work, we want to gather around things, we want to feel that we have a higher purpose, and we like communities”, says Bonér.
“People ask if this is just the latest fad, and that’s a fair question, so we need to look at how we educate people on this, so they start to realise what is important and why.”
Martin Thompson, High-performance & low-latency computing specialist
From Martin Thompson’s perspective, The Reactive Manifesto is a starting point.
“I see this as a set of principles that starts a discussion about what’s important”, he says.
“People ask if this is just the latest fad, and that’s a fair question, so we need to look at how we educate people on this, so they start to realise what is important and why. I think this is going to be one of the challenges in describing something like The Reactive Manifesto, in that it has a lot of really important concepts in it and when you put them all together you can build very effective systems that are well-suited to the finance space.
“It’s interesting that the people who are very competent in financial technology, for example people who have built exchanges or trading systems dealing with high volume flow, they instinctively already get this because they’ve been through a lot of the battles. And the feedback I get from some of those people is that The Reactive Manifesto is a nice articulation of all of the important issues one faces when building those systems.
“But the subtlety of understanding as to why these things are important in a wider context is the next challenge”, concludes Thompson.
In this article, Mike O’Hara, publisher of The Trading Mesh – talks to Mike Schonberg of Quincy Data, Laurent de Barry and Nicolas Karonis of Enyx and Henry Young of TS-Associates, about how and where FPGA technology is increasingly being used in low-latency trading operations, beyond the traditional areas of market data acquisition and distribution.
Introduction
FPGA (Field Programmable Gate Array) technology, having been used by latency-sensitive practitioners in the financial markets for well over five years now, can probably be considered fairly mature. The performance and reliability of FPGAs is well proven, particularly around market data handling, as processes such as acquisition, filtering, normalization and distribution of market data can all now be handled in ultra-low latency using a variety of FPGA-based solutions.
But beyond market data, how is the usage of FPGA technology evolving in the world of low-latency trading? What else are firms doing – or trying to do – with FPGAs? And how are they going about it?
FPGA in microwave networks
One area where FPGA technology is starting to become prevalent is in microwave networking, particularly as more and more such networks are springing up between major trading locations in both the US and Europe.
“You can use FPGAs for implementing novel compression techniques, for compressing Ethernet headers for example, and for having different encodings of data on the network that are more efficient from a bandwidth perspective.”
Mike Schonberg, Director Market Data Technology at Quincy Data
Firms that rely on low-latency connectivity are attracted to microwave networks because they provide the fastest point-to-point path between trading venues. However, there are numerous challenges associated with microwave. Not least the fact that bandwidth is so constrained compared with fibre. So how can FPGAs help firms make better, more efficient use of that bandwidth?
“You can use FPGAs for implementing novel compression techniques, for compressing Ethernet headers for example, and for having different encodings of data on the network that are more efficient from a bandwidth perspective”, explains Mike Schonberg, Director Market Data Technology at Quincy Data, a specialist provider of ultra-low latency market data services.
“Also, you might potentially need to share this very limited amount of bandwidth in a completely fair way between multiple end-users. Although there are commercial networking hardware solutions available that address this problem, they don’t work particularly well in this environment because they’re not optimised for low latency, which is why we use FPGAs for this task”, he says.
“FPGAs provide the flexibility to implement your own hardware to address this issue. They can also work with the rather unique network topologies associated with microwave and wireless networks”, he adds.
With a range of FPGA-enabled network hardware now available – Arista and Solarflare are two examples of vendors offering switches and network adapters containing FPGAs – Schonberg points out the importance of how and where the FPGA fits into the overall network infrastructure.
“The way the FPGA ties into the architecture is an important factor because if you make the wrong decisions early on, you can potentially restrict what you are able to do with the technology going forward”, he says.
“When you have an FPGA built into a switch, for example, much of the behaviour of the switch can’t necessarily be altered. If a switch is expecting normal Ethernet packets, you couldn’t do something like header compression for example. Whereas if you build an FPGA solution from the ground up, you have complete control over how the data moves through the FPGA and what you do with that data”, says Schonberg.
Using FPGAs to control and allocate bandwidth in this way can benefit both the operator of the microwave network and its customers: the limited bandwidth can be shared across multiple users and customers both fairly and transparently, and SLAs offering greater performance can be clearly defined and met.
FPGA in OS Kernel and TCP Bypass
The key factor that allows FPGAs to offer such massive improvements in performance in electronic trading, is that they enable processes traditionally handled by software to run directly in hardware on the chip itself, effectively enabling these processes to run at wire speed. This contrasts with software, where there is typically an operating system (OS) to contend with and an OS kernel that controls access to CPU, memory, disk I/O, and networking. In software, if the OS decides it needs to do something important, it can interrupt running jobs and introduce unpredictable delays. This is one of the reasons why it is very difficult to obtain jitter-free, deterministic performance in software.
“ Most of today’s TCP offload solutions remove the operating system kernel from the critical path by putting all the stress of handling the network protocols on the server’s CPU. However, this approach to kernel bypass is not a miracle cure because it remains CPU-intensive.”
Laurent de Barry, Co-Founder & Head of Application Engineering at Enyx
With an FPGA on the other hand, there is no operating system, so those types of problems do not exist. Wherever the FPGA can be used to bypass the kernel, big improvements in performance – and more importantly, deterministic performance – can be achieved, for example when using an FPGA to bypass the OS kernel when handling the TCP/IP stack.
Laurent de Barry, Co-Founder & Head of Application Engineering at Enyx, a provider of ultra-low latency solutions based around FPGA technology, explains some of the recent advances in full TCP kernel bypass via FPGA.
“Most of today’s TCP offload solutions remove the operating system kernel from the critical path by putting all the stress of handling the network protocols on the server’s CPU”, he says.
“However, despite the marketing messages you might hear, this approach to kernel bypass is not a miracle cure because it remains CPU-intensive. Some network hardware vendors now use kernel bypass technology in their ‘low-latency’ NICs to try to avoid bottlenecks by taking the whole network stack out of the kernel and into the user space. But the problem with this approach is that the network stack is still running on the CPU and is therefore loading the CPU”, says de Barry. See figure one below.
“Everything you can offload from the CPU helps improve latency and – more importantly – reduce jitter”, continues de Barry. “So our solution is to place the full TCP stack in hardware. That way the CPU doesn’t have to worry about TCP any more as all of those processes are offloaded to the FPGA”. See figure two below.
“The main advantage with this approach is that we don’t use the CPU at all, it’s all done on the FPGA card”, says de Barry.
FPGA in pre-trade risk
Another area where FPGAs are being used effectively is in pre-trade risk. Increasingly in today’s electronic markets, and particularly since the introduction in the US of the SEC’s “market access rule” 15c3-5 a few years ago, orders are required to go through multiple checks to satisfy risk profiles before they are sent on to trading venues. FPGAs provide the ideal architecture for this because dozens of different pre-trade checks on a single order can be computed in parallel, all in less than a microsecond.
Nicolas Karonis, Business Development Director at Enyx, explains how this works.
“When firms originally started doing pre-trade risk checks via FPGA around three or four years ago, they jumped on the SEC regulation 15c3-5, which mandated a simple list of ‘fat finger’ checks based only upon information that was held within the order itself, i.e. quantity, price, total value of the order and so on.”
“Our approach here is to have the full order book managed by the FPGA, allowing the complex compliance needs requiring calculations on positions, computing with external arrays or cross-correlating between assets, all to be handled pre-trade.”
Nicolas Karonis, Business Development Director at Enyx
“However, it gets more complex when you need more than that. What you can’t check by just looking at the order itself is what other orders are already on the book, what executions you’ve done previously, etc”.
This is the real challenge, according to Karonis.
“Our approach here is to have the full order book managed by the FPGA, allowing the complex compliance needs requiring calculations on positions, computing with external arrays or cross-correlating between assets, all to be handled pre-trade.”
“That’s not straightforward. You need to make sure that all of the information, all of the time, can be accessed within the FPGA without you having to go out and look up a database for example. Now, with properly designed solutions in hardware (with clever integration of Quadruple Data Rate memories that allow for simultaneous read and write) and properly optimized VHDL code, these problems are overcome, allowing essentially the same flexibility as software solutions for order management”, he says.
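The simple end of the spectrum Karonis mentions – ‘fat finger’ checks based only on information carried in the order itself – is easy to sketch in software. On an FPGA the point is that many such checks run in parallel within a fraction of a microsecond, which a sequential sketch like the one below (with made-up thresholds) does not attempt to reproduce.

import java.util.ArrayList;
import java.util.List;

// Illustrative "fat finger" pre-trade checks of the kind mandated by SEC rule 15c3-5,
// using only information carried on the order itself. Thresholds are invented; on an
// FPGA these checks would be evaluated in parallel rather than in sequence.
class PreTradeChecks {

    record Order(String symbol, long quantity, double price) {
        double notional() { return quantity * price; }
    }

    static final long   MAX_QUANTITY = 1_000_000;
    static final double MAX_PRICE    = 10_000.0;
    static final double MAX_NOTIONAL = 50_000_000.0;

    static List<String> check(Order order) {
        List<String> violations = new ArrayList<>();
        if (order.quantity() <= 0 || order.quantity() > MAX_QUANTITY) violations.add("quantity out of range");
        if (order.price() <= 0 || order.price() > MAX_PRICE) violations.add("price out of range");
        if (order.notional() > MAX_NOTIONAL) violations.add("notional exceeds limit");
        return violations;   // an empty list means the order may be forwarded to the venue
    }
}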
Measuring FPGA latency
With more and more processes moving from the CPU to the FPGA in the race to ever lower and more deterministic latency, one of the challenges that firms now face is how to measure point-to-point latencies in these nanosecond domains.
The answer lies in instrumentation at the FPGA firmware level, according to Henry Young, CEO of TS-Associates, a supplier of precision instrumentation solutions for latency sensitive trading systems.
“In the good old days of single core servers, you could have each functional block in the trade flow – your feed handler, client connectivity gateway, algos, SOR and execution gateway – all on their own dedicated servers”, he says.
“Then along came multicore and shared memory communication. So all of a sudden you lost visibility, because traditional instrumentation techniques are based around network taps or SPAN ports, i.e. looking at packets on network connections. If you don’t have physical network connections between these various components – because they’re doing shared memory communication on the multicore server – you can’t peer inside. That’s why we launched the Application Tap, which accurately timestamps metadata within the applications themselves”, he says.
“The idea is that you can now get instrumentation that zooms into the individual functional sub-components that are all sharing the same FPGA silicon. Which is tremendously exciting.”
Henry Young, CEO of TS-Associates
Young and his team at TS-A are now working with Enyx to embed the Application Tap functionality into the FPGA itself, defining instrumentation hooks in the FPGA firmware to emit time-stamped events that are compatible with the Application Tap instrumentation format. The main advantage of this approach is that it gives clients real granularity of visibility right down at the FPGA level, says Young.
“The idea is that you can now have instrumentation that zooms into the individual functional sub-components that are all sharing the same FPGA silicon. Which is tremendously exciting for people who care about this stuff”, he says.
What next?
Ever since FPGAs were first introduced into high frequency and low latency trading environments, it has been the dream of many such firms to have the entire trading platform running on the FPGA, at wire speed.
The last bastion is having the trading algorithm itself running on the FPGA. This has always presented something of a challenge as, for a variety of reasons, FPGAs do not lend themselves to running any kind of complex trading logic.
This could be about to change as FPGA hardware evolves. The latest generation of FPGAs now comes equipped with its own co-processor in the form of an ARM core, which can be programmed – in an accessible way – to do a variety of tasks that until now have not been possible on FPGA.
It will be interesting to see how far firms can innovate utilising this new generation of FPGAs, particularly around the trading algorithms themselves.
In this article, Mike O’Hara, publisher of The Trading Mesh – discusses the subject of DevOps with Jim Davies of MoneySuperMarket.com, Peter Evison and Ani Chakraborty of Cake Solutions and Independent Consultant Tom Stockton, looking at some of the best practices firms are using to bridge the gap between development and operations.
Introduction
Cloud-based infrastructure is evolving rapidly. Compared with just a couple of years ago, there is now a much wider range of options around how software can be deployed in the cloud. While this has created a wealth of new opportunities, there are also some emerging challenges, not least of which is the general shortage of people who are skilled in deploying applications on these new platforms. In some senses, the technology available has overtaken the skills of the people on the ground, resulting in a gap between development and operations, the so-called ‘DevOps skills gap’.
The net result of this is twofold. First, application developers are not able to get their innovations to market quickly enough; they might have features and functions ready but may struggle to deploy them. And secondly, cloud providers are becoming frustrated by the fact that they have all of these new infrastructure initiatives and innovations but their clients are unable to take advantage of them.
“Moving to the cloud brings a lot of benefits, but sourcing the skills for that to happen takes us away from the more traditional skills base, so in that sense when we started it was initially quite challenging.”
Jim Davies, Infrastructure Tech Lead, DevOps/Sysadmin at MoneySuperMarket.com
The DevOps skills gap
It is probably fair to say that there has always been a gap – some might say a gaping chasm – between development and operations. That, in itself, is not new. What is new is the speed at which new technology and infrastructure platforms are becoming available – particularly in the cloud – and the fact that decisions need to be made regarding deployment much earlier in the development cycle.
Jim Davies, Infrastructure Tech Lead, DevOps/Sysadmin at MoneySuperMarket.com, the leading price comparison portal, explains how things are changing within his organisation as it increasingly adopts cloud-based infrastructure.
“Moving to the cloud brings a lot of benefits, but sourcing the skills for that to happen takes us away from the more traditional skills base, so in that sense when we started it was initially quite challenging”, he says.
The approach taken by Davies and his colleagues at MoneySuperMarket was to put together small delivery teams known as ‘squads’, able to operate fairly autonomously, each containing a technology lead, an architect, a group of developers, a couple of QA/testers and sometimes a business analyst.
“Previously when anyone wanted infrastructure services, they would come to the central Infrastructure & Operations (I&O) team, where resource time from sys-admins and server engineers would be ‘rented out’ to build servers, put network changes in place and build out the applications on the servers, before handing over that service”, explains Davies.
“Although that’s the way things were set up traditionally, the goal now is to have the sys-admins & systems engineers, the DevOps function, within those small delivery squads”.
What sorts of skills are typically required in these roles?
“I’ve found that it’s mainly Java developers, classically trained software engineers, who tend to be the strongest candidates. Managing infrastructure today is not about servers, not about cables, not about contracts, it’s about managing the code base, that’s the be-all and the end-all”, says Davies.
“Understanding inheritance and the way we bootstrap services into new servers, the way we use tools like Puppet, that’s how the skills are changing”, he says.
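To make that concrete, here is a minimal sketch of what ‘bootstrapping services into new servers’ can look like when infrastructure is treated as a code base. It is written in Python against the AWS boto3 SDK purely for illustration; the AMI, region, tag values and Puppet server hostname are assumptions, not details of MoneySuperMarket’s actual environment.

import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")  # illustrative region

# First-boot script that installs the Puppet agent (this assumes the Puppet
# apt repository is already configured in the base image), so the server's
# configuration comes from version-controlled manifests rather than manual work.
USER_DATA = """#!/bin/bash
apt-get update
apt-get install -y puppet-agent
/opt/puppetlabs/bin/puppet agent --server puppet.example.internal --waitforcert 60
"""

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical base image
    InstanceType="t3.medium",
    MinCount=1,
    MaxCount=1,
    UserData=USER_DATA,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "squad", "Value": "web-delivery"}],  # hypothetical tag
    }],
)

print("launched", response["Instances"][0]["InstanceId"])

Once the instance checks in with Puppet, what it runs is determined entirely by manifests held in source control, which is the ‘managing the code base’ discipline Davies describes.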
Cross-functional approach
Having smaller teams that take care of everything – including developing the features, deploying the applications and maintaining them on a cloud-based infrastructure such as AWS – rather than having dedicated teams performing a single function can make a lot of sense, according to Peter Evison, Commercial Director at enterprise software solutions provider Cake Solutions.
“We recognised that DevOps needed to be adopted and be seen as a skill within our development teams and not a process undertaken by a department or specific individuals.”
Peter Evison, Commercial Director at Cake Solutions
Evison agrees that DevOps is an area where good talent is hard to find. This prompted the firm to develop a solution delivered in-house through its Cake Academy process.
“We recognised that DevOps needed to be adopted and be seen as a skill within our development teams and not a process undertaken by a department or specific individuals”, says Evison.
“Once we understood and implemented this, the reliance was reduced and the service levels increased”.
Evison’s colleague Ani Chakraborty, Technical Director at Cake Solutions, explains how Cake enables its team members to become Cross Functional Engineers.
“All our teams have DevOps skills within the team and can deploy, integrate and maintain their clients’ applications independently from the rest of the company, almost like its own start-up company”, he says.
“We’ve achieved this by developing a rock-solid, cloud-agnostic CI/CD environment that empowers our engineers to deploy directly to the client’s chosen environment without risk. We also adopted new tools that help us to monitor and audit this, making the whole process quicker and more robust than traditional methods. Finally, we set about training our teams in DevOps skills and cloud environments, giving them a fuller appreciation and understanding of the end-to-end development pipeline”.
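Cake has not published the internals of that pipeline, so the sketch below is only an illustration of the underlying idea, with hypothetical class names: the release step is written against a small, provider-agnostic interface, so the same pipeline code can target whichever environment the client has chosen.

from abc import ABC, abstractmethod

class DeployTarget(ABC):
    """The small surface the CI/CD pipeline codes against, whatever the cloud."""

    @abstractmethod
    def deploy(self, artifact: str, version: str) -> None: ...

    @abstractmethod
    def healthy(self) -> bool: ...

class AwsTarget(DeployTarget):
    def deploy(self, artifact: str, version: str) -> None:
        # In a real pipeline this might push the artifact and trigger a rollout.
        print(f"deploying {artifact}:{version} to the AWS environment")

    def healthy(self) -> bool:
        # In a real pipeline this might poll a load balancer or health endpoint.
        return True

def release(target: DeployTarget, artifact: str, version: str) -> None:
    """One pipeline step, identical regardless of the client's chosen cloud."""
    target.deploy(artifact, version)
    if not target.healthy():
        raise RuntimeError(f"post-deploy health check failed for {artifact}:{version}")

if __name__ == "__main__":
    release(AwsTarget(), "web-frontend", "1.4.2")

Because the pipeline only ever sees DeployTarget, supporting another cloud means adding another implementation rather than rewriting the release process.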
According to Evison, this is an evolutionary process.
“I don’t believe in the word ‘perfect’ so our process will continually improve and evolve”, he says.
“I do however appreciate that gaps in skills or resource can be dealt with in many ways and the best approach is to put bleeding edge technology in the hands of a highly talented team”.
Encouraging collaboration
Independent Consultant Tom Stockton, a specialist in DevOps, points out that although it is effective to have cross-functional teams, it isn’t effective to have every member of the team working on creating DevOps tools. His view is that DevOps specialists can create tools to be shared with the wider team, using the team’s feedback to enhance and modify the toolset as required.
“You do need specialists in certain subjects”, he says.
“However, it’s probably more important to put the effort into developing some standard tools and then to correctly document and distribute that knowledge around the team by having good workshops and a good interface to the tools that you’ve written”, he says.
This is particularly important as cloud-based infrastructures are constantly evolving, according to Stockton.
“Awareness of DevOps is important across the organisation. If you don’t have that awareness then it’s going to be very difficult to get your application out there because of all the traditional reasons why code doesn’t get deployed.”
Tom Stockton, Independent Consultant
“With AWS for example, you might develop a solution that works but then you may have to re-develop it because they’ve changed either the service or the API. You may be given some notice on it but you can end up re-developing a solution that you maybe only worked on three or six months ago. So you have to be ready and you have to be pretty agile. By writing good code in the first place, it makes it easier for you to maintain that code well. And you’re better coming at it from a development point of view than you are from a systems admin point of view. A developer will always write better code than a sys admin can”, says Stockton.
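One way of acting on that advice is to keep every direct call to the provider’s SDK inside a single thin wrapper module, so that when the service or API changes the re-development is confined to one place. The sketch below uses boto3 and S3 simply as an example; the module and function names are hypothetical.

import boto3

# cloudstore.py: the only module in the code base that talks to the provider
# SDK directly. If the API changes, this wrapper is re-developed and its
# callers are left untouched.
_s3 = boto3.client("s3", region_name="eu-west-1")  # illustrative region

def upload_artifact(bucket: str, key: str, path: str) -> str:
    """Upload a build artifact and return the URI callers should record."""
    _s3.upload_file(path, bucket, key)
    return f"s3://{bucket}/{key}"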
Awareness of DevOps is important across the organisation, according to Stockton.
“If you don’t have that awareness then it’s going to be very difficult to get your application out there because of all the traditional reasons why code doesn’t get deployed. If there’s a disconnect between development and operations then you’re losing the benefit of writing all that code”, he says.
“But the level of understanding can be different for different people. For example, I would consider myself a DevOps engineer and I’ve developed a certain set of tools but I don’t necessarily need to be the person that always runs those tools. I can take them to the developers who I consider to be better or more technically capable than I am and they can easily go and run them, be effective using them and even give me feedback on them if they need to. To me, it’s about collaboration, sharing the knowledge and using the great community and open source tools that are out there in the right way”.
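A ‘good interface’ to such tools can be as simple as a self-documenting command line that other engineers can run, and give feedback on, without involving the author. A minimal, hypothetical sketch using Python’s standard argparse:

import argparse

def main() -> None:
    # Self-describing arguments and the built-in --help output are often all
    # another team needs to run the tool themselves and feed back improvements.
    parser = argparse.ArgumentParser(
        prog="deploy-tool",
        description="Release a built artifact to a target environment.",
    )
    parser.add_argument("artifact", help="name of the artifact to release")
    parser.add_argument("version", help="version tag to release")
    parser.add_argument("--env", choices=["staging", "production"],
                        default="staging", help="target environment")
    args = parser.parse_args()
    print(f"releasing {args.artifact}:{args.version} to {args.env}")

if __name__ == "__main__":
    main()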
Bridging the skills gap
Ani Chakraborty of Cake Solutions sums things up.
“The key point about DevOps is that it allows businesses to deliver value to their end customers faster, maintaining the flow of delivery”, he says.
“It enables the infrastructure, operations and development teams to communicate and collaborate better together in delivering software. In that way, it’s actually a software development methodology that helps to deliver real value, starting from development of the feature to taking that feature up to production”.
“The key point about DevOps is that it allows businesses to deliver value to their end customers faster, maintaining the flow of delivery.”
Ani Chakraborty, Technical Director at Cake Solutions
The key to bridging the skills gap is to focus on value, according to Chakraborty.
“The important thing is having teams that deliver software, tools, techniques and ways of working that empower other teams to deliver value”, he says.
“Rather than having set tools and techniques, look at the context. Look at the organisation, its structure, its motivation, what it’s actually trying to achieve and, most importantly, what business value needs to be delivered. Everything else fits around that”, says Chakraborty.
Conclusion
Is it possible to fully bridge the DevOps skills gap? Almost certainly.
Although gaps have always existed, and will undoubtedly continue to exist, between development and operations, the firms best placed to cross the chasm are those that adopt four key elements: collaboration, communication, a cross-functional approach supported by appropriate training, and a focus on value in the software delivery chain.