Compared with other industries, the financial markets sector has been relatively slow to adopt cloud. Much of the inertia has been due to concerns – whether real or perceived – around issues such as complexity, security, regulation/compliance and data sovereignty.
Recently, however, with these concerns steadily being addressed, cloud has become much more viable in the capital markets industry. Adoption of cloud-based solutions is accelerating rapidly for a range of business applications, across the back office, the middle office and, increasingly, the front office.
One noticeable outcome of this trend is that both vendors and financial institutions are increasingly bringing products to market that are offered ‘as-a-Service’, requiring negligible up-front investment in infrastructure by end-customer firms.
In this Financial Markets Insights report from Equinix and The Realization Group, Mike O’Hara learns from Judith Swan of Microsoft, Catalina Vazquez of Refinitiv, Seetharam Gorre of Pico, Felix Grevy of Finastra, Gordon McArthur of Beeks Financial Cloud, Isabel Pitt of Lloyds Bank, Amit Bothra of PwC and James Maudslay of Equinix about the challenges and opportunities of utilising public, private and hybrid cloud to build out unique ‘as-a-Service’ offerings.
Introduction
Talk to any technology or solutions vendor in the capital markets space and they will tell you the same story: the move towards cloud in the sector has accelerated rapidly over the last couple of years, and it looks set to continue.
Undoubtedly, in 2020, this trend has been expedited by the global pandemic, as firms realise that they need technology available for staff working from home and collaborating remotely via cloud-based services. As a result, the cloud conversation has shifted rapidly from being technology-driven to a high-priority discussion at executive management and board level. And migrations that might previously have been planned to take a number of years are in many cases now happening in a matter of months, as the possibilities that cloud offers in terms of operational resilience have never been more important. But this is by no means the whole story, as most financial markets institutions had already started to adopt the cloud for various purposes prior to the onset of Covid.
So where are financial markets firms currently in their adoption of cloud and ‘as-a-Service’ products, and how is that journey evolving?
The Cloud Journey
There’s still a long way to go, particularly for larger institutions, according to Gordon McArthur, Chief Executive Officer of Beeks Financial Cloud, a company that offers low-latency compute, connectivity and analytics to global capital markets and financial services firms.
“Cloud is still in its infancy in certain parts of capital markets,” says McArthur. “Some of the smaller firms have already embraced cloud Infrastructure-as-a-Service or Software-as-a-Service, but a lot of the bigger firms are still only starting the journey. But every big organisation in our space that has not already started this in earnest is looking at how they take those first steps. And the start of the journey is the most difficult bit because they have these big legacy footprints that are business mission-critical.”
Felix Grevy, Vice President, Product Management at Finastra, one of the world’s largest global fintech companies, agrees that the move towards the cloud is accelerating. “When we started discussions two or three years ago, most of the banks were saying it was not about if they were going to go to cloud but when. There was some resistance, but also acknowledgement that this is the future,” he says. “Then we started to see banks move some non-business applications, like Office, to the cloud as a first step, while keeping core systems on-premises, managed by their IT. That is now changing.”
The buy side has been faster to embrace cloud within the financial industry, according to Seetharam Gorre, Chief Information Officer at Pico, a global provider of technology services for financial markets.
“The buy side initially started using cloud in the front office for data crunching,” he says. “Until a few years ago, if firms needed to analyse big data sets like alternative data, credit card data or geo and weather data with high-powered compute, and run analytics against that data, they would need a whole data farm. Time to market was a problem. And because those workloads are discrete and not continuous, investment in compute did not make any sense.”
This became an ideal use case for the cloud, says Gorre. “Because there’s no IP in these third-party data sets, the buy side started leveraging cloud heavily on the quantitative side by shipping their binaries – not their source code – to the cloud, doing whatever compute they wanted to do, bringing the result set on-prem, and then doing their analysis in-house.
“It makes no sense to have a data centre running all of this stuff,” continues Gorre. “One, it’s not so time sensitive. Two, you can benefit from the elasticity that you get from the cloud, because you probably want to size the system to have at least 300 to 400 percent headroom. And that’s a lot of compute sitting in the data centre idle, and depreciating at a fast rate.”
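To put Gorre’s headroom point in concrete terms, here is a minimal back-of-envelope sketch in Python. Every figure in it (core counts, prices, burst hours) is an illustrative assumption rather than a number from Pico or this report.

```python
# Illustrative only: compare sizing on-prem compute for peak demand (the
# 300-400 percent headroom described above) against keeping a steady-state
# footprint on-prem and renting the burst capacity from a public cloud.

AVG_CORES = 200                   # assumed steady-state requirement
PEAK_MULTIPLIER = 4.0             # roughly 300-400 percent headroom over the average
ONPREM_COST_PER_CORE_YEAR = 300   # assumed all-in cost: hardware, power, space, support
CLOUD_COST_PER_CORE_HOUR = 0.05   # assumed on-demand price
BURST_HOURS_PER_YEAR = 500        # assumed hours per year the extra capacity is actually used

sized_for_peak = AVG_CORES * PEAK_MULTIPLIER * ONPREM_COST_PER_CORE_YEAR

burst_cores = AVG_CORES * (PEAK_MULTIPLIER - 1)
hybrid = (AVG_CORES * ONPREM_COST_PER_CORE_YEAR
          + burst_cores * CLOUD_COST_PER_CORE_HOUR * BURST_HOURS_PER_YEAR)

print(f"On-prem sized for peak:             ${sized_for_peak:,.0f} per year")
print(f"On-prem steady state + cloud burst: ${hybrid:,.0f} per year")
```

The exact figures matter less than the shape of the comparison: the idle headroom is the part that sits in the data centre depreciating, and it is also the part that elasticity removes.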
Factors to Consider When Moving to the Cloud
Clearly, there are many benefits to utilising the cloud. But what are the key factors that determine which applications, data or services are best suited to a cloud environment?
Suitability
“Firstly, we have to recognise that cloud is not suitable for everything,” points out James Maudslay, Senior Manager, Segment Marketing at Equinix, the world’s digital infrastructure company. “It always interests me when I hear firms say they’re going ‘cloud first’, because there are certain applications that, unless and until they are rewritten, cannot go in the cloud. Also, you have to consider performance. To what extent can you meet the performance needs that the company has for the system, and can cloud do the job?”
The likelihood of success is not a factor that people often talk about, says Maudslay. But in a market that operates 24/7, it would be a very brave CIO who started by moving the most complicated applications in the earlier stages of a migration, he says.
“The complexity of the system, the interconnect that it needs to verify the parties, all of that has to be considered. What are the components that make the system function? Where are all the other systems it has to interact with, and does that put constraints on whether you can move things to the cloud, or not?”
Need for compute power
Maudslay continues. “Firms are putting things like analytics, Value at Risk calculations and so on into the cloud, to take advantage of the almost unlimited processing power. With risk modelling, if you’re thinking of throwing heavyweight compute at it, cloud is absolutely perfect. On the other hand, if it’s something that might require access from thousands of users across a distributed network, cloud may not be as appropriate.”
Felix Grevy cites the example of multi-terabyte CVA and XVA reports, which are extremely heavy in terms of computation. “Vendors are now providing this as a service, which means that banks don’t need to run a huge CPU farm and reports don’t take all day and all night to run. Banks don’t want to own that and would rather use a service.”
Flexibility requirements
“A lot of the move towards cloud is driven by banks’ transformation programmes, but it’s also cost-driven, it’s regulatory-driven, and there’s a need for workload flexibility as well,” says Gordon McArthur. “With applications that are variable in their usage, things are easier, because with cloud, you get the flexibility that allows you to add and delete resources as you see fit for application loads that need to grow or reduce over time. The cloud gives you that elasticity.”
Deployment speed
Speed of deployment is another factor, continues McArthur. “At some banks, it takes months to buy a server and to get it through procurement. That level of inflexibility when you need compute resources to go up and down is hugely problematic. So, the level of variability in the application set is one of the catalysts that makes it a no-brainer for a big institution.”
Cost
Amit Bothra, Senior Manager, Cloud Transformation Programme at PwC, the multinational professional services firm, believes that one of the biggest factors when determining what applications to move to cloud is cost. “The cloud vendor firms are proposing that your costs will definitely go down significantly.
“As systems, interfaces and architectures become more and more complex, firms are asking if they really need to carry on focusing on these areas and providing resource and effort, when someone else can do it on their behalf, at a lower cost,” continues Bothra. “You can compare it to twenty, thirty years back, when outsourcing was the ‘in’ thing, outsourcing of processes, outsourcing of services, and so on. In a way, cloud could be considered an outsourcing of compute, of maintenance, of hardware, of infrastructure, of network security, and all those things.”
Leveraging cloud architecture
Cost and cloud bursting are certainly drivers, but they should not be the primary ones, argues Judith Swan, Director, Financial Services at Microsoft UK. “Cloud should always be considered as an enabler of business outcomes, like being able to offer an enhanced service or agility in your product go-to-market,” she says. “How do you increase your value, how do you increase functionality, how do you make products more accessible?
“Other factors that can determine which applications or data can go onto the public cloud are generally around latency or underlying database architectures. Careful consideration should be given to what you would like to be able to do with that application in the future. Look at the architecture behind what it is you are trying to build and consider, for example, whether a microservices or container-based architecture could be the foundation of it.”
New products and services
Most financial services institutions have legacy applications that don’t enable them to build effectively on top, adds Swan. “So, it is very prohibitive for them to go to market with new products, to add new features and functionality, to be able to react to new fintechs in the marketplace that don’t have the burden of that legacy architecture, for example.”
“When you are creating a new product or service and you create it on the cloud,” Swan continues, “the ability to do real-time decisioning or analytics there on the spot is why the cloud architecture of those applications is so important versus the legacy stuff, which can be quite prohibitive when operating in a new world where speed is king.”
Adding value and driving efficiencies
Catalina Vazquez, Global Head of Historical Data at Refinitiv, a global provider of financial market data and infrastructure, contends that when firms are deciding which workflows to move to cloud, they generally look at two areas.
“First, what value would cloud bring above the line? Will it open up new business opportunities, will it help drive innovation, will it tap into the machine learning and AI capabilities that cloud providers offer, for example?” she asks.
“Or second, is it something below the line, which makes their operation more resilient, especially important in these times? How can their operations benefit from not having to move the data around or store petabytes worth of market data on site, making sure they still have access to query-ready data as and when they need it?”
Case Study – Historical Tick Data in the Cloud
At Refinitiv, cloud is allowing firms to unlock additional value, says Catalina Vazquez, Global Head of Historical Data.
“We have our historical tick data archive which goes back to 1996 and traditionally our clients would be downloading, storing, and managing it themselves. This meant that in the past, in some cases, we would have shipped physical discs to the clients, to be able to move this data around,” says Vazquez.
“But we now have the possibility of leveraging cloud to transform our offering and Google Cloud BigQuery presented a great opportunity for us to do that.”
Vazquez explains that instead of having customers come to Refinitiv to download, store and manage the data themselves, they can now access a managed database of this tick history data via the cloud and directly query the specific data they need for their analysis and modelling, without having to worry about the time- and resource-intensive extract, transform and load (ETL) process.
“If you combine that with the machine learning and AI capabilities that Google Cloud can offer, the ability to run clustering analyses to identify patterns in order book data has never been more attainable than it is with these new technologies,” she says.
Vazquez mentions that Refinitiv recently undertook an industry study, which suggested that for every dollar a client spends on market data, there is typically an additional eight dollars that goes into processing, storing, and transforming that data.
“Previously,” says Vazquez, “you had to call your market data head, make sure that the data was procured, the hard discs were shipped, the storage in an on-prem database was secured, the ETL process was managed, the results were tested and only then would you be able to access it. This whole process could take anything from days to weeks before the end user could make use of the data.”
“Think about the additional value that we are now unlocking, because the quants and data scientists can now access the tick history data, and it’s just a matter of permissioning their IDs.”
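For the end user, the workflow Vazquez describes reduces to a single query against a managed table. The sketch below uses the Google Cloud BigQuery Python client; the project, dataset, table and column names are hypothetical placeholders rather than Refinitiv’s actual schema, and access would still depend on the permissioning she mentions.

```python
# A minimal sketch of querying tick history in place with BigQuery instead of
# downloading and managing the archive locally. The table and columns below are
# hypothetical placeholders, not Refinitiv's published schema.
from google.cloud import bigquery

client = bigquery.Client()  # assumes Google Cloud credentials are already configured

sql = """
    SELECT ric, trade_time, price, volume
    FROM `example-project.tick_history.trades`   -- hypothetical table
    WHERE ric = 'VOD.L'
      AND DATE(trade_time) BETWEEN DATE '2020-03-02' AND DATE '2020-03-06'
    ORDER BY trade_time
"""

# The heavy lifting happens inside BigQuery; only the result set comes back,
# so there is no local extract, transform and load step to manage.
trades = client.query(sql).to_dataframe()
print(trades.head())
```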
Data handling
“Big data has changed how computation is happening,” says Seetharam Gorre. “Before, you sent the data to the compute, whereas now, with the cloud, the compute goes to the data, so you don’t need to move the data, it’s right there, you can load it into whatever computation you need to do, and then shut it down. This trend of compute going to data is pushing people more towards cloud. And there’s less network overhead with that approach too. Moving that much data over any network is costly and takes time. Pico is focused exactly on this area where clients need to run latency and speed-sensitive workloads on-prem, with the option to dynamically ramp compute and data-intensive tasks, such as risk analysis, derivative pricing, back testing etc, in the cloud.”
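Gorre’s point about network overhead is easy to quantify with a rough, illustrative calculation. The figures below are assumptions chosen only to show why shipping the compute to the data tends to beat shipping the data to the compute.

```python
# Rough illustration only: how long it takes to move a large historical data set
# over a dedicated link. All figures are assumptions.

DATASET_TB = 500        # assumed size of a tick-history archive
LINK_GBPS = 10          # assumed dedicated link speed
EFFICIENCY = 0.7        # assumed real-world throughput after protocol overhead

bits_to_move = DATASET_TB * 8 * 10**12
effective_bps = LINK_GBPS * 10**9 * EFFICIENCY

days = bits_to_move / effective_bps / 86_400
print(f"~{days:.1f} days just to move the data")  # roughly a week at these figures
```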
A key issue is which data sets are involved in the specific use case, says Vazquez. “A bank for example might have a vast amount of market data, and market data tick history could be considered a utility. Fundamentally, everyone is consuming the same data set, so it’s quite low risk moving that into the cloud, versus private, personal data that banks may want to keep very safely on-prem.”
Case Study – Kubernetes-as-a-Service
Seetharam Gorre, Chief Information Officer at Pico, explains why Pico is tapping Kubernetes to enable clients to deploy and manage cloud-native applications anywhere.
“Increasingly clients are asking for containerized compute on-prem, with elastic overspill to the public cloud providers. Generally, they want a mix of on-prem and public cloud providers for a variety of security, regulatory, performance and app specific requirements and in some instances, to secure commercial leverage over the public cloud providers.
At Pico, our goal is to make technology easy for our clients. Heralded as the new operating system of the cloud, Kubernetes has become the de facto standard in container management. By streamlining management complexities, it unlocks the potential of containers. That’s why we are leveraging it for our managed containerized compute products, which span on-prem, hybrid-cloud and multi-cloud deployments.
Capital markets firms are cautious cloud adopters, yet they are under enormous pressure to leverage the benefits of cloud-native technologies. So, why not deploy on private and then scale on public when needed? By orchestrating on-prem containers with Kubernetes, Pico provides the bridge for seamless and rapid migration when clients are ready to go to the cloud.
Running Kubernetes on-prem is a specialist area. To achieve this, we’ve built our solution on a platform of continuous integration and continuous delivery (CI/CD) tooling. This enables automation of the workload with the ability to run the same workload on-prem or in any public cloud. The solution is integrated with managed Kafka for durable application messaging, managed Elastic Stack for log aggregation and a managed Prometheus stack for cloud-native monitoring, telemetry and alerting. We’ve also solved the hard problems of on-prem Kubernetes – distributed storage, and load balancing at both layer 4 and layer 7 – which gives our clients a truly cloud-native experience when running on our Kubernetes-as-a-Service platform. Whatever cloud environment clients choose, the solution is underpinned with the same high levels of security, compliance and resilience they have come to expect from Pico.
We’ve invested in cloud engineering and Site Reliability Engineering (SRE) operations so clients can quickly take advantage of the cloud without the need for re-training and recruiting within their own teams.
Cloud computing will continue to be a key force in capital markets technology. By supporting incremental adoption, we are paving the way for banks to take greater advantage of hybrid and multi-cloud models.”
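To illustrate what running “the same workload on-prem or in any public cloud” looks like with Kubernetes, here is a minimal sketch using the official Kubernetes Python client. It is a generic example, not Pico’s implementation: the image name, namespace and resource figures are placeholder assumptions, and which cluster it targets is decided entirely by the active kubeconfig context.

```python
# Minimal sketch: declare a containerised workload once, then submit it to whichever
# Kubernetes cluster the current kubeconfig context points at, on-prem or in a
# public cloud. Image, namespace and resource figures are placeholder assumptions.
from kubernetes import client, config

config.load_kube_config()  # picks up the active cluster context, on-prem or cloud

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="risk-engine"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "risk-engine"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "risk-engine"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="risk-engine",
                    image="registry.example.com/risk-engine:1.0",  # hypothetical image
                    resources=client.V1ResourceRequirements(
                        requests={"cpu": "2", "memory": "4Gi"},
                        limits={"cpu": "4", "memory": "8Gi"},
                    ),
                )
            ]),
        ),
    ),
)

# The same declaration is applied unchanged wherever the cluster happens to run.
client.AppsV1Api().create_namespaced_deployment(namespace="analytics", body=deployment)
```

Because the declaration does not change between environments, the ‘bridge’ to the public cloud described above is largely a matter of pointing the tooling at a different cluster, with CI/CD, messaging and monitoring layered on top.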
Security and Regulatory Implications of Cloud
The ability to ensure that customer data is safe and secure is essential, says Isabel Pitt, Senior Digital Product Leader at Lloyds Bank. And there are still question marks regarding cloud in this regard.
“There’s not enough evidence yet to change current mindsets,” she says. “You need it to be fact-based and evidence-based. We haven’t yet reached a level of maturity in the use cases that would allow us to examine the evidence, like for like, and make that assessment. But there is a much more open dialogue, and an understanding that mindsets do need to change. But how do we do that, particularly from a risk and security standpoint? And where does the regulatory accountability sit in this new construct?
“From a regulatory perspective, it all comes down to accountability and ownership,” Pitt continues. “Right now, it’s very easy: if you’re a bank, you are fully accountable for all of your customers’ data, all of your transactions, all of their money, because it sits with you.
“When that model starts to change, where does that accountability lie? Those are the questions that need to be addressed as we move through this, because that is going to be something the regulators will be very keen to understand ahead of supporting any of these transitions to the cloud.”
From a service provider perspective, there has to be a ‘no-compromise’ on security, says McArthur. “When we’re engaging with a client, we spend a lot of time going through the security model, to get them comfortable with going down that cloud journey with us,” he says.
“The key elements are the privacy aspect of the network, encryption, logging and monitoring. Delineating very heavily between the customer’s environment and what we’ve got access to. How do we protect the environment, is it segmented from other customers’ environments, not only from a security perspective, but also from a latency perspective and from an operational perspective?”
“Google, Amazon and Microsoft all have extremely high standards in security and indeed, higher than the banks in some cases,” says Felix Grevy. “I would argue that the level of security in the public cloud is higher than an on-prem data centre owned by a bank, because these cloud vendors have invested many millions into security and all those vendors are working closely with the local regulators.”
“We’ve spent a lot of time over the last few years focusing on the regulatory implications that other industries have not had to face,” says Microsoft’s Judith Swan. “In fact, at Microsoft we have a chief regulatory officer as part of our Worldwide Financial Services team, that’s how important the regulatory implications are for cloud.”
Swan adds that the CRO role is so helpful because it looks across all the regulatory regimes worldwide. “Most financial institutions don’t operate in regional silos,” she says. “They operate in Singapore and New York and London, and so on, so they have a much more complex regulatory regime than a regional retail bank would have, for example.
“From a UK perspective, we deal regularly with the FCA to make sure that we understand what they are looking for from cloud service providers. For example, what do firms need to look at in terms of terms and conditions, audit rights, how they set up security and networking, and so on? We help customers match that so that they can feel comfortable.”
From a regulatory standpoint, the jurisdiction is an important factor, says Seetharam Gorre. “There are still concerns around a physical barrier versus a logical barrier. When you’re on the cloud, you need to know exactly which cloud instance you are on, and then you have to make sure you have the data in the right place. Whereas when you’re on-prem, you know exactly which data centre, which cabinet, which server you have, you have full control over where that data is transferred, and what applications have access to it. Cloud makes things a little harder in that respect, because you’re bringing a physical concept into the virtual.”
Access to data is also a key factor, says Amit Bothra. “If my data is somewhere I’m not aware of and not under my control, then I don’t know what is happening with it. Maybe somebody else is able to see it, maybe somebody is selling it, we hear these horror stories from time to time. So, I need to know who has access to my data, in both my own organisation and other organisations.
“You have to have segregation of duties, limitations on what one can do across the length and breadth of the organisation,” he continues. “And that can be complex. You need the right solutioning around identity and access management, role design, separation of duties, conflicts, and so on. That all takes effort and time. In the US for example, you would expect some sort of Security Operations Centre (SOC) framework to apply, to ensure that data is residing in the right location, and that it’s properly handled and properly managed. And there won’t be any hacking going on, there won’t be any inappropriate access being made to that data.”
Public, Private or Hybrid Cloud?
With ‘cloud’ being such a generic term, how do firms decide what flavour of cloud is the right solution? Although it’s common to equate cloud with public cloud, firms can have private, dedicated cloud services within, or provisioned by, a public cloud vendor or specialist third party cloud provider.
“The public cloud has a security model, but the banks also have their own security models that they can’t compromise on. That’s where the bigger banks go into the private cloud arena,” says Gordon McArthur. “At Beeks, we can build them a dedicated environment with all the benefits of the cloud, but we take their security model and input it without change, without compromise on that dedicated infrastructure, so they get the level of comfort they need. This hybrid cloud solution gives the best of both worlds, a low-latency compute environment with the rich connectivity that Equinix offers, but deployed with their own security model that meets all their requirements, so there is no compromise.
“Eighteen months ago, we didn’t have a single connection to AWS, or Google, or Azure, but every environment we put out now has this hybrid multi-cloud connectivity piece,” continues McArthur. “People realise now that there’s no one-cloud-fits-all. The AWSs and Googles of this world won’t ever be there for the real low-latency trading environments that need the kind of infrastructure that sits within an Equinix ecosystem, but firms do need to exchange data between that trading environment and the public cloud, to archive data, to talk to other applications that are not latency-sensitive, and so on. That’s why secure, private connectivity between clouds is becoming more and more important.”
Case Study – Analytics-as-a-Service
Gordon McArthur, Chief Executive Officer of Beeks Financial Cloud, explains how a recent acquisition has helped his firm offer a new ‘as-a-Service’ product to financial markets firms.
“Beeks bought a company called Velocimetrics recently, which is a trading analytics company that does full trade lifecycle monitoring. That was originally offered as an appliance that sat in a customer’s data centre or a customer’s environment. We’re now launching Beeks Analytics-as-a-Service in two Equinix data centres. We’ve brought together this software platform and our infrastructure platform to provide sell-side banks with Analytics-as-a-Service, including a web console, for a monthly fee, with no infrastructure deployment, offering immediate time to value.
“This type of service is hugely business-beneficial to banks and brokers. But it’s always been expensive and difficult to deploy. A lot of banks themselves would like to offer that out to their clients, but the deployment model had not previously allowed that. By offering it as a SaaS platform within the Equinix environment, our early adopter customers are already thinking about how, as well as consuming it themselves, they can now package it up as a service to their end clients.
“Every client we’ve talked to about this, we’ve asked if they want to go the traditional model, where we sell them a server and they run it, or do they want us to run it for them? And every one of them has chosen the latter.
“And because we can now connect directly to any third party in the Equinix ecosystem, all of a sudden, this product has gone from a complex environment that firms had to manage themselves, to being able to cross-connect into Beeks as the service provider, where we consume the data, analyse it, and offer it to them and their clients on a monthly-fee-based system.
“That’s a very powerful and strong story on why this SaaS financial ecosystem cloud enablement brings immediate business value, with cost reduction to banks, brokers and their clients.”
Seetharam Gorre of Pico stresses the importance of selecting the appropriate cloud deployment model for the workload. “If you’re latency sensitive, you need on-prem, because the physical distance is the physical distance. If the cloud is in Virginia and your matching engine is in New Jersey, the time it takes to travel on fibre is the time it takes, there’s no way around that.
“But firms need to be smart about this and look at the different kinds of workloads,” he continues. “If it’s a continuous workload and you have to go and reserve an instance in the cloud, what’s your true TCO (total cost of ownership) of on-prem versus the cloud? Depending on how you are comparing, on-prem can actually come out cheaper. Whereas with discrete or volatile workloads, where you don’t know how much capacity you need, people need to figure out how much they should do on-prem and use the cloud for the discrete or bursty portion. That’s where hybrid cloud comes into the picture.”
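Gorre’s ‘true TCO’ question can be framed as a simple break-even calculation. The sketch below uses assumed prices purely for illustration; a real comparison would use a firm’s own hardware, facilities and cloud pricing.

```python
# Illustrative break-even sketch: for a given workload, at what utilisation does
# an always-available on-prem server become cheaper than paying on demand in the
# cloud? All prices are assumptions, not quotes.

CLOUD_PER_HOUR = 1.50           # assumed on-demand price for a comparable instance
ONPREM_PURCHASE = 20_000.0      # assumed server purchase price
ONPREM_OPEX_PER_YEAR = 4_000.0  # assumed power, space and support
AMORTISATION_YEARS = 3
HOURS_PER_YEAR = 365 * 24

onprem_per_hour = (ONPREM_PURCHASE / AMORTISATION_YEARS + ONPREM_OPEX_PER_YEAR) / HOURS_PER_YEAR

# Below this utilisation, paying on demand in the cloud is cheaper; above it,
# continuously used on-prem (or reserved) capacity wins for this workload.
break_even = onprem_per_hour / CLOUD_PER_HOUR

print(f"On-prem equivalent: ~${onprem_per_hour:.2f} per hour, amortised over 24x7")
print(f"Break-even utilisation: ~{break_even:.0%}")
```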
Gorre further explains how Pico creates a hybrid cloud blueprint to ensure that the various infosec, compliance & risk groups are comfortable with the setup. “You have to get the right connectivity, ensure your perimeter is configured correctly, figure out your infosec packages and policies, etc. If you manage all of that internally, it’s harder to know who has access to what, maybe they can create more instances than they should. It can be problematic for example if someone configures a hundred-node cluster and forgets to shut it down. That’s where we come into the picture, with our governance policies. And we’re an independent entity, so that gives firms further comfort.”
Catalina Vazquez explains why Refinitiv takes a multi-cloud approach. “We want to make sure that we meet our clients where they want us to be. When it came to making the decision of where to make our historical data available for example, we realised that the capabilities that Google Cloud offered allowed us to target very well the data science workflows, because of their superior analytics and machine learning capabilities available through Google BigQuery, so that’s the reason why we chose Google Cloud in this particular case.” From a multi-cloud standpoint, Refinitiv also works with other cloud platforms from Microsoft and AWS in areas like quantitative analytics and real-time data delivery.
James Maudslay of Equinix points to another cloud deployment model, bare metal, which is gaining traction particularly among latency-sensitive firms or those requiring dedicated compute resources. “With bare metal, not only do you get deterministic performance, but you also achieve cloud-type cost structures without having to physically invest, without the capital expenditure. This year in particular, financial services firms have been restricted in their ability to send out physical engineers. With bare metal, you no longer need to do that. Also, if you want to test something and try it out, such as a new trading server in a new location, bare metal will allow you to do that without having to put capital investment into the equipment that you otherwise would have to purchase.”
So what next?
With any cloud deployment, the key question is: what is the business problem you’re trying to solve? At a macro level, firms are generally trying to develop opportunities, grow the business and save costs. At a more micro level, firms need to look at where they currently have infrastructure capacity constraints in order to decide what to deploy strategically, all the while optimising, measuring and doing things in the most efficient, cost-effective and scalable way.
Over the next five to ten years, we would expect the transition towards cloud in capital markets to continue, because historically, systems were built in a very siloed fashion, and in the future that will no longer be sustainable. Maintaining that type of legacy infrastructure will be prohibitively costly from both a technology and a human resources perspective.
Cloud, in all its forms – public, private or hybrid – is a massive enabler because it moves things from the physical to the virtual, meaning that over time, firms will no longer need to rely on these legacy frameworks.
But challenges still remain. In the capital markets space, cloud is still in its infancy and only over time will the industry have a much clearer idea of what works and what doesn’t.
About Equinix
Equinix (Nasdaq: EQIX) is the world’s digital infrastructure company, enabling digital leaders to harness a trusted platform to bring together and interconnect the foundational infrastructure that powers their success. Equinix enables today’s businesses to access all the right places, partners and possibilities they need to accelerate advantage. With Equinix, they can scale with agility, speed the launch of digital services, deliver world-class experiences and multiply their value.
For more information, please visit www.equinix.co.uk
About Financial Markets Insights
Financial Markets Insights from The Realization Group is a series of interviews with thought leaders in financial and capital markets. The purpose of the series is to provide exclusive insights into industry developments, through in-depth conversations with C-level executives and key experts from banks, exchanges, vendors and other firms within the financial markets ecosystem.
For more information, please visit www.financialmarketsinsights.com