Deploying HPC within Financial Services: Which path should your firm follow?

Written by Stef Weegels – Director of Sales, Verne Global

Against a backdrop of accelerated industry change, forward-thinking banks and fintech firms are increasingly turning to high performance computing (HPC) to secure a competitive edge and mitigate risk. The heightened focus on HPC is also reflected across other industries, with IT forecasts predicting that the global HPC market will expand by 5% year-on-year to more than $46 billion over the next decade. The financial services industry is one of the key sectors fuelling such growth.

Across the board, many firms now view their HPC strategies as a means of addressing both commercial threats and lucrative opportunities. Computationally intensive workloads such as grid compute, big data analytics, artificial intelligence (AI) and associated machine learning applications are gaining massive traction in response to these ambitions, and HPC is providing the raw ‘horsepower’ behind all of them.

In the coming weeks, our partners at The Realization Group will publish two Insight Papers focused on optimising HPC within financial services:

  1. The first paper will look at the infrastructure choices and considerations when deploying high-intensity HPC workloads.
  2. The second paper will delve deeper into specific use cases around powering AI in quantitative analytics.

Together, these papers will tackle some of the most pressing strategic questions firms face when deploying HPC and deciding between data center hosting, whether in-house or outsourced, and utilisation of the cloud through either virtualised or bare metal servers.

So why is an optimised HPC strategy so critical for the sector? For a start, pre-trade analytics, quantitative analysis and risk management functions, as well as many other niche forms of computational assessment, require vast amounts of compute power and specialised infrastructure. As markets have become more complex, asset classes more diverse and regulatory requirements more intense, the importance of a coherent HPC infrastructure strategy has only grown. Recent advances in, and adoption of, AI and machine-learning-based applications only bolster the case.

AI in particular is fast becoming one of the prime areas of focus in financial markets. Quantitative investment firms are always on the hunt for new sources of alpha or ways to manage risk more effectively. To power quant analytics, they are using more data than ever, and increasingly that means alternative data sets. How do firms ensure the infrastructure they are running is fit for purpose, is operating at peak efficiency and is keeping the total cost of ownership (TCO) in check so as to maximise reward? Can firms even spark a leap in performance by utilising more innovative deployments such as bare metal servers? Are there security implications?

There are also broader questions to consider based on firms’ business models. For instance, ultra-low latency applications are by default co-located in exchange data centers, with latency-sensitive applications hosted in proximity. With the rise of quantitative strategies and AI techniques such as machine and deep learning, HPC-optimised locations weigh more heavily in the hosting strategy, ensuring compute resources are managed cost-efficiently and can scale.

The infrastructural choices firms make can have profound impacts on performance, reliability, flexibility and scalability, to name just a few considerations. As firms juggle long lists of commercial and operational priorities – from time to market to customer requirements to balance sheets – the decisions they take early in the process become crucial.

Please be sure to catch The Realization Group’s upcoming Insight Papers and industry round table events, where you’ll hear from a range of senior industry figures and HPC specialists.

**********

Upcoming Insight Paper publications

Thursday 1st March: Optimising Compute in Financial Services
  • Options for deployment: pros and cons of cloud, colo, bare metal and hybrid solutions
  • Speed to market, performance, reliability, flexibility, scalability, access to the cloud, CAPEX & OPEX
  • Performance considerations of Infrastructure-as-a-Service versus bare metal
  • Types of jobs being run and their influence on infrastructure decisions
Wednesday 18th April: Powering AI in Quant Analytics
  • Deploying the right hybrid cloud infrastructure to maximise use of alternative sources of data and AI
  • Types of jobs being run in terms of workload, data sets and compute resources required
  • Factors such as costs versus rewards, timeliness of results and data security

Based in London, Stef is Verne Global’s Director of Sales and heads up the company’s work within Financial Services and Capital Markets. You can contact him at: stef.weegels@verneglobal.com or follow him on Twitter @StefWeegels