Video: Powering Up – Handling HPC to boost Alpha and Risk Management

 

Overview:

The complexities surrounding the Banking, Finance and Insurance sector today have led to a significant growth in the use of grid computing and high-performance computing for computationally-intensive tasks. These are many and varied, and include areas such as derivative pricing, risk analytics, quantitative modelling, portfolio optimisation, and bank stress testing.

What are the key considerations that firms should take into account when putting together the necessary infrastructure to support their computationally-intensive needs?

Featuring:

Alastair Houston – Nvidia
John Holden – Numerical Algorithms Group
Robin Mess – big xyt
Leon Lobo – National Physical Laboratory
Stef Weegels & Lewis Tucker – Verne Global

Hosted by The Realization Group

For a more in-depth discussion, read our Financial Markets Insights article

Mike O’Hara: Hello and welcome to Financial Markets Insights. The complexities surrounding the banking and finance sectors today have led to a significant growth in the use of high-performance computing, HPC, for a wide range of computationally-intensive tasks. This trend is forcing banks and other financial firms to make some far-reaching decisions regarding their technology infrastructure. First, what are the areas where HPC is being used?

Alastair Houston: We’ve seen regulatory pressure on the banks as a significant driver for our business, particularly over the last two to three years in the area of credit risk, CVA and counterparty risk. A number of our customers have taken advantage of our technology to accelerate and reduce the model run-time for that type of very heavy-duty, very intensive Monte Carlo computation. So in recent times the driver has been regulation, from a counterparty risk and credit risk perspective.
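The Monte Carlo workloads Houston describes are embarrassingly parallel: each simulated path is independent, which is why they map so well onto GPUs. A minimal illustrative sketch of the idea, using plain NumPy vectorisation as a stand-in for a GPU kernel (the single-trade model and all parameters here are hypothetical, not a real CVA engine):

```python
import numpy as np

def mc_exposure(spot, rate, vol, horizon, n_paths, seed=0):
    """Simulate terminal values of one hypothetical trade under geometric
    Brownian motion and return its expected positive exposure, the kind of
    quantity that feeds a CVA calculation. Every path is independent, so
    the path loop vectorises (or maps onto GPU threads) directly."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    # Terminal spot under GBM: S_T = S_0 * exp((r - vol^2/2)*T + vol*sqrt(T)*Z)
    s_t = spot * np.exp((rate - 0.5 * vol**2) * horizon
                        + vol * np.sqrt(horizon) * z)
    # Mark-to-market of a forward struck at today's spot;
    # exposure is its positive part, averaged over all paths.
    mtm = s_t - spot
    return np.mean(np.maximum(mtm, 0.0))

epe = mc_exposure(spot=100.0, rate=0.02, vol=0.3, horizon=1.0,
                  n_paths=1_000_000)
```

Run-time here is dominated by the array operations, which is exactly the portion a GPU accelerates when the path count runs into the millions per counterparty.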

Topical at the moment is FRTB, which is a market risk computation. We’re expecting, although we haven’t quite seen it yet, a significant ramp in demand for HPC from the FRTB internal model requirement, which we know a number of the bigger banks will be implementing.

That’s one trend. Another trend we’re beginning to see, albeit only in the last few months if not weeks, is a real take-up in interest around the concept of deep learning as distinct from machine learning, so we’re now engaging a whole raft of new banks wanting to understand the GPU proposition around deep learning.

Mike: GPUs, Graphics Processing Units, originally designed for accelerating the creation of images and rendering video, are particularly useful for anything that requires massively parallel processing, but not everything fits into that category. What are some of the other HPC technologies being used?

John Holden: If I start with FPGAs, we’ve seen some banks adopt them and then throw them away again, and we’ve seen other hedge funds run with them. Typically, FPGAs are really quite niche. It’s specialised hardware, and most of the use we’ve seen has been in the high-frequency trading area. Outside of that, we haven’t really seen very much, and not every institution has chosen to adopt it.

More broadly, GPUs versus x86 has, again, been really interesting, because we’ve been following that area for a while. We were one of the innovators, working with Mike Giles from Oxford University to communicate what was possible with the GPU: no longer just a gaming card, but something you could actually do HPC on.
However, in the early days, the infrastructure and the tools supporting that environment were really quite weak. Nvidia have invested heavily in addressing the ecosystem and are now a market leader in the GPU space. Then you’ve got companies such as AMD, who are looking to get back into both the x86 market and the GPU market.

It’s going to be really interesting over the next two years to see whether they’re able to grab x86 market share from Intel, whether they’re able to grab GPU market share from Nvidia, and whether a company such as ARM, and its partners such as Cavium, will be able to make any penetration into the market at all.

Mike: High-performance computing is not just about the calculation element. There’s also the issue of data.

Robin Mess: Computation is one thing. Computation can be speeded up by parallelisation. It can be speeded up by smarter computational units, like GPUs. It can be speeded up through architectural approaches, like FPGAs. But, at the end of the day, you require data to be fed into these nodes, into these individual nodes. So how do you manage data for these thousands or tens of thousands of calculation nodes? The industry identified a few years ago that this is the usual bottleneck.

So how can I scale my data engine properly, in order to make sure that I can leverage these new technologies, like GPUs, in the best way? That is exactly where I believe the industry can benefit from the expertise of third parties, because they provide the economies of scale that can help these applications, whether that’s in risk management or in trading requiring real-time analytics. It doesn’t matter; it’s always the same capabilities that are required.
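The scaling pattern Mess describes, sharding the input once and moving only small partial results back, is the core of scatter-gather data distribution. A toy sketch of that shape (threads stand in for the thousands of calculation nodes; the statistic and names are illustrative):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def chunk(data, n_workers):
    # One contiguous slice per worker; in a real grid this is the shard
    # the data layer would feed to each calculation node.
    return np.array_split(data, n_workers)

def local_stat(slice_):
    # Each worker computes a small partial result on its own slice...
    return slice_.sum(), len(slice_)

def distributed_mean(data, n_workers=4):
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partials = list(pool.map(local_stat, chunk(data, n_workers)))
    # ...and only the tiny partials travel back for the final reduction,
    # so each input element crosses the data layer exactly once.
    total = sum(s for s, _ in partials)
    count = sum(n for _, n in partials)
    return total / count
```

The bottleneck Mess identifies appears when the `chunk` step, the feed into the nodes, cannot keep pace with the compute, which is why the data engine has to scale alongside the GPUs.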

Mike: Bringing together the necessary data and compute resources to run HPC applications can be a complex process. One of the important elements of this is the traceability of events, in terms of what happens when.

Leon Lobo: Essentially, as the number of cores increases and the number of servers in a cluster increases, the element of causality becomes a bigger and bigger issue, particularly if data is being transferred at very high rates between the cores in order to perform that truly parallel activity. For any data transfer that’s occurring continuously between all the cores, it’s very important to actually determine that causality.

The point is to do it in a way that makes it much easier for the system, not only to have confidence in that data, but also to perform real-time analytics, for example, because suddenly the data is not only in the order of events you’d expect, but also in the order related to when it actually happened in the real world.

From the point of view of risk evaluation, for example, everything from algo playback to algorithm optimisation, the element of using real-time data to perform real-time risk evaluation becomes really powerful if the time element is consistent with everything else. That’s where, I think, feeding in not just the time from a local clock, but a time which is traceable to the global timescale, UTC for example, can be very powerful.
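Lobo’s point can be made concrete: once every server stamps its events against a common, traceable timescale, per-server logs can be merged into one sequence that reflects real-world order. A small sketch under that assumption (the `Event` type and field names are hypothetical):

```python
from dataclasses import dataclass
import heapq

@dataclass(frozen=True)
class Event:
    ts_utc_ns: int   # timestamp traceable to UTC, in nanoseconds
    server: str
    payload: str

def merge_event_streams(*streams):
    """Merge per-server event logs into one globally ordered sequence.
    heapq.merge assumes each stream is already locally ordered; the
    merged result reflects true real-world causality only when every
    server's clock is disciplined to the same traceable timescale.
    With free-running local clocks, the same merge is meaningless."""
    return list(heapq.merge(*streams, key=lambda e: e.ts_utc_ns))
```

This is exactly the property that enables algo playback and real-time risk analytics across a cluster: the interleaving of events from different machines becomes trustworthy.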

Mike: Another element to consider with HPC environments is power utilisation.

Lewis Tucker: One of the key considerations for HPC is the significant amount of compute it requires, and that has a knock-on effect on the amount of power that firms will typically consume in that scenario. The key thing for them is to understand the power available in the data centre. Have they got enough, not just for the initial term they’re looking at, but also potentially for expansion?

The trends around HPDA and deep learning are driving significant changes in the way organisations use their data, in ways they probably haven’t even thought of yet. The most pressing concern, I think, is power consumption, not just in terms of what they need now or in the next couple of years, but also making sure the organisation they’re going to contract with has enough power to supply their needs in the future.

Mike: Taking all of this into account, firms are faced with a number of decisions around what kind of infrastructure they should put in place for their HPC activities. One of the key decisions is around deployment: what could, or should, be run in a dedicated environment versus the cloud?

Stef Weegels: Organisations should look at a hybrid approach, because you want to build up a base within a data centre, whether you own it, lease it or work through a managed service partner, but have that dedicated core platform for your base compute, which you can run hot, and then scale or burst into a cloud platform which sits on campus, has a dedicated HPC capability and can support taking the burst out of it.

If you don’t consolidate towards one place, or you don’t take those peaks out of it, you’ll probably have times where your platform sits idle, and that’s where it becomes expensive. That’s where you need to look at adopting a hybrid approach and bursting into an on-demand model, for which you probably pay a slight premium, but it still offsets the cost of building it in-house.

A hybrid approach also gives you the option to burst into the cloud, but then, when a burst becomes consistent, you can take it back, build up in-house and absorb further peaks. A lot of financial services firms today have a cloud-first approach, so looking at a hybrid hosting offering, where your data centre partner can actually help you burst into a platform for on-demand compute as well, is definitely crucial.
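The economics Weegels describes reduce to a simple admission policy: fill the owned base capacity first (it runs hot, so its unit cost is low), and send only the overflow to on-demand cloud capacity at a premium. A back-of-the-envelope sketch (all capacities and rates here are illustrative, not real pricing):

```python
def schedule(workload_cpu_hours, base_capacity, on_prem_rate, cloud_rate):
    """Split a workload between a fixed on-prem base and cloud burst
    capacity, returning the placement and total cost. The base absorbs
    as much as it can; only the peak above it bursts to the cloud."""
    on_prem = min(workload_cpu_hours, base_capacity)
    burst = max(workload_cpu_hours - base_capacity, 0.0)
    cost = on_prem * on_prem_rate + burst * cloud_rate
    return {"on_prem": on_prem, "burst": burst, "cost": cost}

# Peak day: the base absorbs 1,000 CPU-hours and the remaining 400
# burst to cloud at a premium rate, rather than sizing the base
# (and its power contract) for a peak that occurs only occasionally.
plan = schedule(1400, base_capacity=1000, on_prem_rate=0.05, cloud_rate=0.08)
```

When the `burst` term is non-zero day after day, the quote’s advice applies: the burst has become consistent, so it is cheaper to pull it back in-house and grow the base.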

Mike: As we’ve seen, there are many factors to take into consideration when high-performance computing is applied within finance but maybe one of the most important is to ensure that the right infrastructure is in place, not just for today but also for the future. Thanks for watching. Goodbye.
