Video: Building the Future of Finance with AI and Machine Learning

Overview:

The use of artificial intelligence is becoming increasingly widespread in the financial services industry. Whether it’s in the research and development of trading strategies, the analysis and management of risk, or assisting with regulatory and compliance functions, there are a growing number of use cases for AI and machine learning.

Featuring:

Stef Weegels – Verne Global
Michael Cooper – BT Global Banking and Financial Markets
Vincent Kilcoyne – SAS UK & Ireland
Taras Chaban – Sybenetix
Harald Helnwein – Novofina

Hosted by The Realization Group

For a more in-depth discussion, read our Financial Markets Insights article

Video transcript:

Mike O’Hara: Hello and welcome to Financial Markets Insights.

However you choose to define artificial intelligence, its use is becoming increasingly widespread in the financial services industry. Whether it’s in the research and development of new trading strategies, analysis and management of risk, or assisting with regulatory and compliance functions, there are a growing number of use cases for AI and machine learning. The implications of this from an infrastructure perspective are significant, particularly for trading or investment firms that use AI to determine when, where, and how to trade.

Harald Helnwein: I think that artificial intelligence can play – and will play – a major role in the financial services industry. With our algos, we are very much focused on moving away from discretionary decision-making, or “gut feel”, or “shooting from the hip”, towards algorithms that try to produce reproducible, objective facts for trading decisions. We’ve been using artificial intelligence for, let’s say, over 10 years now – basically, machine learning for our filters.

And what we learned, based on a long time in the industry, is to use your brain upfront to decide what you want to figure out – so not simply to run unsupervised machine learning, which needs a lot of tools and power, but to first consider what is really required.

So what I want to say is: you can do that with machine power – you could increase machine power by 10 times to get to the same result – or sometimes you just take a look from a different angle. And even then, you still need to know what to do with the outcome of the artificial intelligence, the result you get.

Mike O’Hara: Of course, it’s not just in the development of trading strategies where AI is being used in financial markets. Increasingly firms are using AI and machine learning techniques to help in areas such as risk and market surveillance. So what are the steps involved in adopting this kind of approach?

Taras Chaban: You start with a clean data set, so that’s very important. How do you collect the data, clean it, validate it and so on, in order to build good models and good analytics from it? So that’s very important for us. And then as a company we also combine it with other disciplines – like behavioural science, like neuroscience, like machine learning – in order to bring value to the forefront.

So all of that is important for us: the data itself, the cleanliness of that data, a distributed way of accessing that data, but then also the tools to build the best analytics from it.
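Chaban doesn’t describe a specific toolchain in the video, but as a rough illustration of the collect–clean–validate step he outlines, a minimal sketch in Python with pandas might look like the following (the schema, column names and sanity checks are hypothetical, not taken from Sybenetix):

```python
import pandas as pd

def load_and_validate(path: str) -> pd.DataFrame:
    """Load raw trade records and apply basic cleaning and validation.

    The schema (timestamp, trader_id, symbol, price, quantity) is purely
    illustrative; a real surveillance data set would be defined by the
    firm's own feeds.
    """
    df = pd.read_csv(path, parse_dates=["timestamp"])

    # Drop exact duplicates, which often appear when merging feeds.
    df = df.drop_duplicates()

    # Reject rows with missing keys rather than imputing them:
    # behavioural models are sensitive to fabricated values.
    df = df.dropna(subset=["timestamp", "trader_id", "symbol"])

    # Basic sanity checks on the numeric fields.
    df = df[(df["price"] > 0) & (df["quantity"] > 0)]

    # Sort so downstream analytics see events in order.
    return df.sort_values("timestamp").reset_index(drop=True)
```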

Mike O’Hara: Regardless of where in the organization these AI techniques are being used, one of the most fundamental aspects of all of this is data.

Vincent Kilcoyne: When you look at the process of building and deploying an AI model, it’s actually a very interesting world, because when you start off building and crafting machine learning models – AI models – you need an enormous amount of data to create, craft, test, validate and calibrate them. But then in reality, you need a much smaller world, or universe, of data to run them on a daily basis.

So from a bank’s perspective, you need an enormously elastic, cost-controlled, efficient environment to mine for calibration and creation purposes, for you to be able to create these models. Then when the rubber hits the road, you can work with a much smaller, more dynamic, more discrete universe of data. So you can have these models running, but for creation purposes you need the terabytes and petabytes; you don’t have to have that on a daily basis.
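Kilcoyne isn’t describing a particular implementation, but the asymmetry he points to – heavy data requirements at build time, much lighter ones at run time – can be sketched as follows (Python with scikit-learn; the synthetic data, feature count and model choice are assumptions for illustration only):

```python
import joblib
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# --- Build phase: runs in the large, elastic research environment. ---
# Model creation consumes the full historical universe; a large random
# sample stands in here for the terabytes of real market data.
X_hist = rng.normal(size=(500_000, 20))
y_hist = (X_hist[:, 0] + 0.5 * rng.normal(size=500_000) > 0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X_hist, y_hist)
joblib.dump(model, "model.joblib")  # hand the fitted model downstream

# --- Run phase: the daily job on a much smaller footprint. ---
# Only the fitted model and today's small batch of data are needed now.
daily_model = joblib.load("model.joblib")
X_today = rng.normal(size=(500, 20))
scores = daily_model.predict_proba(X_today)[:, 1]
print(f"scored {len(scores)} records, mean signal {scores.mean():.3f}")
```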

Mike O’Hara: Aside from data, the other important element in AI and machine learning is compute power.

Michael Cooper: Where can you economically and effectively provide compute – aggregated compute, most probably – that enables you to efficiently process data? And some of that will depend on the process that you’re applying.

And very clearly in the financial markets context, compute for an instantaneous decision as part of a trading strategy is one thing; compute to analyze data and apply intelligence to that data – perhaps to develop strategies, or to elicit insights or trends from the data – is quite another. Either because of the volume that you’re processing or the multiplicity of data sets that you’re aggregating, combining that intelligence creates some interesting decisions to be made.

So one, how do you get the data to the right places, given that won’t be a single location? How do you maintain that data there? And economically, and just in a processing sense, there will be data centres which are, from an estate perspective, more optimized (and probably more expensive as a consequence), versus others which offer economies of scale and will attract data for a different reason. You’ve got a distribution model in that regard.

Mike O’Hara: This optimization of infrastructure is a key point, because many firms using these techniques are not optimized as well as they could be.

Stef Weegels: When you look at artificial intelligence and machine learning, for example, most banks’ infrastructure was always built on lower-density compute, and a lot of the banks and asset managers are now faced with the challenge of supporting applications that require many more processors and are also a lot more power-hungry.

At the moment, most of the data analytics takes place in financial services data centres. These data centres come with a certain price tag: space is limited, the locations are prime, and power rates are often expensive – and the power isn’t green. Therefore, when you look at the actual trading life cycle, ultra-low latency only matters at execution; but trading algorithms doing research and analytics – basically using machine learning to get smarter before you actually trade – are something you can move away from financial centres. And there are multiple locations globally – such as Iceland – where compute can be powered far more cost-efficiently, and green.

Mike O’Hara: We’ve seen that AI and machine learning techniques are being more widely adopted in the financial markets sector and that the data and compute requirements around this are significant.

It’s clear that firms who want to make the most of these techniques in the future will need to consider how best to optimize their infrastructure, to enable them to do so in the most cost-efficient way.

Thanks for watching.
