Mike O’Hara: Hello and welcome to Financial Markets Insights. Artificial Intelligence is a very hot topic in the finance industry today, and perhaps that’s not surprising given the potential it offers. But there are a number of operational and practical considerations around the deployment and use of AI in the financial markets sector. So what are these considerations, and how can they best be addressed?
Charles Platt: One of my key observations in this area is that an awful lot of focus goes into what AI can actually do, what it can achieve and how it could be used in a business context. Less thought, I think, goes into how we’re actually going to do it and how we’re actually going to make it work. Innovation is only innovation if it’s actually in production.

What we need is a platform and a mechanism for deploying this in a centralised way, so it can be managed, controlled, monitored and maintained practically going forward. There are a number of ways to achieve that: with standards-based analytics, and with platforms that can execute those analytics through standard integrations across the whole platform.
Mike: More standardisation of things like analytics and integration components makes sense, particularly when you want to roll out AI and Machine Learning algorithms quickly, but there are other factors to take into account as well.
Rael Cline: Different algorithms are good for solving different types of problems. Take what’s called predictive analytics, for instance: you want to predict whether someone is going to repay a loan, so you gather thousands or hundreds of thousands of historical examples of people paying back and not paying back. You train the algorithm to predict whether someone is going to pay back, and you can understand which attributes are important, what has predictive power and what weight it carries.

There’s some auditability in those sorts of algorithms. With other techniques, once you get into clustering, unsupervised learning or reinforcement learning, it becomes a different kettle of fish. Right now, most of the success in Machine Learning has come from the first category, supervised learning and predictive analytics, and I think there’s a level of comfort over the degree of auditability there.

If you go into those other techniques or categories, it becomes much more of a problem. At least in regulated industries or high-risk product categories, we really haven’t seen much in the way of anyone figuring out how to audit that. Those techniques tend to sit in things like recommendation engines, where if a product recommendation is wrong or inaccurate it’s not the end of the world, as opposed to a self-driving car or something like that.
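To make that auditability point concrete, here is a minimal sketch in Python of the kind of supervised, predictive model Rael describes, assuming scikit-learn and a handful of invented borrower attributes; it is an illustration of inspectable feature weights, not a production credit model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical borrower attributes (invented names, synthetic values).
income_k = rng.normal(50, 15, n)          # annual income, in thousands
debt_ratio = rng.uniform(0, 1, n)         # share of income servicing debt
late_payments = rng.poisson(1.0, n)       # count of past late payments

# Synthetic "repaid the loan" labels, loosely driven by those attributes.
logit = 0.05 * income_k - 3.0 * debt_ratio - 0.5 * late_payments
repaid = (rng.uniform(0, 1, n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([income_k, debt_ratio, late_payments])
X_train, X_test, y_train, y_test = train_test_split(X, repaid, random_state=0)

# Train a simple, inspectable supervised model.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# The coefficients are the audit trail: direction and relative weight
# of each attribute in the prediction.
for name, coef in zip(["income_k", "debt_ratio", "late_payments"], model.coef_[0]):
    print(f"{name:>15}: {coef:+.3f}")
print(f"hold-out accuracy: {model.score(X_test, y_test):.1%}")
```

The printed coefficients play the role Rael describes: an auditor can see the direction and relative weight each attribute contributes to a decision, which is far harder to do with clustering or reinforcement learning approaches.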
Mike: This issue of auditability should not be underestimated. Regulators, particularly under MiFID II, are looking for more transparency in the markets, even to the extent of identifying the algorithms used for trading. But what if those algorithms are programmed by machines?
Jonty Field: One of the challenges with that is a regulatory challenge. If the car hits someone, whose fault is it? Why did it do what it did? Do you have the right to find out how it came up with that decision? I think that’s an important question, and in finance, if you affect the market or have market impact, you do have a responsibility to demonstrate why it did something. Just because something’s clever and does a great job doesn’t answer why it did it.

You can interrogate a human. You can hold them to account. How do you hold a machine to account? I think there are some fundamental legal and interaction questions that we need to deal with, and those questions are being dealt with from an ethics perspective in self-driving cars, aeroplanes and so forth. I think there are natural lessons there in how we listen to and understand these systems, and those lessons would apply quite directly to the financial industry.
Charles Platt: Can a technology like AI be deployed in the same way for a chatbot that’s giving some customer service advice and for a trading platform that might have a more systemic impact? I think the answer is probably no, they can’t be deployed at the same sort of level, in which case what can we do in the meantime? I think there’s confusion as to what is predictive analytics, what is Machine Learning and what is AI, and which elements of those could be deployed as a phased approach to getting to the AI world.

I think few people would deny that AI is going to feature much more strongly in our lives. We already use it every single day, whether it’s in Netflix or in our Garmins or whatever it might be. It will become more prevalent in our lives, but I think the route to that is still very unknown.
Mike: From a practical and operational perspective, it’s worth keeping in mind some of the key factors around deploying AI-based solutions: keep things simple, standardise as much as possible, don’t underestimate the importance of control and monitoring, and be prepared to deal with some challenging regulatory questions. Thanks for watching. Goodbye.