Deep learning, or the ability of machines to learn tasks using artificial neural networks, has been around since the mid-1960s. In recent years, however, as advances in computation and storage technologies have made it possible to rapidly process the large quantities of data that effective neural networks require, the use of deep learning is increasingly being explored across multiple industries and geographies. In this article, Mike O’Hara and Joel Clark of The Realization Group discuss the possible use cases of deep learning within financial services, as well as the challenges that arise from an infrastructure and data storage perspective. They talk to Verne Global’s Mark Reece, Greg Keller of R Systems, Wei Pan of the Thasos Group, Terence Chabe of Colt Technology Services, Asaf Wachtel from Mellanox and Invesco’s Bernhard Langer.
Artificial intelligence (AI) and deep learning may offer the potential to replace the need for human beings in certain parts of the financial services sector, bringing unprecedented innovations, efficiencies and cost savings to bear. But the successful implementation of such technologies demands considerable innovation from the humans building them, and it may yet be a long time coming.
This irony at the heart of deep learning was articulately expressed by Haruhiko Kuroda, governor of the Bank of Japan, in a speech in April 2017. “If there is any risk the role of human beings are overwhelmingly replaced by AI, that would be when human beings stop thinking independently and autonomously,” he said.
“From a financial perspective, it is most important for us to think independently and positively on how to make an efficient and effective use of new technologies such as AI and big data analytics to further develop and improve financial markets and services.”
Deep learning, an offshoot of the broader family of machine learning and AI methods, is not a new concept and has existed in some form since the 1960s. It is based on the notion of learning tasks using artificial neural networks inspired by the biological nervous system, and it requires vast volumes of data and compute power to be effective.
Machine learning methods, like deep learning, construct predictive models from sample input extracted from large data sets. These models can provide data-driven algorithms which perform better, or are more easily constructed, than those built with traditional programming techniques. Example applications include classification problems such as e-mail spam detection or image recognition, as well as predictive analytics such as credit scoring or financial performance forecasting.
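As a loose illustration of the classification idea, a single artificial neuron can be trained by gradient descent to separate "spam-like" from "normal" inputs. The two features and six labelled examples below are invented for illustration; real systems stack many such neurons into deep networks and train them on far larger data sets.

```python
import math

# Toy "spam detection" data: features = [link density, capitalised-word density],
# label = 1 for spam. Purely hypothetical values, not from any real corpus.
data = [([0.10, 0.20], 0), ([0.20, 0.10], 0), ([0.15, 0.05], 0),
        ([0.90, 0.80], 1), ([0.80, 0.90], 1), ([0.85, 0.95], 1)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# A single artificial neuron: a weighted sum of inputs passed through a non-linearity.
w, b = [0.0, 0.0], 0.0
lr = 0.5
for _ in range(2000):                       # gradient-descent training loop
    for x, y in data:
        p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        err = p - y                         # gradient of log loss w.r.t. the pre-activation
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

def predict(x):
    """Probability that input x is 'spam'."""
    return sigmoid(w[0] * x[0] + w[1] * x[1] + b)

print(round(predict([0.9, 0.9])))  # spam-like input -> 1
print(round(predict([0.1, 0.1])))  # normal input -> 0
```

The "learning" here is nothing more than repeatedly nudging the weights to reduce prediction error on the samples, which is the same principle, scaled up enormously, that underpins deep neural networks.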
With the rising complexity of financial markets and the increased technological sophistication of securities trading and processing, it should come as little surprise that there is now so much focus on possible use cases for deep learning. If such methods have the potential to navigate complex markets and regulations, boost profits, cut costs or bring other efficiencies to bear, they naturally merit exploration. But it remains very much a work in progress.
Wei Pan, co-founder and chief scientist at New York-based AI specialist Thasos Group, sees the strongest use case for deep learning in the hedge fund space. “Fundamental and quantitative hedge funds are now looking to extract information and alpha from data, and deep learning techniques can be used to do that more effectively. But there is a lot of work to be done to make the deep learning model work for the finance sector,” he says.
“Fundamental and quantitative hedge funds are now looking to extract information and alpha from data, and deep learning techniques can be used to do that more effectively. But there is a lot of work to be done to make the deep learning model work for the finance sector.”
Wei Pan, Thasos Group
The potential use cases for deep learning extend well beyond hedge funds, however. Cyber security and market surveillance are two functions that are occupying the hearts and minds of many market practitioners and regulators as the need to protect from hackers and monitor internal staff activity increases. Both require the real-time monitoring of vast volumes of data and information that could theoretically be aided by the use of artificial neural networks.
But this is not an area for experimentation. So grave is the threat of cyber attack that no one wants to leave its detection to chance, and deep learning would need to be fully tried and tested before it could be implemented as a first line of defence against cyber attack at a large financial institution.
“In theory there are many highly relevant use cases for deep learning around cyber security, fraud detection and consumer insights, but there is a big learning curve for these companies and they need to decide where they will see the highest return on investment,” says Asaf Wachtel, vice president of business development at Mellanox Technologies, an Israeli supplier of intelligent interconnect solutions.
The complex regulatory environment adds further opportunities for the deployment of deep learning techniques. While some regulations are principally focused on market practices and surveillance, many of the new Basel Committee capital requirements demand a much greater volume of data and compute power than in the past.
The Fundamental Review of the Trading Book (FRTB), which overhauls the market risk capital framework, is a case in point. The use of internal capital calculation models, which is critical for many banks to ensure a sensible level of capital is held, becomes much more complex under FRTB and requires vast troves of previously uncollected data to be analysed.
“If we look at the changes in regulation, there are a couple of areas where we may start to see deep learning being used, such as monitoring and maintaining historic data. FRTB requires a huge amount of historical market data, sometimes going back an entire decade. That has to be worked through and cleaned, and we may find deep learning can be used to spot discrepancies in the data and issues to be investigated,” says Mark Reece, senior enterprise architect at Verne Global.
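The discrepancy-spotting idea Reece describes can be illustrated, in miniature, with a simple statistical screen that flags price moves sitting far outside the normal range. The price series below is invented, with one deliberate "fat-finger" error; a learned model would be trained to catch far subtler issues across years of market data.

```python
import statistics

# Hypothetical daily closing prices with one corrupted entry (a 10x fat-finger error).
prices = [100.0, 100.5, 99.8, 100.2, 1002.0, 100.1, 99.9, 100.4]

# Day-over-day simple returns; the corrupted point produces an extreme move.
returns = [(b - a) / a for a, b in zip(prices, prices[1:])]

mean = statistics.mean(returns)
sd = statistics.pstdev(returns)

# Flag days whose return sits more than 2 standard deviations from the mean,
# as candidates for manual investigation and cleaning.
suspects = [i + 1 for i, r in enumerate(returns) if abs(r - mean) > 2 * sd]
print(suspects)  # -> [4], the index of the corrupted 1002.0 price
```

A z-score screen like this is brittle on real market data, where volatility clusters and regimes shift; the appeal of learned models is precisely that they can pick up discrepancies a fixed threshold would miss.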
Despite the multitude of possible use cases for deep learning, from alpha-seeking hedge funds to cyber security, market surveillance and FRTB data mining, the associated infrastructure requirements remain a major barrier to entry. The hurdle may not be insurmountable, but it certainly threatens to delay progress in the near term. The technology requires a level of infrastructure build and computing power that is simply not widely available today.
“Once firms start thinking about deploying the technology for a real use case, it becomes very quickly obvious that this requires a different compute environment,” explains Wachtel. “Large financial organizations have lots of compute and storage resources available, but those are not designed for the extreme demands of deep neural networks in terms of scalability of the models and data sets. This is where decisions of buy vs. build need to be made, and the large cloud providers are ramping up their own unique offerings to address this gap.”
“Once firms start thinking about deploying the technology for a real use case, it becomes very quickly obvious that this requires a different compute environment.”
Asaf Wachtel, Mellanox
The infrastructure requirement falls broadly into two categories. Firstly, deep learning requires a vast quantity of data, which in turn often requires external storage. Many firms already use data centres to house parts of their infrastructure, but the storage requirements are likely to rise steeply if deep learning techniques are being deployed. Secondly, the compute power required to train systems in deep learning is an order of magnitude higher than traditional requirements.
“The data sets can be huge so there needs to be large quantities of data storage, and it also requires a lot of compute power to train the systems. You need to think about where you can afford to do that – probably not in the premium co-location data centres.”
Mark Reece, Verne Global
“Training of systems in deep learning tends to be quite intensive,” says Reece. “The data sets can be huge so there needs to be large quantities of data storage, and it also requires a lot of compute power to train the systems. You need to think about where you can afford to do that – probably not in the premium co-location data centres. They’re not like trading systems that need to be within a few metres of an exchange system; this data can be held anywhere on the planet.”
“The vendors operating these deep learning platforms obviously want to make it as easy as possible for their clients to connect.”
Terence Chabe, Colt Technology Services
Firms that are serious about implementing deep learning in some form will need to consider whether they can afford to hold all of the necessary data within their own infrastructure, or alternatively if some type of cloud-based repository may be more realistic in the long term.
Despite the proliferation of cloud-based services, some firms remain uncomfortable with storing sensitive client and transaction data beyond their own firewalls. This may turn out to be a stumbling block in the effective deployment of deep learning, as some practitioners believe it would be impossible to store all of the data that is required without outsourcing to the cloud.
“The vendors operating these deep learning platforms obviously want to make it as easy as possible for their clients to connect, and the cloud is the natural place for the data to be hosted. People have been using the cloud for a while now, and they expect to see the same kind of flexibility across the network,” says Terence Chabe, business development manager at Colt Technology Services.
Reservations about cloud-based data storage are not confined to data security or sensitivity, however. There is also a concern about the accessibility of data when it is needed, and while cloud services have clearly advanced a great deal over the past decade, firms cannot afford to lose control over their data if they choose to store it in the cloud. Moreover, as the data requirements of advanced machine learning applications expand, firms may soon be forced to make an “all or nothing” decision, committing to both compute and storage in the same location, with significant cost, accessibility and security implications.
Networks often represent an additional challenge for participants wanting to exploit deep learning opportunities in finance, adds Greg Keller, co-founder and principal at R Systems, a high-performance computing provider. “At the network level, there is a limited pool of talent and a million options available. The users of a system rarely want to understand how it works or why it works that way. They just want to get on with it, so they need reliable third parties that understand the network issues.”
Looking forward, it seems clear that financial market participants must continue to explore the opportunities associated with deep learning, but it remains to be seen how quickly progress will be made. Bernhard Langer, chief investment officer of Invesco Quantitative Strategies, believes proper exploration of deep learning opportunities cannot be achieved in isolation, and firms need to source relevant expertise from across the industry if they are to achieve anything meaningful in this area.
“Experimentation in the use of neural networks in the 1990s didn’t work out but this time we have much better data, machines and applications.”
Bernhard Langer, Invesco
“Experimentation in the use of neural networks in the 1990s didn’t work out but this time we have much better data, machines and applications. At Invesco, we have an internal research team that works in this field, and we are also working with academia and several start-ups in Silicon Valley to develop machine learning,” says Langer.
But without widely accepted use cases for deep learning, it is more difficult to concentrate industry attention around specific problems. There is clearly no shortage of functions that have the potential to benefit from this kind of technology, but not everyone agrees on where it could be most effectively applied.
“The users of a system rarely want to understand how it works or why it works that way. They just want to get on with it.”
Greg Keller, R Systems
“Deep learning is not a technology that is going to work with everything,” warns Verne Global’s Reece. “It is based on large quantities of data and the aim is to train those systems to come up with the same sets of responses every time. This could be very powerful in the conventional systems we used to use, such as form filling or supporting queries from support teams, but it’s much harder to see how it might be applied to trading.”
Perhaps the most important thing is not to let the demand for deep learning solutions fade, even if there are currently more questions than answers. Charles Platt, sales director for financial services at Software AG, believes it is important to take a long view about where the quest for deep learning solutions might ultimately lead.
“We’re very good at looking at the distant future and determining what could happen, but we’re not so good at looking at the very short term to see how we will get there,” says Platt. “What do we need to do over the next 12 months to be in a position to do these amazing things in five years’ time? Deep learning needs colossal amounts of data, but no one has unlimited money so you need to be able to demonstrate value up front rather than just stashing away data in the hope you will get some value from it one day.”