Raising the Bar: Responding to the RegTech data management challenge

Banks and brokers might feel they are being pushed and pulled in different directions by multiple post-crisis regulatory reforms, but a common requirement across the new rules is for more detailed and timely data of known quality and provenance. To meet regulators’ expectations, banks must store, access and process data on a scale never previously anticipated. As such, their technology executives are re-examining their operating infrastructures to ensure they have the storage and compute capabilities to operate effectively in the new, but still emerging, environment. They are looking not only to restructure internal resources, but also to augment them with third-party capabilities as managed services and ‘RegTech’ innovations mature. In this article, Mike O’Hara and Chris Hall of The Realization Group discuss the key challenges and considerations for banks and brokers as they upgrade their data management strategies, with Stef Weegels of Verne Global, SAS’s Vincent Kilcoyne, independent consultant James Maxfield, Assad Bouayoun of Scotiabank, Eduardo Epperlein and Steven Jamieson of Nomura, TraderServe’s Nick Idelson and Yousaf Hafeez of BT’s Financial Technology Services division.

 

Company logos of the collaborators

It may have taken the best part of a decade, but the core pillars of the post-crisis global regulatory framework for banks and brokerages are now at least known, if not fully in place. Collectively, G20-mandated efforts to increase investor protection and reduce systemic risk have profound strategic implications. Many firms are reassessing the services they provide, the assets they trade, and the customers they serve in light of new rules that increase not only transparency and accountability, but also costs and obligations, while restraining risk-taking and capital deployment. But the emerging regulatory framework has equally far-reaching effects on banks’ and brokers’ operating infrastructures.

This is partly due to a significant paradigm shift from pre-crisis regulatory regimes. Whereas regulators were previously responsible for identifying non-compliant behaviours and practices, increasingly banks and brokers must demonstrate their suitability, capability and capacity to supply trading and investment services in more detail and with greater regularity than ever before. From executing and reporting transactions under MiFID II and calculating market risk under the Fundamental Review of the Trading Book (FRTB) to risk reporting and aggregation under BCBS 239 and conducting stress tests set by central banks, the bar is being raised in terms of provision of data as evidence of ongoing compliance.

“The incoming regulations require firms to improve their data analytics capabilities substantially, including their ability to collect, store, and analyse data, as well as the compute power to run the greater volume of calculations and to drive testing and quality assurance,” says Stef Weegels, Business Development Director, Verne Global.

The various regulatory requirements of the post-crisis era affect individual banks and brokers in different ways depending on client, product and geographic focus (as well as legacy factors arising from past M&A activity). But common operational implications include the dismantling of walls between asset and product silos; greater coordination across front-, middle- and back-office functions; enterprise-wide approaches to data management workflows and processes; and increased investment in the supporting infrastructure required to deliver reports and execute calculations efficiently, quickly and accurately with minimal manual intervention. Increasingly, the new landscape is one in which banks and brokers rely more heavily than before on partnerships with third-party providers, enabled by managed services and multi-cloud strategies, either on-campus or over cloud connects.

 

“The incoming regulations require firms to improve their data analytics capabilities substantially.”
Stef Weegels, Verne Global

 

Regulatory obligations

A review of banks’ and brokers’ regulatory obligations confirms the need for a new approach to data management infrastructure.

Effective January 2016, the Basel Committee on Banking Supervision’s ‘Principles for effective risk data aggregation and risk reporting’ (BCBS 239) provided global systemically-important banks (G-SIBs) with a common framework for managing and reporting risk (extended to domestic systemically-important banks in several key jurisdictions). BCBS 239 forced affected banks to implement governance and architecture to support data aggregation and reporting practices to improve risk modelling, decision-making, and strategic planning.

Often adapting principles from BCBS 239, national and regional banking supervisors have been monitoring banks’ capital adequacy and stability to assess their ability to handle future shocks via stress testing exercises such as the Comprehensive Capital Analysis and Review (CCAR) in the US, the Firm Data Submission Framework (FDSF) in the UK and the European Banking Authority’s stress tests. Collectively, these requirements for greater transparency and clear accountability oblige banks to deliver more timely, accurate and detailed data on an industrialised scale.

In parallel, banks have been adapting to the evolution of the over-arching global regulatory framework set by the Basel Committee. While Basel III, set for global implementation in 2019, strengthens regulatory capital ratios and recalibrates capital and liquidity requirements, the FRTB (issued 2016) updates Basel 2.5’s market risk capital framework, prompting an estimated 40% weighted average increase in total market risk capital requirements. Moreover, as EY has noted, “The required data and technology changes needed to support analysing the coverage of risk factors in risk and pricing model architecture and enhance market data observability processes under the internal models approach are significant (1).” Banks must take decisions and allocate resources soon, as FRTB’s implementation deadline is January 2019, with reporting required under the new standards by year-end.

Meanwhile, other aspects of the post-crisis regulatory settlement have increased the compliance and reporting requirements. Dodd-Frank and the European Market Infrastructure Regulation (EMIR) have radically overhauled the trading, clearing, collateralisation and reporting of OTC derivatives. From January 2018, MiFID II not only extends European trade and transaction reporting requirements beyond equities, but also introduces new testing requirements for trading algorithms among a wide-ranging raft of measures.

Taken as a whole, the need for a scalable, holistic and, above all, highly automated approach to data management processes and infrastructure is self-evident. But wave after wave of deadlines, plus uncertainty caused by numerous regulatory re-thinks, have encouraged banks to take a tactical approach, shifting resources and focus at short notice and opting for quick fixes. Unsurprisingly, few are fully compliant with BCBS 239, and few are therefore able to use it as a stepping stone to an enterprise-wide data management strategy.

“FRTB represents a paradigm shift in how banks analyse, compartmentalise and drill down into data in order to run scenarios and simulations. Many banks don’t have a central data store for trading data or for risk data. Existing verticals are a substantial barrier to storing the structured data needed for FRTB and the unstructured and semi-structured data needed for MiFID II,” says Vincent Kilcoyne, Capital Markets and FinTech Innovation Lead at risk analytics solutions provider SAS.

 

Enterprise-wide data architecture

What kind of data management architecture is needed to tackle this landscape? As a critical first step, banks must expand their data storage and computational capacity. Kilcoyne says this effort must be strategic and programme-based.

 

“Banks should attempt to put in place an infrastructure that gives them the flexibility to respond cost effectively to evolutions in regulation.”
Vincent Kilcoyne, SAS

 

“Banks’ BCBS 239 efforts have shown that a project-based approach to compliance is not working. Regulatory compliance is a continuous journey. If you take a project-based approach you engineer a very large cost into every discrete piece of regulation and every change associated with it. Instead, banks should attempt to put in place an infrastructure that gives them the flexibility to respond cost-effectively to evolutions in regulation,” he explains.

Overall, new regulations are pushing banks and brokers to take a more front-to-back approach to data infrastructure. Data sets must be deep and accurate, and it must be possible to interrogate, view and manipulate their contents far more flexibly than before, at different latencies and for different purposes. Ultimately, if firms can efficiently supply the data that demonstrates to regulators that they are managing their market risks and capital levels effectively for their chosen business models, then their regulatory burdens (including capital charges, operating costs and regulatory fines) will be much lower, providing a potential competitive advantage.

“Given the nature of the infrastructure within most banks and brokers, the fundamental challenge is being able to access, manipulate and manage data in a way that is very different from how they’ve used that data in the past,” says independent consultant James Maxfield.

Incoming regulations pose new requirements in terms of capacity, latency and accessibility. Whereas EMIR and Dodd-Frank allowed end-of-day or longer reporting windows, permitting manual checks and processing, MiFID II reduces some reporting requirements to minutes, demanding extensive automation. Capacity is the major challenge under FRTB, as banks and brokers must hold substantial amounts of historical transaction and related data to run the risk models mandated under the framework. This may require some banks to invest in new hardware and data storage capabilities, while others will need internal reorganisation, especially if they operate the kind of federated data infrastructure that does not lend itself to a consolidated ‘golden source’. “In terms of accessibility, banks are likely to have to revisit decisions about where data is located, consolidated and managed,” adds Maxfield.

 

Calculating the cost

FRTB creates new standards and criteria for inclusion of balance sheet positions in market risk capital; introduces a new methodology for calculating capital charges for lines of risk-taking business; and provides for a more structured approach to risk modelling. Importantly, it updates and standardises banks’ use of proprietary risk models, mandating calculation of the sensitivities-based approach (also known as the standardised approach) while incentivising use of an internal models approach (IMA) via lower capital provisions. Banks must submit capital charge calculations for public disclosure, month by month and desk by desk.
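
As a rough illustration of the mechanics referred to above, the sensitivities-based method aggregates risk-weighted sensitivities first within and then across buckets. The sketch below is a simplified rendering of the published formulae; the risk weights RW_k and the correlation parameters rho_kl and gamma_bc are prescribed by the regulator per risk class.

```latex
% Simplified sketch of the FRTB sensitivities-based aggregation (delta risk).
% s_k is the net sensitivity to risk factor k; RW_k, \rho_{kl}, \gamma_{bc} are prescribed.
WS_k = RW_k \, s_k  \qquad \text{(weighted sensitivity)}
K_b  = \sqrt{\max\!\Big(0,\; \sum_k WS_k^2 \;+\; \sum_{k}\sum_{l \neq k} \rho_{kl}\, WS_k\, WS_l\Big)}
\text{Delta risk charge} = \sqrt{\sum_b K_b^2 \;+\; \sum_b \sum_{c \neq b} \gamma_{bc}\, S_b\, S_c},
\qquad S_b = \sum_k WS_k
```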

“The number of calculations is going to explode”, warns Eduardo Epperlein, Global Head of Risk Methodology at Nomura. According to EY, banks must compute at least 79 different calculation inputs for each sensitivity class for risk computation under the standardised approach. “The new prescribed risk factors and liquidity computation may lead to as many as 12,000 calculations per trade, compared to the current 250 to 500 calculations per trade under earlier Basel 2.5 regulations,” the consulting group added.
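
To make that scale concrete, a back-of-the-envelope calculation using EY’s per-trade figures and a purely hypothetical portfolio size illustrates the jump in revaluation volumes:

```python
# Back-of-the-envelope scaling of calculation counts, using the per-trade figures
# quoted by EY above. The portfolio size is a hypothetical placeholder.
trades = 500_000                    # hypothetical number of trades on the book
basel_25_per_trade = (250, 500)     # calculations per trade under Basel 2.5 (range quoted)
frtb_per_trade = 12_000             # calculations per trade under FRTB (upper estimate quoted)

basel_25_total = tuple(trades * n for n in basel_25_per_trade)
frtb_total = trades * frtb_per_trade

print(f"Basel 2.5: {basel_25_total[0]:,} to {basel_25_total[1]:,} calculations per run")
print(f"FRTB:      {frtb_total:,} calculations per run "
      f"(~{frtb_total // basel_25_total[1]}x the upper Basel 2.5 figure)")
```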

 

“The number of calculations is going to explode.”
Eduardo Epperlein, Nomura

 

To handle such a leap in calculation volumes, banks are exploring a wide range of options, including software-based approaches, hardware-driven accelerations and access to distributed, remote storage and compute capabilities via cloud-based connectivity.

“There are many ways to approach this; just getting more hardware is one,” continues Epperlein. “You could also use more sophisticated optimisation of front office risk models. It could be as simple as detuning the pricing models so that they’re still accurate enough for risk calculations although maybe less adequate for front office hedging and pricing. And then there are other techniques that maybe involve some clever algorithms for interpolation, for finding ways of rebalancing through interpolation.”
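
To illustrate the interpolation idea Epperlein mentions (a generic sketch only, not Nomura’s approach): a pricing model can be evaluated in full on a coarse grid of shocked risk-factor levels, with scenario revaluations then approximated cheaply by interpolating on that grid. The toy payoff, grid bounds and error check below are all hypothetical.

```python
import numpy as np

# Grid-plus-interpolation proxy for full revaluation: price the instrument in full on a
# coarse grid of shocked spot levels, then approximate scenario prices by interpolation.

def full_reval_price(spot):
    """Stand-in for an expensive pricing model (toy discounted call payoff)."""
    strike = 100.0
    return np.maximum(spot - strike, 0.0) * np.exp(-0.02)

# 1. A limited number of full revaluations on a coarse grid of shocks.
grid_spots = np.linspace(50.0, 150.0, 21)
grid_prices = full_reval_price(grid_spots)

# 2. Thousands of scenario revaluations approximated cheaply from the grid.
scenario_spots = np.random.default_rng(0).uniform(60.0, 140.0, 10_000)
approx_prices = np.interp(scenario_spots, grid_spots, grid_prices)

# 3. Spot-check the approximation error against the full model.
sample = scenario_spots[:100]
error = np.abs(np.interp(sample, grid_spots, grid_prices) - full_reval_price(sample))
print(f"max interpolation error on sample: {error.max():.4f}")
```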

At Scotiabank, Director of XVA Quantitative Analytics Assad Bouayoun has been exploring variations on adjoint algorithmic differentiation (AAD). This mathematical technique speeds up risk calculations by computing the derivatives of an output with respect to all of its inputs in a single backward sweep, rather than revaluing the output separately for each bumped input. Multiple sensitivities can therefore be computed for a modest increase over the cost of the initial valuation.
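
A minimal sketch of the reverse-mode (adjoint) idea is shown below; the toy ‘pricer’ and its risk factors are illustrative only and bear no relation to Scotiabank’s models.

```python
import math

# Minimal reverse-mode (adjoint) automatic differentiation: one forward pass records a
# tape of operations, one backward sweep then yields the sensitivities of the output to
# *all* inputs, instead of bumping and revaluing input by input.

class Var:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents   # tuples of (parent Var, local derivative)
        self.adjoint = 0.0

    def __add__(self, other):
        return Var(self.value + other.value, ((self, 1.0), (other, 1.0)))

    def __mul__(self, other):
        return Var(self.value * other.value, ((self, other.value), (other, self.value)))

    def exp(self):
        v = math.exp(self.value)
        return Var(v, ((self, v),))

    def backward(self):
        # Topologically order the tape so each node's adjoint is complete before
        # it is propagated to its parents.
        order, seen = [], set()
        def visit(node):
            if id(node) not in seen:
                seen.add(id(node))
                for parent, _ in node.parents:
                    visit(parent)
                order.append(node)
        visit(self)
        self.adjoint = 1.0
        for node in reversed(order):
            for parent, local_grad in node.parents:
                parent.adjoint += local_grad * node.adjoint

# Toy "price" depending on three risk factors: P = spot * exp(-rate) * (1 + vol^2).
spot, rate, vol = Var(100.0), Var(0.02), Var(0.25)
price = spot * (rate * Var(-1.0)).exp() * (Var(1.0) + vol * vol)

price.backward()                     # one backward sweep...
print("price     :", price.value)
print("dP/d(spot):", spot.adjoint)   # ...delivers all three sensitivities at once
print("dP/d(rate):", rate.adjoint)
print("dP/d(vol) :", vol.adjoint)
```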

Such capacity- and accuracy-boosting programmes are increasingly run on graphics processing units (GPUs) to further accelerate calculation. Bouayoun argues that the tools and skills required to build the underlying infrastructure are generally beyond banks’ internal capabilities, and suggests partnerships with specialist providers are an inevitable part of computation management frameworks that will, in future, combine in-house, in-sourced and remotely accessed managed services.

 

“You need to build in scale and flexibility from the outset if you want to be able to switch from one FRTB model to the other.”
Assad Bouayoun, Scotiabank

 

FRTB allows banks to select between the standardised and IMA-based approaches on a desk-by-desk basis, but Bouayoun suggests banks should plan for use of both, as bolting on capabilities at a later date could prove costly. “You need to build in scale and flexibility from the outset if you want to be able to switch from one FRTB model to the other,” he notes. In light of the as-yet-incomplete nature of the FRTB framework, a number of banks are still weighing up the final elements of the architecture needed to deliver the required market risk calculations. Nevertheless, Epperlein’s colleague Steven Jamieson, FRTB Programme Manager at Nomura, says the far-reaching impact of preparing for the new regulation is having a harmonising effect.

“Collaboration has definitely increased and there’s a lot of excellent engagement from all the departments,” he says. Jamieson is cautiously monitoring the need to deploy the “brute power” of GPU-based high-density compute capacity to tackle FRTB’s heightened calculation and processing demands. “A lot of firms are focusing on the requirements to comply on a full revaluation basis, but there are alternative, potentially smarter, hybrid means,” Jamieson adds. “We’re assessing our options at the moment, trying to adopt as smart an approach as possible, because moving forward on a full revaluation basis could be prohibitive in terms of cost and effort.”

 

“Collaboration has definitely increased and there’s a lot of excellent engagement from all the departments.”
Steven Jamieson, Nomura

 

Verne Global’s Weegels agrees that the cost of processing is going to make location and capacity significant factors in banks’ future data management infrastructures, taking them beyond existing data centres and supporting facilities. “Whilst it’s true to say that evolving demands on infrastructure are rapidly making past investments by banks redundant, the geographic location of data centres may prove more important than whether they’re in-house or outsourced. In locations such as Iceland the cost of power is far lower than in more established financial markets, and there are additional benefits such as free cooling and being able to lock in costs long term,” he explains.

 

A testing environment

Hidden among MiFID II’s extended reporting obligations lie new algorithm testing requirements which underline the trend toward regulators demanding greater process transparency and accountability. As in other jurisdictions, notably Hong Kong (which focuses on market integrity) and the US (which is introducing rules on algorithmic trading disruption), MiFID II requires that algorithms used to trade listed instruments, even the very simplest, are certified not to cause market disruption on European exchanges or trading venues, to minimise the possibility of another ‘flash crash’. Banks and brokers that use algorithms or supply them to buy-side clients must conduct a new level of testing and provide certification to individual exchanges, demonstrating that an algorithm will not create or contribute to market disorder, in either normal or stressed market conditions. Exchanges offer conformance testing facilities to help firms hone their protocols, but these do not allow algorithms to interact realistically with other orders, nor do they support test scenarios combining stressed market conditions with ‘antagonistic’ algorithms, which is what it takes to reliably test the stability of a firm’s algorithms and, crucially, whether they could contribute to market disorder.

 

“The volume of data and compute power needed for algorithm testing can be enormous… locating your testing capabilities in low-cost locations allows expenditure to be controlled whilst still having emulation of all the exchanges, the whole market ecosystem, together with the relevant latencies and jitters.”
Nick Idelson, TraderServe

 

As such, banks and brokers must either develop the necessary testing capabilities or work with third-party specialist providers. Recreating live conditions and recording the entire ecosystem during each test – including changes in market volumes and latency speeds – requires substantial data storage and compute resources. “The volume of data and compute power needed for algorithm testing can be enormous. If you try to conduct the new algorithm testing in exchange-based colocation facilities, the costs can very quickly spin out of control, not because of the cost of rack space required but the power consumption,” says Nick Idelson, Technical Director at TraderServe, a specialist software vendor and consultancy firm.

Some firms are enhancing internal testing capabilities, while others are turning to tools such as TraderServe’s AlgoGuard, which creates a realistic market ecosystem by letting algorithms interact with emulated order books for instruments linked across multiple venues, based on historical microstructure and test scenarios suited to the algorithm under test. Demand for such tools is high and increasing: without a certificate that explains in detail how it has been tested against market disruption, no algorithm will be permitted to trade under MiFID II.

AlgoGuard’s approach is to use ‘disorder provocation measures’ based on the generic ways an algorithm can interact with the market; in testing, these allowed it to fail all of the algorithm types identified as chiefly responsible for the 2010 S&P and 2014 US Treasuries flash crashes. The measures allow a single test session to pass or fail an algorithm efficiently. “But it’s still a lot of processing power, and there’s still absolutely no reason to co-locate that in an exchange,” says Idelson, who argues that testing environments can and should replicate variable latency conditions to be realistic. “Locating your testing capabilities in low-cost locations allows expenditure to be controlled whilst still having emulation of all the trading venues, the whole market ecosystem for any asset class, together with the relevant latencies and jitters.”
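
As a generic illustration of why replicating latency matters (this is not a description of AlgoGuard’s internals), even a small amount of per-message jitter changes the order in which messages reach an emulated venue, and therefore the market conditions an algorithm under test actually experiences:

```python
import random

# With per-message latency jitter applied, the same stream of orders arrives at an
# emulated venue in a different sequence from the one in which it was sent.

def arrival_sequence(orders, base_latency_us=250.0, jitter_us=400.0, seed=42):
    """Return order ids sorted by simulated arrival time at the venue."""
    rng = random.Random(seed)
    arrivals = []
    for send_time_us, order_id in orders:
        latency = base_latency_us + rng.uniform(0.0, jitter_us)
        arrivals.append((send_time_us + latency, order_id))
    return [order_id for _, order_id in sorted(arrivals)]

# Ten orders sent 100 microseconds apart, in id order.
sent = [(i * 100.0, f"order-{i}") for i in range(10)]
print("send order   :", [oid for _, oid in sent])
print("arrival order:", arrival_sequence(sent))
```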

 

 

Accessing all resources

The data storage and compute requirements implicit in the emerging regulatory frameworks far outstrip those historically required by banks and brokers. This is encouraging their technology executives to adopt a wide range of approaches to data management infrastructure that aggregate a range of solutions and capabilities, collaborating with providers of managed services to take advantage of the explosion of ‘RegTech’ innovation in recent years.

 

“Banks’ cloud-based solutions are now very often created in partnership with third-party vendors and service suppliers.”
Yousaf Hafeez, BT’s Financial Technology Services Division

 

But with banks and brokers starting from many different positions and with varying priorities, they are inevitably taking a number of different paths. According to Yousaf Hafeez, Head of Business Development, Financial Technology Services, BT, some firms are evolving gradually, while others are taking a more transformational approach, deciding they must replace an infrastructure that is no longer fit for purpose. Hafeez also sees a number of related trends, such as: firms decentralising their infrastructure to enable some reports to be run locally; separating out different elements of the overall IT infrastructure, for example distinguishing their data aggregation infrastructure from that used for calculations required for specific regulations; and greater use of big data analytics to structure and transform data.

On this latter point, SAS’s Kilcoyne notes the growing role of big data analytics capabilities. “As a matter of urgency, banks must start exploring cheaper methodologies for storing, retrieving and processing data,” he says, recommending Hadoop-based strategies to deliver the greatest ‘cost per terabyte’ benefits over traditional storage methodologies and techniques.
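
As a minimal sketch of the kind of Hadoop-based pattern Kilcoyne alludes to (the cluster path, table layout and column names below are hypothetical placeholders, not a recommended architecture), historical risk records might be held as partitioned columnar files and queried with Spark:

```python
from pyspark.sql import SparkSession, functions as F

# Historical risk records held as columnar Parquet files on HDFS, partitioned by
# business date, and queried with Spark.

spark = SparkSession.builder.appName("risk-history-query").getOrCreate()

# Scan only the date range a regulatory look-back actually needs.
risk = spark.read.parquet("hdfs:///data/risk/sensitivities")   # hypothetical location

desk_totals = (
    risk.filter(F.col("as_of_date").between("2015-01-01", "2016-12-31"))
        .groupBy("desk", "risk_class")
        .agg(F.sum("weighted_sensitivity").alias("total_ws"))
)
desk_totals.show(20)
```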

Increasingly, firms are accessing capabilities across the cloud to orchestrate the necessary resources, while maintaining their traditional focus on security. “Firms may have their own cloud-based solution, but it tends to be a proprietary, dedicated solution, very often created in partnership with third-party vendors and service suppliers. This might involve third-party network providers, data centres and software tools, but we don’t see these calculations being done on public cloud services yet,” says Hafeez.

Maxfield predicts a key role for managed services and hosted capabilities, which have advanced in scale, flexibility and sophistication in recent years. As such, he sees a greater willingness among resource-constrained banks to leverage third-party expertise. “Progressive organisations are taking a modular approach to transformation, typically working on a multi-vendored, multi-service-provider basis, as opposed to using an all-encompassing ‘bank in a box’ approach,” he says.

 

“Progressive organisations are taking a modular approach to transformation, typically working on a multi-vendored, multi-service-provider basis.”
James Maxfield, Independent Consultant

 

Regulatory project implementation deadlines are still coming thick and fast, with MiFID II and FRTB looming largest. But common threads are becoming more sharply defined across all regulations, helping regulated firms to put together a strategic rather than tactical response.

“Use of third-party solutions and capabilities makes a lot of sense in this regulatory context. Banks and brokers require highly scalable solutions to handle peaks in demand and resource-intensive tasks, but will not want to own capability that may lie idle at other times,” says Weegels at Verne Global. “Moreover, the evolving regulatory framework is one of several sources of uncertainty that can be hard to handle without flexible access to a range of resources. With firms remaining highly capex conscious, we see an increasingly hybrid approach emerging that deploys a mix of in-house and third-party data management and compute resources, with outsourcing playing a key role in providing capacity above and beyond that of banks.”

 

(1) EY, ‘FRTB – The revised market risk capital framework and its implications’, February 2016


 
