Let the Data Flow

By Mike O’Hara, The Realization Group and George Andreadis, TreoTrade

Data is becoming increasingly important in the new economy and has been called the world’s most valuable resource. So how can we help financial services firms unlock this value?

Legacy systems

Over the last two decades or so, as banks and asset managers have tried to put systems and processes in place to build business value, their legacy technologies have often been a limiting factor, in particular because of their siloed approach to data.

For example, a bank might want to develop an application for interrogating cross-asset positions in a portfolio, to calculate net asset value and better manage risk. However, the bank might find that the information needed is not stored in one central location but instead is held by different parts of the organisation, in different systems, and in different formats.
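To make the problem concrete, here is a minimal Python sketch of the kind of stitching code a developer ends up writing just to compute a net asset value across two silos. The source systems, export formats and field names are entirely hypothetical.

    # A sketch of the stitching a siloed estate forces on developers.
    # The export formats and field names below are hypothetical.
    import csv
    import json

    def load_equity_positions(path):
        # The cash equities silo exports CSV with its own column names.
        with open(path, newline="") as f:
            return [{"instrument": row["ticker"], "value": float(row["mkt_val"])}
                    for row in csv.DictReader(f)]

    def load_derivative_positions(path):
        # The derivatives silo exposes JSON with a different schema again.
        with open(path) as f:
            return [{"instrument": p["underlying"], "value": float(p["mtm"])}
                    for p in json.load(f)]

    def net_asset_value(*position_sets):
        # NAV can only be computed once every silo has been normalised.
        return sum(p["value"] for positions in position_sets for p in positions)

    print(net_asset_value(load_equity_positions("equities.csv"),
                          load_derivative_positions("derivatives.json")))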

This is a common situation in our industry. Legacy technologies will have been built up over time to accommodate siloed business needs. For example, a derivatives team will have evolved with different needs from a cash equities unit, and its systems will have been developed accordingly.

This of course results in ongoing difficulties when trying to develop any new applications that require data from more than one of these siloed systems.

Open source

In contrast, outside the financial services sector, web-scale firms such as Google, Amazon and Alibaba have prioritised data, designing and building processes to interrogate that information efficiently in open source databases such as Postgres, to the huge benefit of their businesses. And instead of locking that technology up and licensing it for huge sums, they have encouraged open source communities, where anyone can develop improvements.

Given the situation that financial institutions face today with their legacy, closed systems and their data silos, how can they move towards this ideal?

A bottom up approach

The key to migrating from legacy systems that were created around silos to new architectures that are built around data is to take a bottom-up rather than a top-down approach. In practice, this means tackling one silo and one business function at a time, rather than attempting a ‘big bang’ migration.

The first step is to understand all the information that is required for downstream processing, both in the existing architecture and for the new application to be built. Where does the data in the legacy system come from and where is it stored? How does that system work? How is the data structured? How can it be pulled out? Does the data need to be scraped, or can it be pushed out somehow?
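Even a few lines of Python can help with this discovery step. The sketch below profiles a hypothetical CSV extract from a legacy system, listing its columns and some sample values before any migration decisions are made; real legacy systems may instead need scraping or bespoke adapters.

    # A first-pass profile of a legacy extract: list the columns and a few
    # sample values before deciding how to migrate them. The file name is
    # hypothetical.
    import csv
    from itertools import islice

    with open("legacy_export.csv", newline="") as f:
        sample = list(islice(csv.DictReader(f), 5))

    for column in sample[0]:
        print(f"{column}: e.g. {[row[column] for row in sample]}")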

The next step is to store the requisite data in an open source database (such as Postgres), so that it can be formatted, augmented and indexed with appropriate metadata, enabling it to be accessed via standardised APIs.
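As an illustration, the Python sketch below (using the psycopg2 driver) lands normalised position data in Postgres with a JSONB metadata column and an index over it. The table, columns and metadata fields are our own invention.

    # A sketch of landing normalised data in Postgres with queryable
    # metadata. Table, columns and metadata fields are hypothetical.
    import json
    import psycopg2

    conn = psycopg2.connect("dbname=positions_poc")  # assumed local database
    with conn, conn.cursor() as cur:
        cur.execute("""
            CREATE TABLE IF NOT EXISTS positions (
                id          serial PRIMARY KEY,
                instrument  text NOT NULL,
                value       numeric NOT NULL,
                metadata    jsonb
            )
        """)
        # JSONB metadata lets each record carry its lineage with it.
        cur.execute(
            "INSERT INTO positions (instrument, value, metadata) "
            "VALUES (%s, %s, %s)",
            ("VOD.L", 1250000.00,
             json.dumps({"source": "cash_equities", "asset_class": "equity"})))
        # A GIN index makes the metadata searchable, not just descriptive.
        cur.execute("CREATE INDEX IF NOT EXISTS positions_meta_idx "
                    "ON positions USING gin (metadata)")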

The power of the API

The standardised RESTful (Representational State Transfer) API is key to all of this, as it provides interoperability between the legacy systems, the open database and the new components to be built. In essence, it provides the ‘glue’ between the old and the new, standardising the way the two sets of systems work together.
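By way of illustration, here is a minimal Python sketch of such a ‘glue’ endpoint, built with the Flask framework. The endpoint path and payload shape are purely hypothetical; in a real service the canned response would be replaced by a query against the open database.

    # A sketch of a RESTful 'glue' endpoint using Flask. The route and
    # payload shape are hypothetical.
    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route("/positions/<book>")
    def get_positions(book):
        # In a real service this would query the open database; a canned
        # response stands in for that here.
        return jsonify([{"book": book, "instrument": "VOD.L",
                         "value": 1250000.00}])

    if __name__ == "__main__":
        app.run(port=8080)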

Of course, APIs (Application Programming Interfaces) are nothing new. They have been around in some shape or form for as long as computer applications have needed to communicate with each other. But today, with the open source nature of collaboration, we are seeing APIs become more standardised and therefore more efficient.

Taking a standardised API-based approach means that the migration challenge can be tackled one silo at a time. Once you know what data is needed, where it comes from and where it needs to go, you can use the same API to extract it and feed it into a new system. Of course, the most difficult part of the challenge is not actually creating the API, but understanding where all the necessary data is currently held internally and where it is used by existing downstream systems.
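To continue the sketch, a downstream consumer of that hypothetical endpoint needs nothing more exotic than an HTTP client; the same call works whether the consumer is a new application or an extraction job feeding another system.

    # A downstream consumer of the hypothetical endpoint above, using the
    # requests library. The URL matches the Flask sketch.
    import requests

    resp = requests.get("http://localhost:8080/positions/emea-equities")
    resp.raise_for_status()
    for position in resp.json():
        # Feed each record into the new system; here we simply print it.
        print(position["instrument"], position["value"])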

However, by breaking down individual silos one at a time, you can extract the necessary information from those legacy systems without switching them off, which matters because they may still need to be used by other parts of the organisation. This means you can address one business function at a time, without taking down the whole system for a ‘big bang’ migration.

Another benefit of this API-based approach is that a proof of concept can be delivered quickly, demonstrating the business case for change in one specific area before the approach is rolled out to other business functions.

What about top down?

So, what needs to happen from a ‘top-down’ perspective, i.e. at the business, management and cultural levels within a firm, to enable this approach to be taken?

In short, you need the buy-in of senior management, which, in terms of silos, often means the people looking after those legacy systems. It’s all very well if your bottom-up approach works technically and you have something up and running, but if there’s no buy-in from the senior IT people, or the group IT people in that particular silo, then your project could be dead in the water. The key here is to demonstrate value, so that there is proper commitment to that change from higher levels within the organisation.

In conclusion, by using an open, API-based approach, financial services firms can not only extract valuable insight from the data within their current legacy systems, but can also migrate more easily to platforms architected on next-generation technologies. By doing this, they will be well placed to meet the business challenges of the future.

***

For more information on TreoTrade, visit www.treotrade.com or follow TreoTrade on LinkedIn.