Banks have a lot of data. Transaction volumes have soared over the years, driven by factors such as payment cards, internet banking, online shopping and the shift to direct debits. Now add in the calculations and statistical models being run on that data for things like regulatory reporting and risk and profitability analysis, and you have some seriously large data volumes. And that's before considering the system and data impact of new regulations such as Dodd-Frank and Basel III, which will demand data at a much finer level of granularity and an awful lot more number crunching.
Not content with having some of the biggest data around, banks spend a large portion of their IT budgets moving it around, joining it back together, adding it up, reconciling it and hunting down discrepancies. This has become the norm in most banks due to system proliferation driven by factors such as mergers and acquisitions, new product offerings and new analytical reporting requirements.
Yesterday, all my data seemed so far away
Current bank architectures still largely reflect the technology limitations of yesterday. Slow links with branches result in thousands of batch jobs at the end of the day to collect and post transactions. The recent problems at RBS stemmed from a faulty patch to a batch payment processing system. And interfaces between operational and analytical systems still rely heavily on overnight batch processing.
Traditional databases and hardware are also partly responsible. Take for example the data warehousing industry, which has grown from the inability of databases to cope with the demands of inserting and updating high volumes of records while also being asked tough analytical questions. The solution up until now has been to copy the data onto separate systems, where it is manipulated, augmented and cleansed, and then stored in a format optimised for "decision support".
Most bank processes and thinking are still organised around these limitations.
It doesn't have to be this way
In-memory computing platforms such as SAP HANA have the potential to disrupt and fundamentally change underlying data architectures. What if core banking systems could be both transactional and analytical at the same time? What if these systems were fast enough to allow a real-time view of customer profitability or risk? That would reduce the number of additional systems and databases needed and, in turn, the need to move data around. Fewer copies of data would mean less risk of introducing data quality problems and numerical discrepancies. The end result is that banks would have less complexity and be a few steps closer to the dream of "one version of the truth". The cost savings on storage alone should be significant.
Admittedly, this vision is going to take some time to become reality, but for the smaller, newer banks it isn't that far-fetched.
For the big guys, a more realistic starting point is on the analytical side, where much of the complexity and inefficiency lies. Removing years and years of 'kludge' will not be easy or for the faint-hearted. Some of my banking clients have decided to steer clear of touching certain management information systems because they have become too complex or are not well understood. Or usually both.
How SAP HANA can help simplify bank architectures
Simplifying and rationalising these banking systems is like peeling an onion because of the different layers involved. In many cases, it might actually be easier to start afresh and build up new parallel architectures and systems based on current and future requirements rather than trying to unravel and reverse engineer what these systems do at the moment and why.
What SAP HANA offers is the ability to re-design and re-think the analytical landscape. If you've got a system that's capable of processing billions of granular records in near real-time, you don't need to create snapshots and aggregates. You can have a number of different analytical views or windows onto the same data, which update automatically when the source data changes. No more batch. No more aggregates. No more reconciliation.
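The idea of live analytical views over granular data can be sketched in miniature with Python's built-in sqlite3 module. The table, view and account names below are invented for illustration, and SQLite's in-memory database merely stands in for an in-memory platform — this is not SAP HANA syntax or architecture:

```python
import sqlite3

# SQLite's in-memory database stands in for an in-memory computing platform.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transactions (account_id TEXT, amount REAL)")

# An analytical "window" defined directly over the granular records:
# no snapshot table, no aggregate rebuilt by an overnight batch job.
conn.execute("""
    CREATE VIEW account_balances AS
    SELECT account_id, SUM(amount) AS balance
    FROM transactions
    GROUP BY account_id
""")

conn.executemany(
    "INSERT INTO transactions VALUES (?, ?)",
    [("A001", 250.0), ("A001", -75.0), ("A002", 1200.0)],
)
balances_before = dict(conn.execute("SELECT * FROM account_balances"))

# A new transaction arrives; the view reflects it immediately,
# with no batch window and nothing to reconcile.
conn.execute("INSERT INTO transactions VALUES ('A001', 100.0)")
balances_after = dict(conn.execute("SELECT * FROM account_balances"))

print(balances_before)  # {'A001': 175.0, 'A002': 1200.0}
print(balances_after)   # {'A001': 275.0, 'A002': 1200.0}
```

The point of the sketch is the absence of a second, derived copy of the data: the aggregate is just a query over the source records, so it can never drift out of line with them.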
The potential rewards for those willing to rise to the challenge are significant. They include:
lower support and maintenance costs as a result of reduced complexity (data flows, batch jobs, systems, reports, consolidations)
easier delivery of new business and regulatory reporting requirements
significantly reduced storage costs
greatly improved data quality and integrity.
SAP HANA needs architects to think differently too. Undoing years and years of conditioning and "best practice" is not going to be easy. We saw this recently on a credit risk proof of concept where the client's immediate inclination was simply to underpin the existing solution with SAP HANA. Yes, this would have delivered some immediate performance benefits, but to really get the best from this technology, we needed to simplify, rationalise and remove the redundant copies of data and superfluous layers. This quote came to mind at the time:
"A designer knows he has achieved perfection not when there is nothing left to add, but when there is nothing left to take away."
Antoine de Saint-Exupéry