Why Banks Bank on Data Lakes for Turning Big Data into Intelligence

A renowned bank in Singapore has found a way to cut down customer wait times: seamless replenishment of its ATMs made this possible. This is one of many data and analytics stories that illustrate what data can do for an organization.

With data growing at a phenomenal rate, the banking industry is moving steadfastly to make the most of that data and monetize it. While there is no dearth of big data, banks face the imperative of building a robust infrastructure, eliminating data silos and maximizing the value drawn from data.

Entering the big data universe, banks have come to rely on data lakes to bring all their data together and accelerate the data-to-intelligence-to-action cycle. Here’s how one financial institution made the transition to a Hadoop-based data lake and rode that transition to profitable business outcomes.

What was the imperative?

The leading bank offered its customers comprehensive financial services covering corporate banking, personal financial services, commercial banking, private banking and investment banking. Built around a data warehouse fed by more than 50 source systems, the bank took longer than it had envisioned to go from data to intelligence. As data volumes grew enormously with each passing day, the existing setup could no longer handle them economically.

Stepping into the era of on-the-go analytics, the bank grasped the essence of the barrier-less data paradigm: centralized data storage and high-speed processing infrastructure that would let it make the best use of big data and reap value from all its business intelligence and analytics initiatives.

The Caveat – ‘T’ with ‘EL’?

A question often treated as trivial attracted the attention of the financial institution. Before the transition, it sought an answer to this query: does data need to be transformed before it is ingested into the Hadoop-based data lake, rather than afterwards as is usual in a data lake, and does it really matter?

Looking closely at certain data values, data transformation becomes relevant even before the data is ingested into the data lake. The data values that prompted the need for ‘T’ with ‘EL’ include:

  • ASCII and special characters
  • Junk characters
  • Newline characters
  • Disparate date formats
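The kind of pre-load cleanup described above can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration (field names, date formats and the overall record shape are assumptions, not the bank’s actual pipeline): it strips junk and non-printable characters, flattens embedded newlines, and coerces disparate source date formats to a single ISO form before a record is written to the lake.

```python
import re
from datetime import datetime

# Illustrative list of source date formats; a real pipeline would derive
# these from the 50+ source systems being ingested.
DATE_FORMATS = ["%d/%m/%Y", "%Y-%m-%d", "%d-%b-%Y", "%m/%d/%Y"]

def clean_text(value: str) -> str:
    """Remove newline, junk and non-printable/special characters."""
    value = value.replace("\n", " ").replace("\r", " ")          # newline characters
    value = "".join(ch for ch in value if 32 <= ord(ch) < 127)   # junk / special characters
    return re.sub(r"\s+", " ", value).strip()                    # collapse leftover whitespace

def normalize_date(value: str) -> str:
    """Coerce disparate source date formats to a single ISO 8601 form."""
    for fmt in DATE_FORMATS:
        try:
            return datetime.strptime(value.strip(), fmt).strftime("%Y-%m-%d")
        except ValueError:
            continue
    return ""  # unparseable dates are flagged empty rather than loaded raw

def transform_record(record: dict) -> dict:
    """Hypothetical per-record transform applied during extract-and-load."""
    return {
        "customer_name": clean_text(record.get("customer_name", "")),
        "txn_date": normalize_date(record.get("txn_date", "")),
    }

raw = {"customer_name": "Tan\nWei\x00 Ming\u00a9", "txn_date": "09/07/2018"}
print(transform_record(raw))
```

Applying this kind of transform at load time, rather than deferring it to query time, is what keeps the debilitating values named above out of the lake altogether.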

Considering the debilitating effect these values would have on overall data quality, transforming the data while extracting and loading it into the data lake produced clean, fine-grained data, ensuring top data quality and helping the bank get the most out of its data.

What did the Data Warehouse to Data Lake transition yield?

The Hadoop-based data lake, synched with leading visualization and analytics tools, improved the bank’s outcomes from business intelligence and analytics initiatives and accelerated the journey from data to insightful reports. The most notable transformation came by way of data exploration and agile analytics.

Building the Hadoop-based data lake gave the institution easy, quick access to data, accelerating the data-to-intelligence cycle. With centralized data storage, the bank could explore the right data and apply data analytics, data science and visualization tools to reap rich insights and drive successful business outcomes.

When one of the leading banks in Singapore relied on our big data expertise, Saksoft helped it eliminate data silos, build a Hadoop-based data lake and achieve a 98% reduction in report generation time.

July 9th, 2018