
Big Data and Master Data Management

Single Source of Truth in Big Data

Master Data Management (MDM) is a key aspect of data governance in the enterprise. MDM enables the development of a "Single Version of Truth" by providing common descriptions for enterprise-wide entities.

Need for MDM in Big Data Processing
Before Big Data, enterprises generally managed their transaction data in traditional relational databases. One of the biggest strengths of relational databases is their ability to enforce constraints such as check constraints, primary keys, and foreign keys, which ensure that the captured data is of high quality.
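As a minimal sketch of this enforcement, the following Python snippet uses an in-memory SQLite database (table and column names are purely illustrative) to show a primary key, a foreign key, and a check constraint rejecting bad data at capture time:

```python
import sqlite3

# Illustrative schema: a master "customer" table and a dependent "orders" table.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
conn.execute("""
    CREATE TABLE customer (
        customer_id INTEGER PRIMARY KEY,
        name TEXT NOT NULL
    )""")
conn.execute("""
    CREATE TABLE orders (
        order_id INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customer(customer_id),
        amount REAL CHECK (amount > 0)
    )""")
conn.execute("INSERT INTO customer VALUES (1, 'Acme Corp')")
conn.execute("INSERT INTO orders VALUES (100, 1, 250.0)")  # valid row

# An order referencing a non-existent customer violates the foreign key
try:
    conn.execute("INSERT INTO orders VALUES (101, 99, 10.0)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

The same rejection happens for a negative `amount`, courtesy of the check constraint; this is exactly the capture-time validation that raw Big Data ingestion lacks.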

Despite such support for data integrity, enterprises still had duplicates in their master data, which produced inaccurate analytics results. For example, an enterprise may target an expensive advertising campaign for a new product at its existing customers; but because a particular customer may exist under different IDs across multiple systems, the enterprise may send its campaign materials to the same person multiple times.

Similarly, a manufacturing enterprise may analyze problem and complaint records from its customers, but a lack of uniformity in product codes across regions, and in problem types, may result in inaccurate quantification of the issues.
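One common way such cross-system duplicates are surfaced in MDM matching is fuzzy name comparison. Here is a minimal sketch using Python's standard `difflib`; the records, IDs, and the 0.85 threshold are all hypothetical:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Similarity ratio in [0, 1] between two normalized name strings."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

# Hypothetical customer records from two systems; the IDs differ,
# but the first CRM record and the first billing record are the same person.
crm = [("C-001", "John A. Smith"), ("C-002", "Mary Jones")]
billing = [("B-9", "john a smith"), ("B-10", "Robert Brown")]

# Flag likely duplicates above a similarity threshold (0.85 is an assumption
# a real MDM tool would tune per attribute).
matches = [
    (cid, bid)
    for cid, cname in crm
    for bid, bname in billing
    if similarity(cname, bname) > 0.85
]
print(matches)
```

Real MDM platforms use far richer match rules (phonetic codes, address standardization, survivorship), but the shape of the problem is this pairwise comparison.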

Enterprises have traditionally attacked Master Data Management by implementing the following measures:

  • Establish common descriptions for core business entities across multiple systems, enabling a "single version of the truth"
  • Assess current master data maturity across the enterprise, identify the target maturity, and identify the gaps
  • Select a Master Data Management tool
  • Define master data models and cleanse the data
  • Set up MDM governance and stewardship
  • Develop an MDM strategy to handle mergers and acquisitions

With the advent of Big Data processing, enterprises started analyzing massive amounts of unstructured data from unconventional sources. As a result, inconsistencies across the data are increasing, while the validation performed at data capture is very limited compared with traditional relational data capture.

For example, if an enterprise wants to target customers on social media, where one customer may be represented in multiple forums under different names, the chances of a campaign either over-reaching a person or not reaching them at all are very high. The same is true when microblogging sites are used to analyze the voice of the customer and categorize complaints across products: there is a high possibility that customers misspell product names or use local naming conventions for the same products, which prevents effective analysis.
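Misspelled product mentions can often still be resolved against a master list without exact matching. A sketch using the standard library's `difflib.get_close_matches`; the product names and the 0.7 cutoff are assumptions for illustration:

```python
from difflib import get_close_matches

# Hypothetical master product names from the MDM hub.
master_products = ["UltraPhone X", "UltraPad Mini", "UltraBook Pro"]

def resolve_product(mention):
    """Map a free-text product mention to a master record, tolerating misspellings."""
    lowered = [p.lower() for p in master_products]
    hits = get_close_matches(mention.lower(), lowered, n=1, cutoff=0.7)
    if not hits:
        return None
    # Return the canonical (original-cased) master name
    return master_products[lowered.index(hits[0])]

print(resolve_product("ultrafone x"))   # misspelled mention still resolves
print(resolve_product("some gadget"))   # unrelated text yields no match
```

In practice this is where text analytics earns its keep: tokenization, spelling correction, and entity resolution against master data, rather than a brittle exact-string lookup.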

Master Data Management in Big Data
The following are some approaches for integrating MDM data quality solutions into Big Data processing, so that true and accurate insights can be generated from these massive quantities of data.

  • Adopting Hybrid Big Data Solutions: As highlighted in my last article on hybrid Big Data solutions, integrating Big Data with the existing relational data, which is likely to contain the MDM source databases, is one of the easiest ways to ensure data quality on Big Data sources.
  • Matching More Than Keywords: Massive quantities of unstructured data bring a greater level of ambiguity about the classification and relevance of documents, so mere keyword matching of entities is not enough to identify the subject of interest. Most current Big Data examples rely on standard regular expression functions; the true potential of Big Data in conjunction with MDM can be realized only when text analytics is applied to Big Data beyond standard regular expressions.
  • Adopting a Data Virtualization Layer: A data virtualization platform provides a common hub for capturing data across traditional and Big Data sources, so business rules can be managed at this layer, ensuring data quality across disparate data sources.
  • Utilizing the Power of Hadoop Database Extensions: Big Data frameworks like Hadoop provide the ability to keep data in their own file system, HDFS, without transforming it, and the data can be accessed using SQL-like languages. For example, Hive allows reading data in the Hadoop file system through a SQL interface, and HBase is a columnar database implemented on top of HDFS. These implementations support imposing constraints on the underlying Big Data; for example, Hive supports JOINs across tables, which goes a long way toward checking integrity with respect to MDM.
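The integrity check that a Hive JOIN enables can be sketched in plain Python as a left anti-join: event records whose product code has no match in the master table are flagged (all names here are illustrative, not from any specific system). In HiveQL this would be a `LEFT JOIN ... WHERE master.code IS NULL`:

```python
# Hypothetical master product codes (from the MDM system)
master_codes = {"P100": "UltraPhone X", "P200": "UltraPad Mini"}

# Hypothetical raw complaint records as they might land in HDFS
events = [
    {"event_id": 1, "product_code": "P100", "complaint": "battery drains"},
    {"event_id": 2, "product_code": "P999", "complaint": "screen cracked"},  # unknown code
    {"event_id": 3, "product_code": "P200", "complaint": "wifi drops"},
]

# Left anti-join: keep events whose product_code is absent from master data,
# i.e. rows that would fail a foreign-key check in a relational database.
orphans = [e for e in events if e["product_code"] not in master_codes]
print([e["event_id"] for e in orphans])  # prints [2]
```

Routing such orphan records to a stewardship queue, rather than silently aggregating them, is what keeps downstream analytics trustworthy.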

While enterprises continue to adopt Big Data as part of their data management, the biggest challenge will be data quality. Relational databases have done a great job on data integrity, and Big Data should support the same. Implementing traditional Master Data Management on top of Big Data / unified data will go a long way toward providing correct insights from Big Data processing.

More Stories By Srinivasan Sundara Rajan

Highly passionate about utilizing digital technologies to enable the next-generation enterprise. Believes in enterprise transformation through the natives (cloud native and mobile native).