Despite the extensive investment in enterprise data management software over the last couple of decades, in most large enterprises it is still extremely difficult to get an answer to some quite innocent-sounding questions. Which are my most profitable customers? Which are my best channel partners? How much do we actually make on Product X? Those seem like very basic business performance questions, yet in fact the answers often remain elusive at the enterprise level.

This e-guide collects examples of big data technologies in use, such as how Mercedes-AMG Petronas Motorsport are looking to gain an edge on the competition in the Grand Prix season. It also shows, through high-profile case studies, how big organisations are managing their big data operations and their data analytics programmes and teams.


The cause of the difficulty is that many large organizations do not have a master computer system that is the one and only source of undisputedly accurate data on customers, products and suppliers.

One 2009 survey by my firm, The Information Difference, found that the average large -- that is, more than $1 billion in revenue -- company has six different systems that all think they are the master source of customer data and nine that are creating and maintaining product data; 13% of the 115 companies that took part in the survey had over 100 such competing systems. That can make it hard to pin down exactly what Product X really is, given the potentially different definitions that could exist in the various systems strewn across an enterprise.

A similar issue arises in allocating true costs to a specific transaction or customer. At one company I previously worked for, an internal review revealed 27 different definitions of "gross margin" in use -- and that is a supposedly unambiguous accounting term.
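The ambiguity is easy to demonstrate. In this hypothetical sketch (the figures and the freight treatment are invented for illustration), two business units that disagree only on whether freight belongs in cost of goods report different margins on the identical transaction:

```python
# Hypothetical example: two plausible "gross margin" definitions
# applied to the same transaction give different answers.
revenue = 100_000.0
cost_of_goods = 62_000.0
freight = 3_000.0  # one unit includes freight in cost, another does not

margin_excl_freight = (revenue - cost_of_goods) / revenue
margin_incl_freight = (revenue - cost_of_goods - freight) / revenue

print(f"{margin_excl_freight:.1%}")  # 38.0%
print(f"{margin_incl_freight:.1%}")  # 35.0%
```

Neither number is wrong under its own definition; the problem is that the enterprise has no single definition at all.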

The broken promise of ERP for enterprise data management

Attempts to remedy the situation over the years have mostly not succeeded. Enterprise resource planning (ERP) systems were sold partly on the promise of doing so, yet in reality even the most comprehensive ERP systems cover only part of the scope of a large enterprise. In one company I worked with, 175 separate interfaces remained after every module of a major ERP system was implemented.

Even today, most large companies will admit to having many hundreds, even thousands, of separate applications, only one of which is their chosen ERP system. (Assuming they have one; I worked with one medium-sized company that had software from over a dozen ERP vendors deployed.) Moreover, for the largest companies it is common to have dozens, even hundreds, of instances of an ERP system deployed across the various geographies of an international organization, and ERP consolidation projects aimed at reducing that sprawl are extremely costly and rarely get down to a single instance. (And even if they did, don’t forget all the other non-ERP applications that remain.)

This data quagmire has led to a separate attempt to improve matters. The discipline known as master data management (MDM) focuses on producing consistent definitions of the key shared data in an enterprise: customer, product, supplier, asset, personnel, location and so on. There are different approaches to MDM, but the process essentially consists of mapping the current competing definitions in the various deployed applications and from them producing a single “golden copy” record.
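As a minimal sketch of that resolution step (the source names, field names and trust rankings here are all hypothetical, not drawn from any particular MDM product), a golden record can be assembled field by field, preferring the most trusted source that actually holds a value:

```python
# "Survivorship" sketch: merge competing customer records into one
# golden record. Source priorities are an illustrative assumption.
SOURCE_PRIORITY = {"crm": 3, "erp": 2, "billing": 1}  # higher wins

def golden_record(candidates):
    """Merge competing records field by field, taking each value from
    the most trusted source that has a non-empty entry for it."""
    ranked = sorted(candidates,
                    key=lambda r: SOURCE_PRIORITY[r["source"]],
                    reverse=True)
    merged = {}
    for record in ranked:
        for field, value in record.items():
            if field != "source" and value and field not in merged:
                merged[field] = value
    return merged

records = [
    {"source": "billing", "name": "ACME Corp", "address": "1 Main St", "phone": ""},
    {"source": "crm", "name": "Acme Corporation", "address": "", "phone": "555-0100"},
]
print(golden_record(records))
```

Real MDM tools add matching, standardisation and stewardship workflows around this core idea, but the field-level survivorship decision is the heart of producing the golden copy.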

In more ambitious MDM programmes, the idea is then to switch over to the shiny new master records throughout the enterprise, with MDM hubs supplying the consistent set of master data back to the core operational systems, possibly by means of an enterprise service bus architecture. That is a major undertaking, since many existing applications were not designed to take their master data from a source external to them -- and if there are indeed hundreds of deployed applications, the magnitude of such an effort can easily be imagined.

A less golden copy

A less ambitious approach to enterprise data management leaves the existing data in the operational systems in place but takes a shadow copy of the competing master data from the various systems and resolves it into a golden copy that is used purely for business intelligence. In this scenario, the master data hub supplies the enterprise data warehouse with its key "dimensions," at least allowing consistent enterprise reporting even though the core problem of inconsistency at the operational level is left untouched.

The latter approach is much easier than full-fledged MDM but does have a major drawback: It does not address operational data quality. It turns out that data quality is the dirty little secret of most large corporations. Even in organizations where many people know data quality is bad, typically no one wants to point that out to senior management. And hardly anyone dreams of being promoted to data quality manager, so quality problems often remain unresolved. One recent (2010) Information Difference survey found that just 9% of the 134 companies surveyed actually enforced their business rules regarding data at the source, which is really the only effective way to try to eradicate data quality problems.
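Enforcing rules at the source means the system of entry rejects bad data before it is ever stored, so there is nothing to cleanse downstream. A minimal sketch, with purely illustrative rules and field names:

```python
# Sketch of enforcing business rules at the point of data entry,
# rather than cleansing downstream. Rules here are illustrative.
import re

RULES = {
    "customer_id": lambda v: bool(re.fullmatch(r"C\d{6}", v)),
    "country": lambda v: v in {"GB", "US", "DE", "FR"},
}

def validate_at_source(record):
    """Reject the record outright if any rule fails -- the system of
    entry never stores bad data, so nothing needs fixing later."""
    errors = [f for f, ok in RULES.items() if not ok(record.get(f, ""))]
    if errors:
        raise ValueError(f"rejected at source, invalid fields: {errors}")
    return record

validate_at_source({"customer_id": "C123456", "country": "GB"})  # accepted
```

The organisational difficulty, of course, is not writing such checks but getting every system of entry to apply them, which is why so few companies in the survey actually did it.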

Data quality goes beyond familiar issues, like having customer addresses listed inconsistently in multiple systems. One UK bank discovered that 8,000 of its customers were, according to its systems, over 150 years old. This only came to light when a major project to cross-sell life insurance to current account holders had to be pulled because of that fundamental data quality problem.
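A simple plausibility rule of the kind that would have flagged those records might look like the following (the 120-year threshold, the field names and the sample data are assumptions for illustration):

```python
# Plausibility check: flag customers whose recorded date of birth
# implies an impossible age. Threshold and fields are illustrative.
from datetime import date

MAX_PLAUSIBLE_AGE = 120

def implausible_ages(customers, today=None):
    """Return (id, age) pairs for customers with impossible ages."""
    today = today or date.today()
    flagged = []
    for c in customers:
        age = (today - c["date_of_birth"]).days / 365.25
        if age > MAX_PLAUSIBLE_AGE or age < 0:
            flagged.append((c["id"], round(age)))
    return flagged

customers = [
    {"id": 1, "date_of_birth": date(1985, 6, 1)},
    {"id": 2, "date_of_birth": date(1850, 1, 1)},  # a legacy default value?
]
print(implausible_ages(customers, today=date(2011, 1, 1)))  # [(2, 161)]
```

Checks like this are cheap to run; the expensive part is tracing each flagged record back to the system and process that produced it.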

The business must own the data quality problem

Implementing consistent master data and fixing data quality is only going to happen when businesses take ownership of their data, rather than just shifting responsibility to their IT departments, which typically do not have the authority to force business units to change their ways. The need for business involvement has led to rising interest in data governance programmes. Data governance is a process rather than a technology (though technology can help) and is fundamental to getting a grip on enterprise master data and improving data quality. Without business ownership of data and the governance processes being put in place to resolve disputes about key data definitions, master data management projects will have at best limited success. Indeed, there is a danger of them just creating a new set of master data silos in addition to the application silos that so many companies currently have.

The IT department cannot fix data quality after the event. Business users are the ones who create data about customers, products and assets, and those business users need to reclaim the responsibility for ensuring that the data being created is of sufficient quality to run the business effectively.

ABOUT THE AUTHOR

Andy Hayler is considered one of the world's foremost experts on master data management (MDM). He is co-founder and CEO of analyst firm The Information Difference and a regular keynote speaker at international conferences on MDM, data governance and data quality.

1 comment


Much like disaster recovery, most companies don't think about data governance until the impact on data quality reaches monolithic proportions. I've seen situations where the data quality was so poor that, when migrating to a new system, the business decided the best option was to abandon the old data, which contained years of historical information, and re-key it in the new system.