
The Ties that Bind – (Code Catalogs and Data Mapping)

Have you ever tried to map data between two or more systems? It is difficult because the values and descriptions often differ between products. So what do you do? Many people have found success with a spreadsheet that lists the integration/interface points from each product sharing data. But before you simply copy a value of the same name into the corresponding field in your spreadsheet, there are a few things you should know to build a successful interface or integration. This post covers what you need to know to set up the data transfers correctly and shorten your testing phase.
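To make the spreadsheet approach concrete, here is a minimal sketch of loading such a mapping into a lookup table. The systems, field names, and codes below are hypothetical examples, not values from any real product:

```python
import csv
import io

# Hypothetical mapping spreadsheet exported as CSV: one row per
# integration point, with source and target values side by side.
MAPPING_CSV = """source_system,source_code,target_system,target_code
LegacyEHR,M,NewEHR,MALE
LegacyEHR,F,NewEHR,FEMALE
LegacyEHR,U,NewEHR,UNKNOWN
"""

def load_code_map(csv_text):
    """Build a {source_code: target_code} lookup from the exported spreadsheet."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return {row["source_code"]: row["target_code"] for row in reader}

code_map = load_code_map(MAPPING_CSV)
print(code_map["M"])  # MALE
```

Keeping the spreadsheet as the single source of truth and generating the lookup from it means the documentation and the interface logic cannot drift apart.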

If you are looking for a solid data map, the best way to ensure quality data reaches the end-user on the receiving side of the integration or interface is to do the following:

Involve the business and super users when mapping the data. Do not make decisions about the data strictly from an IT perspective. Ultimately, the end-user is the customer, and IT needs to work for them to meet their needs. (It can be a long, slow process, but putting in the time to plan is well worth it in the end.)

Have every vendor provide the current data structures for the API version that matches the product release you are running.

Identify the code catalogs that affect the interfaces/integrations, and where the end-user will see those values in the target application.

Codes and descriptions are two different things. Help the business and super users understand the difference; what they usually need to define is how the descriptions in the new system map to the codes coming from a legacy system.

If the vendor provides a base set of codes, review them with both the IT and business teams to decide which values can be “inactivated” and which can be kept to fit the map.

Identify which code catalogs cannot be changed and therefore need an external translation map.

For products with both inbound and outbound data feeds, track the data from upstream to downstream so you can identify the code catalog values that must pass through the new system and still deliver the correct information to the downstream systems.
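The last two points can be sketched as a pair of translation hops: an external map from upstream legacy codes into the new system, and another from the new system to what the downstream system expects. All code values here are made-up placeholders:

```python
# Hypothetical translation maps: upstream legacy codes -> new system codes,
# and new system codes -> the values the downstream system expects.
UPSTREAM_TO_NEW = {"01": "ADMIT", "02": "DISCHARGE"}
NEW_TO_DOWNSTREAM = {"ADMIT": "A01", "DISCHARGE": "A03"}

def pass_through(upstream_code):
    """Translate a code across both hops, failing loudly on unmapped values."""
    try:
        new_code = UPSTREAM_TO_NEW[upstream_code]
        return NEW_TO_DOWNSTREAM[new_code]
    except KeyError as exc:
        raise ValueError(f"unmapped code: {exc}") from exc

print(pass_through("01"))  # A01
```

Failing loudly on an unmapped value, rather than passing it through silently, surfaces catalog gaps during testing instead of in production downstream feeds.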

While it is not strictly necessary, I have found it extremely helpful to document the codes and descriptions in a shared space where only one or two people are allowed to change the values. Many people will make decisions and request changes to the code catalogs of a new product being implemented, but if more than a small, responsible group can edit the documentation, both IT and the business will have many issues to recover from. Once a sample set of data is produced from these mappings, it should be easy to validate the data against the documented code catalog values.
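That validation step can be as simple as checking each sample record against the documented catalog. The catalog values and record fields below are hypothetical:

```python
# Hypothetical documented code catalog and a sample data extract to validate.
DOCUMENTED_CODES = {"ADMIT", "DISCHARGE", "TRANSFER"}

sample_records = [
    {"id": 1, "event_code": "ADMIT"},
    {"id": 2, "event_code": "DSCHRG"},  # not in the documented catalog
]

def find_unmapped(records, catalog, field="event_code"):
    """Return the records whose code does not appear in the documented catalog."""
    return [rec for rec in records if rec[field] not in catalog]

bad = find_unmapped(sample_records, DOCUMENTED_CODES)
print([rec["id"] for rec in bad])  # [2]
```

Running a check like this over every sample extract turns the shared documentation into an executable acceptance test for the mapping.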

So, by including the business and super-user teams in the data mapping process, and by using the vendor-provided API documentation for the release you are implementing (or currently have in your legacy systems), you can confidently map data between systems and reduce rework of code catalogs, interface/integration code, and workarounds.

We would love to get your feedback and find out any data mapping tips you can provide to our followers. What has worked well for you?