They say ‘timing is everything’. I don’t necessarily agree that it’s everything, but it is an important ingredient in the design of your integration. My first reaction is always to go ‘real-time’ with an integration project. Scoping and requirements gathering will then bring me to a place where part of the integration cannot be designed to run in real time. Every business is built on processes that serve it well, or at least served it well back in the day. There are times when transaction volume will preclude using real-time methods for translation. Because of the volume, running the process may have such an impact on the system that users cannot tolerate the sluggishness during business hours. Perhaps there is a business process that does not make data available until after business hours. Any number of business process scenarios can dictate a batch integration process rather than a real-time one.

Business processes are not the only consideration when designing your integration. The environment, and specifically the hardware the integrated systems reside on, can play a key role in determining which integration process to use. For example, if your integration design requires polling a record set for the net-change data, that polling can affect performance. To a greater extent, the returned record set will typically fill whatever RAM is available, and if there is not enough RAM to hold the entire record set, it will spill over onto disk. Depending on how large the returned net-change record set is, stealing all available RAM can seriously impact the performance of the system. Conversely, you may have designed a multi-threaded integration process, such as one that uses a message queue as a pickup point for the extracted net-change data. Running a multi-threaded process lets you translate a larger volume of data in a shorter period of time, but that process will be very CPU intensive.
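One common way to keep that polling from swallowing all available RAM is to fetch the net-change record set in fixed-size chunks instead of pulling it back in one go. Here's a minimal sketch, assuming a generic DB-API 2.0 connection and a hypothetical `customers` table with an incrementing `id` used as the net-change marker (the table and column names are illustrative, not from any particular system):

```python
import sqlite3  # stand-in here for any DB-API 2.0 driver

CHUNK_SIZE = 500  # hypothetical value; tune to the RAM you can spare


def poll_net_change(conn, last_sync_id):
    """Yield net-change rows in chunks so the full record set
    never has to sit in memory at once."""
    cur = conn.cursor()
    cur.execute(
        "SELECT id, name, modified_at FROM customers WHERE id > ?",
        (last_sync_id,),
    )
    while True:
        rows = cur.fetchmany(CHUNK_SIZE)
        if not rows:
            break
        yield rows
```

Each chunk can be translated and released before the next is fetched, which bounds memory use at roughly one chunk's worth of rows regardless of how large the net-change backlog has grown.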

So, when developing your integration design, keep in mind that batch processing is memory intensive and multi-threaded processing is CPU intensive. Depending on the environment you’re working in, you may be inclined to build one type of integration process, only to find that its impact would be too costly in terms of system performance and end-user satisfaction.

Ok, so you have one part of your integration set up, great. You’re integrating customers from your CRM system to your ERP system, fantastic. You turn on the integration, and not long afterward, folks from the finance group are complaining that there are customers in ERP that don’t belong in the system. “What?” you say. “I thought you wanted customers integrated in both systems?” The response from finance is that some of these aren’t customers, they’re only prospects. Ah, so not all customers are the same in the CRM system.

This is a very typical scenario. The CRM system breaks down ‘customers’ into different customer types, and only customers who have actually bought something are to be integrated into the ERP system. So, you need to be able to filter records in order to meet this business rule. That filtering can take place in two different places: a) at the time of discovery of the net-change data, or b) during the translation process of the net-change data. If you have used a query to discover the net-change data, you may only need to add a condition to the WHERE clause of the query to ensure that only records meeting the business rule criteria are discovered. If the application has its own net-change method but cannot be modified to filter the records, you will need to build the filtering into the translation process rather than the discovery process. There can, however, be advantages to filtering at the translation level. Let’s say you only want purchasing customers integrated into your ERP system, but you would also like an aggregated view of all new customers added to either system. When you filter out the customers at the translation point rather than the discovery point, the records are still discovered in the net-change process, so the entire record set can be used for customer-by-type reporting.
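The two filter points can be sketched side by side. This is illustrative only: the `customer_type` field, table names, and record shapes are hypothetical, but the contrast is the real one described above, filtering in the discovery query versus filtering during translation while keeping the full set for reporting:

```python
# Option a) filter at discovery: only matching rows are ever pulled.
# (Hypothetical table/column names; :last_sync is a bind parameter.)
DISCOVERY_QUERY = (
    "SELECT id, name, customer_type FROM crm_customers "
    "WHERE modified_at > :last_sync AND customer_type = 'customer'"
)


# Option b) filter at translation: discover everything, filter here.
def translate_to_erp(records):
    """Pass only purchasing customers on to ERP, but keep the whole
    net-change set grouped by type so it can still feed reporting."""
    to_erp = [r for r in records if r["customer_type"] == "customer"]
    by_type = {}
    for r in records:
        by_type.setdefault(r["customer_type"], []).append(r)
    return to_erp, by_type


# Hypothetical net-change records from the CRM side.
net_change = [
    {"id": 1, "name": "Acme", "customer_type": "customer"},
    {"id": 2, "name": "Beta", "customer_type": "prospect"},
]
```

With option b, the prospect record never reaches ERP, yet it remains available in `by_type` for that aggregated new-customer view.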

Another, more complex example would be sales orders being placed in the ERP system and then translated to one of several warehouse systems for processing. In this case, you not only have to filter the record set, but you also have to determine which translation process will be used to ensure each sales order record is consumed by the correct warehouse system. As you can see, business rules will have a huge impact on how you design your integration process. The entire solution may incorporate many different net-change discovery methods as well as many different data translation methods. Don’t get caught building a useless solution, wasting time and money. Dig deep into the requirements in order to develop the best integration processes for the scenario you are working under.
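That filter-then-route pattern can be sketched as a small dispatch table. Everything here is assumed for illustration: the `region` routing key, the `status` filter, and the per-warehouse translation functions would all come out of your actual requirements gathering:

```python
# Hypothetical per-warehouse translation processes.
def translate_for_wh_east(order):
    return {"system": "WH-EAST", "order_id": order["id"]}


def translate_for_wh_west(order):
    return {"system": "WH-WEST", "order_id": order["id"]}


# Business rule: which warehouse consumes the order (assumed key).
ROUTES = {
    "EAST": translate_for_wh_east,
    "WEST": translate_for_wh_west,
}


def route_order(order):
    """Filter the record set, then dispatch to the correct
    warehouse translation process."""
    if order.get("status") != "released":
        return None  # filtered out of the integration entirely
    translate = ROUTES.get(order["region"])
    if translate is None:
        raise ValueError("no warehouse route for region %r" % order["region"])
    return translate(order)
```

A new warehouse then becomes one new translation function and one new `ROUTES` entry, rather than a change to the discovery or filtering logic.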

As promised, this series is intended to be somewhat high level, but I hope that through some of the topics covered you have become a little more familiar with Application Data Integration. It can be tricky stuff to get your head around; there are a lot of issues to consider. Not digging deep enough into the requirements can be very costly, not only monetarily, but also in adoption by your end-users if your integration is tied to a new business system implementation. Knowledge is power, so I hope I’ve charged your batteries a little.