I want to know about best practices for integration testing of a middleware product. It integrates with several backends, and I also want to test it across multiple platforms. Clustering features are available as well.

Irrespective of the product, I basically want to know about best practices for integration testing.

1) What does this middleware do? Is it a message bus? A process scheduler? Something else? 2) What types of systems does it connect?
– dzieciou Aug 17 '12 at 6:36

What does continuous integration have in common with your question?
– dzieciou Aug 17 '12 at 8:35

This middleware connects different backends such as Oracle, SAP, Siebel, and many more.
– Aura Aug 17 '12 at 8:39

This system sounds quite broad, much like the question itself. Do you have a specific question about testing it, or particular challenges with testing it? What have you tried? There is no way anyone could answer it in its current form.
– Steve Miskiewicz Aug 19 '12 at 1:43

There are no "best practices", there are only good practices in context.
– Bruce McLeod♦ Aug 20 '12 at 14:06

2 Answers

Welcome to SQA, Aura. Testing (and supporting) middleware is often complicated by the size of the compatibility matrix: between vendors, releases, and configuration settings, there can be an overwhelming number of combinations to test. One good practice for integration testing, then, is to use something like an All-Pairs strategy to reduce the number of combinations you need to test. See http://sqa.stackexchange.com/search?q=combinatorial for more about combinatorial testing.
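To make the all-pairs idea concrete, here is a minimal sketch of a greedy pairwise generator (the parameter names and values are illustrative, and the greedy algorithm is not optimal; in practice you would reach for an established tool such as Microsoft's PICT or the allpairspy library):

```python
from itertools import combinations, product

def pairwise_suite(params):
    """Greedy all-pairs reduction: pick configurations from the full
    Cartesian product until every pair of parameter values is covered."""
    keys = list(params)
    # Every (param-index, value, param-index, value) pair that must appear
    # together in at least one chosen configuration.
    uncovered = {(i, va, j, vb)
                 for (i, a), (j, b) in combinations(list(enumerate(keys)), 2)
                 for va in params[a] for vb in params[b]}
    candidates = [dict(zip(keys, combo)) for combo in product(*params.values())]

    def newly_covered(row):
        return {(i, va, j, vb) for (i, va, j, vb) in uncovered
                if row[keys[i]] == va and row[keys[j]] == vb}

    suite = []
    while uncovered:
        best = max(candidates, key=lambda row: len(newly_covered(row)))
        uncovered -= newly_covered(best)
        suite.append(best)
    return suite

# Illustrative compatibility matrix (3 * 3 * 2 = 18 exhaustive combinations).
matrix = {
    "backend":  ["Oracle", "SAP", "Siebel"],
    "platform": ["Linux", "Windows", "AIX"],
    "cluster":  ["on", "off"],
}
suite = pairwise_suite(matrix)
print(f"{len(list(product(*matrix.values())))} exhaustive vs {len(suite)} pairwise")
```

With real matrices of a dozen parameters, the reduction is far more dramatic than in this toy example, while still guaranteeing that every pair of values is exercised together at least once.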

Of course combinatorial testing involves running the same tests over and over again, but with different values for your variables. If it takes a non-trivial amount of time to set up your tests (e.g. populating your database, creating accounts, installing servers), you may want to automate those tasks.
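One common shape for that automation is a fixture that builds a fresh, seeded environment before each run and tears it down afterwards. A minimal sketch (SQLite stands in here for whatever backend you actually target, and the table name and schema are made up for illustration):

```python
import contextlib
import os
import shutil
import sqlite3
import tempfile

@contextlib.contextmanager
def seeded_backend(seed_rows):
    """Stand up a throwaway database, load known seed data, and always
    tear it down -- so every run starts from the same state rather than
    whatever the previous run left behind."""
    workdir = tempfile.mkdtemp()
    conn = sqlite3.connect(os.path.join(workdir, "it.db"))
    try:
        conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, name TEXT)")
        conn.executemany("INSERT INTO accounts VALUES (?, ?)", seed_rows)
        conn.commit()
        yield conn
    finally:
        conn.close()
        shutil.rmtree(workdir)

# Usage: the test body always sees a fully populated backend, never a stale one.
with seeded_backend([(1, "alice"), (2, "bob")]) as db:
    assert db.execute("SELECT COUNT(*) FROM accounts").fetchone()[0] == 2
```

Test frameworks generally have first-class support for this pattern (pytest fixtures, JUnit rules), so the setup cost is paid once in code rather than once per manual run.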

Whatever strategy you come up with, automate it. Please, for your own sanity, automate it. @user246 has some really good suggestions about combinatorial testing, and I second that recommendation.

Since middleware, by definition, sits between systems that need to talk to each other, your testing strategy will need to ensure that inputs to the middleware actually reach the backing systems the way they're supposed to. You may end up writing your own test harness that sends inputs to the system under test and then reads the results back out of the backing systems.
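The shape of such a harness can be sketched in a few lines. Here the `route` function is a hypothetical stand-in for the middleware under test (your real test would call the actual product's API), and the recording backends are test doubles for the real backing systems:

```python
class RecordingBackend:
    """Test double for a backing system: records everything delivered to it,
    so the test can assert on the output side of the middleware."""
    def __init__(self):
        self.received = []

    def deliver(self, message):
        self.received.append(message)

def route(message, backends):
    """Hypothetical stand-in for the middleware under test: fans a message
    out to every registered backend."""
    for backend in backends.values():
        backend.deliver(message)

# Input side: push a message through the "middleware".
backends = {"oracle": RecordingBackend(), "sap": RecordingBackend()}
route({"order_id": 42}, backends)

# Output side: assert the message reached every backing system intact.
for name, backend in backends.items():
    assert backend.received == [{"order_id": 42}]
```

The same pattern works in reverse with stub producers feeding the middleware, which lets you test each direction of the integration in isolation before running against real backends.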

Testing clustering means that you'll probably be manipulating real or virtual hardware, depending on your environment. I would look into Puppet or Chef for infrastructure automation, to make sure your environment is configured exactly the same way every time you run your tests.

Only short general answers can be given here since the field of SQA is large enough to occupy the learning of many lifetimes. Good luck!