I am trying to automate integration testing in our team, and I am wondering if making tests with parameters is good or bad design.

My problem is that the integration tests need to run some Perl scripts from our codebase that work with a database, comparing the data in the database before and after the test. I would like to set a flag for all automated tests that reverts the database changes, so different tests will not interact with each other. But I also want to be able to disable the flag on purpose, so I can manually inspect the data without the test overwriting it immediately after finishing.

Is there a cleaner/more common solution for this? I know about database and data mocking, but I cannot use that.

This is not what we commonly mean when we speak of running tests with parameters. Running tests with parameters means running 100 tests with A=5, immediately followed by running the same 100 tests with A=6, and so on. What you seem to need is to run all your tests just once, but with specific configuration. So, all you need to do is have each one of your tests read a certain configuration file at start up, telling it whether it should revert, whether it should erase, etc.
– Mike Nakis, May 27 '15 at 13:19

@MikeNakis Personally, I think that comment sufficiently answers the question and I would upvote it if it were an answer.
– durron597, May 27 '15 at 14:05

@durron597 thank you for voicing your opinion on the matter. Let's see if anyone else agrees. (If so, I might do it; if not, I will let it pass.)
– Mike Nakis, May 27 '15 at 15:20

@MikeNakis I agree with durron, that looks like an answer to me.
– Ixrec, May 27 '15 at 16:53

1 Answer
1

When we speak of running tests with parameters, what we commonly mean is running a bunch of tests with A=5, immediately followed by running the same bunch of tests with A=6, and so on, all of this together constituting a single test run.

What you seem to need instead is to run all your tests just once, but with a specific configuration, which may change from run to run.

So, all you need to do is have each one of your tests read a certain configuration file at startup, telling it whether it should revert, erase, etc.
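To make the idea concrete, here is a minimal sketch of that pattern. The original scripts are Perl, so this Python version is purely illustrative; the config file name (`test_config.ini`), the `revert_changes` key, and the in-memory SQLite database are all assumptions, not part of the question. The point is only the shape: each test case reads one configuration value at startup and decides in teardown whether to roll back or keep its database changes.

```python
# Illustrative sketch only: a test base class that reads a config flag at
# startup and either reverts or keeps database changes after each test.
# "test_config.ini", "revert_changes", and the schema are hypothetical.
import configparser
import sqlite3
import unittest

CONFIG = configparser.ConfigParser()
# In a real suite you would call CONFIG.read("test_config.ini") here;
# read_string is used so this sketch is self-contained.
CONFIG.read_string("""
[tests]
revert_changes = true
""")

class DatabaseTestCase(unittest.TestCase):
    def setUp(self):
        # One configuration value controls all tests in the run.
        self.revert = CONFIG.getboolean("tests", "revert_changes")
        # A real suite would connect to the actual test database;
        # an in-memory database stands in for it here.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE accounts (id INTEGER, balance INTEGER)")
        self.conn.execute("INSERT INTO accounts VALUES (1, 100)")
        self.conn.commit()

    def tearDown(self):
        if self.revert:
            self.conn.rollback()  # undo the test's uncommitted changes
        else:
            self.conn.commit()    # keep the data for manual inspection
        self.conn.close()

    def test_withdraw(self):
        self.conn.execute(
            "UPDATE accounts SET balance = balance - 30 WHERE id = 1")
        balance, = self.conn.execute(
            "SELECT balance FROM accounts WHERE id = 1").fetchone()
        self.assertEqual(balance, 70)
```

Flipping `revert_changes` to `false` before a run leaves the data in place after the tests finish, which is exactly the manual-inspection mode described in the question, without parameterizing the individual tests themselves.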