Fail instead of Skip a Test when TestNG’s DataProvider throws an Exception

A quite tricky problem I was faced with recently was a failing TestNG DataProvider which threw an exception because of corrupt test data. The critical point was that neither the Maven Surefire Plugin nor the Eclipse TestNG plugin failed the dependent tests; they were only skipped. That's problematic: when the test data gets corrupted (e.g. due to an update) I actually want to be informed explicitly, and this information shouldn't just be swallowed. The only indicator for a failure was the number of skipped tests which Surefire prints after each run, a detail which can easily be missed in a large console output.

The problem

So I constructed a minimal working example with a DataProvider that just throws a NullPointerException and a test that does nothing but depend on the broken DataProvider. Then I took the sources of TestNG and turned on my debugger. I finally got to the class org.testng.internal.Invoker: there is a method invokeTestMethods(..) in which I found the following code:

Now consider that the bag instance has errors (and the internally held errorResult instance also says explicitly that the status is "failure"). The method call registerSkippedTestResult(..) changes that to a skip! Obviously I had located the reason for my problem, although the intention of this code is still not clear to me…

Solution 1: “Selftest”-Method in DataProvider

The (easiest and arguably most naive) solution is to provide a self test which invokes the DataProvider directly. The tests which depend on the DataProvider will still be skipped, but the additional self test bypasses the TestNG mechanism and throws the exception directly. Hence TestNG fails the test run as it should.

@Test
public void selftest() {
    // Invoke the DataProvider directly; any exception it throws
    // makes this test (and thus the whole run) fail.
    TestDataProvider.createTestcases();
}
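For context, here is a minimal sketch of what such a DataProvider class might look like. The class and method names follow the snippet above; the data source and its failure mode are purely hypothetical:

```java
import org.testng.annotations.DataProvider;

public class TestDataProvider {

    // Builds the test cases; any exception (e.g. a NullPointerException
    // caused by corrupt test data) propagates to the caller.
    public static Object[][] createTestcases() {
        String rawData = loadRawData();        // may return null on corrupt data
        String[] fields = rawData.split(";");  // throws NPE if rawData is null
        return new Object[][] { { fields[0] } };
    }

    // TestNG calls this method to feed the data-driven tests.
    @DataProvider(name = "testcases")
    public static Object[][] testcases() {
        return createTestcases();
    }

    // Hypothetical data source; stands in for a file or database read.
    private static String loadRawData() {
        return "some;test;data";
    }
}
```

The self test above simply calls createTestcases() once more; if loadRawData() ever returned null, the NullPointerException would surface in the self test instead of being swallowed as a skip.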

The benefit of this solution is its simplicity and its robustness: no future release of TestNG can break this approach (except a change that ignores exceptions occurring in test methods, but I can hardly believe that will ever happen). The drawback is that you have to modify your code because of an issue in TestNG (I really don't like being forced to change my code because of problems in third-party libraries). Furthermore there are additional runtime costs to consider: the DataProvider is called at least once more than actually needed. Depending on the DataProvider's logic, that may be more or less critical.

Solution 2: Return an empty array

Cédric Beust (the developer behind TestNG) gave a solution in [1] for how to handle this issue. The trick is to surround the fallible code in the DataProvider with a try..catch and return an empty array in the catch clause:
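A sketch of that pattern, assuming a hypothetical provider whose data-building code may throw (the method and class names are illustrative, not from the original post):

```java
import org.testng.annotations.DataProvider;

public class SafeDataProvider {

    @DataProvider(name = "testcases")
    public static Object[][] testcases() {
        try {
            return buildTestcases(); // may throw on corrupt test data
        } catch (Throwable t) {
            // Swallow the error and hand TestNG an empty array.
            // According to [1] this makes the run fail instead of
            // silently skipping the dependent tests.
            return new Object[0][0];
        }
    }

    // Hypothetical data-building logic; here it always fails,
    // simulating corrupt test data.
    private static Object[][] buildTestcases() {
        String rawData = null;
        return new Object[][] { { rawData.trim() } }; // NullPointerException
    }
}
```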

The big pro of this approach is how lightweight it is. There isn't much code to write and the solution is fairly easy to adopt for other DataProviders. Again the drawback is that you have to modify your code because of an issue in TestNG. Furthermore, without any documentation, every developer would wonder about the strange-looking statement inside the catch clause, as well as about the widely disfavored coding style of catching a Throwable. Finally, why this solution works at all is something I guess only Cédric Beust understands. By the way: interestingly, using an empty list instead doesn't do the job for me…

Solution 3: Return an exception-throwing Iterator

The pros and cons of this solution are largely the same as for solution 2, "Return an empty array", as the approaches are very similar. One benefit is that it is more stable against changes/updates of TestNG: if some future release breaks whatever makes solution 2 work, this solution will still perform well. A disadvantage is the approach itself: an Iterator which does nothing but throw exceptions isn't a piece of code one could be proud of.
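Reconstructed from the description above, such a DataProvider could catch the error and return an Iterator whose methods just rethrow it (a sketch with illustrative names; TestNG accepts Iterator&lt;Object[]&gt; as a DataProvider return type):

```java
import java.util.Iterator;
import org.testng.annotations.DataProvider;

public class ThrowingIteratorDataProvider {

    @DataProvider(name = "testcases")
    public static Iterator<Object[]> testcases() {
        try {
            return buildTestcases(); // may throw on corrupt test data
        } catch (final Throwable t) {
            // Defer the error: every access to the iterator rethrows it,
            // so TestNG runs into the exception while fetching data sets.
            return new Iterator<Object[]>() {
                public boolean hasNext() { throw new RuntimeException(t); }
                public Object[] next()   { throw new RuntimeException(t); }
                public void remove()     { throw new RuntimeException(t); }
            };
        }
    }

    // Hypothetical data-building logic; always fails here,
    // simulating corrupt test data.
    private static Iterator<Object[]> buildTestcases() {
        throw new NullPointerException("corrupt test data");
    }
}
```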

Solution 4: FailListener

TestNG gives you the ability to register listeners for your test execution [2]. So you can code a "FailListener" which switches every skipped test to a failed one:
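A sketch of such a listener, built on TestNG's TestListenerAdapter (the onTestSkipped hook and the FAILURE status constant are part of TestNG's public API; the exact original implementation may differ):

```java
import org.testng.ITestResult;
import org.testng.TestListenerAdapter;

public class FailListener extends TestListenerAdapter {

    @Override
    public void onTestSkipped(ITestResult result) {
        // Turn every skip into a failure so that skipped tests
        // (e.g. caused by a broken DataProvider) break the build.
        result.setStatus(ITestResult.FAILURE);
    }
}
```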

The listener can be attached to the test class like this (for some other ways, see [2]):

@Listeners({FailListener.class})
public class TestNGTest { .. }

One of the big benefits is the loose coupling: test code and workaround code are separated into two independent classes. This also supports DRY ("Don't repeat yourself") since no try..catch blocks (like in the other solutions) are needed. On the other hand, setting the ITestResult to failed like this must be called a dirty hack. What if in a future release the given test result is a clone of the original one? Changes to that instance wouldn't be recognized by TestNG and the whole solution would stop working. Furthermore, all skipped tests are marked as failed, even those that are supposed to be skipped, so you indirectly ban the use of skips in your test environment. That's especially problematic because although the listener is attached to one class, it is called for all tests in the same test suite!

Conclusion

In this post I discussed four different approaches to work around a TestNG issue related to exceptions occurring in DataProviders: dependent tests are skipped instead of being marked as failed. All approaches have their pros and cons, so the given circumstances will determine which solution fits best. For me, solution 2, "Return an empty array", did the job.

Rolf Engelhard

Failure in a dataprovider is not a test failure, it’s a configuration failure. It doesn’t mean there’s something wrong in your code, just that your testing harness failed to initialize properly. This is why it’s marked as a SKIP and not a FAIL.

First of all, welcome to my site and thank you for sharing your point of view.

Of course I agree with you that a failure in a DataProvider and a failure in my code (detected by a failing test) aren't the same. But as you said: a failure in a DataProvider is a configuration *failure*. And a tester wants to know about occurring failures. A SKIP doesn't satisfy that need.

Furthermore I can skip tests myself, and that's not a configuration failure (well, most of the time ;-)). So how should I decide whether a SKIP is intended (= 'manually' skipped) or the outcome of an error in my DataProvider?

If there were e.g. an additional state "CONFIGURATION_FAILURE" besides SUCCESS, FAIL and SKIP, I could react to that state properly. Or an additional attribute for the @DataProvider annotation in which I could explicitly declare that I want a FAIL instead of a SKIP…. Christmas is just around the corner, maybe there's a nice present in one of the next TestNG releases for me? 🙂

Well, at first I couldn’t reproduce your example. I built a “broken” DataProvider which simply throws a NullPointerException and a test with an empty init method (annotated with @BeforeClass). No fail.

Then I called the DataProvider directly in the init method, which led me to the same result you show in the attached image. But writing an init method which isn’t needed for the test setup and calling the DataProvider without needing the test data isn’t really a ‘clean’ solution for me. Maybe I misunderstood your suggestion?

This is still present in the current TestNG version (6.8.5).
Failures in data providers are reported as test skips and nothing else (i.e. not as a configuration failure).
IMHO this is clearly a bug and not “working as designed”.

About this blog

The intention of this blog is to give clues and solutions to problems I struggle with in my everyday work. As not everything is worth writing about and I also don't want to post stuff which can be found on every corner of the internet, this blog is updated continuously but not regularly.

If this blog helped you in some way or you want to share your point of view on a given topic, feel free to do so. I appreciate every comment and try to answer as fast as my spare time allows.