It takes more time to write the tests for feature XYZ than the actual code. How do you justify spending so much time on the test code?

OK, let’s start by exploring when you would not bother writing any tests for feature XYZ. If it belongs to a hobby project that neither needs to be maintained over a long period nor needs to be particularly correct, skip the tests and hack away. The same goes for throwaway scripts and promotional sites: they don’t live long enough to enter maintenance, and nobody dies if a few defects creep in.

Second, let’s remind ourselves why we write tests alongside code. Beyond the obvious role of checking behavior, tests protect against regressions, serve as documentation, and force the code to be testable in the first place. Given that maintenance accounts for 60%–80% of a piece of software’s total cost, investing in the maintainability that testable code offers is usually money well spent.

Having handled the exceptions and the motivation, let’s get to the real issue: it should never be hard to write tests for feature XYZ in the first place! If adding unit tests is hard, it means testability wasn’t considered when the code was written. In legacy code, it’s not uncommon that adding a feature takes a few minutes while testing it seems like an insurmountable obstacle. In such cases, it may actually be wiser to skip unit tests altogether and go for end-to-end tests. If integration testing is hard, the required infrastructure is probably missing (again, a testability issue), and adding the one test that would tease part of it out may seem like too big a step.
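To make the testability point concrete, here is a minimal Python sketch (the `is_expired` function and its clock dependency are hypothetical, invented for illustration). The first version hides its dependency on the system clock, so a test cannot control “now”; the second accepts the clock as a parameter, and the test that follows becomes trivial and deterministic:

```python
import time

# Hard to test: "now" is buried inside the function, so a test
# cannot pin it down and the result changes from run to run.
def is_expired_untestable(deadline: float) -> bool:
    return time.time() > deadline

# Testable: the clock is injected as a parameter. Production code
# passes the default time.time; a test passes a fixed fake.
def is_expired(deadline: float, clock=time.time) -> bool:
    return clock() > deadline

# With the clock pinned, the assertions are deterministic.
assert is_expired(deadline=100.0, clock=lambda: 101.0)
assert not is_expired(deadline=100.0, clock=lambda: 99.0)
```

The difference in design effort is a single parameter, but it is much easier to add while the function is being written than to retrofit across a legacy codebase later.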

The bottom line is that if testability is taken into account from the start, the code never ends up in a state where adding tests to it is difficult.