I work at a place where we buy a lot of IT projects. We are currently producing a standard for system requirements for the procurement of future projects. As part of that process, we are discussing whether we can demand automated unit testing from our suppliers.

I firmly believe that proper automated unit-testing is the only way to document the quality and stability of the code. Everyone else seems to think that unit testing is an optional method that concerns the supplier alone. Thus, we will make no demands for automated unit testing, continuous testing, coverage reports, inspections of unit tests, or anything of the kind. I find this policy extremely frustrating.



The problem with "forced" unit tests is that they will almost certainly be token efforts only. They will not increase the quality of the work you get, but will only increase the cost. Unless the developers believe/know that unit testing helps them write code, forcing them to do it will most likely be counter-productive.
– Joachim Sauer Jun 26 '12 at 12:06


Should you perhaps not consider whether or not they already apply testing as part of your decision to select a supplier?
– Bart Jun 26 '12 at 12:07


Hmm, I feel your pain. If that's regarded as unimportant I'd better hope your supplier offers full support if it doesn't work as desired/expected. I would personally like to see at least some level of proper software development practices without having to force it upon them.
– Bart Jun 26 '12 at 12:18


"I firmly believe that proper automated unit-testing is the only way to document the quality and stability of the code." Any time someone states that there is only one way to do anything, it raises a red flag. There are plenty of other ways to accomplish this, including some we have not even thought of yet.
– Chad Jun 26 '12 at 12:50


@Chad: That is why I ask this question: to challenge my firm belief :-)
– Morten Jun 26 '12 at 12:51

17 Answers

I firmly believe that proper automated unit-testing is the only way to document the quality and stability of the code.

The thing is that you won't get proper automated unit testing (or only very rarely) by forcing it on people. That's a good way to get shitty tests and to drive up the cost of the project.

Personally, I would look towards some demand or SLA that involves quality, regardless of how it is accomplished. Ten years ago, unit tests were infrequent at best. You don't want to handcuff your suppliers ten years from now, when we may have better methods to ensure quality but your outdated policy requires them to use the old way.
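
To make "token effort" concrete, here is a hypothetical sketch (Python, all names invented for illustration) of the kind of test a mandate tends to produce: it runs, it passes, it bumps the test count and the coverage number, and it verifies essentially nothing.

    import unittest

    # Hypothetical code under test; invented for illustration.
    def calculate_invoice_total(line_items, tax_rate):
        subtotal = sum(qty * price for qty, price in line_items)
        return subtotal * (1 + tax_rate)

    class TokenInvoiceTest(unittest.TestCase):
        """A "token" test written only to satisfy a contract clause."""

        def test_invoice(self):
            # Executes the code, so it counts toward coverage, but asserts
            # nothing about correctness: a wrong tax calculation or a
            # rounding error would still pass.
            total = calculate_invoice_total([(2, 9.99)], tax_rate=0.25)
            self.assertIsNotNone(total)

    if __name__ == "__main__":
        unittest.main()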

+1 I can write bad unit tests that appear to test everything but do not actually test it. That does not add quality or prove anything.
– Chad Jun 26 '12 at 12:55


@Chad Yes, that's certainly true, but the customer could also demand code coverage metrics and the source code for the tests, if the customer thinks they can distinguish between a giant integration test that boosts coverage and true unit tests. Even if the customer demands these things, the developers can still game the system; it just gets a little more challenging.
– Chris O Jun 26 '12 at 14:07


@ChrisO: Exactly. That it can be gamed proves it does not meet the OP's requirements.
– Chad Jun 26 '12 at 15:37


@JarrodRoberson - Yes, code coverage is only a statistical metric, by no means does it guarantee that the automated tests are in fact good automated tests. Management, and some customers, just love statistical metrics.
– Chris O Jun 26 '12 at 18:01


@MatthewFlynn: The tests run with the mock data/structures without causing exceptions. Lots of things run great in the happy path with expected inputs.
– Telastyn Jun 26 '12 at 18:41

Personally I think that in your case you should be thinking in terms of acceptance tests instead:

You have a black box given to you, and you expect it to behave in a certain way.

You will not pay until it does.

Write unit tests exercising the behavior you need to see; if they fail, the supplier has to fix it.
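
A minimal sketch of that idea in Python; the deliverable and its API are invented here, and the convert_currency stub stands in for the supplier's black box so the example actually runs:

    import unittest

    def convert_currency(amount, rate):
        """Stand-in for the supplier's black-box deliverable, invented so
        this sketch is runnable; in practice you would call the delivered
        component instead."""
        if amount < 0:
            raise ValueError("amount must be non-negative")
        return amount * rate

    class AcceptanceTest(unittest.TestCase):
        """Executable acceptance criteria: if these fail, the supplier fixes it."""

        def test_converts_at_the_agreed_rate(self):
            self.assertAlmostEqual(convert_currency(100.0, rate=1.25), 125.0)

        def test_rejects_negative_amounts(self):
            with self.assertRaises(ValueError):
                convert_currency(-1.0, rate=1.25)

    if __name__ == "__main__":
        unittest.main()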

Also note that this is a matter of trust. If you do not trust your supplier, then you need to get the source code, inspect it, and compile it yourself. Anything less than that means that you trust them at least somewhat.

@Morten as you are involved in the design, I would suggest using TDD to design the APIs (including error conditions), and then leaving it to the programming team to make the tests pass.
– user1249 Jun 26 '12 at 12:39

I firmly believe that proper automated unit-testing is the only way to document the quality and stability of the code.

It surprises me how common this thinking is. Automated, yes. Unit testing (alone), no. There's way more to automated software testing than unit tests alone. What about integration, system, functional, and QA testing? For some reason people tend to think, "OK, so we have proper unit tests. Done with testing, call it Friday evening!". Unit testing is just the beginning.

Anyway, back on topic. I agree with others saying that forcing anything on anybody will probably yield results opposite to those desired. You never know how a team works; maybe they have a million-dollar testing department and never wrote a single unit test.

At my first job I worked at a place where we had 0 unit tests (we were a bunch of juniors thrown at more or less serious stuff). Believe it or not, it worked. Sure, nobody was confident about why a bug got fixed, or what the fix broke, but it worked. There were times when some absolutely random bug would pop up, but a baseball bat, the risk of having your apartment burnt down, and some extra hours can work wonders. Maybe your suppliers use similar techniques?

I doubt very much that your management will be willing to pay for proper unit testing in a contract. Proper unit tests cost just as much as the code they test, but give little perceived value to the end user, so they are not going to be seen as equally valuable. No quality development firm is going to discount the effort spent on unit tests; they aren't hurting for work, and they can make more by finding two contracts that take the same amount of time and have no unit-test requirements.

Demanding unit tests will likely raise the quotes you receive to an unreasonable level, and the requirement is liable to be the first concession made to get a lower price.

The cost of not having unit tests depends on how much you will be extending/supporting the code yourselves. Getting to inspect parts of the code to get an idea of quality would also be important.

If you are just buying the projects so you can use them like a 3rd party library and don't believe you will modify them, then the risk of lower-quality code is less, so long as it actually works.

These are ultimately business decisions, although you have to make sure whoever is making the decision is aware of the technical evaluation as well. If you need to explain it to management, explain that it is like buying a used car. Ultimately it's up to the buyer to decide if it's worth it, but taking it to a mechanic is a good idea so you know you aren't getting a lemon.

You are paying; you can demand whatever you want, including copies of or reports on all their unit testing.

You could even write the tests, or at least the test specifications, yourself.

I agree with your view in that it's a very good measure of code quality. If a supplier refused this demand, it would ring alarm bells: why wouldn't they want to do it, unless they have low quality standards and take shortcuts?

Would I want untested code? Can anyone produce large pieces of stable, reliable software without unit testing?
– Morten Jun 26 '12 at 12:08


@Morten: you don't want untested code, but automated unit testing is not the only way to test code. It is just one building block among others for improving code quality.
– Doc Brown Jun 26 '12 at 12:17


@Morten: Unit-testing is not the only way to test code. Before 1996 nobody did any kind of formal unit testing but software still worked.
– gbjbaanb Jun 26 '12 at 14:26


@gbjbaanb Are you arguing that just because it's new it's not useful? :p I've worked on software with and without unit testing, and the ones with unit tests were significantly easier and faster to write (mainly because finding and fixing bugs was easier).
– Brendan Long Jun 26 '12 at 14:43


It isn't black and white, tested vs. untested. Lack of automated tests doesn't mean the code isn't tested, just that the testing isn't automated. I have seen hundreds of thousands of lines of automated test code, more than 3X as much as the actual code, on a project riddled with bugs. I have seen 600K lines of complex concurrent code with ZERO automated unit tests that ran in production for years and years with zero downtime or bugs.
– Jarrod Roberson Jun 26 '12 at 15:41

You are absolutely right that your project needs automated unit-testing, continuous testing, coverage-reports, and inspections of unit-tests.

However, demanding things will not achieve the results you desire, as others have detailed.

Your challenge is to explain and persuade people: a much harder skill!

I would start with management, explaining the pros and cons of testing and the payoff down the road. Please be careful not to communicate the emotion behind statements like 'I write PROPER unit tests' (capitalization yours). You don't want to 'shout' the words (as ALL CAPS implies); you want to persuade and convince people so that they themselves can pick the right solution.

Ultimately, if you can't introduce these methodologies and get them embraced where you are, and if you are as passionate about them as you state (which is good!), I would move on to a different company, as there are many that do value these things and would welcome you on board. Just make sure you are up front about them in interviews so they know where your passions lie and whether you'll be a good fit.

Forcing automated testing on someone will not achieve the results you desire, as @Joachim Sauer and @Telastyn pointed out in their comments. For a lot of people automated unit testing is a huge shift in their thinking, because a lot of people write code that works but is not very testable. I could write an ASP.NET webpage where all the logic (querying the database, business rules, objects, etc.) is in the code-behind. Will the page work? Yes. Is it testable using automated unit testing? Absolutely not. If a supplier does not practice automated unit testing, it will take quite an effort to learn how to write unit tests properly and, as a result of learning this, to rewrite or refactor their code to make it more easily testable. Chances are they are going to pass that cost on to you.
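
The ASP.NET example translates to any stack; here is the same idea sketched in Python (all names and the discount rule invented for illustration). Once the business rule is pulled out of the page handler, away from the database query and the HTML rendering, an ordinary unit test can reach it without a server.

    import unittest

    # The business rule extracted into a pure function. In the untestable
    # version, this logic sits inside the page handler, tangled up with the
    # database query and the rendering, so exercising it needs a running app.
    def loyalty_discount(order_count, years_as_customer):
        """Discount rate for a returning customer (rule invented for illustration)."""
        return 0.15 if order_count > 10 and years_as_customer >= 2 else 0.0

    class LoyaltyDiscountTest(unittest.TestCase):
        def test_loyal_customer_gets_discount(self):
            self.assertEqual(loyalty_discount(order_count=11, years_as_customer=2), 0.15)

        def test_new_customer_gets_no_discount(self):
            self.assertEqual(loyalty_discount(order_count=1, years_as_customer=0), 0.0)

    if __name__ == "__main__":
        unittest.main()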

The fact of the matter is that the supplier is giving you a product, be it a .dll or a Windows application, and you expect it to work 99% of the time. Sure, there are bugs here and there, but for the most part it should work. That is a reasonable expectation, especially if the supplier wants to retain your business. If it is a black box, then it doesn't really matter how they get it to work; they could use a human wave of testers, or a room full of monkeys randomly hitting keys, as long as it works. But they would need to provide you with some other sort of documentation on how to use it.

However, if they gave you the source code so you could modify it, then I would expect unit tests. I would not work with a company who doesn't supply unit tests. How else would you know a modification you make doesn't hose the whole thing?

Unit testing is an indication of how that supplier handles risks during the development cycle. Those who use unit tests value reducing risk, and the quality of those tests is an indication of how much risk has been managed.

With that said, unit tests do not define what level of risk that project is attempting to tackle. It also doesn't play any role in the reduction of risk introduced by bad programming practices.

Therefore, you could have one supplier with solid testing practices in place who continues to write highly risky code, while another supplier does zero testing but writes low-risk code. If the two suppliers offer the same product, then it's best to go with the low-risk supplier.

This can only be gauged by interviewing, mentoring and learning about the personalities and skills of the people involved with that supplier.

Maybe management and the vendor had a meeting, and the vendor said the code costs $X and the unit tests cost $Y. Penny-pinching management said: we just want the code.

The vendor decided to write unit tests anyway (because of quality) and just not share them with you (because of competitive advantage).

Now you need to make some major adjustments to the software. Since you own the code, you could do it yourself or bid it out competitively (not necessarily to the original vendor). But since you did not buy the unit tests, the original vendor will be able to do work of comparable quality at a cheaper price than any competitor.

It seems to me that including this requirement would have little practical benefit, since it would be impossible to enforce.

You could demand the code for the actual tests, or a report of which tests were run and passed. If it is a proprietary project, the vendor would probably not want to supply that, since it would likely be highly suggestive of the details of the code, and they may bristle at being made to provide low-level implementation details and told how to do their job. At some point, there has to be some trust.

If you don't, they could blow you off by just checking the box next to "runs a suite of unit tests", fulfilling the requirement with a single test that passes if their utility compiles and runs, or a similarly half-assed effort.

So while automated unit testing is a commendable practice, I don't think it's practical to insist on it from outside vendors.

As many people have pointed out, forcing vendors to write unit tests (or better, a combination of automated unit and integration tests) to fulfill a contract will probably not yield high-quality results. On the other hand, I do agree that automated testing is by far the best method currently in use to ensure the quality of code.

I think code written with unit tests is much easier to maintain than code that was written first and had unit tests added later: it forces modularity and minimal dependencies. Integration tests are also necessary for verifying the correctness of the code, but they don't have the same impact on the quality of the code as it is written. In essence, automated integration tests can be added after the fact, but unit tests have the most impact when written together with the original code.

My advice for your situation is to look for vendors who prefer to write code with unit tests, and choose them over vendors who tend not to. If management will pay for it, put automated tests in the contract, but ONLY IF THE VENDOR IS USED TO WRITING CODE WITH UNIT TESTS.

In your role as a customer your main concern is not unit-testing

If you are using suppliers who produce software for you, then you really shouldn't be concerned with whether they use one methodology or another. Your stake is to acquire some kind of solution that will help achieve your goals. The only thing you should care about is whether or not that solution is acceptable. That's why we have acceptance testing, as it is your responsibility to make sure that you get what you want. It is at the crucial moment of customer acceptance that money moves from your company's pocket into the supplier's pocket.

You could demand unit tests as a deliverable requirement, but there are several inherent problems with that, the most severe being that there is no sure-fire way to determine the metrics beforehand:

What is an acceptable number of unit tests?

Should there be 10 tests? How about 100? How about 1000? Actually, it is quite difficult to determine at the beginning how many tests you will need. The actual number is really indeterminable... like the halting problem... but we're not solving that problem.

You just want software that has unit tests so you can continue development. Unit tests don't tell you what you've broken yet, but they're extremely well suited to telling you when the code has a regression bug.

What is an acceptable level of code coverage?

"100%, of course!" you'd think. Unfortunately that metric is misleading; even if you had 100% code coverage, are you really sure things are working as expected? It is possible to have 100% coverage but not be done.

What you really need to do is exploratory testing, i.e. find someone who is really good at breaking stuff and let them do the testing, to find the bugs that no developer has even thought of.

Also, 100% is sometimes unattainable with pure unit tests if you have some necessary performance hacks or use design patterns that are difficult to test (search for "singleton" and "tdd" in your favorite search engine and you'll find some examples).

You want the delivered software to work and the specification document is your only warranty that it will.

You will need higher levels of testing

Your specification document has to be verified somehow. Each point has to be gone through with your suppliers, with clear goals and acceptance criteria. A well-functioning QA organization (or an awesome tester, if you're on a budget and a limited scope) would provide the test cases to check these acceptance criteria. You also need someone to verify those acceptance criteria.

There are several ways to verify your goals, and if someone tells me that you can't set any sane quality, performance, and efficiency goals, I will hit them over the head with big, heavy books on exploratory, performance, and usability testing respectively. It is easy to overdo the goals, but knowledge and communication will help you set realistic ones.

I'm not a lawyer, but most project contracts (which are basically the mother of all specifications for the project) I've read usually have a defect-ratio criterion that stipulates how many bugs are deemed acceptable. Bugs are usually weighted by severity: show-stopping bugs found by QA have a low tolerance, while minor blemishes have a high tolerance. In real projects it is difficult to demand that software have zero defects; deadlines usually put a stop to that practice. It is in these situations that you have to start bargaining over scope.

Most supplied software I've seen isn't delivered with unit tests. You could argue the suppliers should be professional enough to deliver this; however, the chief reason you want unit tests delivered is to make sure that you don't get regression bugs and to enable refactoring. In real life, with projects on tight deadlines, both the supplier and the customer will lower the scope, and unit tests will usually go out the window and be removed from the list of required deliverables.

It is a bit sad that high-profile open source software comes delivered with unit tests but professional software suppliers can't manage the same, right?

So when do I, as a customer, get to care about unit testing?

I would argue that the only time you truly care about unit testing is when the delivered software is a self-sufficient component that isn't executed as a stand-alone program, for which the coarsest testing you can do is a unit test. Class libraries would be one kind of product that can be delivered together with unit tests.

There are lots of projects that are very successful despite the fact that they don't use unit tests. Just look at the Linux kernel: it's a gigantic project with a complexity well beyond what you would find in any normal project, and it still works. So if the result is good software, you shouldn't care how they achieved it.