Programmers may have their own unit tests, perhaps as the result of doing Test-Driven Development. They run these tests very often, maybe every few minutes, but at least a few times each day.

In my experience, QA's automated tests tend to go after bigger chunks of functionality, and are therefore slower and usually less reliable than programmers' unit tests. Still, there is a lot of value in getting the results of these tests into programmers' hands as soon as possible.

Should QA push for programmers running QA's automated tests? Should QA insist that all of their automated tests pass before checking in? Should the nightly build process include a run of QA's automated tests?

(I wrote "nightly build", but that only makes sense if you have a nightly build. I suspect that many shops don't have this. I suspect that how QA is done varies very widely, and that awareness of other ways of doing things is very limited.)

Context for my answer: I am assuming that "programmers" in your question means developers of the application, not testers with programming skills.

I think there is an inherent motivation that testers (even dedicated software engineers in test) have that developers don't: writing, running, and maintaining tests adds value to the work a tester does. It can reduce their manual testing workload, and the better the automation runs, the more time they have to do other "testing stuff".

For programmers, it is just another task they need to do that stops them from adding new features.

Ideally, you should have testers who are trained developers, and the tests should be part of the daily build-and-test process, which you can get QA to run. It is in their interest for that process to run as smoothly as possible.

If developers want to run the tests, then of course they can; however, your testers should have that as a primary responsibility.

One counterargument I've heard in favour of programmers taking some responsibility for maintaining QA tests, is that if someone makes a change that breaks a lot of existing tests, they should feel some of the pain.
–
testerab♦May 5 '11 at 23:30

I've heard that, but I don't believe in it. If the tests are a shared framework, then certainly, but if we are talking in the context of QA-driven tests, then developers should not be updating them. If the change is supposed to be there and the tests are out of date, QA needs to update them with the new information. If the change is a wider-ranging issue, then the developers should fix it and let the tests pass.
–
MichaelFMay 10 '11 at 13:11

Obviously it's folly to think that engineers could run tests from some other shadowy team that they have no rapport with. However, if you have engineers and testers embedded on the same teams, then it's more likely that testers and engineers can run each other's tests.

More important than the tests themselves is the output. If the output isn't useful to an engineer, then you might not be getting great value from an engineer running your tests. One thing that I've done to help bridge the gap is to tag tests into suites that are useful to engineers. This way they can run a slice of tests that pertains to their specific work without having to run everything.
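A minimal sketch of that tagging approach, assuming pytest (the marker and test names here are made up for illustration):

    import pytest

    # Tag tests into suites with pytest markers so engineers can run a slice, e.g.:
    #   pytest -m checkout            # only checkout-related tests
    #   pytest -m "smoke and not ui"  # a quick smoke pass
    # Register the marker names in pytest.ini to avoid "unknown marker" warnings.

    @pytest.mark.smoke
    def test_cart_total_is_sum_of_items():
        cart = {"apple": 2, "pear": 1}  # stand-in for a real fixture
        assert sum(cart.values()) == 3

    @pytest.mark.checkout
    def test_empty_cart_total_is_zero():
        assert sum({}.values()) == 0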

I would make it automatic that some kind of test is run post-commit (via a trigger, or preferably a build system).

If the test is too extensive, then scale the post-commit test back to run in a reasonable amount of time (so the dev gets feedback quickly).
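As a minimal sketch of such a post-commit trigger, assuming a git repo and a pytest "smoke" marker (both the marker and the tests/ layout are assumptions):

    #!/usr/bin/env python3
    # .git/hooks/post-commit: run a scaled-back smoke subset after each commit
    # so the developer gets feedback within minutes. The "-m smoke" marker and
    # the tests/ path are assumptions about the project, not a fixed convention.
    import subprocess
    import sys

    result = subprocess.run(["pytest", "-m", "smoke", "--maxfail=5", "tests/"])
    sys.exit(result.returncode)  # nonzero exit flags the failure in the console

A real build system (Hudson, CruiseControl, etc.) would do the same thing server-side, which scales better than per-developer hooks.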

If the longform test (because longform makes it better, right?) takes X hours to run, I would recommend running it every X/2 or X/3 hours on 2 or 3 different build systems. This will give you feedback consistently throughout the day.

The ability to single out those individual tests would be very useful. You run the longform suite and find that tests 15 and 29 fail. Now you can run just 15 and 29 before you commit: get them passing, run the quick tests, commit, and let the full suite run. Sure, it's possible you broke other tests. Usually you don't.
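With pytest, for instance, that looks like selecting node IDs, or simply re-running the last failures (the file and test names below are illustrative):

    # Re-run only the two failing tests before committing; node IDs are illustrative.
    import pytest

    pytest.main([
        "tests/test_longform.py::test_15",
        "tests/test_longform.py::test_29",
    ])
    # Equivalently, "pytest --lf" re-runs whatever failed in the last full run.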

I certainly would never discourage developers from running automated tests, or even manual tests. Honestly, if they can stop a bug before it gets to the testers, that's awesome. That saves the company up to two hours' worth of time for even a simple fix, when you factor in how long it takes the developer to pick up where he left off when the report finally comes back from QA.

"If the test is too extensive, then scale the post-commit test back to run in a reasonable amount of time (so the dev gets feedback quickly.)" The other benefit to this type of post-commit test is your test engineers know they probably have a good build. You hate to see a team start testing only to find out an hour in the build was bad. A smoke test run at check in can help prevent that.
–
CKleinMay 4 '11 at 13:23

This depends on the organization. Where I have worked, QA teams tended to prioritize breadth over runtime or simplicity of setup. The "QA environment" may require some special settings and configuration for mock objects, fake data, and so on. Developers, on the other hand, prefer tests that run quickly so that they can be integrated into their edit/build/run cycle.

I think a better division of labor is for QA to run the tests frequently (e.g. after every build or daily) and make the results available to developers.

Should QA insist that all of their automated tests pass before checking in?

At my last employer, the QA test suite involved multiple operating systems (and several different service packs of each), so it would not have been possible for devs to run those tests as part of the smoke/TDD tests. Some of the other tests could not be automated, as they required additional software to simulate things like "out of memory" or "disk full, try again" scenarios.
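Depending on the stack, some failure scenarios like that can be faked in-process rather than with extra software. A hedged sketch using Python's unittest.mock, where save_report is a hypothetical stand-in for real code under test:

    # Simulate a "disk full" write failure with unittest.mock instead of
    # special tooling; save_report() is a hypothetical function under test.
    import errno
    from unittest import mock

    def save_report(path, data):
        with open(path, "w") as f:
            f.write(data)

    def test_save_report_surfaces_disk_full():
        disk_full = OSError(errno.ENOSPC, "No space left on device")
        with mock.patch("builtins.open", side_effect=disk_full):
            try:
                save_report("/tmp/report.txt", "hello")
            except OSError as e:
                assert e.errno == errno.ENOSPC
            else:
                assert False, "expected OSError"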

Should the nightly build process include a run of QA's automated tests?

Many companies have QA developing full functional automation suites with tools like Selenium. Those can be used as automated smoke tests and hooked into build-integration tools like Hudson and Maven. Each time the developers add something new to the application, they can just trigger the suite and check the application's functionality. This works great for regression testing. So, as the previous posts suggest, if the QA-developed suite is functionally sound, the developers can benefit a lot from it.
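A minimal sketch of such a Selenium smoke test that a build job (Hudson/Jenkins) could trigger per build, assuming the Python bindings (the staging URL and element ID are hypothetical):

    # Minimal Selenium smoke test a CI job could run after each build.
    # The staging URL and the element ID are hypothetical placeholders.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    def test_login_page_renders():
        driver = webdriver.Firefox()
        try:
            driver.get("https://staging.example.com/login")
            assert driver.find_element(By.ID, "username")  # login form rendered
        finally:
            driver.quit()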

I'd like to pipe in with one smaller piece here for shops where there is no mature test automation framework, or where the test automation takes too long to realistically run every night, or requires manual setup.
I'm not sure about other shops, but my test automation for enhancements and bug fixes is normally done before the application code is ready, with the exception of having to add values to a few variables.
Even having the developer run only the tests for the functionality that is intentionally being modified is already a boon, and I don't know many developers who wouldn't be willing to run those before asking to promote to a test environment.

The development team handles design, development, and unit testing of the code. With Scrum/TDD, I am afraid they may not have enough bandwidth to execute integration/functional test scenarios.

If developers run QA automation code, we could identify:
- obvious bugs in the functionality
- cross-browser failures in the case of UI testing
- edge cases that fail under certain conditions

One option is for the developer and tester to sit together and do a round of functional testing at the end of the development phase. The earlier bugs are detected, the less it costs to fix them.

This is more of a mindset shift than a matter of identifying/setting boundaries between the development and test teams.

Automated tests are mostly written for existing functionality, or for new functionality that has been handed over for testing and thoroughly tested manually many times before the actual automation starts; i.e., they are mostly written for regression test suites. Such automated tests can be used by developers during their unit testing to ensure new development has not disturbed existing features/functionality.

On the other hand, automation suites cover the overall features of the product/application and consume considerable execution time, which is acceptable in the testing phase. When they are used by a developer, though, they will increase completion time in the development phase. Hence, I would suggest developers use QA-written automated test suites in the following situations:

a. Development of a complex feature where the change is suspected to impact core features of the application

b. A QA automated test suite is available that is limited (not very detailed) but still covers the major functional areas