6 Answers

I say the testers should develop the automated tests - they have the "outside pair of eyes" approach to the code, and will (or should that be just 'may'?) spot bugs that you haven't thought of, let alone handled.

Plus their understanding of the functional requirements might sit at a higher level than the developers' - somewhere between the hardcore low-level knowledge of the programmer and the high-level view of the BA.

But couldn't they just ask the developers to write that test case?
–
Jader DiasFeb 10 '11 at 18:39


For the reasons I said above, developers know far more about the internal implementation and would approach the software differently from someone coming at it from the outside.
–
James LoveFeb 10 '11 at 18:41

I can't see how a test case implementation could differ from one person to another.
–
Jader DiasFeb 10 '11 at 18:43


@Jader, you want to have different people write the automated tests than wrote the original code. Otherwise the tests will be biased to work with the code as it was written.
–
MarcieFeb 10 '11 at 18:46


During the past couple of weeks, I've had a developer helping me with writing fixtures for his code. He's a very good dev, but he definitely misses some nuances of coverage for his code just because he doesn't think about coverage in depth on a regular basis. If devs do help with testing, have a tester review their work.
–
Ethel EvansFeb 11 '11 at 0:15

In my opinion, developers and testers are responsible for different types of tests.

The developer, while writing the logic, should also be writing unit and integration tests. This allows the dev to make sure that what they've written so far works as intended. Additionally, these tests will still be around for the dev to run when they make future changes. Once the logic is completed, the dev can be confident that what is written works as they understand the specifications, and can pass it off to the tester.
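For instance, a developer-written unit test can be as small as this - a minimal sketch assuming a Python codebase with pytest-style test functions (the `apply_discount` logic and all names are illustrative, not from the original question):

```python
# discount.py - illustrative logic under test
def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent; rejects out-of-range input."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


# test_discount.py - unit tests the developer writes alongside the logic.
# They stay in the suite as change detectors for future modifications.
def test_typical_discount():
    assert apply_discount(100.0, 25) == 75.0

def test_no_discount():
    assert apply_discount(80.0, 0) == 80.0

def test_invalid_percent_rejected():
    try:
        apply_discount(50.0, 150)
        assert False, "expected ValueError"
    except ValueError:
        pass
```

Tests like these verify the dev's own understanding of the spec; they don't replace the outside-in system tests the tester writes later.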

The tester from this point should be responsible for writing system wide tests that make sure the business logic works as intended.

Given that devs are often way too attached to the code, testers should be able to help clean up the dev's tests but not vice versa.

Curious about your last sentence - you don't think devs can contribute to functional tests? What if the testers outline the test structure and identify test cases and the devs only help with implementation?
–
Miss CellanieFeb 11 '11 at 6:26


I think I'm imagining testers who are developers in their own right and can write their own tests. Their job would be to walk through the requirements and talk to the user to identify the test cases then write the cases. This leaves the logic developers free to be close to the logic as they write it. It also leaves the testers far enough removed from "owning" the logic that they can try to break it with objectivity and without remorse.
–
Taylor PriceFeb 11 '11 at 17:59

In my experience, the way a tester sets up and executes a test case automatically does actually differ from the way a developer does it. Testers will be thinking more about how to get the most information out of the test case, how to execute tests rapidly, how to keep tests maintainable, and so forth. Most importantly, testers will see nuances of test coverage that developers will miss.

If test development resources are low, developers can help out - but a tester should review their work carefully. Devs should work on fixtures and test tools before writing actual test cases, and devs shouldn't ever write test cases for their own code. Having developers help out with testing does have perks - it exposes devs to other pieces of the product, and testing can help them become better developers. However, just as a junior developer's work would never go without a code review, a dev's QA work should get a QA review from someone with more experience testing.

Edited to add: I'm an SDET with 5 years of experience. I work with great devs with 10+ years of experience each, and lately have been working with them to get through a testing bottleneck.

One thing I would really like to be able to see is solid automation tools for testers that will allow them to effectively record their progress through a test script and then permit that script to be run automatically in future. Especially if this also facilitates the same script being run on different operating system versions without the tester having to go through all of them by hand.

Unfortunately, certainly for the product that I work on, none of the tools on the market quite do the job. But it is worth bearing this in mind and looking at what is available, in case there is something that would work for what you are doing.
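Even without a tool that does the whole job, you can hand-roll something more maintainable than raw record-and-play by keeping each screen's locators in one place, so a UI change means one edit rather than many. A sketch in Python against a made-up driver interface (the class names and element ids are assumptions for illustration, not any real tool's API):

```python
# A stand-in for whatever UI automation driver is in use
# (Selenium, a vendor tool, etc.). It just records actions.
class FakeDriver:
    def __init__(self):
        self.actions = []

    def click(self, element_id: str):
        self.actions.append(("click", element_id))

    def type_text(self, element_id: str, text: str):
        self.actions.append(("type", element_id, text))


# Page object: the ONLY place that knows the login screen's element ids.
# If the UI changes, update these locators once instead of in every script.
class LoginPage:
    USERNAME = "username-field"
    PASSWORD = "password-field"
    SUBMIT = "login-button"

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user: str, password: str):
        self.driver.type_text(self.USERNAME, user)
        self.driver.type_text(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)


driver = FakeDriver()
LoginPage(driver).log_in("alice", "s3cret")
```

Because scripts talk to `LoginPage` rather than to raw coordinates or repeated locators, the same script has a better chance of surviving a port across OS versions.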

Visual Studio 2010 (Premium or Ultimate, I can't remember which) has something that records screen actions to automate UI testing. I saw a demo of this by Andrew Woodward MVP while he was demonstrating UI testing of SharePoint - incredible stuff.
–
James LoveFeb 10 '11 at 22:16

Record & play rightly has a fairly poor reputation. It tends to be pretty fragile, and hard to maintain. Yes, as a quick & dirty "I have to run this on 4 different datacentres, I don't want to keep it for future use", it's fine, but it's horrible to maintain because you end up with tons of repetition. One small element changes - and suddenly you have to update 100 tests. Painful. It also in no way replaces the manual test, which tends to be designed with the assumption that a human will notice all those other things you didn't explicitly check.
–
testerabFeb 13 '11 at 19:36

What would be pretty sweet would be something that could take things to a slightly lower level than moving the pointer and clicking the mouse, so that you actually record which button was clicked rather than where the click occurred. That would make this type of testing more resilient and practical. When you need to run every script on, for example, nine different versions of Windows, it's a nightmare to have to do it all manually.
–
glenatronFeb 15 '11 at 10:26

Identifying button by id rather than location is perfectly possible with some tools. Record and play scripts done like that are STILL horrid to maintain, unfortunately - it doesn't solve the problem of repetition. I don't think there's any getting away from the need to design your test automation carefully, if you actually want to keep any scripts around or create more than a dozen of them. Have you thought of using something keyword-driven like Robot Framework along with Auto-It?
–
testerabFeb 19 '11 at 12:51
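The keyword-driven approach mentioned in that comment can be sketched in a few lines: test cases become data (rows of keywords plus arguments), and a small runner maps each keyword to an implementation, so the team maintains one vocabulary instead of hundreds of near-duplicate scripts. Illustrative Python only - this is not Robot Framework's actual syntax, and every name here is made up:

```python
# Keyword implementations - the vocabulary the team maintains.
def open_app(state, name):
    state["app"] = name

def enter_text(state, field, text):
    state.setdefault("fields", {})[field] = text

def verify_field(state, field, expected):
    assert state["fields"][field] == expected, f"{field} mismatch"

KEYWORDS = {
    "Open App": open_app,
    "Enter Text": enter_text,
    "Verify Field": verify_field,
}

# A test case expressed as keyword rows, in the spirit of a
# Robot Framework test table.
test_case = [
    ("Open App", "Invoicing"),
    ("Enter Text", "customer", "ACME Ltd"),
    ("Verify Field", "customer", "ACME Ltd"),
]

def run(case):
    state = {}
    for keyword, *args in case:
        KEYWORDS[keyword](state, *args)
    return state

result = run(test_case)
```

When the UI changes, only the keyword implementations change; the test tables, which testers author, stay intact.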

One critical distinction that's really important here is this: are your testers simply checking, or are they testing?

This blog post by Michael Bolton explains it better, but in essence: are they looking to merely confirm behaviour, or are they looking to find issues with the system?

I think it's also useful to consider the Agile Testing Quadrants (Brian Marick originally described these, but I came across them in Lisa Crispin and Janet Gregory's "Agile Testing" book): even if you're not following an Agile development methodology, I think the distinction between tests that critique the product and tests that support the team is really worthwhile when considering automation and trying to develop a plan for who does what, and why.

For instance, unit checks written by developers act as change detectors, enabling you to catch regressions early when they're re-run regularly - these are tests that support the team. System level regression checks that are automated so that they can be re-run regularly and quickly also support the team by catching regressions early, and complement the unit testing done by developers. That frees your testers' time up to do testing that critiques the product - exploratory testing, for example. Or possibly applying some of the automated checks to stress test the product.

The other thing I really like about the Lisa Crispin presentation I linked is that it points out that automation can also be used to support manual testing - creating test data, or using automation to get a scenario to the point you want to focus on, for example.
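That kind of supporting automation can be as simple as a script that drives the system into a known starting state before a manual session. A hypothetical sketch using an in-memory SQLite database - the schema and scenario are invented for illustration:

```python
import sqlite3

# Seed a throwaway database with a known scenario, so a manual tester can
# start exploring from "customer with an overdue invoice" instead of
# clicking through setup screens every time.
def seed_overdue_invoice_scenario(conn):
    conn.execute(
        "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute(
        "CREATE TABLE invoices (id INTEGER PRIMARY KEY,"
        " customer_id INTEGER, status TEXT)")
    conn.execute("INSERT INTO customers VALUES (1, 'ACME Ltd')")
    conn.execute("INSERT INTO invoices VALUES (1, 1, 'overdue')")
    conn.commit()

conn = sqlite3.connect(":memory:")
seed_overdue_invoice_scenario(conn)
status = conn.execute(
    "SELECT status FROM invoices WHERE customer_id = 1").fetchone()[0]
```

Here the automation isn't checking anything - it exists purely to save the exploratory tester's time.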

Considering these two articles will hopefully help you to analyse what sort of testing you want to do, make it easier to pick out what might be suitable for automation, and figure out which bits of automation are more suited to be done by testers, and which by developers.