Per Elisabeth Hendrickson, I’m one of the 80% of test managers looking for testers with programming skills. And as I sift through tester resumes, attempting to fill two technical positions, I see a problem: testers with programming skills are few and far between!

About 90% of the resumes I’ve seen lately are for testers specialized in manual (sapient) testing of web-based products. And since most of these resumes are sprinkled with statements like “knowledge of QTP”, I assume most of these testers are doing all their testing via the UI.

And then it hit me…

Maybe the reason so many testers are specialized in manual testing via the UI is because there are so many UI bugs!

This is no scientific analysis by any means. Just a quick thought about the natural order of things. But here’s my attempt to answer the question of why there aren’t more testers with programming skills out there.

It may be because they’re too busy finding bugs in the UI layer of their products.

13 comments:

I've never gotten software from dev that was good enough for my testers to do much more than make sure the UI and the immediate interface with the database were working. It's probably just as well that, in my market at least, 98% of the resumes I see are from hands-on-keyboard-only testers.

Where I am now, I'm building enough influence (as Dir. of QA) to drive greater unit- and component-level testing into Dev. The pesky defects aren't happening as often, and that frees us to start, just start, to do more technical kinds of testing -- which is offering my testers opportunities that have them a little excited. To their credit.

1) If people have QTP skills, they'd better have some programming skills, because at the end of the day those automated QTP tests are source code. If they aren't written and managed properly as source code, the effort will soon become a big failure.

2) I think many people start automating test cases through the UI because that's the most obvious way to do it for non-technical people. If you want to please management, just show them a test that automatically launches your product and clicks through it. If you show some cryptic command-line output that says "133 tests executed OK", they just shrug and leave the room.

When you do manual testing, you are testing back-end and front-end integration. That part is the most "sensitive," and the user mostly sees the UI. I'm skeptical of back-end-only or front-end-only automation: it's like testing all of a car's parts separately without ever checking how the car works as a whole, or whether each part fits into the car at all.

I'm finding this same issue. Either you get all manual testers, or you get software devs in QA who really know how to automate but aren't that great at actual QA work. It's tough to find someone strong in both skills.

I am not only finding issues with testers being "automated" or "manual", but I am also finding most manual testers stay away from the tools that would help them to do their job better. I use port monitors, tail/grep on logs, and join queries in databases even when I run manual tests. The modern tester can no longer fear technology and be successful.
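The log-triage habit this commenter describes (tail/grep on logs while testing manually) takes only a few lines of scripting to replicate. A rough sketch, assuming a made-up log format and keyword list -- roughly what `grep -E` piped through `sort | uniq -c` gives you:

```python
import re
from collections import Counter

def scan_log(lines, pattern=r"\b(ERROR|WARN|Exception)\b"):
    """Return matching log lines plus a count per matched keyword."""
    rx = re.compile(pattern)
    hits, counts = [], Counter()
    for line in lines:
        m = rx.search(line)
        if m:
            hits.append(line.rstrip("\n"))
            counts[m.group(1)] += 1
    return hits, counts

# Illustrative log lines; any real product's format will differ.
sample = [
    "2010-06-01 12:00:01 INFO  user login ok\n",
    "2010-06-01 12:00:02 ERROR payment gateway timeout\n",
    "2010-06-01 12:00:03 WARN  retrying request\n",
]
hits, counts = scan_log(sample)
print(counts)  # Counter({'ERROR': 1, 'WARN': 1})
```

A tester who can write this much can also pivot the same pattern into ad hoc checks during a manual session: point it at a live log, tweak the regex, and see whether an error spiked while clicking through the UI.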

1. A majority of testers are simply uncomfortable with code and more comfortable at the UI level and that's where they focus their time. I've seen a lot of "If I wanted to dig around in code, I would have become a developer."

2. There is a stigma that only developers can create and understand unit or integration level tests. The creation of Scrum teams tends to blur this line much more and push people outside of their waterfall roles. In this, I've seen my testers expand far outside of the roles that they held in the past and start diving into tasks that "were not their responsibility" in a waterfall team.

3. You test where the logic exists. Even though many teams try to move logic out of the presentation layer of the system architecture, you have to test where the logic lies. If you're creating a lot of dynamic UI components, it's typically easiest to test that at the UI level. Good system architecture will push testing closer and closer to the code itself.

4. Finally, I believe that testers are born with a certain skill set that is tough to teach. The comment was made that "You're looking for testers with programming skills, but what you really need are programmers with better testing skills." I believe these are two distinctly separate and valuable groups. Programmers with testing skills understand a process and can follow it to produce better code; ask them to do some exploratory testing and they'll sit there not knowing what to do. Testers with programming skills can take their natural testing approach and fortify it with programming to gain better coverage across a system. They will use their programming skills to supplement manual testing while still using exploratory testing to follow their nose and flush out issues hidden within the system.
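Point 3 above ("you test where the logic lies") is worth a concrete sketch. When validation logic is pulled out of the presentation layer into a plain function, it can be unit tested without driving the UI at all. Every name below is illustrative, not from any real product:

```python
def password_strength_errors(password: str) -> list[str]:
    """Pure validation logic; no UI framework needed to test it."""
    errors = []
    if len(password) < 8:
        errors.append("too short")
    if not any(c.isdigit() for c in password):
        errors.append("needs a digit")
    if password.lower() == password:
        errors.append("needs an uppercase letter")
    return errors

# A (hypothetical) UI handler would just render whatever this returns:
#   errors = password_strength_errors(form.password); show(errors)

print(password_strength_errors("abc"))        # all three rules fail
print(password_strength_errors("Str0ngPwd"))  # []
```

With this split, the UI test only has to confirm that the returned messages are displayed; the rule combinations get covered cheaply below the UI, which is exactly the "push testing closer to the code" effect the commenter describes.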

Justin, since you've quoted my content I should add some context. In the 19 years I've been testing systems and software there haven't been many programmers I've encountered who have had the finesse to deliver code with a minimum number of defects. In discussing this with others over the years, I don't think my experience is atypical.

This is not to knock programmers. I think a larger issue is the pressure they are put under to deliver code based on unrealistic estimates and rigid iterations.

I completely agree, Mark. There are external pressures everywhere, and unless the tester and developer are working hand in hand, there are assumptions occurring every day that are typically only validated through test execution.

While I would love to create developers who understand all of the ins and outs of testing and develop perfect code, that will probably never happen. But moving toward an agile model certainly helps bring the testers and coders closer together, where both groups can learn from each other. From my experience, the largest impact we've seen on quality is the continual expansion of our automated tests, including making automated tests required for any customer-reported bug to ensure it never happens again.

We can't expect our dev teams to remember every little issue that's ever come up, nor can we push our testers to run weeks and weeks of regression testing on every little fix. Focusing on valuable automation gives both testers and developers additional bandwidth. Hopefully that bandwidth is used for "deeper" testing of the system and not just increasing team velocity.
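The "required automated test per customer-reported bug" practice above usually looks like a small pinned regression test. A minimal sketch using Python's standard `unittest` module; the function, the ticket number, and the original bug are all invented for illustration:

```python
import unittest

def format_invoice_total(amount_cents: int) -> str:
    """Hypothetical function under test: render cents as a dollar string.
    The (made-up) customer bug: negative amounts once rendered as
    '$-0.50' instead of '-$0.50'."""
    sign = "-" if amount_cents < 0 else ""
    cents = abs(amount_cents)
    return f"{sign}${cents // 100}.{cents % 100:02d}"

class TestTicket1234Regression(unittest.TestCase):
    """Pinned to hypothetical customer ticket #1234 so the bug stays fixed."""

    def test_negative_amount_sign_placement(self):
        self.assertEqual(format_invoice_total(-50), "-$0.50")

    def test_positive_amount_unchanged(self):
        self.assertEqual(format_invoice_total(1999), "$19.99")

if __name__ == "__main__":
    unittest.main(argv=["regression"], exit=False)
```

Because the test names the ticket, it doubles as the team's memory: nobody has to remember the issue, and nobody has to re-run weeks of manual regression to know it hasn't come back.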
