Yet another future of testing post (YAFOTP)


I was talking with a colleague of mine this morning about his role and what it meant, and I made a mental note to blog about some of my ideas. Given that my ability to remember anything peaks at about a day, I thought I better write it down now. I predict there will be a lot of holes here – I’ll fill those in later or in the comments – but here goes.

Regardless of how you feel about the health of software testing (which depends largely on your ability to interpret a metaphor), for me, it’s getting easier and easier to see that testing is changing. Granted, it’s changing along with the software under test, so if you’re testing the same sort of desktop software products you have been for years, using development practices even older, the good news is that you’re safe – your testing world probably isn’t going to change either.

The rest of us are working on software that releases quickly and often, using development practices that support that cadence. In our world, it just doesn’t make sense for a test team to invest a bunch of time in functional testing. It’s faster and more cost-efficient to have the programmers who write the code also write the tests that verify unit and functional correctness. This eliminates unnecessary back and forth, and it forces programmers to write more testable (and often simpler) code from the beginning, which makes the code easier to maintain and extend.

Of course, you don’t need to tell me that it’s a long leap from functional correctness to usable software. Given a beer or two, I’ll give you names of products I’ve worked on that have been near functionally perfect, yet near failures in the market. This gap is where my future of test lives. Another colleague (one with no blog) says, “We can define what programmers do, and what program managers do fairly easily. Testers do the remainder.” This statement remains correct – even if “the remainder” is a moving target.

One big role falling into the remainder is that of data analysis / data science / data interpretation / whatever you want to call the analysis of customer data rolling in. As products move more and more into the cloud, there are more and more opportunities to run tests and analysis in production and get data in near real time. I honestly think that the ability to provide actionable product insights from terabytes or more of data is the key to a six-figure plus paycheck for decades to come. Some testers will fit naturally into this role – but I have a hunch we’ll find more people in this role with backgrounds in Mathematics or Statistics than in Computer Science.
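To make that concrete, here’s a toy sketch of the kind of analysis I mean – the feature names, outcomes, and threshold below are all made up for illustration, and a real pipeline would aggregate millions of events rather than a hard-coded list:

```python
from collections import Counter

# Hypothetical telemetry: (feature, outcome) pairs as they might
# arrive from instrumented clients running in production.
events = [
    ("search", "ok"), ("search", "ok"), ("search", "error"),
    ("export", "error"), ("export", "error"), ("export", "ok"),
    ("share", "ok"),
]

uses = Counter(feature for feature, _ in events)
failures = Counter(feature for feature, outcome in events
                   if outcome == "error")

# Flag features whose observed failure rate crosses a made-up threshold --
# a crude stand-in for "actionable product insight from customer data".
THRESHOLD = 0.5
flagged = sorted(f for f in uses if failures[f] / uses[f] >= THRESHOLD)
print(flagged)  # → ['export']
```

The interesting work, of course, is in choosing what to count and what threshold means something – which is exactly where the Mathematics/Statistics background earns its keep.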

When you think about “the remainder”, there’s another big hole. I think we’ll always need people to look at big end-to-end scenarios and determine how non-functional attributes (e.g. performance, privacy, usability, reliability, etc.) contribute to the user experience. Some of this evaluation will come from manually walking through scenarios, but there will be plenty of need for programmatic measurement and analysis as well (e.g. is there real value in manual performance tests, or manual stress tests?). I don’t know if there’d be more or less specialization than there is today, and don’t know if it matters…but it may.
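The programmatic side of that can be as simple as timing an end-to-end scenario against a budget. A minimal sketch (the scenario name and the budget are invented for illustration – a real scenario would drive the actual product):

```python
import time

def checkout_scenario():
    # Stand-in for a real end-to-end scenario; here it just
    # sleeps briefly to simulate the work being measured.
    time.sleep(0.01)

BUDGET_SECONDS = 0.5  # hypothetical performance budget for this scenario

start = time.perf_counter()
checkout_scenario()
elapsed = time.perf_counter() - start

within_budget = elapsed <= BUDGET_SECONDS
print(f"scenario took {elapsed:.3f}s (budget {BUDGET_SECONDS}s)")
```

Once a check like this runs on every build, the “manual performance test” question largely answers itself.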

There may be other new roles, while some roles abundant today may go away – although not immediately, as I still see several openings for “Functional Test Engineer” on a popular job site. Short story is that I’m cool with this future. Others may not be, and that’s ok. I’m just happy to ride the wave.


6 Comments

Interesting thoughts! Even though I have a development background and have never had any testing functions within an organization or project, my take is that you will always need testers to do functional testing, for a variety of reasons:

1) Developers are typically biased about how something works; they may not be open to trying things differently or going out of their way to break an application.

2) Developers do not usually have the patience to thoroughly test different scenarios and try things that could go wrong. I think testers and developers have different mindsets altogether, and both are really needed on projects.

3) Developers are more expensive, and relying on them to do functional testing would not be the best use of the project dollars.

4) It is always a good idea to have another fresh set of eyes on the application, hence the need for testers. One can claim that developers can cross-test other developers’ code to serve this purpose, but I think this goes back to number 1 above: even if this is done, after a few releases of the software, both developers get biased about how each screen or module works and are not willing to go out of their way to break it.

I think the mindset is overrated – I agree that the approach to the role is different between developers and lifelong testers, but I know many developers who /do/ have the patience to look at a variety of test scenarios, and testers (good testers) who miss some. Additionally, if you’re testing a web service (or anything else you can update frequently like a phone app), once you add the right data collection mechanisms, you can let customers find all of the edge cases for you – for free.

Your point #3 isn’t true everywhere – where I work, testers and developers are paid equally. IMO, that makes it too expensive to pay *testers* to do that work. You’re also not calculating the cost of time in back and forth between testers and developers that occurs when testers have to find unit and functional level bugs. Instead, pay testers to find the issues hiding in the cracks of integration between components, features, and end-to-end scenarios.

Regarding point 4, customers are a great set of fresh eyes (see first paragraph). If you’re working on shrink-wrap software (how much longer will that be around?), you’ll probably have more testers on your team to provide “fresh eyes”, but keep in mind that I mentioned (in the post) that most software teams will have some folks looking at ilities and end-to-end scenarios – those folks will have finely trained fresh eyes and will be valuable.

The context in #3 reminds me of something from HWTSWAM: the code released to test should really be working as expected. Read as: all functional behavior should already have been tested by the development discipline and coded to the spec’d functionality. There shouldn’t be additional effort spent on functional testing; rather, have the testers do more analysis – the intricacies of data integrity, finding security and privacy holes, and more end-to-end integration based on scenario modeling – applicable for larger transactional systems, where that knowledge and expertise are put to the right use. #CodeQuality

“One big role falling into the remainder is that of data analysis / data science / data interpretation / whatever you want to call the analysis of customer data rolling in. As products move more and more into the cloud, there are more and more opportunities to run tests and analysis in production and get data in near real time. I honestly think that the ability to provide actionable product insights from terabytes or more of data is the key to a six-figure plus paycheck for decades to come. Some testers will fit naturally into this role – but I have a hunch we’ll find more people in this role with backgrounds in Mathematics or Statistics than in Computer Science.”

I’d like to see you expand on this. What sort of actionable insights can a tester discover? How can a regular ole tester transition into doing data analysis?