For the "you build it, you break it" motto to be real, roles beyond the traditional developer are necessary. Specifically, engineering roles that enable developers to test efficiently and effectively have to exist. At Google we have created roles in which some engineers are responsible for making others more productive. These engineers often identify themselves as testers, but their actual mission is one of productivity. They exist to make developers more productive, and quality is a large part of that productivity. Here's a summary of those roles:

The SWE, or Software Engineer, is the traditional developer role. SWEs write functional code that ships to users. They create design documentation, design data structures and overall architecture, and spend the vast majority of their time writing and reviewing code. SWEs write a lot of test code, including test-driven design and unit tests, and, as we explain in future posts, they participate in the construction of small, medium, and large tests. SWEs own quality for everything they touch, whether they wrote it, fixed it, or modified it.
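To make the idea of an SWE-owned small test concrete, here is a minimal sketch. The feature function and all names are hypothetical, not Google code; the point is only the shape of a fast, hermetic unit test written alongside the feature:

```python
import unittest

def normalize_query(query: str) -> str:
    """Hypothetical feature code: collapse whitespace and lowercase a search query."""
    return " ".join(query.split()).lower()

class NormalizeQueryTest(unittest.TestCase):
    """A 'small' test: fast, hermetic, and exercising one unit in isolation."""

    def test_collapses_whitespace_and_lowercases(self):
        self.assertEqual(normalize_query("  Hello   World "), "hello world")

    def test_empty_query(self):
        self.assertEqual(normalize_query(""), "")

# Run the suite explicitly so the example is self-contained.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(NormalizeQueryTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

In practice such tests run on every change, which is what lets a SWE own quality for everything they touch.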

The SET, or Software Engineer in Test, is also a developer role, except the focus is on testability. SETs review designs and look closely at code quality and risk. They refactor code to make it more testable. SETs write unit testing frameworks and automation. They are a partner in the SWE's code base but are more concerned with increasing quality and test coverage than with adding new features or increasing performance.

The TE, or Test Engineer, is the exact reverse of the SET: a role that puts testing first and development second. Many Google TEs spend a good deal of their time writing code in the form of automation scripts and code that drives usage scenarios and even mimics a user. They also organize the testing work of SWEs and SETs, interpret test results, and drive test execution, particularly in the late stages of a project as the push toward release intensifies. TEs are product experts, quality advisers, and analyzers of risk.

From a quality standpoint, SWEs own features and the quality of those features in isolation. They are responsible for fault-tolerant designs, failure recovery, TDD, and unit tests, and for working with the SET to write tests that exercise the code for their feature.

SETs are developers who provide testing features: a framework that can isolate newly developed code by simulating its dependencies with stubs, mocks, and fakes, and submit queues for managing code check-ins. In other words, SETs write code that allows SWEs to test their features. Much of the actual testing is performed by the SWEs; SETs are there to ensure that features are testable and that the SWEs are actively involved in writing test cases.
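As a sketch of the stub/fake idea, here is a minimal example of the kind of isolation an SET might provide. Every name here is hypothetical; the pattern is a fake that honors the same interface as a real dependency so feature code can be tested hermetically:

```python
# A hypothetical feature depends on a user-profile service. In production
# the real implementation would make network calls; for tests, an SET
# supplies an in-memory fake that honors the same interface.

class UserService:
    """Interface the feature depends on (real implementation would do RPCs)."""
    def get_display_name(self, user_id: str) -> str:
        raise NotImplementedError

class FakeUserService(UserService):
    """In-memory fake: deterministic, hermetic, no network access."""
    def __init__(self, names):
        self._names = dict(names)

    def get_display_name(self, user_id: str) -> str:
        return self._names.get(user_id, "unknown")

def greeting(service: UserService, user_id: str) -> str:
    """Feature code under test; it depends only on the interface."""
    return f"Hello, {service.get_display_name(user_id)}!"

# In a test, the fake is injected in place of the real service:
fake = FakeUserService({"u1": "Ada"})
assert greeting(fake, "u1") == "Hello, Ada!"
assert greeting(fake, "nope") == "Hello, unknown!"
```

The SET builds and maintains the fake once; every SWE touching the feature can then write fast, reliable tests against it.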

Clearly, the SET's primary focus is the developer. Individual feature quality is the target, and enabling developers to easily test the code they write is the SET's chief concern. This development focus leaves one large hole, which I am sure is already evident to the reader: what about the user?

User-focused testing is the job of the Google TE. Assuming that the SWEs and SETs performed module- and feature-level testing adequately, the next task is to understand how well this collection of executable code and data works together to satisfy the needs of the user. TEs act as a double check on the diligence of the developers. Any obvious bugs are an indication that early-cycle developer testing was inadequate or sloppy. When such bugs are rare, TEs can turn to their primary task of ensuring that the software runs common user scenarios, is performant and secure, is internationalized, and so forth. TEs perform a lot of testing and coordinate testing among other TEs, contract testers, crowd-sourced testers, dogfooders, beta users, and early adopters. They communicate to all parties the risks inherent in the basic design, feature complexity, and failure-avoidance methods. Once TEs get engaged, there is no end to their mission.

Ok, now that the roles are better understood, I'll dig into more details on how we choreograph the work items among them. Until next time...thanks for your interest.

22 comments

Interesting. I had never heard of Test Engineer being used as a job title before. Does the addition of the word "engineer" really represent what the job entails? Or do you think it is a reaction to the typically pejorative titles "tester" or "QA"?

Note: I write test automation code for a living, and have always wondered what a suitable job title should be.

Sounds like the key function performed by the Google SET is enabling testability. I think a lot of software companies have SETs perform the functions of both SET and TE described here, minus the expectation of refactoring the system under test. In my opinion, refactoring code is probably best performed by those closest to the code: developers who are more aware of all the implementation subtleties, since they coded those subtleties in the first place.

Also, testability can be a hard quality to define and measure in some situations. I wonder how Google assesses contributions from SETs. Through deltas in code cyclomatic complexity before and after the refactoring?

I wonder how you feel a smaller development shop should approach testing, say a smaller engineering R&D group part of a larger organization. Since we have fewer engineering resources most of them are focused on development and QA is a smaller group staffed mostly with a few testers with less of a computer science background who do more functional and acceptance testing. Do you think this is the most effective way to approach testing or would we benefit from investing resources in order to make our testing department more like Google's?

My experience as a titled Test Engineer at an aerospace company aligns well with James' description, with the exception of the statement that I spent most of my time writing automation. A majority of my time was spent working with the customers on how they use the system, researching test methodology, and learning/testing the system I was working on in the lab. I typically performed those tasks in that order, over and over again, week by week, until the project was complete.

The automation I used as part of my testing simulated a live system generating random input that communicated with our system via an API. I needed to be able to control the input so that I could test different user scenarios, and that's where automation became useful. However, I did not write the base test code; it was written by an experienced developer who modified an existing script in a matter of minutes to do the basics of what I needed. It was a fairly simple program, so I was able to expand it to simulate exactly what I needed. The customers were impressed that I was able not only to create scenarios to test individual events and functions but also to create interesting real-world scenarios showing how the functions integrated. I was able to design a large volume of user-scenario tests because I had the benefit of a development expert modifying a piece of code for my own testing use that I could further expand as my test ideas expanded.

A TE should be focused first and foremost on how the customers use their system. Automation is a tool to help them achieve the best possible scenarios in the smallest amount of time.

Translated: This division of labor makes me think that Google is building dev factories. It is also probably slow: you write code, then wait until someone writes tests, then wait again while their corrections come back to you. It's much more productive to do everything myself, consulting more knowledgeable colleagues when necessary.

Thanks for sharing this great post. I'm just curious: if the SETs and SWEs are partnered together, how do they deal with personality or other conflicts should they arise? The reason I ask is because in most companies, especially the big ones, there are usually some individuals who are harder to get along and work with than others. So I'd just like to know: how does Google handle this type of situation? Thanks.

Hi, I am very keen to know how Google performs its UAT. Considering the sheer size of the search engine, how does it actually perform its UAT? Or does it perform some other tests instead of UAT? Can you please let me know?

The Google average is about 1:10 (SETI:SWE), but it varies based on testability, complexity, and criticality. Some teams have no SETIs (the test automation is straightforward), and the highest ratio I've seen is 1:5 (very complex automation required and/or a mission-critical project).

And of course, there are teams that are just SETIs, because they are working on Google-wide development/test infrastructure/tooling.