Notes on software testing automation and everything required to do proper Quality Assurance in software.

Author: Marc Andreu

For those struggling to understand what is holding their teams back from performing better.

This post is a quick philosophical exercise that helps us understand the fundamental impediment that blocks improvement in software development processes (SDP).

“I think it is important to reason from first principles rather than by analogy.” Elon Musk

The Agile Fluency Model (AFM) is a “first principles” way of thinking about software development models. Taking fear as the first principle, we can read AFM as an inverse representation of an organisation’s fearlessness in its approach to software development: from fully terrified, poorly managed organisations with zero stars to completely bold and fearless startups that want to change the world.

A quick review of the fear star levels would be as follows:

With zero stars are the organisations that are afraid to allow the technical teams to be responsible for organising their internal working processes. SDPs like waterfall or Scrum are imposed by a central committee of bureaucrats. Those pure bureaucrats usually do not know what they are actually doing, so how could they ever allow technical teams to decide the best way to deliver value?

With one star are the organisations that are afraid to allow the technical teams to be responsible for managing all the aspects necessary to deliver value. A typical project manager holding back the SDP evolution would say: “Let the team play a little bit with this thing called Agile and let’s see if they are happy enough.”

With two stars are the organisations that are afraid to make the necessary arrangements to align the organisational structure with the needs of the technical teams. Now a top executive holds up the process by saying: “Oops, now I need to change policies that have been in place for the last 20-odd years? NO, not while I am in my comfortable seat!”

With three stars are the organisations that are afraid to change their own culture when facing optimisation challenges. The CEO objects with: “Oh, no, our core values are carved in clay tablets.”

With four stars are those organisations with the lowest level of fear, able to operate safely and yet fast enough to be a top market player. No one in charge stops the technical teams from innovating. Evolution is constant yet not chaotic. There is an antifragile culture, with respect for processes and all the other artefacts in place in just the right amount to facilitate work oriented towards delivering business value.

When some management teams announce “We defined a new strategy!”, our natural reaction could be “Oh really? Finally!”

Soon after, once the project is underway and there is no room for changes, we realise that what they understood by “strategy” is some kind of master plan inspired by the mythical film “Rise of the Planet of the Apes”.

So next time they present a “new strategy” we shall be a little more cautious. To that end, we will review the concept of a software quality assurance (QA) strategy. We will then be better prepared to counter-attack the apes.

Let’s start with a quick review of some definitions. What is the meaning of the word “strategy”? The official definition in the Cambridge Dictionary is as follows:

“a detailed plan for achieving success in situations such as war, politics, business, industry, or sport, or the skill of planning for such situations”

Therefore, applying this definition to the software QA situation, we could conclude, after some deliberation, that a strategy definition for this situation would be something like:

“A detailed plan for achieving success in software development or the skill of planning for such successful software development”

The key word in this second definition is “successful”. In software engineering, we could quickly agree on a simple definition of success: “meeting the customer expectations that were initially wanted or hoped for”. More specifically, we could settle on something along the lines of “the team’s outcome at the end of the development process needs to match the initial stakeholders’ expectations”.

Hence, the key to success in any QA strategy is to have a clear vision of the stakeholders’ requirements. This is why Agile methodologies fervently recommend holding the three amigos meeting in the early steps of the development process. In this meeting, the stakeholders, the developers and the QA team refine the stories TOGETHER. Refining means that the three amigos talk about the solution and make sure they have a common and clear understanding of the subject, without assumptions or vague definitions.

All right then: in regard to software QA strategies, would there ever be a user request like “I, as user A, want X number of issues at any point in time”? Well, that seems a strange request, doesn’t it?

ALL RIGHT, GOT IT! A software QA strategy could NEVER be successful if it is mainly focused on counting the number of issues, bugs or failing test cases of our solution.

The quick answer, for the impatient reader, is that the QA strategy should be embedded into the development strategy. The following explanation, for the avid reader, is based on the concept known as the SHIFT LEFT strategy.

We are going to use a simple and commonly overused parallelism in software engineering: the typical construction analogy. Yes, it is a controversial analogy that falls apart if taken too far. For the purpose of comparing strategies, we believe this analogy still brings a lot of good resemblances.

Any good project begins with the three amigos meeting in order to define the shape and structure of the solution. Will it be a farm, an office building or a pet house? The team gets a clear and well-defined idea of the final solution; at least an initial overview should be defined. The three amigos agree on those basic components of the construction that will hardly change over time. We are not yet talking about architecture here; it is just a basic agreement on the hard assets, like the land required, the orientation of the building or its basic functional features.

Once a team has an initial vision of what is required to build, QA engineers should start the next phase of the construction. By QA engineers we refer to the lead developers and engineers responsible for building that initial structure.

The main goal in this step of the project is to validate that the three amigos’ happy ideas are sound. This team of experts works with the common aim of verifying that the building is consistent and complies with all regulations. A testing strategy is defined in order to be able to find deviations from the original ideas. The construction site has to be safe, the builders should have the proper tool set, and so on.

The QA engineers BUILD UP the technical part of the strategy while working on the battlefield, hands on the solution. The technical part of the strategy cannot and should not be defined from an abstract management level. The QA engineering team should put on their “builder hats” and work in the field, finding the actual issues and practical solutions to build up an initial structure with a test harness appropriate for the project. The team prepares the land and builds up the initial structure with safe scaffolding, after the three amigos meeting has defined the basic requirements of the project.

At a practical level, the QA engineering team should write a set of failing test cases, from the unit to the acceptance test level. The team should also build an initial, simple working script for an automatic build. The continuous delivery flow should be completed in this step as well, even if only with the minimum initial steps. All functional and non-functional features of the solution should be covered by the test harness. This step collects all the requirements as a set of test suites. The team does not work on writing test plans or requirements documentation; they just focus on writing a test suite to be executed manually and/or automatically.

In summary, the QA strategy for this phase would be defined by just these two actions:

1. Write failing test cases, of the required type, to cover newly discovered bugs at any point in time, from the first line of code all the way down to the maintenance phase of the project.
2. Write unit, integration and acceptance test cases for new features. Define all the required details in a clear BDD and TDD style.
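The two actions above can be sketched as ordinary test code. Here is a minimal illustration in plain Python, where `apply_discount` is an invented stand-in for a piece of the solution under test (a real project would use its own test runner):

```python
# Minimal stand-in for the solution under test (hypothetical example).
def apply_discount(price: float, percent: float) -> float:
    if percent < 0 or percent > 100:
        raise ValueError("discount must be between 0 and 100")
    return price * (1 - percent / 100)

# 1. A test written the moment a bug is discovered: "negative
#    discounts were accepted" becomes an executable requirement.
def test_negative_discount_is_rejected():
    try:
        apply_discount(price=100.0, percent=-5)
    except ValueError:
        return  # expected behaviour: the bug is considered fixed
    raise AssertionError("a negative discount was accepted")

# 2. A BDD-style (Given/When/Then) acceptance test for a new feature,
#    written before the feature exists so that it fails first.
def test_gold_customers_get_ten_percent_off():
    # Given a basket worth 200.0 for a gold customer
    price = 200.0
    # When the 10% gold discount is applied
    total = apply_discount(price=price, percent=10)
    # Then the total is 180.0
    assert total == 180.0
```

The point is that each requirement or bug lives as a runnable check, not as a paragraph in a test plan.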

Lead developers implement the initial coding architecture to shape the solution, with sound foundations and coded architectural services to hold the solution in place. They implement end-to-end slices of the project structure in order to validate especially those new technical requirements never implemented before. This team should cover all of this initial architecture with a wide and detailed test suite. This test suite will become the foundation of the well-known testing pyramid.
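To make the pyramid tangible, one common approach, sketched here in plain Python (a real project would use its test runner’s own tagging mechanism, such as markers), is to tag each test with its layer so that each level can be run and sized separately:

```python
# Sketch: tag tests by pyramid layer so each level can be run and
# counted separately: many unit tests at the base, fewer integration
# tests, a handful of end-to-end tests on top. Illustrative only.
SUITE = {"unit": [], "integration": [], "e2e": []}

def layer(name):
    def register(test_fn):
        SUITE[name].append(test_fn)
        return test_fn
    return register

@layer("unit")
def test_price_formatting():
    assert f"{12.5:.2f} EUR" == "12.50 EUR"

@layer("integration")
def test_repository_roundtrip():
    store = {}                      # stand-in for a real database
    store["order-1"] = "confirmed"
    assert store["order-1"] == "confirmed"

def run(name):
    for test_fn in SUITE[name]:     # run one pyramid level
        test_fn()
    return len(SUITE[name])
```

Counting tests per layer also gives an early warning when the pyramid turns upside down (more end-to-end than unit tests).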

Once that initial structure and scaffolding of the building are in place, we are able to invite the development team to scale the solution up. In this phase, the developers just focus on implementing failing test cases. Making those test cases pass brings joy to the team, and progress can be monitored and estimated based on how many test cases have passed or are yet to pass.

The development team works with the simple aim of passing all the TCs defined in the test suite. The team implements and prepares all the required mocks and stubs to keep the test suite running as fast as possible. In this fashion, the development team can recognise bad decisions made earlier just by watching the evolution of the test results over time.
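As a sketch of that idea, using Python’s standard `unittest.mock`: a hypothetical `OrderService` depends on a slow external payment gateway, which the tests replace with a stub so the suite stays fast and deterministic (all names here are invented for illustration):

```python
from unittest.mock import Mock

# Hypothetical code under test: an order service that charges
# through an external payment gateway.
class OrderService:
    def __init__(self, gateway):
        self.gateway = gateway

    def place_order(self, amount: float) -> str:
        # The real gateway would make a slow network call here.
        if self.gateway.charge(amount):
            return "confirmed"
        return "rejected"

# In tests, the slow external dependency is replaced by a stub,
# so the whole suite keeps running in milliseconds.
def test_order_confirmed_when_charge_succeeds():
    gateway = Mock()
    gateway.charge.return_value = True
    assert OrderService(gateway).place_order(42.0) == "confirmed"
    gateway.charge.assert_called_once_with(42.0)

def test_order_rejected_when_charge_fails():
    gateway = Mock()
    gateway.charge.return_value = False
    assert OrderService(gateway).place_order(42.0) == "rejected"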

The team is then able to adjust and redefine TCs in order to impact the project’s progress. The team can safely grow the solution with certainty, with constant verification that the original ideas (the structure of the building) remain within the safe range of possibilities and comply with the initial plans of the building. If the original TCs start failing, it means that the architecture needs to be reviewed or the code fixed. Developers focus on implementing what has been requested and nothing else. The programmers focus their efforts on passing failing test cases, refactoring when needed and evolving the architecture within the allowed boundaries.

As a result, the team’s motivation rises. Everyone is comfortable doing what they love. The three amigos write down the requirements in clear text and clean code. Lead developers put the basic structure in place. Developers implement code with the confidence of doing the right thing. As a result, stakeholders get what they originally wanted, even with all the last-minute minor updates, which the development team can now absorb with joy.

What about estimations, planning and delivery? Simple! DO NOT OVERESTIMATE OR PLAN, JUST DELIVER!

In agile methodologies, the development team should make soft plans with potential dates to deploy. Then the team reviews the plan every week, which is different from delaying the plan one week at a time. Product owners can agree to add only the test cases that cover the required features for each sprint. The team can then release the first successful automatic build that achieves the QA definition of done. The definition of done is a key part of the embedded QA strategy. For example, one of the conditions of the definition of done could be that the test results achieve, let’s say, a 95% test case pass rate, where the failing cases are 0% critical, 2% medium and 3% low priority. If the solution is only 80% stable, then the team simply has to remove features from the sprint in order to release earlier. Just like in a vehicle: while driving fast, if the vehicle gets unstable, it is recommended to reduce SPEED in order to reduce RISK and thus regain control.
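A sketch of how that example definition of done could be checked mechanically as a release gate. The thresholds are just the ones quoted above; everything else is illustrative:

```python
# A minimal "definition of done" release gate, using the example
# thresholds from the text: >= 95% of test cases passing, and among
# the failures 0% critical, <= 2% medium, <= 3% low priority.
def meets_definition_of_done(results):
    """results: list of (passed: bool, priority: str) tuples."""
    total = len(results)
    if total == 0:
        return False  # an empty suite proves nothing
    passed = sum(1 for ok, _ in results if ok)
    if passed / total < 0.95:
        return False
    failures = [prio for ok, prio in results if not ok]
    limits = {"critical": 0.0, "medium": 0.02, "low": 0.03}
    for prio, limit in limits.items():
        share = sum(1 for p in failures if p == prio) / total
        if share > limit:
            return False
    return True
```

A build that fails the gate is simply not releasable; removing features from the sprint shrinks the suite, which is exactly the “reduce speed to reduce risk” lever described above.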

In conclusion, a better and more efficient QA strategy consists of shifting all QA activities left. QA tasks should start at the very beginning of the development process by being embedded into that same process. This means that the QA strategy is embedded in the development strategy by providing, in advance, clear and practical test case definitions. Without test suites defined in advance, the whole organisation falls into a chaotic work style where no one knows for certain what is going on. The test suite, NOT THE TEST PLAN nor any other higher-level document, can actually work as the road map of the project. The test suite is the only tangible piece of work that developers can rely on to know the actual progress of the project as they move it forward. Once developers are in control of the code, any managerial body will also know the actual status of the project. The opposite does not work at all: the development team gets completely lost if plans are drawn from the stars and no one in the team knows the science of interpreting those ever-changing constellations. Hence, shifting left cures all those bad-QA-strategy pains by directly attacking the root cause of the issues.

I hope it helps, please leave a comment if you would like to add something.

To make good use of my time while “waiting for the stars to align”, which means waiting for some managers to remove impediments, I started writing down some real-life notes on how to set things up for a miserably “failing” test automation strategy.

Let’s start with the communication details. The first thing you should completely avoid is talking to product owners or other people holding job titles containing words like “Manager”, “Director”, “Chief” or “God”. If you are so unlucky that they come to you and ask what you are doing, under no circumstances explain what you are going to do and how you could benefit them with your craftsmanship. A good enough reply would be something along the lines of “Oh, well… blah blah blah” (put an obscure, verbose technical sentence here, so complicated that you don’t even understand it yourself). And hold your face straight, without any doubts or smiles, until they fade away thinking that they are such big shots and you are such a big guru in your subject.

The next step, after you have avoided explaining any detail about what you are going to do, is to avoid any collaboration with developers and manual testers. They are in such a different league of competences that they will never understand what you are trying to do. Do not waste time explaining to them what your needs are and how you could help the team achieve a better outcome for their craft.

The developers are so busy creating what is popularly known as technical debt. Nowadays everyone likes debt, which is what holds the world together. If you do not have debt, you are not human, so coders are dedicated full time to that goal. Do not disturb them.

As for the manual testing team, well, they are almost testing a completely different application. Their testing has nothing to do with automation, and again it is a complete waste of your time trying to align with them on which tests should and should not be automated. Do not even think about disturbing the peaceful, boring, procedural manual testing either.

The last thing you must not do is spend time trying to convince people that Test Automation could help everyone on Earth. That is a “taboo” subject, and you risk your job, potentially your life, if you even think about it. If you keep trying to do the right thing, your teammates will see some sort of automation symbol in your eyes and you will be branded as a strange geek.

I will stop here for now because I am getting so excited that I could smash the laptop against the window and…

Keep safe. Keep playing the game. At the end of the day, most people are having an immense, hilarious amount of unquantifiable fun!

This post is open for collaborations. I will add any good contributions to this subject. I am sure that after all this time working on our beloved testing projects, we have learnt something, haven’t we?

“Those who cannot learn from history are doomed to repeat it.” George Santayana

For those who want to avoid all these failures, here is a professional resource to back up our miserable experiences. In chapter 9 of RoI Robbers in Test Automation, Greg Paskal gives a superb list of points that have to be considered when building up a testing strategy.

Update by Francesco Calvino

To help with the not-to-do list, make sure that your core application hasn’t got any attributes like:

“readability” (the ability to read the code and understand what the app does just by reading the code)

“testability” (having elements properly and clearly identified on the page, log files that explain which error has occurred, and an application that is not coupled to the test data)

“stability” (the test environment should be stable, not going up and down like a yo-yo)

Also avoid decoupling: your tests should depend on the very data that feeds them. And, why not, you should not have instrumented builds either (an instrumented build is one where your application is enriched with third-party tools that enable automation tests to run). Although this last one is not entirely bad advice: an instrumented build is the surest sign that the application under test is NOT exactly the same as the application that you will ultimately ship to your customers, and thus all your testing campaigns may be utterly useless.

For now I think it is enough Marc….

Many thanks Francesco. I hope it helps, please leave a comment if you would like to add something.

Last October I had the great chance to attend the JAX London Conference 2016. I will publish a new series of schematic articles with all the notes taken during the sessions. The idea of this series is simply to present a list of links, main ideas and short notes that I believe summarise the content of each session. These notes are just the basic reference from which we can expand our investigations and discussions. Please feel free to add your comments below and I promise to update the notes with full credit to any contributor.

First of all, I have to reference the JaxCenter as the main publisher for the conference. Specifically, the following link is a good summary of the key insights of the conference.

Notes for the 10th October 2016:

Most of the content of the workshop comes from articles on Microservices.io, the website with top content about microservice architecture patterns and best practices.

The key concept for approaching microservices architectures with success is to balance “organization + process + architecture” in order to support a distributed-solutions philosophy. This idea is expanded in this article.

“Successful software development requires the right organizational structure, development processes, and software architecture.”

Drawbacks:

Using a microservices architecture is not a free lunch. It is hard, and all the drawbacks have to be properly evaluated by the whole team before implementing it.

There is an increase in complexity due to new issues to solve, like inter-process communication, partial failure when services are not ready, and private data per microservice, which requires an event-driven architecture.

There are powerful tools, PaaS offerings, etc., but those require considerable learning curves. Careful coordination across services may be required for some features. And there are other risks, like high-latency issues.
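For example, the “partial failure” drawback means every inter-service call needs an explicit timeout and fallback. A minimal sketch in Python; `fetch_recommendations` and the service names are invented for illustration, and production systems would typically use a circuit breaker on top of this idea:

```python
# Sketch of guarding an inter-service call against partial failure:
# bound the call with a timeout, retry a slow service, and fall back
# to a degraded default instead of failing the whole request.
def call_with_fallback(remote_call, fallback, timeout_s=0.5, retries=2):
    for _ in range(retries + 1):
        try:
            return remote_call(timeout=timeout_s)
        except TimeoutError:
            continue        # the service may just be slow: retry
        except ConnectionError:
            break           # the service is down: stop retrying
    return fallback         # degrade gracefully

def fetch_recommendations(timeout):
    # Stand-in for a real network call to another microservice.
    raise ConnectionError("recommendation service not ready")
```

So `call_with_fallback(fetch_recommendations, fallback=[])` returns an empty list instead of crashing the caller, which is usually the right trade-off for non-critical features.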

That was all for this session. I hope it helps, please leave a comment if you would like to add something.

“We made the first separate testing group that I know of historically (I’ve never found another) for that Mercury project, because we knew astronauts could die if we had errors. We took our best developers and made them into a group whose job was to see that that didn’t happen. They built test tools and all kinds of procedures, and went through all kinds of thinking and so on. The record over half a century shows that they were able to achieve a higher level of perfection; it wasn’t quite perfect, but it was a level that had never been achieved before or since.” Gerald M. Weinberg, in “Testing IS Harder than Developing”

After over a decade of work experience in software development, it is my impression that Quality Assurance roles have evolved in the wrong way since the Mercury project. I am still not entirely sure why that devolution happened. However, I suspect that the new-age fashion of accounting-driven management has had a dreadful impact on this subject over the last decades. What I do understand is that the metrics of any development effort look much better with flaky testing strategies.

Good testers are those who become great experts in the actual behaviour of the solution under test. These testers know the actual logic and performance. They have real user experience on a daily basis. Those testers are the most advanced users an organization could have. Surprisingly though, most companies still disregard QA teams as valuable resources for business value. Even worse is the fact that companies do not care much and do not actually support good career paths for QA professionals. Some organizations go as far as taking a careless approach when hiring QA resources. In the end, “Anyone can be a tester.” Right? NO! Not really!

Thus, I have renamed these professional “testers” as Business Experts (BEs) LINK PREVIOUS POST!!. They are the ones who know best how the application actually behaves from the user’s viewpoint. This behavior is ultimately the only thing that provides business value (or frustration) to the world.

In general, most organisations today do not really understand where the value of their solutions comes from. Having a ton of business documents and a ton of lines of code does not generate any value per se. The only value comes from listening to the users and understanding them very well by becoming one of them. BEs then very quickly and naturally become the user advocates of the application or service delivered to the world.

Great support for the BEs comes from the Software Developers in Test (SDTs), also abbreviated as SWEITs. They too have a kind of negative aura, which can be seen from time to time when the magnetic poles of the team become visible after a sudden burst of customer demands. Usually these roles are considered half developers when, far from the truth, good SDTs should be the top developers, the ones who can move on to solve wider and more complex issues than coding new features or fixing bugs. Just as in Mr. G. M. Weinberg’s quote.

An SDT needs to understand all the development tools and all the business limitations in order to lay out the most effective testing framework, one that allows the team to seamlessly test the application with the minimum maintainability overhead.

All things being equal, all roles in a development team require a huge amount of technical expertise. The subtle difference is that SDTs require a much broader set of skills to provide actual value, while DEVs and DEVOPS should specialize in a focused set of technologies in order to deliver high-quality implementations.

In the T-shaped skills analogy, DEVs and DEVOPS would be the vertical bar and SDTs the horizontal one. Once this relation is understood, there is no excuse to elevate some roles above others in a truly collaborative environment. So those companies pressing down on “testers” are missing a huge opportunity to improve their solutions and, ultimately, their bottom line.

In summary, companies that seek software development performance with high-quality results should lay out a QA team formed from the best people across all the development teams. This requires a proper environment, with an organic hierarchy of roles that incentivizes respect and healthy discussions.

In the previous post I discussed why we need Quality Assurance (QA) experts instead of just Developers (DEVs) with a QA background. This is a follow-up post where we zoom into the details of the QA team member specializations. The main goal of the QA team composition is to leverage the power of an efficient test automation framework, which should be constantly evolving and reporting valuable information about the project.

A usual misunderstanding among many teams is to regard testing as one team without any specialization. Moreover, the biggest mistake I have seen is to give the same job title to all “testers” regardless of their skills and contributions.

Without Specific tasks, there is no Motivation to deliver Achievable and Relevant work in a Time-bound manner. This was defined a long time ago by the SMART criteria, commonly attributed to Peter Drucker’s management-by-objectives concept. (SMART criteria)

So how do we define specialisations? How should the QA experts collaborate? Well, we could just apply the age-old principle of divide and conquer. It is a natural condition that individuals feel more comfortable working in a small group with a well-defined specialization for a defined period of time. Occasional cross-functional collaborations are at the same time healthy and important to avoid knowledge stagnation.

The three types of QA roles

Here I suggest three types of QA roles that can maximise testing performance.

On the left side of the spectrum, we have the Software Developers in Test (SDTs / SWEITs), who work closely with the more technically advanced resources in order to build a Test Automation Framework (TAF). This TAF will have the necessary development infrastructure, like a continuous deployment workflow. The main goal of the SDT role is to abstract away all the complexities of the technology in order to enable the automation testing strategies. A key function is to provide technical support to the rest of the QA team.

On the right side of the spectrum, there are the Validation Business Experts (VBEs), who do manual test exploration and logic validation. They provide user feedback and potential user support. These experts do not require a real technical IT development background, and they focus exclusively on domain expertise verification. They are the advocates of the final product or service provided to the customers.

In the middle, in the overlapping area, are the Automation Experts (AEs). This role requires a basic IT technical background in order to write automated test scripts using the Test Automation Framework (TAF) implemented by the SDTs. Thus, it is critical that the TAF abstracts away all the high-level technical details and hands over a much simpler layer, at least one order of magnitude lower in complexity, preferably much more. A very good way to achieve this simplification is to implement a test Domain Specific Language (DSL) layer. This allows the AEs to quickly write and maintain all the suites of automated test cases just by learning the DSL instead of all the complexities of software development. The DSL learning curve is much gentler because the AEs are already familiar with the core concepts of the business logic, and thus with its meanings and language flow.
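As an illustration of such a DSL layer, here is a minimal sketch in Python. All the domain verbs (`add_to_basket`, `checkout`) are invented for the example; in a real TAF each step would drive the UI or an API underneath:

```python
# Minimal sketch of a test DSL layer: Automation Experts write tests
# in domain vocabulary, while the class hides the technical plumbing
# that the SDTs maintain underneath. All names are illustrative.
class ShopDsl:
    def __init__(self):
        self.basket = []
        self.confirmed = False

    # Domain-language verbs the AEs use directly:
    def add_to_basket(self, item, price):
        # Underneath, the real TAF would drive the UI or call an API.
        self.basket.append((item, price))
        return self

    def checkout(self):
        self.confirmed = bool(self.basket)
        return self

    def assert_order_confirmed(self):
        assert self.confirmed, "order was not confirmed"
        return self

# An AE-level test then reads almost like the business requirement:
def test_customer_can_buy_a_book():
    (ShopDsl()
        .add_to_basket("book", 12.0)
        .checkout()
        .assert_order_confirmed())
```

Because the steps return `self`, tests chain into sentences; when the application changes, only the DSL internals need updating, not every test.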

This layout of roles should not create silos. Instead, collaboration and freedom should be the norm, so that people can move from one side of the spectrum to the other as they feel comfortable and capable of doing the job. For example, an AE could implement new features or fix the TAF core solution if their technical skills are sufficient for the task. And an SDT could, and should, do exploratory testing on a regular basis in order to gain better knowledge of the business domain, which in turn helps a lot when expanding the scope of the TAF’s Domain Specific Language.

These three specializations of QA roles are the most effective way to build a Test Automation team that covers the two main pillars of automated testing. The first is to have a high-quality technical solution for testing; the other is to perform actual validation of the software under test. Test Automation does not directly translate into better Software Quality without the right set of roles running the show effectively.