Automation Journal (https://www.automationjournal.org)
Notes on software testing automation and everything required to do proper quality assurance in software.

Fearless approach to software development
Sat, 17 Jun 2017 (https://www.automationjournal.org/2017/06/17/fearless-approach-to-software-development/)

For those struggling to understand what is holding their teams back from performing better.

This post is a quick philosophical exercise that helps us understand the fundamental impediment that blocks improvement in software development processes (SDP).

“I think it is important to reason from first principles rather than by analogy.” Elon Musk

The Agile Fluency Model (AFM) is a "first principles" way of thinking about software development models. Taking fear as a first principle, we can easily see that the AFM is an inverse representation of how fearless an organisation's approach to software development is: from fully terrified, poorly managed organisations with zero stars to completely bold and fearless startups that want to change the world.

A quick review of the fear star levels would be as follows:

With zero stars are the organisations that are afraid to allow the technical teams to be responsible for organising their internal working processes. SDPs like waterfall or Scrum are imposed by a central committee of bureaucrats. Those pure bureaucrats usually do not know what they are actually doing, so how could they allow technical teams to decide the best way to deliver value?

With one star are the organisations that are afraid to allow the technical teams to be responsible for managing all the aspects necessary to deliver value. A typical project manager holding back the SDP's evolution would say, "Let the team play a little bit with this thing called Agile and let's see if they are happy enough."

With two stars are the organisations that are afraid to make the arrangements necessary to align the organisational structure with the needs of the technical teams. Now a top executive holds up the process by saying "Oops, now I need to change policies that have been in place for the last 20-odd years? Not while I am here in my comfortable seat!"

With three stars are the organisations that are afraid to change their own culture when facing optimisation challenges. The CEO objects with "Oh, no, our core values are carved in clay tablets."

With four stars are those organisations with the lowest level of fear, able to operate safely and yet fast enough to be a top market player. There is no one in charge whose job is to stop technical teams from innovating. Evolution is constant yet not chaotic. There is an antifragile culture, with respect for processes and all the other artefacts in place, in just the right amount to facilitate work oriented towards delivering business value.

When some management teams announce “We defined a new strategy!”, our natural reaction could be “Oh really? Finally!”

Soon after, when the project is under way and there is no room for changes, we realise that what they understood by "strategy" is some kind of master plan inspired by the film "Rise of the Planet of the Apes".

So next time they present a "new strategy" we shall be a little more cautious. To that end, we will review the concept of a software quality assurance (QA) strategy. We will then be better prepared to counter-attack the apes.

We should start with a quick review of some definitions. So, what is the meaning of the word "strategy"? The definition in the Cambridge Dictionary is as follows:

“a detailed plan for achieving success in situations such as war, politics, business, industry, or sport, or the skill of planning for such situations”

Therefore, by applying this definition to software QA, we could conclude, after some deliberation, that the definition of strategy in this context would be something like:

“A detailed plan for achieving success in software development or the skill of planning for such successful software development”

The key word in this second definition is "successful". In software engineering, we could quickly agree on a simple definition of success: "meeting the customer expectations initially wanted or hoped for". In a more specific definition, we could say something along the lines of "the team's outcome at the end of the development process needs to match the stakeholders' initial expectations".

Hence, the key to success in any QA strategy is to have a clear vision of the stakeholders' requirements. This is why Agile methodologies fervently recommend holding the three amigos meeting in the early steps of the development process. In this meeting, the stakeholders, the developers and the QA team refine the stories TOGETHER. Refining means that the three amigos talk about the solution and make sure they have a common and clear understanding of the subject, without assumptions or vague definitions.

All right then, with regard to software QA strategies, we may question whether there could ever be a user request like "I as user A want X number of issues at any point in time". Well, that seems a strange request, doesn't it?

ALL RIGHT, GOT IT! A software QA strategy can NEVER be successful if it is mainly focused on counting the number of issues, bugs or failing test cases in our solution.

The quick answer, for the impatient reader, is that the QA strategy should be embedded in the development strategy. The following explanation, for the avid reader, is based on the concept known as the SHIFT LEFT strategy.

We are going to use a simple and commonly overused parallelism in software engineering: the typical construction-building analogy. Yes, it is a controversial analogy that falls apart if taken too far. For the purpose of comparing strategies, we believe that this analogy still brings a lot of good resemblances.

Any good project begins with the three amigos meeting in order to define the shape and structure of the solution. Will it be a farm, an office building or a pet house? The team gets a clear and well-defined idea of the final solution; at the very least an initial overview should be defined. The three amigos agree on those basic components of the construction that will hardly change over time. We are not yet talking about architecture here: it is just a basic agreement on the hard assets, like the land required, the orientation of the building or its basic functional features.

Once the team has an initial vision of what is required to build, QA engineers should start the next phase of the construction. By QA engineers we refer to the lead developers and engineers responsible for building that initial structure.

The main goal in this step of the project is to validate that the three amigos' happy ideas are sound. This team of experts works with the common aim of verifying that the building is consistent and complies with all regulations. A testing strategy is defined in order to be able to find deviations from the original ideas. The construction site has to be safe. The builders should have the proper tool set, and so on.

The QA engineers BUILD UP the technical part of the strategy while working on the battlefield, hands on the solution. The technical part of the strategy cannot and should not be defined from an abstract management level. The QA engineering team should put on their "builder hats" and work in the field, finding the actual issues and practical solutions to build up an initial structure with a test harness appropriate for the project. The team prepares the land and builds up the initial structure with safe scaffolding after the three amigos meeting has defined the basic requirements of the project.

At a practical level, the QA engineering team should write a set of failing test cases, from unit to acceptance test level. The team should also build an initial, simple working script for an automatic build. The continuous delivery flow should also be completed in this step, even if only in an initial form with the minimum steps. All functional and non-functional features of the solution should be covered by the test harness. This step collects all the requirements as a set of test suites. The team does not work on writing test plans or requirements documentation; they just focus on writing a test suite to be executed manually and/or automatically.

In summary, the QA strategy for this phase would be defined by just these two actions:

1. Write failing test cases, of the required type, to cover newly discovered bugs at any point in time, from the first line of code all the way down to the maintenance phase of the project.
2. Write unit, integration and acceptance test cases for new features. Define all the required details in a clear BDD and TDD style.

Lead developers implement the initial coding architecture to shape the solution, with sound foundations and coded architectural services to hold the solution in place. They implement end-to-end slices of the project structure in order to validate, in particular, all those new technical requirements never implemented before. This team should cover all of this initial architecture with a wide and detailed test suite. This test suite will become the foundation of the well-known testing pyramid.

Once that initial structure and the scaffolding of the building are in place, we can invite the development team to scale the solution up. In this phase, the developers just focus on implementing against failing test cases. Making those test cases pass brings joy to the team, and progress can be monitored and estimated based on how many test cases have passed and how many have yet to pass.

The development team works with the simple aim of passing all the TCs defined in the test suite. The team implements and prepares all the required mocks and stubs to keep the test suite running as fast as possible. In this fashion, the development team can recognise bad decisions made earlier just by watching the evolution of the test results over time.

The team is then able to adjust and redefine TCs in order to influence the project's progress. The team can safely grow the solution with certainty, with constant verification that the original ideas (the structure of the building) are within the safe range of possibilities and comply with the initial plans for the building. If the original TCs start failing, it means that the architecture needs to be reviewed or the code fixed. Developers focus on implementing what has been requested and nothing else. The programmers focus their efforts on passing failing test cases, refactoring when needed and evolving the architecture within the allowed boundaries.

As a result, the team's motivation rises. Everyone is comfortable doing what they love. The three amigos write down the requirements in clear text and clean code. Lead developers put the basic structure in place. Developers implement code with the confidence of doing the right thing. As a result, stakeholders get what they originally wanted, and even the last-minute minor updates that would normally disrupt the development team are now taken on with joy.

What about estimations, planning and delivery? Simple! DO NOT OVERESTIMATE OR PLAN, JUST DELIVER!

In Agile methodologies, the development team should make soft plans with potential dates to deploy. Then the team reviews the plan every week, which is different from delaying the plan one week at a time. Product owners can agree to add only the test cases that cover the features required for the sprint. The team can then release the first successful automatic build that achieves the QA definition of done. The definition of done is a key part of the embedded QA strategy. For example, one of the conditions of the definition of done could be that at least 95% of test cases pass, and that among the failures none are critical, at most 2% are medium and at most 3% are low priority. If the solution is only 80% stable, then the team simply has to remove features from the sprint in order to release earlier. Just like a vehicle: while driving fast, if the vehicle gets unstable it is recommended to reduce SPEED in order to reduce RISK and thus regain control.
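The definition-of-done example above can be sketched as a simple release gate. The thresholds and function are illustrative, not a standard API:

```python
# Illustrative release gate implementing the definition of done described above:
# at least 95% of test cases pass, and among the failures none are critical,
# at most 2% (of all cases) are medium and at most 3% are low priority.
def definition_of_done(total: int, passed: int,
                       critical: int, medium: int, low: int) -> bool:
    if total == 0:
        return False
    return (passed / total >= 0.95
            and critical == 0
            and medium / total <= 0.02
            and low / total <= 0.03)

# 96 of 100 pass; the 4 failures are 1 medium and 3 low priority: releasable.
print(definition_of_done(100, 96, 0, 1, 3))   # True
# A single critical failure blocks the release regardless of the pass rate.
print(definition_of_done(100, 99, 1, 0, 0))   # False
```

A gate like this can run as the last step of the automatic build, so a release candidate is produced only when the definition of done holds.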

In conclusion, a better and more efficient QA strategy consists of shifting all QA activities left. QA tasks should start at the very beginning of the development process by being embedded into that same process. This means that the QA strategy is embedded in the development strategy by providing, in advance, clear and practical test case definitions. Without test suites defined in advance, the whole organisation falls into a chaotic work style where no one knows for certain what is going on. The test suite, NOT THE TEST PLAN nor any other higher-level document, can actually work as the road map of the project. The test suite is the only tangible piece of work that developers can rely on when moving the project forward in order to know its actual progress. Once developers are in control of the code, any managerial body will also know the actual status of the project. The opposite does not work at all: the development team gets completely lost if plans are drawn from the stars and no one in the team knows the science of interpreting those ever-changing constellations. Hence, shifting left cures all those bad-QA-strategy pains by directly attacking the root cause of the issues.

I hope it helps, please leave a comment if you would like to add something.

To make good use of my time while "waiting for the stars to align", which means waiting for some managers to remove impediments, I started writing down some real-life notes about how to set things up for a miserably "failing" test automation strategy.

Let's start with the communication details. The first thing you should completely avoid is talking to product owners or other people holding job titles containing words like "Manager", "Director", "Chief" or "God". If you are so unlucky that they come to you and ask what you are doing, under no circumstances explain what you are going to do and how you can benefit them with your craftsmanship. A good enough reply would be something along the lines of "Oh, well… blah blah blah" (insert an obscure, verbose technical sentence here, so complicated that you don't even understand it yourself). And hold your face straight, without any doubts or smiles, until they fade away thinking that they are such big shots and you are such a big guru in your subject.

The next step, after you avoid explaining any detail about what you are going to do, is to avoid any collaboration with developers and manual testers. They are in such a different league of competence that they will never understand what you are trying to do. Do not waste time explaining to them what your needs are and how you could help the team achieve a better outcome for their craft.

The developers are very busy creating what is popularly known as technical debt. Nowadays everyone likes debt, which is what holds the world together. If you do not have debt, you are not human, so coders are dedicated full time to that goal. Do not disturb them.

As for the manual testing team, well, they are almost testing a completely different application. Their testing has nothing to do with automation, so again it is a complete waste of your time to try to align with them on which tests should and should not be automated. Do not even think about disturbing the peaceful, boring, procedural manual testing either.

The last thing you must not do is spend time trying to convince people that test automation could help everyone on earth. That is a taboo subject and you risk your job, and potentially your life, if you even just think about it. If you keep trying to do the right thing, your teammates will see some sort of automation symbol in your eyes and you will be branded as a strange geek.

I will stop here for now because I am getting so excited that I could smash the laptop against the window and…

Keep safe. Keep playing the game. At the end of the day, most people are having an immense, hilarious amount of unquantifiable fun!

This post is open for collaborations. I will add any good contributions on this subject. I am sure that after all this time working on our beloved testing projects, we have learnt something, haven't we?

“Those who cannot learn from history are doomed to repeat it.” George Santayana

For those who want to avoid all these failures, here is a professional resource to back up our miserable experiences: in chapter 9, "RoI Robbers in Test Automation", Greg Paskal gives a superb list of points that have to be considered when building up a testing strategy.

Update by Francesco Calvino

To add to the not-to-do list, make sure that your core application hasn't got any attributes like:

"readability" (the ability to understand what the app does just by reading the code)

"testability" (having elements properly and clearly identified on the page, log files that explain the error that has just occurred, and an application that is not coupled to its test data)

"stability" (the test environment should be stable, not going up and down like a yo-yo)

Also avoid decoupling: make sure the tests depend on the data that feeds them. And, why not, you should not have instrumented builds (a build is instrumented when your application is enriched with third-party tools that enable automation tests to run). Although this last one is actually not a bad rule: having an instrumented build is the surest way to know that the application under test is NOT exactly the same as the application that you will ultimately ship to your customers, and thus that all your testing campaigns will be utterly useless.
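Turning the decoupling point around into what you should actually do: tests can be parameterized over externally supplied data instead of hard-coding it. A minimal sketch; the login data and function are invented for illustration:

```python
# Test data lives outside the test logic, so either can change independently.
# In a real project this table could be loaded from a CSV or JSON file.
LOGIN_CASES = [
    # (username, password, expected_ok)
    ("alice", "correct-horse", True),
    ("alice", "wrong", False),
    ("", "anything", False),
]

def check_login(username: str, password: str) -> bool:
    # Hypothetical system under test.
    return username == "alice" and password == "correct-horse"

def test_login_cases(cases=LOGIN_CASES):
    # One test body, many data rows: new cases need no new test code.
    for username, password, expected in cases:
        assert check_login(username, password) == expected

test_login_cases()
```

Frameworks such as pytest offer `@pytest.mark.parametrize` for the same idea with per-case reporting.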

For now I think that is enough, Marc…

Many thanks Francesco. I hope it helps, please leave a comment if you would like to add something.

This was a workshop session. During the morning, Russ delivered a master lesson on how to prepare our minds for micro-services solutions. Here I present just some of the key concepts from the theoretical part of the session.

Do not aim for the right / best solution, rather aim for the seed of an evolving organism.

Describe the system by its events only. Consider the things that happen in order to find the events of a system. Create a ubiquitous language not about the entities of the model, but about the events between those entities.

Use a more scientific method approach.

Events are immutable. They must be logged and persisted.

Find meaningful event names.

Avoid acronyms, they are an impediment on communication.

Avoid code names for systems, rather describe what the system does.

Apply the single responsibility principle to events.

Good names come from meetings with stakeholders while describing the system.

Find the causality of events: the flow in which events trigger other events.

Define the bounded context: the context within which the team knows what happens and what the events are, and can evolve the solution based purely on internal / technical reasons.
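A few of these points (immutable events, meaningful names, single responsibility) can be sketched in code. The event and its fields are invented for illustration:

```python
from dataclasses import dataclass
from datetime import datetime

# An immutable event with a meaningful, past-tense name describing one thing
# that happened (single responsibility), not the entity that caused it.
@dataclass(frozen=True)
class OrderShipped:
    order_id: str
    shipped_at: datetime

event = OrderShipped(order_id="ORD-42", shipped_at=datetime(2017, 6, 17))

# Events are immutable: attempting to change one raises an error,
# so a logged and persisted event can never be rewritten afterwards.
try:
    event.order_id = "ORD-43"
except Exception:
    print("events cannot be mutated")
```

The frozen dataclass is one of several ways to enforce immutability; the point is that the event, once emitted, is a fact.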

The workshop was much more fun than this list of concepts. Russ is a top-level speaker and I would even say he reaches the level of a technologist-philosopher, with well-rooted, practical principles. It is hard to summarise all the content that he delivered in this session. It would probably be much better to keep an eye out and attend Miles's next master session.

I hope it helps, please leave a comment if you would like to add something.

Do not automate stories at the 4th and 5th levels; do spikes and understand first by trying / testing. (Create failing test cases.)

Convert 4th- and 5th-level stories into 3rd-, 2nd- and 1st-level stories.

Write "spiky code" before production code at the 4th and 5th levels. At lower levels we write production code.

For stories at the 1st level, test manually just once. Do not write BDD for obvious stories; only create BDD test cases for complicated and maybe some complex stories. Do not plan for obvious stories, just do them.
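For a complicated story, a BDD-style test case makes the expected behaviour explicit. A minimal sketch in plain Python, with an invented story about order totals (a real project might use a framework such as behave or pytest-bdd instead):

```python
# Hypothetical system under test for the story
# "a customer's order total gets a 10% bulk discount above 100 units".
def order_total(unit_price: float, quantity: int) -> float:
    total = unit_price * quantity
    return total * 0.9 if quantity > 100 else total

def test_bulk_discount_applies_above_100_units():
    # Given a unit price of 2.00 and an order of 200 units
    unit_price, quantity = 2.0, 200
    # When the order total is calculated
    total = order_total(unit_price, quantity)
    # Then the 10% bulk discount is applied
    assert total == 360.0

test_bulk_discount_applies_above_100_units()
```

The Given / When / Then structure keeps the test readable by all three amigos, not only by developers.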

New things continuously arriving is part of the game. Reduce problems so that they fit within the boundaries of the team's influence. Break dependencies outside of the team.

Legacy projects are at the 4th and 5th levels of ignorance.

“Teams can spike, learn from the spike, then take their learning into more stable production code later (Dan North calls this “Spike and Stabilize”). Risk gets addressed earlier in a project, rather than later. Fantastic!” Liz Keogh

That was all for this session. I hope it helps, please leave a comment if you would like to add something.

Java Mission Control: Oracle® Java Mission Control for Eclipse is a set of plug-ins for the Eclipse IDE designed to help develop, profile and diagnose applications running on the Oracle® Java HotSpot VM.