A new team's encounter with DET/TET as a framework for testing – Part 4: Facilitation

First of all, I want to be clear that this series of posts has nothing to do with my work at Atlassian. These posts describe experiences from previous assignments with Jayway, and I am only now gathering my thoughts on them.

Facilitation

Having 12–14 people involved in testing 2 hours a week is an effort that needs preparation and facilitation. This is something I usually propose, and the need was acknowledged rather quickly in this team.

It ended up being a rotating task within the development team to take on the facilitator role for the week. However, certain focus areas naturally suited a particular person as facilitator, which meant that some people got the role more often during my time there.

I personally coached the facilitator of the week through the preparations and all the way through the test session, debrief and meta-debrief. Here are some aspects of facilitation that we considered.

Plan focus area

Together with the PO and dev manager, the facilitator discusses and decides on the focus area of the week. It is also important to isolate the parts of the focus area suited for test pairs. Planning the focus area includes scoping test ideas as well as framing the testing that needs to be done.

Set and publish agenda

For the team to know what would be tested next time, and to give everyone the possibility to opt in or opt out, the agenda was sent out at the beginning of the week. This usually also triggered spontaneous responses with test ideas, which was nice to see.

Participants

Both the opt-in and opt-out systems tended to be fragile to change, meaning it was hard to know who was actually taking part in testing until very close to the session. This caused frustration when planning the test pairs and deciding which areas to test. To mitigate this, we decided to always have a thought-out three-bullet strategy for last-minute changes. For example:

Combine test areas

Pairs can become triplets

Scope out focus areas

This strategy also proved useful when certain aspects of a focus area turned out to be blocked from testing for some reason, such as environment problems or a lack of test data.

Pairing

There are some aspects for the facilitator to consider when creating pairs:

Personal profile/role (dev/support/business)

Relevant test environments

Physical location

Test environments

The facilitator is responsible for making sure the relevant test environments are in place. This usually also included setting up new environments as part of the session preparations. It was the biggest reason for having a weekly facilitator role, since it became very visible early on that preparation was crucial for session success.

Another aspect we had to deal with was the need for specific client test environments. When this was needed, we had everyone specify which types of environments they already had installed, and created good test pairs accordingly. The environments to cover included operating systems, Java versions and browsers, among other things.
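The pairing-for-coverage idea above can be sketched in code. This is a minimal illustration, not the team's actual process: the names, environments and the greedy strategy are all my own assumptions about how one might match participants so each pair covers as many client environments as possible.

```python
from itertools import combinations

# Hypothetical participant data: name -> set of installed client environments.
# (All names and environment labels here are illustrative only.)
participants = {
    "Anna":  {"Windows", "Java 8", "Chrome"},
    "Bjorn": {"macOS", "Java 11", "Safari"},
    "Carin": {"Linux", "Java 8", "Firefox"},
    "David": {"Windows", "Java 11", "Edge"},
}

def pair_for_coverage(people):
    """Greedily form pairs whose combined environments cover the most ground."""
    remaining = dict(people)
    pairs = []
    while len(remaining) >= 2:
        # Pick the pair whose union of environments is largest.
        best = max(
            combinations(remaining, 2),
            key=lambda p: len(remaining[p[0]] | remaining[p[1]]),
        )
        pairs.append((best, remaining[best[0]] | remaining[best[1]]))
        for name in best:
            del remaining[name]
    return pairs

for (a, b), envs in pair_for_coverage(participants):
    print(f"{a} + {b}: covers {sorted(envs)}")
```

A greedy approach like this is obviously simplistic; in practice we also weighed in roles and seating, but it captures the core idea of pairing people with complementary setups.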

During testing

I have seen very clearly that the facilitator really should not be part of a test pair themselves, and that was also the case in this team. By hovering over the session, the facilitator could help pairs with test environments and data, as well as point them in new directions if they got stuck. That last part demands some skill and test-idea generation before the session.

Also, during one session, a pair accidentally tampered too much with certain areas of the environment, and the facilitator then steered the other pairs away from that contamination.

Debrief

The facilitator usually knows more about the areas under test, and will therefore have a central role in the debrief. It is important to keep this in mind when facilitating a discussion about code that you wrote yourself, your baby. You are a stakeholder in those discussions, and you will be biased.

The debrief that I promote is a two-step activity. Directly after everyone has been testing, there needs to be time to express the most urgent findings, so every pair gets a very brief slot to answer the questions "How did it go?" and "What did you end up testing?". Usually these questions are not even needed to trigger a report; the most common case is that you have to cut pairs off so everyone else gets their brief moment. After the first round, it is time to dig into the details from every pair and decide what counts as bugs, improvements or other relevant findings. I usually encourage discussion, but it also needs to be moderated if you want to get through everything that was found.

Sometimes we saw the need to set certain things aside for a triage by dev manager and PO. These included decisions that needed more information as well as pure business decisions.
