Posts by Richard Kieft

Does your daily standup often take longer than 15 minutes? Do you often get bored during (yet another) endless discussion at the daily standup? Time to rethink your daily standups!

Back to the basics.

So back to the basics: what are daily standups all about? The answer is very simple: the scrum team meets every day at the same time (and usually at the same location) and each team member reports their progress. Each team member answers the following questions, and no more than that!

What have I been doing since the last standup?

What will I be doing today?

Do I encounter any impediments?

Stick to these questions, and no more than that. Any other discussion should be taken outside the daily standup. Keep in mind that most of the time a discussion is only interesting for a few team members, not the whole team. I have attended daily standups where a discussion between two developers took over 10 minutes; no explanation needed that most of the other team members got bored and started checking their phones. Keep the focus on these three questions and the standup will not last longer than 15 minutes.

Been there, done that, got the T-shirt.

I have attended many daily standups, and the one thing I noticed most is that the team members line up the same way every day. For example: member A (tester), member B (tester), member C (developer), member D (developer), member E (developer). This is not a bad thing in itself, but I noticed that when member A was talking, the only one actively listening was member B. When it was member C's turn, members A and B were the ones who leaned back, and members D and E suddenly woke up. The reason was that the testers discussed their work in detail the developers felt less involved with, and the developers discussed coding difficulties that were less interesting to the testers.

Making it interesting and fun again.

We resolved the above behavior by sticking to the three questions and keeping details out of the daily standup. We more or less made it a team effort to get a user story to done as soon as possible by only naming what we had done and what we still needed to do. Remember: the standup is all about progress! Everyone understood the goal, and sticking to it, not to the details, made it more interesting for everyone. We eventually cheered when we got to burn down a user story. This made it feel like something done by the team instead of a tester deciding whether the user story was done or not.

Another way to make the standup more interesting is to use a ball (any small ball will do) to throw to the person you want to hear next. With the ball it is not necessary to get the team to line up differently every time; mixing up the line-up is difficult anyway, as people tend to stick to habits. So the scrum master throws the ball (gently, if you will) to the first person, who answers the three questions and then throws it to whoever he or she would like to hear next. Since most people like to have fun at work, consider throwing the ball (gently!) to the one who is just taking a sip of coffee and see what happens 🙂

Another thing we noticed is that some people like talking more than is strictly necessary for a standup meeting. So we set a maximum of two minutes of talking per person; it should easily be possible to answer the three questions in that time. Any impediments or problems were discussed directly after the standup, with only the people involved attending.

Set an example by being prepared. A year ago I was added to an existing scrum team on a project almost ready to ship its version 1.0. During the standups I noticed that a lot of people struggled to remember what they had done the day before. What I already did was write down my activities in just a few words on a sticky note. For example:

Tested US-123

Helped with unit tests US-456

Automated acceptance test US-789

I didn’t have to hesitate before telling my colleagues what I had been doing and which task(s) I would pick up next. After a few daily standups, more and more of my colleagues did the same, which reduced the total time of the standup. Besides that, it is very helpful to have a short list of your activities ready so that you don’t have to recap everything you’ve done while another team member is reporting their progress.

Yesterday I was asked to add some kind of reporting to our Selenium test suite. I had some ideas for creating a report in Excel, but that would have meant writing a lot of code and probably reinventing the wheel. After some time on Google I came across the website of Bas Dijkstra, where he reviewed ExtentReports version 1.4. Before reading the rest of my post you might want to read his post first. After reading it I got pretty excited about this library! Next I visited the website of the creator of ExtentReports, RelevantCodes.com. I installed the latest version (at the time 2.04) by adding it to my Maven pom.xml and gave it a go. Below is my approach and some tips.

Installation:

For installation I used maven. Add the following to the pom.xml of your test project:
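Assuming the com.relevantcodes artifact from the 2.x era of the library (check Maven Central or RelevantCodes.com for the exact coordinates and version available to you), the dependency looks something like this:

```xml
<dependency>
    <groupId>com.relevantcodes</groupId>
    <artifactId>extentreports</artifactId>
    <version>2.04</version>
</dependency>
```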

Open your command prompt, navigate to wherever your pom.xml is located and execute the following command:

mvn install -DskipTests
And you’re ready to go!

Using JUnit? Use a @Rule annotation!

Since we had already developed a “complete” automated test suite, I didn’t want to add a new reporting line after every test we had written. While experimenting with this library you could do that, but later on you will probably want to put this at a higher level in your test suite.
First of all, a little about the test suite. I’m using Selenium WebDriver (Java) and I built our suite using the Page Object pattern. I wrote a baseTestClass that all other tests extend. The baseTestClass’s responsibility is to set up Selenium (start a browser, navigate to a URL, truncate a database, etc.). Earlier I wrote a screenshotTaker class which takes a screenshot of the browser when a test fails; this class has been added as a rule to the baseTestClass. That seemed like a good starting point to see whether a report gets created when a test fails and what it looks like. For now I just added a line of code to the “failed” method of this specific class:

BaseTestClass snippet: the @Rule is where the screenshotTakerClass is hooked in.
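A minimal sketch of what such a rule could look like, assuming the version 2.x ExtentReports API from com.relevantcodes (class names like BaseTestClass and the report path are placeholders, not necessarily the ones from my suite; the screenshot call would live in the same failed method):

```java
import com.relevantcodes.extentreports.ExtentReports;
import com.relevantcodes.extentreports.ExtentTest;
import com.relevantcodes.extentreports.LogStatus;
import org.junit.Rule;
import org.junit.rules.TestWatcher;
import org.junit.runner.Description;

public class BaseTestClass {

    // One report instance for the whole run; true = replace an existing file.
    private static final ExtentReports report =
            new ExtentReports("target/extent-report.html", true);

    protected ExtentTest test;

    // The rule fires around every test: start an ExtentTest, log pass/fail,
    // and flush the report. The failure branch is where the screenshot
    // taker is also invoked.
    @Rule
    public TestWatcher reporter = new TestWatcher() {
        @Override
        protected void starting(Description description) {
            test = report.startTest(description.getMethodName());
        }

        @Override
        protected void failed(Throwable e, Description description) {
            // Screenshot logic would go here as well.
            test.log(LogStatus.FAIL, e.getMessage());
        }

        @Override
        protected void succeeded(Description description) {
            test.log(LogStatus.PASS, "Test passed");
        }

        @Override
        protected void finished(Description description) {
            report.endTest(test);
            report.flush();
        }
    };
}
```

Because the rule lives in the base class, every extending test class gets reporting for free without touching the tests themselves.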

That’s it for now! After these modifications the report suited me just fine. The only thing left is to rename the screenshotTakerClass, since it now has more responsibilities.
I think ExtentReports is a very nice library when you need to build reports for Selenium. It will probably also fit other Java-based test tools.

Exploratory testing.

Not long ago I had a discussion with someone from my organization about testing and the way we test our product. We had an almost endless discussion about what exploratory testing is and why we practice it. I was shocked to hear him say: “Exploratory testing should be a tester’s only thing”. From my perspective it doesn’t have to be testers-only: testers could surely use the input of consultants and/or developers when exploratory testing, and when needed perhaps we should test together! I got the feeling he had no clue what exploratory testing really means and what it could mean for the stability of our product. Perhaps you have experienced the same discussion, or are experiencing it right now. So here is what I think about exploratory testing.

So what is exploratory testing?

Let’s start off with why we do exploratory testing. Many of you reading this article will probably have heard the term “exploratory testing” before, since so much has already been written about the topic, and many testers have been applying this way of testing for some time.

In short, exploratory testing is testing a system under test (or a part of it) without following a strict test script. Take a part of the system where multiple components come together and test the interaction of those components. It will give you information about how the system works and whether the interaction between components is correct and understandable for the (often less technical) end user. You will be designing and executing test cases on the spot. Use your curiosity to experiment with the components and see whether whatever happens is what you expected.

By exploring the functionality you will probably execute existing test cases (the happy path), but you will also test paths not covered by test cases (perhaps resulting in new (automated) test cases to be executed). The more you know about the product being developed and the more experienced you are with testing, the better the result of the exploratory test.

Is exploratory testing only for testers?

From my perspective I would say: no, not entirely! Perhaps an exploratory test works best when executed by a tester, since our testing skills and mindset help when testing. But as I mentioned before: the more you know about the product being developed… That indicates that everyone from developers to consultants can be involved in exploratory testing. Testers are not the only ones with knowledge about the product. For example: consultants usually know how one or more customers use the product. This knowledge can be of great value when they are testing; not all developers and testers will have it.

Developers, for example, know how they built the product and where to find the “weak” spots in the application. They can point these out for you to test, but can also test them themselves, learn, fix bugs and deliver a better (part of the) product. Developers can also practice exploratory testing when doing a code review: besides literally reviewing the code, click through the component if possible and look for simple bugs. That’s not to say testers are obsolete; I just think testers shouldn’t be bothered with finding bugs that block the flow after just two mouse clicks.

Why exploratory testing is so important:

Following test scripts is repetitive work; testing a new release for the second time by following a test script will probably not give you new results. It is unlikely that these tests will suddenly fail. By exploring (parts of) the system under test you will probably find new results and new test cases; write them down and perhaps automate them. The first thing I do when I’m done testing a new version of our application via a manual regression test script is exploratory testing. This can also be done when you’re in an agile team and the user story is completed because it reached its definition of done: after that, there is nothing holding you back from exploring the functionality and setting it to done when you as a tester are completely satisfied with the result.

How to overcome some of the downsides of Exploratory testing

Unfortunately, exploratory testing also has some downsides. The main problem I experienced is that you can easily lose track of time when exploratory testing; you have to watch out that it does not turn into some kind of exhaustive testing. The best practice for me is to time-box each exploratory testing session. For each time frame I pick one or two (when testing interaction) particular parts of the system under test, set some objectives for what to test, write down or log my findings, and when the time is up I move on to the next test.

An easy and lightweight tool to help you create sessions for exploratory testing is Sessiontester. As described on their website: “It has a timer so you limit your test sessions to a particular length, and it provides an easy way to record session notes.”

When to apply exploratory testing

Here is a summary of situations in which you could prefer applying exploratory testing:

When testing with test scripts is completed but you do not feel totally confident about the delivered functionality.

When fast feedback about a functionality is needed.

When the functionality is already mature and no or only minor changes have been made.

When there is no time to execute a (scripted) functional test but feedback is still required.

When testing the frontend of a web application with multiple browsers (Internet Explorer, Firefox, Safari, etc.).