When all tests are done and you think you’ve done enough to justify a release, think again. If there is still time, sleep on it for a night.

The extra time allows you to defocus, to be creative, and to look for other angles. If you step back from your tests, you usually get a better eye for the bigger picture. That results in new “what happens if…” questions, new test approaches, and new test techniques.

How much extra time is needed depends on the project, the person, and the System Under Test. Sometimes a cup of coffee or a short break is sufficient. In other cases a night or a weekend gives better results.

In my experience new tests will pop up, and some of them even reveal previously undiscovered bugs. It is often well worth the extra time.

I was involved in an interesting discussion on Twitter the other day. It started with a tweet by @testingqa (Guy Mason): “Still of the opinion that ‘Automated Testing’ is a deceptive term,no testing is being performed,it should be called ‘Automated Checking’ #qa“. With that he probably referred to Michael Bolton’s blog post arguing that there is a difference between testing and checking.

After that blog post, lots of people, mainly automation sceptics, stated that Automated Testing should be called Automated Checking. Although I acknowledge and agree that there is a difference between Testing and Checking, I don’t think it should be called Automated Checking. I’ll explain why not below, but first the rest of the Twitter conversation:

I responded to @testingqa’s tweet with: “@testingqa but that’s not true either. Automated Testing is more than checks. There are actions as well.”

@testingqa: “@ArjanKranenburg Automation does gather info & does allow one to verify response but distinction remains that it only checks to verify…”

@testingqa: “@ArjanKranenburg …if it meets expected outcome (or not). Preparing/cleanup does not reveal new information though and reports based on…”

@testingqa: “@ArjanKranenburg … checks only verifies against expectations still. As @michaelbolton said it can be used for more, but most times do not.”

To summarize Michael’s blog (but please read his whole blog series on testing vs. checking, because there is more to it): an important difference between testing and checking is that testing requires cognition to interpret the information revealed by one or more checks. But I’d like to extend that, as I think testing is more than checking plus interpretation.

And this becomes apparent when trying to automate a test. The base of a test consists of actions, checks, and interpretation of the retrieved information. Most Systems Under Test (SUTs) need to be tickled before they respond. You need to click a button, send a request, press a key, and so on. In theory, an SUT can do things without an external stimulus, but in most cases it doesn’t.

Then the SUT responds; that response can be checked for certain aspects, and the revealed information must be interpreted. If you state that Automated Testing should be called Automated Checking because the cognitive part can’t be automated, you’re ignoring the actions that can, and often need to, be automated as well.
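To make the action/check distinction concrete, here is a minimal sketch in Python against an invented toy SUT (all names are made up; nothing here is a real API):

```python
# Actions vs. checks in an automated test, against an invented toy SUT.

class CartSUT:
    """Stand-in for a real System Under Test; purely illustrative."""
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

def test_cart_total():
    cart = CartSUT()
    cart.add("book", 10.0)   # ACTION: the SUT must be stimulated first
    cart.add("pen", 2.5)     # ACTION: another stimulus
    response = cart.total()  # ACTION: trigger the response we want to examine
    assert response == 12.5  # CHECK: compare the response to an expectation

test_cart_total()
```

The single assert is the check; everything before it is action. A script that could only “check” would never even reach the point where checking is possible.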

And there is more:

Before starting the actual test, you need to prepare the SUT to make sure it is in the correct state for the test to execute.

Automated test cases are often run in a batch, so it is good practice to restore the SUT to its original state afterwards.

Since interpretation of the results is still needed, the results need to be presented in a tester-friendly way. What and how you report is very important.

etc. etc.

All these activities can be automated as well and are often included in the automated test script.

My point is: if you don’t want to use the term Automated Testing, call it Automation-Assisted Testing (I like that), but Automated Checking simply doesn’t cover the activities done in Automated Testing.

Lately, it’s popular to have tests driven by something. We have long had Requirements-based and Risk-based testing, which undoubtedly would have been called Requirements-driven and Risk-driven Testing if they had been thought of in the past decade. Then there are Use Case-driven, Design-driven, Data-driven, Keyword-driven, Model-driven, and Business-driven testing. I’ve probably forgotten a few, but Common Sense-driven testing never seems to be an option.

What’s missing in the discussion is that in most cases it is best to combine approaches and techniques to get a diversified test approach. If you consider the types of bugs that can exist in your application, and that every type of bug has its own best way to be detected, it is common sense that different approaches and techniques should be applied. There are approaches that will let you find the majority of the bugs, but they may not be the best way to find certain types of bugs.

Approaches should be interpreted broadly. Some bugs are easily detected by reviews, whiteboard sessions, or other static testing methods. What the ‘best way’ is depends on your product, your organization, and the circumstances, and it is certainly easier said than determined. But I don’t believe there are situations where only one approach is best. And trying to find a bug in a later phase may sometimes be the best option as well.

If you rely on one approach, it gets harder and harder to find the next bug, and there is a substantial risk that not all bugs will be found. To minimize that risk, diversifying test techniques and approaches is the logical thing to do.

At some conferences, time and space is reserved for shameless book promotion. Sometimes entire reviews are published in magazines, and if you have written a book, you are almost automatically considered the authority on the subject. Test books are popular, and in these days of crisis it seems that even more books are being published.

Personally, I don’t read a lot of books. At least not on the subject of testing, and here is why:

A book is an old technology. There is no interaction, no feedback, no discussion. Especially on the subject of testing, interaction and discussion are essential.

The content is already old when the book is published. The IT world moves faster and faster these days, but books must be reviewed, edited, printed, distributed, etc. And a good book sparks discussion. That discussion is never printed (see point 1), and corrections resulting from it are only published in later editions of the book.

Authors of test books often write them for their own promotion. It looks good on their CV, not on that of the reader.

Test books are rarely based on solid research, such as the kind done at universities. This makes the foundation very thin and often applicable only to a very specific situation. What is left are opinions, and blogs and online fora are far more suitable for those.

I am, and this is a personal one, a slow reader. I simply don’t have the time to read boring books of 400+ pages.

This does not mean that I don’t educate myself. I read a lot of blogs and magazines, participate in online fora, and go to events and conferences when possible. For me these are valuable sources of information. They provide me with tips, new insights, hints, etc., and keep me up to date with the latest from the testing field. And in a much faster, more honest, and more direct manner.

No doubt there are exceptions, and books exist that do not suffer from the drawbacks mentioned above. Let me know if you’ve found one.

A test ideology that I totally agree with is Context-Driven Testing, but I will never describe my activities that way. In short, Context-Driven Testing says that the best way of testing depends on the context. There are so many external factors influencing your test activities that there is no one best practice that will work for all.

So why won’t I use the term Context-Driven Testing?

Because it is a typical engineer’s answer to the question of how to test: “it depends”. This is true, but the answer will not get you any further.

Of course it depends. Every project is different. Every team is different. The customer, the budget, and the timing are all different. So it makes sense that testing is different as well. Therefore every test project should start with a good thinking session about how the test activities should be done. (Actually, the first question is whether testing needs to be done at all.)

And best practices, as well as experience, can help you choose a strategy, techniques, tools, etc. Think of them as a list of tips & tricks. Pick the ones you think are suitable, or invent your own. Whether they are right for the job is your responsibility, not the author’s. No one ever claims that best practices are universally applicable.

Besides, best practices already have a context in them. Testing in traditional engineering is very different from testing software. That’s why best practices are often presented as “Best practices in Software Testing” or “Best practices for Model Based Testing in an Embedded PLC Solution”. But unfortunately that context is not always copied along, and the Best Practice ends up on a list of general Test Best Practices.

Nonetheless, people talk about testing as if it’s all the same. Often you hear a presentation about a strategy for testing a web application, and people ask questions with the testing of an embedded application in mind. When the context is forgotten, miscommunication is the result.

Model Based Testing (MBT) is a term frequently used these days. It sounds good, and just from the words it seems like a good thing to do. But is it really worth the hype?

The strict definition of Model Based Testing is that you base your testing on one or more models. This still sounds good, but less so if you consider the following:

A model is a simplification of reality. So if your tests are solely based on one or more models, you’re bound to leave some paths untested.

If code is also generated from the same model, you’re testing the generation process instead of the product or the model. Even if the code was not generated automatically, but the developer also based his code on the model, it is not likely that many faults will be found.

Models must be created or provided.

Models can contain faults as well.
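To illustrate MBT in the strict sense, and its first drawback, here is a toy sketch: an invented state-machine model of a media player, from which action sequences (test paths) are generated. Everything the model omits, say a fast-forward feature, will never appear in a generated test.

```python
# A toy state-machine model of an invented media player, and test paths
# generated from it (Model Based Testing in the strict sense).
# Because the model is a simplification of reality, any behaviour it
# omits (e.g. fast-forward) is never exercised by a generated test.

MODEL = {
    "stopped": {"play": "playing"},
    "playing": {"pause": "paused", "stop": "stopped"},
    "paused":  {"play": "playing", "stop": "stopped"},
}

def generate_paths(start, depth):
    """Enumerate every action sequence of exactly `depth` transitions."""
    paths = []

    def walk(state, path):
        if len(path) == depth:
            paths.append(path)
            return
        for action, next_state in MODEL[state].items():
            walk(next_state, path + [action])

    walk(start, [])
    return paths

for path in generate_paths("stopped", 3):
    print(" -> ".join(path))
```

And a faulty model silently propagates into every generated test, which illustrates the last point above.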

But lately the term is also used for tests described by models. The model still describes the system, because every test case does, but the primary purpose of the model is to describe the test case: what path is followed, what actions are to be taken, and what is to be expected. Although I wouldn’t call this Model Based Testing (Test Case Modeling is perhaps a better term), I do see great benefit in it.

In earlier, pre-Agile times, long descriptions used to be written for every test case, followed by another description of the next test case that differed by only a few (essential) words. Later, Excel was used to describe test cases with keywords and action words. Test Case Modeling can be the next step in this evolution of describing test cases, visualizing them with the most suitable model type.
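As a sketch of that evolution, here is the keyword/action-word style in Python instead of Excel: each row of a test case is an action word plus arguments, executed by a small driver. The SUT and the keywords are invented for illustration.

```python
# Keyword-driven test-case description: each row is an action word plus
# arguments, interpreted by a small driver. SUT and keywords are invented.

class CounterSUT:
    """Stand-in for a real System Under Test."""
    def __init__(self):
        self.value = 0

    def increment(self, by):
        self.value += by

    def reset(self):
        self.value = 0

def check_value(sut, expected):
    assert sut.value == expected, f"expected {expected}, got {sut.value}"

KEYWORDS = {
    "increment":   lambda sut, by: sut.increment(int(by)),
    "reset":       lambda sut: sut.reset(),
    "check_value": lambda sut, expected: check_value(sut, int(expected)),
}

def run(test_case):
    """Execute one test case described as rows of (keyword, *arguments)."""
    sut = CounterSUT()
    for keyword, *args in test_case:
        KEYWORDS[keyword](sut, *args)

# A test case as it might have lived in an Excel sheet: rows of action words.
run([
    ("increment", "3"),
    ("increment", "4"),
    ("check_value", "7"),
])
```

A Test Case Model would add a visual layer on top of rows like these, for example a flowchart whose nodes map to the same action words.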