The subject of last week’s conference was “Free and Open Source Testing Tools”, and there were 3 presentations, together with great food and a lot of networking.

The evening was kicked off, as usual, by the very competent Alon Linetzki, the founder of SIGiST Israel, who introduced the 3 speakers and updated the audience with details of the 2009 summer conference, which will be held from the 29th June to 2nd July, focusing on “Adding Business Value and Increasing ROI”.

There will be 2 days of workshops followed by 2 days of track sessions. Some of the workshops and talks will include TPI, Agile Testing and Performance testing, and among the speakers will be international names, such as Andy Redwood and Mieke Gevers.

Dudu Bassa, of Sela, one of the SIGiST’s biggest supporters, made a few announcements relating to test engineers and test managers (if you are one, looking for a job, or your organization needs one, you can contact him).

Dudu also announced that Yaron Tsubery, the ITCB President, is running for Presidency of the ISTQB, and has a good chance of being elected. If he is elected, it will be a great success for Yaron himself (of course), but also a vote of confidence for the whole testing profession in Israel.

Yaron – we wish you the best of luck!

OK, so back to last week.

SNAP, by SAP

The first lecture was given by Asaf Saar, QA Manager at SAP and initiator of the SAP Netweaver Automation Platform (aka SNAP). Asaf described the complexities that his team has to deal with:

multi-site development and integration, including tests developed in many countries

many technologies and frameworks, including frameworks for black box and white box testing, developed in house – each office having developed its own …

more than 10,000 UI automation scripts, and more than 100,000 Java unit tests (we should all be so lucky!)

too many solutions that can’t be synchronized

lack of usability

need to manage the Life Cycle – run it all, get reports, analyse the reports – and currently each type of report sits somewhere else: XML, database, Access, automation report, etc.

all tests needed to be in the SAP test repository, 20 years old, with millions of tests from the whole world, and no API …

In short – unmanageable!

And from the unmanageable grew the idea of SNAP, which was developed by just 2 test engineers and a student over a 10-month period, in parallel to their regular work!

How did they do it?

SNAP was developed based on Visual WebGui, an open source RIA (rich internet application) development & deployment platform atop standard .NET. Visual WebGui, from a small Israeli startup in Kfar Saba – Gizmox – enables development & deployment of applications on the server, which are then virtualized on a standard browser with no specific installation.

The development was done in C#, compiled, and out came an Ajax web application!

SNAP itself enables integration of any testing framework via API/Web services (test drivers), and is used by developers, test engineers, and integration engineers. The actual tests are run on temporarily idle PCs across the globe, from a central entry point, thus maximizing resource utilization (all QTP and other automation PCs can be configured as public and used for testing).

Even though it was originally developed for the test group, the developers now run the automation tests before submitting the software to the test engineers.

SNAP was presented at the SAP world conference last year, and it caused a lot of buzz, in particular raising the motivation and the positioning of the QA engineers in the organization.

Asaf mentioned that they had fantastic cooperation with HP/Mercury (it doesn’t do any harm if your “daddy” is SAP!) – which helped them get as many licenses as they needed.

You can see a detailed description and screenshots on the Gizmox site by clicking on the screenshots below.

Altogether, a very interesting and educational presentation.

AutoIT – A Free Functional Automation Tool

The next presentation was given by Meir Bar-Tal, of SOLMAR Knowledge Networks. Meir gave an in-depth talk and demo of AutoIt v3, a freeware BASIC-like scripting language designed for automating the Windows GUI and general scripting. It uses a combination of simulated keystrokes, mouse movements and window/control manipulation in order to automate tasks in a way not possible or reliable with other languages (e.g. VBScript and SendKeys). AutoIt is also very small, self-contained and will run on all versions of Windows (including Vista) out-of-the-box, with no annoying “runtimes” required!

Among its other selling points:

Detailed Help file with examples, and large community-based support forums

Unicode and x64 support

Digitally signed for peace of mind

Works with Windows Vista’s User Account Control (UAC)
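To give a flavour of the language, here is a minimal AutoIt v3 sketch of my own (not from Meir’s demo): it launches Notepad, waits for its window, and drives it with simulated keystrokes and mouse movement. Run, WinWaitActive, Send and MouseMove are standard AutoIt functions; the window-matching string may need adjusting on non-English Windows versions.

```autoit
; Launch Notepad and wait until its window is active (match by window class).
Run("notepad.exe")
WinWaitActive("[CLASS:Notepad]")

; Simulated keystrokes: type a line of text and press Enter.
Send("Hello from AutoIt{ENTER}")

; Simulated mouse movement to screen coordinates (100, 100).
MouseMove(100, 100)
```

The same Run/WinWaitActive/Send pattern scales up to driving full application workflows.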

Below are a couple of AutoIT screenshots, so you can get an idea what it looks like.

AutoIt also has an open community of highly prolific and very dedicated contributors, run by Jonathan Bennett (just a kid – born in 1973), the author and copyright owner. They release a new version every now and again – the latest was on the 24th December 2008. AutoIT has more than 26K forum users worldwide, who are very busy and active, and 2 books have been written about AutoIT.

In conclusion, AutoIT is a cost-effective solution for Web, .NET and standard Windows applications; it is limited to mainstream Windows technologies, and is best used with a proven test automation methodology and framework. AutoIT enables maximizing ROI on automation by effective use of enterprise resources (parallel execution).

And interestingly enough, one of the other attendees at the evening said that he had heard of AutoIT a couple of years ago, but didn’t think it was much use for him and his team. After hearing Meir’s talk, he’s going to have another look.

AllPairs and PICT – Free tools for test design optimization

Last but by no means least, the third talk was given by Michael Stahl, Senior SW Test Engineer at Intel (although Michael was careful to say that the talk actually has nothing to do with his work at Intel), and member of the ITCB Executive Board.

Michael started off by showing us the following video about the ESP (electronic stability program) in a car, which “Enhances driver control and helps maintain directional stability under all conditions. Provides the greatest benefit during critical driving situations, such as when driving on mixed surface conditions such as snow, ice or gravel.”:

[sorry all, I can’t get the stupid video to embed here, so you’ll have to click on the link below to see it]

Just combining these criteria, we get to the impossible number of 1,140,480 different test cases! And that’s just for one car model….

This problem is known as a Combinatorial Explosion, which leads to a “Test Explosion” – more tests than you can ever (want to) run. The problem is very common in software, where you have many configuration parameters, external events, user inputs, environmental parameters, etc.

We could avoid testing all the combinations if we could assume that:

the parameters are orthogonal (i.e. don’t affect each other), so we can test each one on its own

the combinations can be covered while testing other things (but that actually makes it more complex)

we have enough planned iterations to cover all the combinations (can you be sure that you will be able to control combinations per iteration?)

Uh-uh. Nothing is quite so simple – you need to evaluate the risk of using any of these strategies.

The other strategy is based on work by Tatsumi and by Cohen et al., which states that (bad) interaction between variables is usually between two variables, and that bugs involving interactions between three or more parameters are progressively less common. Therefore, we need to test all of the “all-pairs” interactions, and add a few selected tests for specific cases.

As an example, he used a “Generic Installer” with 9 different on/off configuration options, i.e. 512 test cases (2 to the power of 9).

First he demo’d All-Pairs, which gave an answer that just 8 test cases were needed to test all the pairs (instead of 512, remember?)

Next he demo’d PICT, which gave an answer of 9 test cases (also miles better than 512).
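To see how such small suites arise, here is a rough Python sketch of greedy pairwise selection – my own illustration, not the algorithm either tool actually implements, and the option names are invented. It repeatedly picks the candidate test case that covers the most still-uncovered value pairs:

```python
from itertools import combinations, product

def pairwise_suite(params):
    """Best-first greedy pairwise test selection (illustrative sketch)."""
    names = list(params)

    def pairs_of(case):
        # All ((param, value), (param, value)) pairs one test case covers.
        return {((a, case[a]), (b, case[b])) for a, b in combinations(names, 2)}

    # Every value pair that a pairwise suite must cover at least once.
    uncovered = {((a, va), (b, vb))
                 for a, b in combinations(names, 2)
                 for va in params[a] for vb in params[b]}

    # All exhaustive combinations (fine at this size: 512 candidates).
    candidates = [dict(zip(names, row))
                  for row in product(*(params[n] for n in names))]

    suite = []
    while uncovered:
        # Pick the candidate that covers the most still-uncovered pairs.
        best = max(candidates, key=lambda c: len(pairs_of(c) & uncovered))
        suite.append(best)
        uncovered -= pairs_of(best)
    return suite

# 9 independent on/off options: 2**9 = 512 exhaustive combinations.
installer = {f"opt{i}": ["on", "off"] for i in range(1, 10)}
tests = pairwise_suite(installer)
print(len(tests))  # far fewer than 512
```

A greedy pass like this gives a near-minimal suite; tools like PICT use cleverer heuristics, which is why their counts can differ by a test or two.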

Well, this was where All-Pairs stops. It’s a great little free tool, but even James Bach agrees that it has its limitations, the main ones being that there is no logic, and it works only with pairs.

PICT, on the other hand, is a bit more complicated, but much more powerful. And it’s also free 🙂 PICT has a pile of options that help you do any of the following:

combination order control

randomization

constraints, e.g. if parameter x is “off”, then set parameter y to be “off”

sub-models: bundle certain parameters into groups, e.g. if there are some cases you always want to test

aliasing: when certain parameter(s) are “don’t care”, i.e. they can take a few values. It doesn’t matter which you use, but it would be nice to test all of them

weighting: bias the “don’t care” value distribution to test more important values more than others, e.g. test Vista more than XP

negative tests

seeding

and it has a very good help file, so go look if there’s anything here that isn’t clear 🙂
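To make the constraints option concrete, a PICT model is just a plain text file listing parameters and their values, optionally followed by constraint statements. The model below is a hypothetical sketch of my own (the parameter names are invented), mirroring the “if x is off, then y is off” example above:

```text
Logging:   on, off
Telemetry: on, off
Browser:   IE, Firefox

IF [Logging] = "off" THEN [Telemetry] = "off";
```

Feeding this file to the pict command line tool prints one generated test case per line, with combinations that violate the constraint excluded.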

In addition, PICT can cover interactions of more than 2 parameters (n-wise, not just pairwise). As a real-life example, Michael showed a case in which there were 6 parameters, with between 2 and 22 values per parameter:

A: 2 values, B: 5, C: 6, D: 7, E: 8 and F: 22.

The number of possible combinations ended up being: 2 x 5 x 6 x 7 x 8 x 22 = 73,920.
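The exhaustive count is just the product of the per-parameter value counts – a quick sanity check in Python:

```python
from math import prod

# Number of values per parameter, as quoted in the talk.
sizes = {"A": 2, "B": 5, "C": 6, "D": 7, "E": 8, "F": 22}
total = prod(sizes.values())
print(total)  # 73920
```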

In the PICT file, 1 constraint was defined, and 56 “if-then-else” equations. All pairs were checked.

The final result (i.e. number of test cases needed, instead of 73,920)??? 90. That’s right, NINETY. That’s it!

OK, so this sounds too good to be true. Where’s the catch? Well, no catch, but some risks:

N-pairs is just another tool in your toolbox

It’s not guaranteed to find all the bugs

The test quality is dependent on the input values you choose

Was your Equivalence Class correct?

Did you select the “right” representative values?

If the output is influenced by more than 2 variables, you need a higher level than all-pairs to catch the bugs

Blind selection of pairs will miss the “interesting” or often-used combinations.

Bach and Schroeder both say: “We believe that this technique is over promoted and poorly understood”

And not to be forgotten, in fact about ALL your tools: “Don’t fall in love with your tool – it’s not a Silver Bullet. Apply your tester’s instincts, and analyze the situation”.

I hope you all enjoyed this summary, and can take away something to start doing tomorrow. It would be great if you would add a comment to this post as to what testing improvement using tools YOU are going to start implementing.

And don’t forget, mark the 29th June-2nd July in your calendars, because yours truly (yes, that’s ME) will also be at SIGiST Israel 2009, giving a talk on “Test Engineers – Adding Value in Tough Economic Times”.

Until then, see you at my next post (if you sign up for my RSS feed, of course!)