Testing And Related Discussions In Software


The other night I went to see First Man, the Neil Armstrong biopic, directed by Damien Chazelle and starring Ryan Gosling and Claire Foy. Without going into too much about the plot (it basically follows Armstrong's life during the Gemini and Apollo missions) or the quality of the film, I can tell you it is an absolute masterpiece. Check out the trailer here before you go and see it yourself.

This film brought to mind a thought on risk. In software testing, we think about risk all the time: the risk to the product, to the projects we are a part of, to the team, even to the company. Being a stakeholder in testing, as part of product development, involves a lot of risk. However, one area of risk we rarely touch on is the risk to human life, and the cost thereafter if the worst should happen. It's not something that really comes into the realms of possibility unless we are testing safety-critical software as part of a larger system, such as an aeroplane, medical device or defence system.

During the missions that Armstrong worked on, many men and women had to deal with the risk that human beings would be harmed as part of the endeavours of sending someone to the moon. Armstrong and his team were at the heart of that. Many of his colleagues were killed or injured, either during missions themselves, or during Earth-based testing of the equipment they were using.

Take the testing of Apollo 1 and the subsequent deaths of astronauts Virgil 'Gus' Grissom, Ed White and Roger Chaffee. They died during ground testing of the command module. According to the report that followed, the fire that killed them was caused by an electrical fault, the nylon material used in pressure suit construction, and the pure oxygen atmosphere of the capsule at the time.

The crews of these missions were in themselves a crucial part of the test data. If any of them had been physically or mentally incapable of performing their tasks, they wouldn't have been allowed to take part in missions. The risk to the lives of the people involved, as well as the financial and political cost of such missions, would have been too great. They knew the risks, and they understood them. They took the ultimate test. They were the test!

When I think about the risk concerns of the software I’ve worked on in the past, it’s often hard for me to disassociate from the ultimate risk of that product failing. It’s probably why I am so interested in security as a concept. For me, it’s one of the ultimate tests that a software tester can undertake. What can you do to undermine the systems you build, so that you can then protect them?

During the movie, I tried to consider what heuristics they had for solving the problems they encountered. I also considered what oracles they might have looked to when building up a picture of what was important for them right then. The scope seemed to be endless. The mind boggles at the complexity of the task.

Human life seemed to me to be at the heart of this movie's story: not only the individual lives risked along the way, but also the grander endeavour and global impact that the success of the Apollo missions had on the world, furthering our desire for scientific exploration of space.

I salute everyone who was involved in these missions, whether they were mathematicians, computers (see Hidden Figures), physicists, engineers, administrators, or the astronauts and their families. I also salute those who came after them, during the Space Shuttle and Soyuz projects. They weren't just test data, they were the greatest pioneers (and testers) the world has ever seen.

Solving a problem of learning

I’d like to introduce you to a little project that David Hatanian and I have been working on. David is a member of the fantastic team at Codurance, and we first started working together on this project in February 2016.

Following my experiences at the European Testing Conference in Bucharest, I realised the time had come for me to create and build my own vulnerable application. This was so that I would be able to run my own workshops on security testing, coach my colleagues and other testers, and demonstrate vulnerabilities such as the OWASP Top 10.

I also worked closely with Bill Matthews, initially shadowing him, but then helping him to deliver workshops at international conferences. For these workshops, he built his own web application, Ace Encounters, which is a travel and wild adventure website.

Of course, using a real-world application to practise these skills is highly illegal. So students of security testing need a safe place to practise and learn. We aren't hackers, after all; we are testers. We aren't there to steal, undermine or attack. We are there to explore and learn.
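To give a flavour of the kind of check a security tester makes in that safe place, here is a minimal Python sketch of detecting reflected XSS: if a payload comes back in a page unescaped, the page is likely vulnerable. The payload and the example pages below are my own illustration, not part of Ticket Magpie itself.

```python
import html

# A classic reflected-XSS check: if raw user input comes back in the
# page unescaped, the application is likely vulnerable.
PAYLOAD = "<script>alert('xss')</script>"

def is_reflected_unescaped(response_body: str, payload: str = PAYLOAD) -> bool:
    """True if the raw payload appears verbatim in the response,
    i.e. it was not HTML-escaped on the way back out."""
    return payload in response_body

# A safe page escapes the input before rendering it:
safe_page = "<p>You searched for: " + html.escape(PAYLOAD) + "</p>"
# A vulnerable page echoes it back verbatim:
vulnerable_page = "<p>You searched for: " + PAYLOAD + "</p>"
```

In a real session the response body would come from the application under test, but the principle is the same: send a marker, then look for it coming back unencoded.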

Pairing with David has been incredibly rewarding for us both. I've supported him with his understanding of security vulnerabilities, and he has supported me with my learning of object-oriented programming (in this case Java).

A couple of months ago I ran a session using Ticket Magpie, for the testers at NewVoiceMedia. The session was well received, and everyone appeared to have fun. The team there are really great at generating interesting test ideas, developing their skills, and following through with practical application of their learning. Taking this out into the wider community of testers was to be the next step, at Test Masters Academy.

Get Ticket Magpie

Ticket Magpie is easy to get, from David’s Github project. Check it out here and follow the instructions on the page. Here is some additional installation guidance.

Running Ticket Magpie

If you are successful, your browser should display the application, and it should look like this:

Ticket Magpie

Bug Hunt

I invite you to have a go at exploring Ticket Magpie. There are some fun features for you to take a look at. I’m not going to spoil things for you by listing everything here. You might also find some interesting problems.

Because the application runs on your local machine, in Docker or in a VM, you can use any technique, tool or gnarly hack you want, without harming anything or anyone else.

Take your time and let me know what you think. If you feel the need, you are welcome to use this form to provide feedback about the application: Ticket Magpie Survey. Alternatively, just message me on Twitter, or comment on this blog.

I've not blogged recently, for various reasons both personal and professional. But on the anniversary of my blog, I want to return to it with a more positive attitude after a fallow period. This is a quick post by way of a catch-up on the last few months' activities (other than my professional and personal ones). It's an opportunity to share some of the highlights of my recent experiences in the testing community, which has been warm and welcoming during some difficult times.

A few months ago I attended the inaugural Brighton Testing Meetup, catching up with some of the good folk I last met at TestBash 3. Brighton is sort of my home town, yet I have never worked there so having a foot in the pond that is the testing community there has been a great thing. We talked, we ate and drank and shared ideas. Early plans have been made for my future involvement, leading talks and discussions around some exciting testing topics. Emma Keaveny and Kim Knup are developing a vibrant new community of interest and I can’t wait to be more involved. Roll on 2015.

The community of testing is as varied and as exciting as the variety of people who work and learn within it. This is a good thing, perhaps the greatest thing about the community…and this is where the contrast lies.

The same week I went to Brighton, I also attended the latest Special Interest Group in Software Testing conference. SIGIST is organised by established, more academic people in the testing industry, on behalf of the BCS. It meets quarterly in London. There were a number of interesting topics being discussed, but it didn’t set my heart on fire. Only one or two talks out of the whole day really engaged me with the subject matter. Whilst there was the opportunity to learn from some experienced practitioners, there wasn’t the same emphasis on collaborative learning, challenging established testing paradigms and positive enquiry. It wasn’t a bad experience, it just didn’t make me more passionate about my craft, nor help me understand something new about testing. It was good however to catch up with some people who I have met before, and some who I hadn’t…but were on my radar. Namely Tonnvane Wiswell, Declan O’Riordan, Paul Gerrard, and Mike Jarred.

Another recent experience has been with some of the free, online and collaborative forums for learning and discussion that I have participated in. Firstly, Stephen Blower's Testing Couch forum. This is a free and open Skype forum for any testers who are interested in talking about their craft. On the couple of occasions I have attended, the chat has always been productive, supportive and non-judgemental. Stephen makes this forum available periodically, usually every month or two. It's a fantastic opportunity for experienced or novice testers to throw ideas around, be challenged, and share thinking and learning.

Lastly, and probably my most positive experience was being a guest speaker in October’s Weekend Testing Europe forum. I was sharing my recent learning and experience in software testing, leading the attendees in an exploratory session with security as the focus. To a lot of the people during the chat, security testing was a new concept for which they had little experience or opportunities to learn. It was incredibly rewarding to be able to facilitate this session, not only on a personal level, but also to see many others taking up the challenge of securing their applications, and considering security as part of their testing.

Amy Phillips and Neil Studd have really breathed new life into Weekend Testing Europe, which had been dormant for a while. Keep an eye out for WTEU in the future, as it is a great way of keeping in touch with the testing community around the world. Be prepared to go in with eyes open, lots of questions, and a hunger to learn. All you need to do is volunteer two hours of your time on a Sunday afternoon. It sure beats watching Columbo repeats or traipsing round a garden centre.

So, that’s it for now. I’ll be blogging again soon. The Test Doctor will return!

In what now seems to have been a storming comeback, the European chapter of Weekend Testing was a breath of fresh air among the learning opportunities for testers. You can find a link to the latest session here. Ably facilitated by Amy Phillips (@itjustbroke) and Neil Studd (@neilstudd), the session was dynamic and a great chance to talk with other testers in a relaxed environment. I didn't even have to leave my house!

The main focus of the session was heuristics, how we understand, use and learn from them. There is a lot of great material on what heuristics are and how they can be used to inform and drive our testing ideas and execution. I won’t dwell too much on these areas but just hope to point you to some useful material:

Anyway, my main takeaway from this session was the ruts that we as testers might sometimes get stuck in. I chose the Constraints heuristic, utilising data type attacks upon the World Chat Clock application we were all discussing.
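For anyone unfamiliar with data type attacks, here is a rough sketch of the sort of value set I mean. The grouping and names are my own invention for illustration, not a canonical list:

```python
# A handful of 'data type attack' values to feed into a text field
# when working the Constraints heuristic.
def constraint_attack_values(max_length: int = 255) -> dict:
    return {
        "empty": "",
        "whitespace_only": "   ",
        "too_long": "A" * (max_length + 1),   # just past the stated limit
        "sql_metacharacters": "' OR '1'='1",  # classic injection probe
        "html_markup": "<b>bold?</b>",        # rendered, or escaped?
        "unicode": "Zürich 東京",              # non-ASCII handling
        "negative_number": "-1",
        "huge_number": str(2 ** 63),          # beyond a signed 64-bit int
    }
```

Each value probes a different constraint: length, character set, type coercion, and the boundary between data and markup or query syntax.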

I found myself falling back on what I now feel to be a bit of a party piece. I immediately decided to perform a few simple XSS and SQL injection attacks against the application. As I expected, though couldn't be sure, the application's user interface prevented these kinds of basic security vulnerabilities from being exploited. I did ultimately find a way of injecting XSS, via OWASP Mantra, but couldn't get it to expose any data. The bug did, however, cause some interesting display and wrapping issues.

Rather than looking at the functionality, usability, accessibility and overall purpose of the software under test, somehow I have begun to think the worst of it before giving myself a chance to really take the time to evaluate it critically, honestly and objectively. I immediately questioned how secure the application was before I considered any other factors.

In my work at New Voice Media, I am part of a cross-functional development team, and part of a community of testing interest within the business. During this time I've taken on board a lot of security testing skills, with still a lot more to learn. It may be that I have taken these skills to heart and want to use them at any opportunity, to develop them further and to discover more about the underlying behaviour of the application under test.

Yet sometimes I feel guilty that I am not approaching the testing of software from any number of other directions, using other skills and techniques. Maybe the newer skills I have learned are higher up in my priority list in my mind before I take other approaches. So, there are of course biases at play here. I’d like to explore that further and challenge them in the future.

Perhaps this has something to do with the way I personally learn things? Early in my career everything was driven from scripts and spreadsheets. There was no impetus to learn better ways of testing, only how to get testing done faster with fewer bugs and more coverage. I was learning how to manage my testing, but not being critical of the testing I was doing, nor evaluating the testing of other people.

Now this kind of learning is the bread and butter of the testers I work with. We learn, explore, test, check, learn some more, share, improve, and the cycle continues. It is a much more positive way of working. It's not without its problems: quite rightly, you are much more accountable for your work, justifying your choices and decisions. There is a certain level of emotional maturity that we as testers need to develop in order to sustain this cycle, be accountable, share our learning appropriately, learn well from mistakes and improve from them.

This is one of the reasons why I enjoyed Weekend Testing so much. You can’t really hide or be a silent observer. You need to get stuck in and get your hands dirty!

A couple of hours on a Sunday afternoon has not been a huge cost to me in the past, as I would only be doing a bit of housework, DIY, gardening, Scouting, sport or watching something geeky on TV. Soon, however, my weekends will be taken up with the ultimate challenge of parenthood, so chances to learn with peers in a relaxed environment will become few and far between. More on that learning experience and how it relates to testing another time.

Weekend Testing: infinitely better and more rewarding than mowing your lawn. Thanks to Neil and Amy for running such a fun and exciting session. The same goes to the other participants for the opportunity to learn from you and the excellent conversation.

Manual regression testing has always been a burden for testers. It’s one of the practical problems that we face, where there is a gap at the top of your testing pyramid that can’t be checked using automation. At NVM we have a suite of automated tests which do a lot of the leg work with regard to checks. This post isn’t going to be a discussion about the difference between testing and checking, or the merits of automation over manual checking. It is simply a demonstration of a problem we faced as a team, and a solution we came up with as a team.

We realised that we had a problem. Some time ago we found that manual regression was only being done by two or three people, and it was taking hours and hours of those individuals' time. And it was usually the same two or three people for each release. It was taking those individuals away from their 'more exciting' feature team work, where we do a lot of funky exploratory testing. But more than that, it was creating division and ill feeling within the team, where some individuals felt they were carrying the burden.

Also, we weren't getting the visibility of the process that we wanted, such as the number of checks we were doing, the quality and appropriateness of the scenarios under test, or whether these checks were already (or could be) covered by automation. We didn't want to duplicate effort. All of these tests are maintained on our internal wiki, rather than in some impenetrable test tool. We are all responsible for maintaining the regression suite, so if we feel something needs adding or changing, we take the initiative and do it ourselves.

The scenarios went through a process of review and streamlining over a number of iterations, to make sure that we had the best possible set of checks we could. These were all described in terms of agents and supervisors operating within a contact centre, which is of course a core part of New Voice Media's business.

Getting a kit ready for live deployment is the top priority for us, and we want to release as regularly as possible…weekly if resources and time allow. There is of course a challenge to manage the needs of our own feature teams and their priorities, but of course there is a business to support, so the release takes priority.

So to the physical execution of the regression…a whole challenge in itself.

At New Voice Media we are lucky enough to have a dedicated test lab, with desks, networking and telephone handsets that allow us to run our tests in one place. All available testers meet at a set time, with our PCs, telephone handsets, tablets and other tools.

We have created a Kanban board, along with some other elements. We have story cards for each of the scenarios under test, with two separate groups in the team handling different logical streams of the application. The board allows us to see progress through the testing and delegate tasks, but it also gives us a chance to provide instant visual feedback on what worked well (and give praise), what didn't work well, and any ideas we might have to improve the process.
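The mechanics of the board are simple enough to sketch in a few lines of Python. The column names follow common Kanban practice, and the scenario cards below are invented examples rather than our real wiki scenarios:

```python
# A toy model of the regression Kanban board: scenario cards move
# between columns as testers pick them up.
class KanbanBoard:
    COLUMNS = ("To Do", "In Progress", "Done")

    def __init__(self, scenarios):
        self.columns = {name: [] for name in self.COLUMNS}
        self.columns["To Do"] = list(scenarios)

    def move(self, card, src, dest):
        self.columns[src].remove(card)
        self.columns[dest].append(card)

    def progress(self):
        """(cards done, total cards) - the at-a-glance progress figure."""
        done = len(self.columns["Done"])
        total = sum(len(cards) for cards in self.columns.values())
        return done, total

board = KanbanBoard(["Agent answers inbound call", "Supervisor monitors call"])
board.move("Agent answers inbound call", "To Do", "In Progress")
board.move("Agent answers inbound call", "In Progress", "Done")
```

The point of the physical board is exactly this at-a-glance state: anyone in the lab can see what is left, who is on what, and how close we are to signing off the release.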

This is a great example of skilled testers working together to solve a testing problem, and has started to make regression testing an enjoyable event rather than an onerous chore.

I’m running a tutorial at ATD 2019


Dan Billing

I'm a software test engineer of 17 years, and recently I decided to go it alone as a consulting tester, founding my company The Test Doctor Limited.
I love testing and all its wondrous variety. I like to help others become better testers by attending events, speaking, blogging and giving training.
Most of my current work focuses on testing strategy across the whole of the clinical trials suite that we build. This includes any kind of testing, from UI, API, performance, security, mobile etc. Whatever needs to happen.
I'm also building on the training, coaching and learning I've picked up elsewhere, and bringing that into my new team.
I enjoy running workshops and speaking, especially in the technical testing and security space; and to a lesser extent the psychology of what testers do.
Hopefully, it'll make me a better tester too!