Friday, 12 February 2016

Introduction

In the last few weeks I’ve attended my first ever testing meetups, in Manchester and Liverpool. Both were organised by a group called the “North West Tester Gathering”, which you can find on meetup.com. Other than online, I’d never spoken to testers outside the companies I’ve worked for, and I was really looking forward to it. I wanted to go for two reasons:

To listen to other testers’ experiences and try to learn from them, the problems they faced and the solutions they chose.

To talk about my own experiences, seek out fresh opinions and ideas, and discuss the challenges I face. Not because I don’t believe I can face those challenges alone, but because I can never think of everything myself, and I like trying out ideas that would never have occurred to me.

Speakers

For the first meetup in Manchester there was only one main speaker, Richard Bishop from Trust IV, a software testing consultancy. The main topic of the talk was Network Virtualisation - a technology that allows you to “stub” or simulate network conditions, such as a user visiting your website from an iPhone on a 2G network. They demonstrated this with a Hewlett Packard tool called HPE Network Virtualisation.
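Outside of a dedicated tool, the core idea is simple to sketch: wrap a network call so that artificial latency and limited bandwidth are injected before the response is returned. The Python sketch below is purely illustrative - the function names and the 2G-like figures are my own assumptions, not anything from HPE’s tool:

```python
import time

# Rough, assumed figures for a 2G-like link (illustrative only).
TWO_G_LATENCY_S = 0.2          # artificial round-trip latency
TWO_G_BANDWIDTH_BPS = 6000     # ~48 kbit/s, expressed in bytes/second

def fetch(url):
    """Stand-in for a real network call; returns a canned response."""
    return b"x" * 3000  # pretend we downloaded a 3 KB body

def fetch_over_2g(url):
    """Simulate the same call over a constrained link by sleeping
    for the latency plus the time the transfer would take."""
    body = fetch(url)
    transfer_time = len(body) / TWO_G_BANDWIDTH_BPS
    time.sleep(TWO_G_LATENCY_S + transfer_time)
    return body

start = time.monotonic()
body = fetch_over_2g("https://example.com")
elapsed = time.monotonic() - start
print(f"fetched {len(body)} bytes in {elapsed:.1f}s")
```

Real tools shape traffic at the network layer rather than sleeping in application code, but the question a tester cares about - “how does my product behave when responses take seconds?” - is the same.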

The second meetup in Liverpool had two speakers, Vernon Richards and Duncan Nisbet. Vernon’s talk was about the common myths in testing that we all know, and how we can tackle them - mainly by improving how we talk about testing in the first place! Duncan’s talk was about exploratory testing and how we probably all already conduct it; we just don’t recognise it in our existing processes.

Main Takeaways

Richard showed us some figures produced by one of the large big data companies forecasting how the technology market would look in 2016. He especially highlighted the rise of end users relying on mobile devices to interact with products. This was useful food for thought, especially as I’m involved with a project that can be viewed on mobile.

He also used some very effective examples demonstrating the value of performance testing, as well as the need to validate your assumptions (which applies to any testing!). He described an interesting test where they took network speed samples before and during a major football match and found that the speed was faster during the match - against their assumption that it would be slower!

I’ve definitely still got a lot to learn about performance testing; right now it feels like a domain rich with specialist knowledge (or at least different knowledge - for example, the need to understand statistical significance). I now know what “jitter” means: the variation in delay between packets as they arrive, rather than a steady, even stream.
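As a concrete (and entirely my own, simplified) illustration of jitter: given packet arrival timestamps, you can measure how much the gaps between successive packets vary. Perfectly even arrivals give zero jitter:

```python
from statistics import mean

def interarrival_jitter(arrival_times_ms):
    """Mean absolute variation in the gaps between successive packet
    arrivals (needs at least three arrivals). 0 means perfectly even."""
    gaps = [b - a for a, b in zip(arrival_times_ms, arrival_times_ms[1:])]
    return mean(abs(g2 - g1) for g1, g2 in zip(gaps, gaps[1:]))

# A packet every 20 ms exactly: no jitter.
print(interarrival_jitter([0, 20, 40, 60]))

# Uneven arrivals: the gaps are 20, 30 and 10 ms, so the gap-to-gap
# changes are 10 and 20 ms - a mean jitter of 15 ms.
print(interarrival_jitter([0, 20, 50, 60]))
```

Real measures (such as the interarrival jitter defined for RTP) use smoothed estimates, but the intuition is the same: how unevenly do packets arrive?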

Richard also provided some useful example use cases, such as Facebook’s “2G Tuesdays”. This is where Facebook employees are asked to use Facebook over a connection throttled to 2G speeds, to help them understand the experience of users in more remote or developing areas of the world. I felt this was an effective example of the lengths Facebook will go to in helping its employees empathise with those customers, and therefore take their product’s performance on slow networks seriously.

Vernon’s talk mainly focused on talking better about testing to non-testers. A lot of the myths people believe about testing are partly caused by our own inability to talk about testing.

There were a lot of themes that I think we would all recognise, such as “The way to control testing is to count things” - which is to say, judging the value of testing in terms of test cases executed or bugs reported and how this isn’t necessarily useful.

I really recommend you watch the video of the talk! The other themes were: “Testing is just clicking a few buttons” and “Automated testing solves all your problems”.

Duncan’s talk focused on some typical testing examples of where we all perform exploratory testing but simply don’t think about it being exploratory - we don’t value it because we don’t identify it.

He also talked about exploratory sessions being iterative: you spend time exploring, learn what you can, then repeat, designing further tests based on what you’ve learnt.

He also discussed the difference between good and bad exploratory testing: how well the tester can explain what they did in a session. Good exploratory testing can be explained and justified - it isn’t random, and a tester should be able to easily say what they were doing and why.

Socialising!

So other than the main talks, I was attending these meetups to meet and talk to other testers! I introduced myself and got chatting to quite a few different people - some I already knew from my days at Sony in Liverpool, others I met for the first time. It was nice to be able to share stories and experiences. I highly recommend attending meetups for this alone; you can learn a lot from others and get some different points of view on your testing ideas.

Being brave…

At the Manchester meetup I caught up with Leigh Rathbone, who was organising the Liverpool meetup. During our chat I think my passion for testing came out, and Leigh asked if I wanted to stand up and do a lightning talk at Liverpool. I don’t take opportunities like this lightly, so I accepted. The process of writing these blog posts has helped prepare me a little, but I had certainly never stood up in front of 80 people - let alone people from my own profession, some of whom are massively more experienced than me and whom I have a lot of respect for.

I chose to talk about the very subject I’d passionately discussed with Leigh - diagrams. In my recent work I’ve seen many examples where people tried to explain themselves in words, written or spoken, and failed. Not everything is easy to explain this way - I have definitely found that right here on this blog! The point I tried to make was that some information is better explained in a diagram or chart - timelines, flowcharts, mind maps and entity relationship diagrams, to name a few. It’s worth considering this when we’re trying to explain ourselves, or when someone is struggling to explain something to us. I explored this theme a little in my post Test Cases - do we need them?

I also quickly recommended a book that I believe every tester should read - “Perfect Software and Other Illusions About Testing” by Gerald Weinberg. I had never read a testing book before, and I’m fairly sure a lot of testers haven’t. I particularly like this book because it addresses the very topic Vernon was talking about - explaining what we do as testers in terms anyone can understand. I’m also very much a fan of Jerry’s writing style; his stories and anecdotes make his points so much more memorable and relatable!

Summary

You should attend testing meetups! Even if you’re not a tester!

Even when I already knew something about a topic being discussed, there was always something to learn or a new way of looking at it. I’d like to think I will always learn from the talks at these meetups.

Richard, Vernon and Duncan are really friendly and engaging people to talk to!

I shouldn’t be afraid of talking in front of lots of testers, because they are friendly people - and I must have made some kind of sense, as people came to thank me and chat about diagrams! I hope this inspires other people who are nervous or unsure about speaking to give it a go! Don’t listen to your brain!

Take opportunities with both hands when you see them - it can be very rewarding!

I’ve only attended two meetups so far and I’ve got so much to talk and think about!