Sometimes we all over-analyse. In software testing especially, we sometimes look too deep, get lost in the details, and fail to see the big picture. We stop. Sometimes we forget to restart, or by the time we do, the moment has passed. Have you ever spent a day trying to reproduce a bug, only to find that in the meantime the next stable build had fixed the failure anyway? Have you ever got so side-tracked, maybe installing that piece of software, that you lost sight of why you were installing it in the first place?

Focus is important in testing, but equally important is timely focus. The importance of the service that software testing provides should not be overlooked, but nor should we be naive enough to assume that others will wait for that service. Time moves on, projects move on, and people move on. Quickly. So do stop to think, but always quickly start again.

I’ve been a bit busy lately with work that pays, hence the lack of updates on here. Maybe not the greatest excuse but the day job has to come first.

Related to the day job (software group management, focusing primarily on software quality), I’ve also found a few minutes to use the rather excellent Ovi App Wizard to create a simple app with which to read this blog. So if you have a Symbian, Series 40, or Maemo powered phone then head over to the Nokia Store and download the all new onegreenhill application!

I’ll make a start on the Android version soon – if only there were an app wizard for that too…

I’ve just started a Lean Six Sigma course. One day in. Already it’s very interesting, and I find myself thinking about how to apply it to the testing processes and scenarios that I see day-to-day. It’ll be great to see how my team and I can improve the way we work and reduce some of the waste that must be inherent in our processes.

I’ve already studied Kanban in a fair bit of detail, and using it for management of the testing team, and for project management of other testing projects, has been very interesting. Pulling tasks through the board, and through the lifecycle, works very well. Tackling our processes is the next step.

This is the second post in the ‘Little and Often’ series – shorter posts but more frequent. A little longer than Twitter. Hopefully 🙂

With that in mind, here’s my plan for this year – sorry, I mean here are some practices 🙂 that I intend to engage in through the year and which I hope will help me and others.

Blog little and blog often: Easy to say of course but more difficult to do. Too often you spend so much time thinking about what to say that the moment is wasted. So I intend to start practising regular blogging, but posts will be shorter. Not as short as my posts on Twitter, but a little shorter than last year’s more lengthy ones.

Get back to getting my hands dirty: It’s time to start doing some testing again and not just managing and talking about it. This really is a practice not a goal, becoming part of my DNA again. As a start I’ve signed up for James Bach’s excellent Rapid Software Testing class which The Ministry of Testing are bringing to the UK this year. Can’t wait 🙂

Widen my circle of contacts: Obvious really, I’d like to learn more from others outside of my regular circle of contacts. Do you want to talk about something or help me out?

Practice generosity: Directly from zenhabits and I’m not ashamed to say. It’ll all be a bit better if we help each other out. What can I help you with? (I know a fair bit about software testing and test management, and too much about cycling :).

One interesting question, and source of conflict, that I often see when managing testers and testing is that of ‘who classifies the defects’. A lot of the negativity one sometimes notices between testers and developers stems from this, a lot of misunderstandings start from this, and in some companies a whole cottage industry has sprung up around it. Who should classify a defect, who should make the decision on how important it is, and how quickly it should be fixed?

Before I start I should explain that I work in a large company, delivering mobile applications to multiple internal customers, based on a platform model. We have a lot of defects across the products and platforms. We have so many that we employ several dedicated defect managers whose job is to categorise, arrange, house-keep and close the mass of open defects across the multiple release streams. We use an Agile SDLC, with Scrum for new development, switching to Kanban once we are in dedicated productisation phases. So you could say I know a thing or two about defects. I even used to raise them, back in the day 🙂

In the Red Corner

Now, back to the story. In the red corner we have the testers. These guys are at the coal face; they are finding the bugs and raising as many quality defects as they can. If they are good then they feel an almost emotional attachment to the defects; they represent time and effort put in. So they should tell us how important they are, right? Well maybe, but not so fast: maybe they are too close to each defect. Maybe they have too close an attachment. Maybe emotion blinds good judgement. Whilst we rely on our testers having the best ‘customer’ view of the product, are they the right people to be categorising our defects?

In the Blue Corner

Over to the blue corner: here we have the developers and team leaders or managers. Aren’t they the best people to decide? They know in detail how long it takes to fix a bug, how much capacity they have, and how complicated each fix is. And therein could lie a problem: are they too focused? Can they see the big picture? Do they really understand what the customer needs, and which defects need to be fixed in order to deliver it?

A Third Option?

Then to a third option. Taking the boxing analogy further, let’s call them the referee: someone to make sure that things run smoothly. Here’s where we use a defect manager; someone with an independent view who manages the defects and consults the testers, the developers, representatives of the customers, and the project managers. Someone who can try to see both views; not a database admin or defect pusher, but more of a technical person with a good overall knowledge, who ensures that categorisation happens with inputs from all and not under the influence of just one.

The defect manager can be a person but equally it could be a role. The key thing is that the person sits in a position of independence and is not seen to be pushing their own view. In a cross-functional Agile team we have used the Product Owner for this in the past: sitting in neither corner, but somewhere in between, controlling the backlog and therefore also controlling the defect fixing. Categorisation falls naturally into this area, whether the defects come from internal testing or from external testing together with the internal or external product program. I have not found the test manager to be a good person to carry out this role; they are too close to the defects and lack the independence needed to effectively categorise and drive fixes to completion. Equally, a development team leader or manager is also not a good choice, often taking the technical view and not that of the customer. Sure, if there is no-one else then either of those roles ‘will do’, but true independence will result in better categorisation, a more effective close-down of defects, and therefore better quality code.
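To make the ‘referee’ idea concrete, here is a toy sketch of how an independent categorisation might weigh all three views rather than let one corner decide alone. Every name, weight and threshold below is an invented illustration, not our actual tooling or classification scheme:

```python
# Hypothetical triage: combine the tester's view (severity), the developer's
# view (fix effort) and the customer representative's view (impact) into one
# classification. Weights and cut-offs are made up for illustration.
from dataclasses import dataclass

@dataclass
class TriageInput:
    tester_severity: int   # 1 (cosmetic) .. 5 (showstopper) - the red corner
    dev_fix_effort: int    # 1 (trivial)  .. 5 (major rework) - the blue corner
    customer_impact: int   # 1 .. 5 - product owner / customer reps

def classify(d: TriageInput) -> str:
    # Customer impact dominates; fix effort only nudges the result, so a
    # hard-to-fix but customer-critical defect still rises to the top.
    score = 2 * d.customer_impact + d.tester_severity - (d.dev_fix_effort - 1)
    if score >= 12:
        return "critical"
    if score >= 8:
        return "major"
    return "minor"

# A crash the customer hits daily, and an easy fix: clearly critical.
print(classify(TriageInput(tester_severity=5, dev_fix_effort=1, customer_impact=5)))
```

The point is not the arithmetic; it is that no single input decides the outcome, which is exactly what the referee role is for.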

I’ve been running teams using Kanban for a few years. We find it a really useful methodology, especially for our maintenance and dedicated testing/release teams. Being a little lighter than Scrum, it enables us to quickly re-prioritise backlog items; ideal when you don’t know when the next showstopper bug is going to come in.

Recently I’ve started to take things a little further with a pilot in my management team, which consists of test managers, defect managers and product owners who are driving various software projects to completion, mostly in the maintenance and productisation phases. The pilot involves using Kanban to run the team and to prioritise the various actions and items that we need to drive forwards.

We wanted a lightweight approach to the toolset, so rather than go for a heavy tool, or something online (company policy: nothing on 3rd party servers), we picked SharePoint. Daniel Root’s excellent guide on how to set up a Kanban board in SharePoint has been invaluable, and what we have now certainly fits our need. OK, so you may not want to use this to run a detailed software project without some adaptation, but for prioritising and driving the management backlog it works just fine, especially when tasks are synced with Outlook.

Changing our way of working was, as usual, initially difficult, but the benefits of being able to see what everyone was working on, and where each item was in the cycle, quickly became apparent. Figuring out the work-in-progress limits for the team was a challenge, given the varied nature of the different roles, but we found that, more often than not, our tasks relied on each other, and therefore our ‘internal’ WIP limits did too. This meant that the team did have a natural velocity, and this could be used to set WIP limits accordingly.
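As a side note, Little’s Law gives one way to sanity-check a WIP limit against an observed velocity: average WIP = throughput × average cycle time. A minimal sketch, with figures invented purely for illustration:

```python
# Little's Law: avg_wip = throughput * avg_cycle_time.
# Given the team's observed throughput and a target cycle time, this
# suggests a starting WIP limit to tune from. Numbers are illustrative.
import math

def suggested_wip_limit(throughput_per_week: float,
                        target_cycle_time_weeks: float) -> int:
    # Round up: a fractional item still occupies a board slot.
    return math.ceil(throughput_per_week * target_cycle_time_weeks)

# e.g. the team finishes 6 items a week and wants items done within 2 weeks:
print(suggested_wip_limit(6, 2))  # 12
```

It is only a starting point; as noted above, interdependent tasks mean the real limit comes from watching the board, not from the formula.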

You may think this approach sounds a little heavy-handed. I’d argue that it is not, simply because you always keep an action list or backlog for a management team anyway. Maybe you have tried to colour-code it or tabulate it, in order to make it more visible? Then you are halfway there. Why not go the whole way and try Kanban for it instead?

I’ve just qualified at Practitioner level in Managing Successful Programmes. MSP is, in the UK, the preferred approach to programme management, and the normal next step after getting the relevant project management qualification, PRINCE2. As well as being a very hard week of training and exams, and very useful for me and my career, what can it teach us when applied specifically to software testing?

MSP teaches us to take the programme approach to controlling groups of projects, gives the bigger picture, vision and framework in which to run them, and, most importantly for me, teaches the importance of your stakeholders. I’ve found during my career so far that this is one of the most important areas a test manager can focus upon. I’ve managed projects and groups of projects (programmes, if you will), and the most common problems I see are always related to lack of buy-in, lack of understanding, or lack of focus on rolling out changes and embedding them in the groups that need them. Not the technical test methods and techniques, the dashboards required to track progress, or the planning that needs to be done; that comes easily. Or if it does not, then you make sure there are people in your group to whom it does. But overall management of the stakeholders falls to you, as head of the group, senior test manager, programme test manager, or whatever job title you have. You have overall responsibility.

MSP gives a good framework in which to manage those stakeholders, encouraging regular communication and follow-up, focused planning of stakeholder engagements and close cooperation.

It’s easy to get stakeholder engagement wrong. It’s not easy to do, and it’s sometimes not pleasant to do. You believe in what your programme is trying to achieve, so why the need to convince others? But ignore stakeholder management at your peril. This is especially important in testing, where you often have to fight for your budget and your resources more than other disciplines do. Being able to provide the right vision to secure initial buy-in, to follow this up with the right targeted communications to the right people, and to keep up the required regular communication means that your life can actually become easier.

Where should you focus when considering test management stakeholder engagement? Start by identifying who the stakeholders are. Do you need to roll out a new testing process across many teams and projects? Do you also need to introduce new tools and new teams, possibly off-shore? You’ll need to focus on different people, and you need to know who these people are and what stake they have. Then think about a vision of the future: paint a picture where testing brings higher quality to the products, through better process or tools. Talk in terms of money saving, of shorter release cycles, and of long term quality. This will be high level, so target it further towards each individual stakeholder; think about what each one will care about. Are they a ‘money guy’ or a ‘people guy’; are they fascinated by the process or by the tools? Tailor the message to each one and you have a much better chance of success.

Then don’t stop. You need to communicate with these people regularly. And don’t stop once you have delivered something; keep talking to them as the new processes and tools are rolled out and become operational. They will have questions, and they often won’t want the changes. It’s here that stakeholder engagement often fails; people stop too early, leaving stakeholders unhappy, and expensive new processes and tools unused.

Ultimately you hope to win over some of the doubters, those who do not see the value that testing brings. You want to ‘push the right buttons’ to do this. And if you can do this, whilst letting the rest of your team handle the individual projects where testing is taking place, then you have proved your value at the programme level.

The argument over the importance of software testing, and how software testing should be taken more seriously, is as old as the hills. “Software testing is seen as a second-rate career” is the most common perception testers have to fight against, with varying degrees of success, normally dependent on the company they work for and that company’s approach to quality. But do testers really help themselves?

Over the years I’ve been a tester and have managed a lot of testers and others involved in QA. More recently I’ve also started to manage a development team, and when talking about careers with developers and testers, the differences are sometimes startling. Most developers had a plan from college onwards, most still have a plan, and it’s always been coding. It’s rare to find someone who fell into software development by accident. Now compare this with the testers. Some have a plan, some have stuck to a plan, but a majority have not. It’s been a case of falling into testing, often from totally different careers or training. Think of it as being in the right place at the right time 🙂

This got me thinking: is it an issue that developers have a plan whilst testers just found their way into their chosen career? Does it in fact mean that testers have a broader experience, which actually makes them more rounded individuals? Does this make them better testers? Does it just demonstrate the lack of formal test training in schools and universities? Do we need more certification to be taken seriously? (Joke: we really don’t.)

Whatever you feel about it, it’s clear to me that only by recruiting sensibly, by inspiring people to become career testers, and by continually sharing and demonstrating that software testing is worthwhile will we swing the balance. Those in the software testing profession should be focusing on those at the start of their careers, not just on test process and continual improvement. I hope to find more testers in it for the long run, in the middle of a longer term plan for their software testing careers.

The group I manage makes regular use of Exploratory Testing. We even took training, way back in the day (well, a few years back anyway); we know about our test charters, we know where to start and where to stop, and we understand its place in our strategy. It forms a cornerstone of that strategy, which exists to enable quality within our projects. Yet to some of us there seemed to be something missing.

As a tester it’s great. We make a thing of exploratory test sessions. There’s coffee, and if you are lucky there is cake too. You turn up, you pick your area and you fill in your charter. Start the clock and go; testing begins. Stop the clock and testing stops; bugs are added to our database and charters are filled in. Coffee is drunk, cakes are eaten and we go home…

…leaving a large pile of pieces of paper, which are subsequently bundled into a drawer and forgotten. You might argue this is OK; after all, “the bugs got raised, didn’t they?” This is of course true, but as the Test Manager I miss the ability to learn from the experience, to review the metrics, and to understand how to make the next sessions even better.

So recently I discovered Session Based Test Management, James Bach’s take on ET. I know, late to the party again, by about 10 years, but I work for a large multinational telecoms vendor and change doesn’t come easily to us (an awful excuse, of course). But thankfully we’ve now found SBTM. I love it: you get the metrics you need, and the visibility and control that allow ET to really come to life. We’re focusing on it more now than ever before.
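One of the things SBTM adds over our old pile-of-paper approach is a task breakdown per session: how much of the charter time went on testing, how much on bug investigation and reporting, and how much on setup. Aggregating that across sessions is exactly the kind of metric a test manager can learn from. A minimal sketch, using invented session data and an assumed sheet layout, not our real reports:

```python
# Aggregate a per-session task breakdown (time on testing, bug
# investigation/reporting, and setup) across several sessions.
# Both the numbers and the tuple format are illustrative assumptions.
sessions = [
    # (duration in minutes, % testing, % bug investigation, % setup)
    (90, 70, 20, 10),
    (60, 50, 40, 10),
    (120, 80, 5, 15),
]

total = sum(duration for duration, _, _, _ in sessions)
testing = sum(duration * t / 100 for duration, t, _, _ in sessions)
bugs = sum(duration * b / 100 for duration, _, b, _ in sessions)
setup = sum(duration * s / 100 for duration, _, _, s in sessions)

print(f"Total charter time: {total} min")
print(f"Testing {100 * testing / total:.0f}%, "
      f"Bug investigation {100 * bugs / total:.0f}%, "
      f"Setup {100 * setup / total:.0f}%")
```

Even this crude roll-up answers questions the drawer full of paper never could, such as whether setup is eating the sessions.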

(One tip if you do adopt SBTM and James Bach’s tools: also use Session Creator; it just makes things so much easier for the testers.)