Category: Cultivated Management

I get lots of enquiries from founders of start-ups who reach a growth point where they really need to start taking control of the quality of the work being produced. Their companies reach a size and market position where quality becomes a priority.

This is usually about the time when I get a call or an email posing the typical question: how do we implement agile?

“You’ve got to be very careful if you don’t know where you are going, because you might not get there.” – Yogi Berra

Every time I get asked how to implement agile in a team, I ask a simple clarifying question back.

“Why?”

The answer is often not very forthcoming. There is frequently no concrete reason, and no tangible benefit identified, for moving the organisation or team to an agile or rapid delivery way of working.

A key aspect of being a good tester is being able to understand, decipher and communicate in the language of your business. The jargon that your business uses is an important aspect of the way your team members communicate. Embrace the jargon and learn to use it – it often makes communicating within your own social or business group more effective.

As a tester you need to know what other people are talking about.

You need to know the technology they are talking about.

You need to know the approach they are taking and the way they are articulating what they are doing.

The most valuable way to understand the business you work in and the product you are testing is to do your research about the words and phrases being used.

Don’t come away from any meeting not knowing what a word means – make researching it an action for yourself. You should never have to encounter a word or phrase you don’t know more than once: research it and understand it.

You might not need to know the ins and outs of the word’s meaning or what it represents, but you should at least be aware of it.

For example, in a typical meeting here we might mention some or all of the following:

API

UI

Stack

SIP

SIPp

WireShark

Trace

Cold Transfer

Call Leg

Proxy

WAF

App Firewall

Burp

CCXML

VoiceXML

NewRelic

This is a tiny, tiny subset of the words we use. Some represent technologies. Some represent aspects of a system under test.

As testers we need to know what each one means. We need to know how each one works.

So how do I find this out?

Search the web. Find resources and read about each word or phrase.

Ask. Ask your peers. Ask your colleagues.

Decipher. Work out what a term means from the context in which it is being used. For example, if you hear the words “run a Wireshark trace” you should be able to decipher that Wireshark is some sort of tool or technique for tracing something. The more you listen, the more clues you’ll get; you’ll soon build a mental picture of what a Wireshark trace is.

You can then join in and understand the conversation – and then confirm your assumptions with some research after (or even better – a question to clarify during the meeting).

I make a point of jotting down any word, phrase or description I hear in any conversation. I then search, ask or decipher.

What do you do?

Find out what the word means?
Or ignore it and hope you’ll never need to know?

This is a guest post by Adam Knight, whose blog (A Sisyphean Task) is a must-read testing blog.

I was pleased when Rob asked me to write this guest post on the subject of T-shaped testers. It is a subject with which I have a strong affinity, and which is integral to my approach to management. I’d discussed the subject with Rob previously, having encountered the term ‘T-shaped’ through reading Jurgen Appelo’s post on the subject.

Whilst the concept is an interesting one on an individual level, and Rob covers this really well in his previous post, what I really like about the idea of the T-shaped Tester is its implication for teams.

Not all T’s are the same

Whilst the T model provides a simple name and shape for people with broad knowledge and deep key skills, one or two important concepts are not well represented by it.

– the first is that individuals can have more than one deep skill. The T shape tends to imply broad knowledge with one deep core skill. What I have found is that the best individuals possess multiple deep skills, combined with the broad skill base to ‘glue’ these together. Less a T and more like a city skyline.

– the other is that not all T’s are the same. Each T-shaped individual can have different core skills, arising through their experience, interests and self-learning, that make them a unique shape. It is this variety of skills and shapes that, for me, presents the most important aspect of hiring generalising specialists: the ability to put the individual ‘shapes’ together and create amazing teams.

Piecing together a Team

One of my earliest blog posts was on the subject of how I felt a testing team worked best not when comprised of multiple individuals all possessing equivalent skills, but when each individual in the team possessed both a general set of skills and specific specialist abilities that benefited the team.

I presented the idea that, in the context of testing the software, the presence of a range of skills and experience in a team allows the testing operation to critique the product from a range of subjective viewpoints. If we populate a team solely with testers produced via the same training process then the testing will be limited from the perspective of insight and empathy for the stakeholder.

Beyond Testing

The benefits of having a range of deep skills in a team extend well beyond the ability to test the product more effectively.

– self sufficiency

Having a wider variety of skills contained within a team can help make it more self-sufficient. This can be a distinct advantage in removing the friction caused by external dependencies. Testing teams can suffer from being at the bottom of the priority list when it comes to internal IT infrastructure. Having one or more team members who are proficient in setting up operating systems and virtual/cloud test environments reduces this friction and allows the team to be more autonomous and less subject to organisational inefficiencies elsewhere.

– multi-tasking

In the world of small companies and startups particularly, it can be a huge benefit if individuals and teams can take on additional responsibilities. In our earliest customer engagements my organisation really benefitted from myself and some of the other team members using good communication skills to take on the role of customer support. Other members used their diligence and management skills to maintain and manage the automation processes during periods of support activity. This allowed us to successfully progress through these early engagements and support the customers until such time as the company could expand and bring dedicated support staff on board.

– co-operation not competition

One of the problems that I have seen in teams populated with multiple individuals with the same skill set is that all of the individuals share common goals, and often the act of achieving these goals by one individual will implicitly exclude others from doing the same. When a team is made up of individuals with mixed skills and interests, I feel it is easier to build unique personal development goals for each member which are less likely to conflict with the interests of others.

– this is how we want to work

I have never seen a film involving a team where every member of the team had the same skills. Take any famous ‘team’ movie and you’ll most likely have a story where individuals have key skills which contribute to the eventual success of the team. If art holds a mirror to life then our cultural subconscious gives us a pretty strong message that the best teams to work in are those where individuals have unique skills which are valuable to the goal of the team overall. As I wrote recently here, motivation is an important factor in work success. Feeling that your skills and contribution are valued is an important motivational factor in knowledge work such as software development.

Whilst extending one’s own skills as an individual tester is a noble endeavour, for the test manager the implications of hiring T-shaped individuals take on a whole new dimension. By combining the unique skill sets of different team members we can construct powerful and dynamic team units that are autonomous in operation, have mutually beneficial personal goals, and can extend their remit to take on a range of tasks. Each team within an organisation may have a slightly different make-up, but each will possess deep skills and knowledge. Each skill set pieces together to form a final picture of both breadth and depth: the Square-Shaped Team.

I presented the topic of moving from 8-month releases to weekly releases.

I talked about some of the challenges we faced, the support we had, the outcomes and the reasons for needing to make this change.

It actually turned into a question-and-answer session, and despite my efforts to facilitate a discussion it continued down that route. It seems people were very interested in some of the technicalities of how we made this move with a product with a large code base of both new and legacy code (and that my facilitation skills need some fine tuning).

Here are some of the ideas.

We had a vision

Our vision was weekly releases.
It was a vision that everyone in the team (the wider team of more than just development) knew about and was fundamentally working towards.

This vision was clear and tangible.

We could measure whether we achieved it or not and we could clearly articulate the reasons behind moving to weekly releases.

We knew where we were
We knew exactly where we were and we knew where we were going. We just had to identify and break down the obstacles and head towards our destination.

We had a mantra (or guiding principle)

The mantra was “if it hurts – keep doing it”.
We knew that pain was inevitable but suffering was optional.

We could endure the pain and do nothing about it (or turn around) or we could endure the pain until we made it stop by moving past it.
We knew the journey would be painful but we believed in the vision and kept going to overcome a number of giant hurdles.

Why would we do it?

We needed to release our product more frequently because we operate in a fast moving environment.

Our markets can shift quickly and we needed to remain responsive.

We also hated major releases. Major feature and product releases are typically painful, in a way that doesn’t lead to a better world for us or our customers. There are almost always issues or mis-matched expectations with major releases, some bigger than others. So we decided to stop doing them.

The feedback loop between building a feature and the customer using it was measured in months, not days, meaning we had long gaps between coding and validation of our designs and implementations.

What hurdles did we face?

The major challenge when moving to more frequent releases (we didn’t move from 8 months to weekly overnight btw) was working out what needed to be built. This meant us re-organising to ensure we always had a good customer and business steer on what was important.

It took a few months to get the clarity but it’s been an exceptional help in being able to release our product to our customers.

We also had a challenge in adopting agile across all teams and ensuring we had a consistent approach to what we did. It wasn’t plain sailing but we pushed through and were able to run a fairly smooth agile operation. We’re probably more scrumban than scrum now but we’re still learning and still evolving and still working towards reducing waste.

We had a major challenge in releasing what we had built. We were a business based around large releases and it required strong relationships to form between Dev and Ops to ensure we could flow software out to live.

What enablers did we have?

We had a major architectural and service design that aided rapid deployments: our business offering of true cloud. This means the system had just one multi-tenanted version. We had no bespoke versions of the product to support, which enables us to offer a great service, but also a great mechanism for rolling products out.

We owned all of our own code and the clouds we deploy to. This enabled us to make the changes we needed to without relying on third party suppliers. We could also roll software to our own clouds and architect these clouds to allow for web balancing and clever routing.

We had a growing DevOps relationship, meaning we could consider these perspectives of the business together and prepare our plans in unison to allow smoother roll-outs and bring a growing mix of skills and opinions into the designs.

What changes took place to testing?

One of my main drivers leading the testing was to ensure that everyone took the responsibility of testing seriously.

Everyone in the development team tests. We started to build frameworks and implementations that allowed Selenium and SpecFlow testing to be done during development. We encouraged pairing between devs and testers, and we ensured that each team (typically 4/5 programmers and a tester) would work through the stories together. Testing is everyone’s responsibility.

Testing is done at all stages in the lifecycle. We do TDD, Acceptance Test Driven Development and lots of exploratory testing.

We do a couple of days of pre-production testing with the wider business to prove the features and catch issues. We also test our system in live using automation to ensure the user experience is as good as it can be. We started to publish these results to our website so our customers (and prospective customers) could see the state of our system and the experience they would be getting.

We started to use techniques like KeyStoning to ensure bigger features could be worked on across deployments. This changed the approach to testing because testers have to adapt their mindsets from testing entire features to testing small incremental changes.
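The keystoning idea above (shipping partially built features dark across releases, with the final ‘keystone’ entry point added last) is closely related to feature flags. Here is a minimal illustrative sketch in Python; the flag names, menu items and flag store are invented for illustration and are not our actual implementation:

```python
# Hypothetical in-memory flag store. In a real system this might live
# in configuration or a flag service; here it is just a dict.
FEATURE_FLAGS = {
    "new_call_transfer_ui": False,  # code is deployed, entry point hidden
}

def is_enabled(flag: str) -> bool:
    """Return True if the named feature is switched on (off by default)."""
    return FEATURE_FLAGS.get(flag, False)

def render_menu() -> list:
    """Build the agent menu, exposing a keystoned feature only when enabled."""
    items = ["Answer call", "Hold", "Hang up"]
    if is_enabled("new_call_transfer_ui"):
        items.append("Transfer call")  # the 'keystone': the visible entry point
    return items
```

The incremental work behind the flag can then be tested in every weekly release, while the feature only becomes visible to users when the keystone is placed.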

Why we love it
Releasing often is demanding but in a good way. The pressure is there to produce. The challenge we have is in balancing this pressure so as not to push too hard too often but have enough pressure to deliver. We don’t want to burn out but we want to ship.

We exceed the expectations of our customers and we can deliver value quickly. In an industry that has releases measured in months (sometimes years) we’re bucking the trend.

As a development team we get to see our work in production. This gives us validation that we are building something that is being used. Ever worked on a project that never actually shipped? Me too. We now see none of that.

It’s been tough getting to where we are now but we’ve had amazing support from inside and outside of the business which has helped us to really push ahead and set new markers of excellence in our business domain. We’ve still got lots to get done and lots to learn but that’s why we come to work in the mornings.

These are just a few of the factors that have helped us to push forward. There are companies releasing more often, and some releasing less often to good effect. Each business has a release cadence that works for them and their customers.

I got asked the other day how I come up with ideas for talks/blogs, how I think through these ideas and how I go about preparing for talks. I’ll take this opportunity to add a short side note on how I do this. This approach may not work for you.

I firstly create a central topic idea in a mind map (I use XMind).

I then brainstorm ideas around the central topic. After the brainstorm I go through the map and re-arrange, delete, add and rename until I feel I have a story to tell.

I then start planning the order and structure of the story. Every story has a beginning, a middle and an end.

I start by writing the beginning and then the end. The middle is the detail of the presentation.

I then doodle, sketch and plot.

I then move to my presentation tool of choice. In this case it is PowerPoint – sometimes it is Prezi.

The presentation typically takes a long time to prep, even for a very short intro like this. This is because I don’t like including too much text in my slides, and also because I think simple but attractive slides can add some impact to the topic. So I spend some time making sure they are right. That said, no amount of gloss in the slides will help with a bad/poor/boring story.

Last week we had the very clever peeps from Neuri Consulting in to give us a special one day course on being a scrum master.

It was David Evans and Brindusa Gabur that delivered our training, and what a great day it was. We had around 10 people in the course ranging from those with lots of agile experience, those currently in scrum master roles, one of our product owners, those who have expressed an interest and those who’ve done scrum master roles previously.

As it was a mixed bag of experience and expectations we focused heavily on story breakdown and estimation; two areas we’ve yet to master. David and Brindusa targeted their training on these two points whilst also covering a lot of other ground.

We played a couple of games to illustrate points and the misconceptions that we brought to them. We also worked through some ideas about the scrum master role, deciding which we thought were truths and which were lies. We ran a retrospective of the training which highlighted some interesting points and some good take-aways for the teams to work on.

It was a really good day and I think we all took a lot away from it. From my own point of view I feel we need a more consistent approach across teams, but in reality we’re doing pretty well.

With a little tweaking on how we measure cycle time and more emphasis on quick estimations for the backlog I think we’ll start to see more throughput in terms of stories and features.

What was great to see though was the banter and friendships that have formed here at work. It was a lighthearted affair yet focused and in tune with our core ethos of learning being central to all we do.

The only thing to disappoint us was that we all wanted more. We should have had a two/three day course with the peeps from Neuri as we felt we didn’t cover everything we could have.

The key take-aways from the training seemed to be about having more off-site retrospectives and limiting retrospectives to a shorter period of time. This gives them more focus and the opportunity to move away from problems that are lingering in people’s minds but aren’t actually currently a problem.

Last week one of our team, Simon, ran a really fun session with the whole test team on our Exploratory Testing process.

We started by discussing some of the thinking that we’ve identified happens when we plan exploratory testing sessions. We talked through a diagram we created a few years back; although it’s pretty accurate in identifying some high-level ideas of our exploratory testing process it’s by no means complete, but it served its purpose as an aid to explaining.

Once we’d been through the diagram Simon introduced the team to one of my favourite games that explores creative thinking.

It’s a common game where people are asked to come up with as many uses as they can find for an item.

It’s a really good way of understanding how people think up ideas and how diverse the thinking is within our group, and it’s also a good way of injecting some discussion and banter into our session.

Simon gave the group 5 minutes to write their ideas on post-it notes. The item of choice today was a “brick”.

We then affinity-mapped them into rough categories on the whiteboard and discussed a little around each category as a group.

We then discussed the process of how people came up with their ideas.

It’s always tricky analyzing your own thinking, especially so in retrospect, but we did spot a couple of patterns emerging.

Firstly, whether consciously or not, we all envisioned a brick and started from this image we’d constructed. As it turned out we’d all thought of a standard house brick; some people saw in their minds the one with holes in it, others the bricks with a recess. Either way we started with the standard form of a typical house brick (here in England).

Here’s where we appeared to head off in slightly different thinking ways. After writing down all of the basic ideas that a brick is used for (building stuff, throwing at things, weighing things down) we started to head off in the following three directions:

Thinking of other sizes, shapes and forms that a brick could take

Thinking of different contexts and locations where a brick could be used in its original form (outside of those we naturally thought of straight away)

Thinking of everyday objects that we could replace with a brick.

For example:

Grind the brick down to create sand

Use the brick as a bookend

Take the brick to a philosopher (and/or someone who had never seen a brick before) and try to explain what it was used for

Use the brick as a house for small animals, insects and little people

Use the holes in the brick as a spaghetti measurer

Put the brick in a toilet cistern to save water

Use it as a projectile or other weapon

Use it to draw right angles

Use it as a paperweight

Use it as a booster seat in a car

Use it as a holder for fireworks

Use it as a bird bath.

And many, many more.

As you can see we explored a number of uses and we created a decent amount of categories in which a brick would fit.

What was most important though was that we all took time to reflect on where our ideas were coming from.

We also noted that not all of us think in the same fashion. Some people remained true to the form and shape of a brick but explored different contexts.

Others ignored the standard shape of a brick and explored different elements and uses of the materials within a brick.

This realisation that we all think differently about something as simple as a brick triggered some further discussions about how we come up with ideas for testing our new features.

It ultimately led us to conclude that it would make sense to pair with others during and after our story kick-off meeting. It might generate a broader and deeper set of test ideas. It might not. We’ll have to experiment.

For now though we’re running with two testers attending story chats, followed by a brainstorming and ideas-exchange meeting. We decided it would make sense not to do the brainstorming in the story chat, as that would sidetrack the purpose of that meeting, but we will be sharing our wider test ideas with the team as well as writing acceptance tests in SpecFlow for static checks.
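For readers unfamiliar with SpecFlow: it binds Gherkin-style Given/When/Then specifications to .NET step definitions. As a rough illustration of the shape such an acceptance test takes, here is a hedged sketch in plain Python against a toy call-queue model (the model and names are invented, not our product):

```python
# Toy model: a call centre queue. Real acceptance tests would drive the
# actual system; this exists only to show the Given/When/Then shape.
class CallQueue:
    def __init__(self):
        self.waiting = []           # callers waiting for an agent
        self.available_agents = []  # agents free to take a call

    def incoming_call(self, caller):
        """Connect the caller to a free agent, or queue them."""
        if self.available_agents:
            return (caller, self.available_agents.pop(0))  # connected
        self.waiting.append(caller)
        return None  # queued

def test_call_is_queued_when_no_agent_is_free():
    # Given a call centre with no available agents
    queue = CallQueue()
    # When a caller rings in
    result = queue.incoming_call("Mary")
    # Then the call is queued rather than connected
    assert result is None
    assert queue.waiting == ["Mary"]
```

In SpecFlow itself the Given/When/Then lines would live in a .feature file, with each step bound to a C# method; the structure of the check is the same.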

Let’s see how it goes.

It was a really good session and we took away a direct change we could try. It’s good to get the team together every week to chat through the ways we are working, thinking and solving tricky testing problems.

I was hanging out at Google Campus Cafe yesterday in London and checking out the notice board when my eyes were drawn to a sign that said:

“Testers wanted”

It turns out the sign was for another crowd-sourcing community of testers called Testlio. The crowd-sourced business model is growing in popularity and can be a good fit for companies wanting something tested in the wild, on a wide variety of devices, or sporadically (and so with no need for a dedicated test resource).

I do quite like the gamification aspects of Testlio though, and the fact the testing is pitched as “challenges”. Points are awarded for these challenges as well as for doing other things like asking and responding to questions. This helps to build a community aspect and engagement.

There are lots of crowd-sourcing sites popping up, and some consultancy companies have started to offer this model as a way of catering for the need for this approach to testing.

One of the highlight talks from EuroSTAR 2012 was the keynote by John Seddon.

It wasn’t even a testing talk. It was a talk about value, waste and failure demand. The talk was about Vanguard’s (John’s company) work with Aviva Insurance to improve their system to provide value to the customer. It was an interesting talk from my perspective because it was centred around the call centre aspect of Aviva. As I work on call centre products I had more interest than some of those around me.

I saw good parallels to testing and software development but I don’t believe all did. I think it’s a shame because had many people seen the connections I believe they may have been as inspired as I was after the talk.

In a nutshell John told the story of how Aviva was being run based on cost. Management were looking at the system (the business process) as a cost centre and working to reduce costs rather than looking at the root causes of why costs were high.

Aviva started to receive large numbers of calls to their call centres, so they started to build more call centres to cater for demand. The call centres were moved to areas in Britain and abroad where the cost per centre was cheaper. They were looking at the costs of the call centres and were optimising and reducing cost where they could.

The problem, though, was that the costs in the call centre were an effect of customers not getting value at the beginning of the cycle. When a customer interacted with Aviva they would not get their problem or request dealt with 100%, so they would call the call centre again. And again, not get it resolved. So they would call back. The managers took this to mean that people liked to speak to Aviva, hence more call centres. The real reason was that they were not solving the problem correctly first time, and were hence spending more trying to solve it later.

John coined the term “Failure Demand” to explain this. Failures in the system were creating demand elsewhere. In this instance it was calls to a call centre.

He worked with Aviva to increase the chances of satisfying the customer 100% on their first interaction, thereby reducing the need for further call centres. Customer satisfaction went through the roof and savings were made.

The problem Aviva had was that they were managing based on cost, rather than the value they provide to their customers. Switching this focus means a significant mindset change, but the results are incredible.

What’s this got to do with testing?

A lot. When we manage our development process by cost we start to ignore the value that we are adding. We use metrics to make decisions, we look for cheaper ways of doing things and we optimise the wrong parts of the system.

I immediately saw lots of parallels with software development. Rework, bug fixes, refactoring, enhancements and any other work which could have been avoided is, I believe, failure demand. We are spending more money correcting things than we would have spent getting them right in the first place.

With software development, though, there will always be times when we need to refactor, change something or fix bugs. The question for me is at just what level natural change crosses over into failure demand.

Did we not define the requirements well enough and are now having to change the product because it’s not right for the customer?

Did we not include the right people at the start and some tough questions get asked too late in the process?

Did we not have sufficient testing in place early enough to catch obvious bugs which now require rework?

Did we not have the right skills in place to make sound technical decisions which now mean we have to re-write bits of the product?

Did we not spend enough time understanding the problem domain before jumping in and building something?

Agile helps to reduce this somewhat by making the feedback loop smaller, but as John mentioned in his talk “Failing fast is still failing”.

It was a really good talk. It made me really think about which elements of a tester’s work could be failure demand. It reinforced my idea that optimising parts of the system doesn’t often address the root cause, and it gave me renewed energy to look at value rather than cost.

If you’re interested in the talk, here is a similar one (without the Aviva part) from Oredev and here is the Aviva video that John showed during the presentation.

We had our team wide sprint demo yesterday where each team presented what they have been working on in the sprint.

These meetings are a great opportunity to share and learn about the world outside our immediate teams.

In this meeting one team decided to use personas to present their work.

It is not the first time they’ve used visuals and role play to demonstrate the work, but it’s the first time they’d used print-outs of people and given them names and background details.

Each team has settled into their own way of presenting. Some are able to use a demo and talk through the stories they have finished, whilst others talk through what they have done because their work often has no visuals to show (our test infrastructure team). No way is better than any other, as each is characteristic of the work and the teams.

From these meetings, we all learn a lot about the product, the work each team does and to some extent, ourselves, as we share with the wider team what we are doing.

Our product is Contact Centre (Call Centre) software and as such we can use personas to visualise the flow of calls, the states and the interactions. Assigning roles, personalities and context to each person in the scenario allows us to really understand the motives, user story and operational context clearly.

Applying personas in this way allows us to seek empathy with each user in the system and to understand a little more about why the product is working the way it is. After all, I’m sure most of us have interacted with a call centre at some point in our lives; some experiences good, some bad and some very much dependent on our own contexts (in a rush, bad mood, upset, angry, anxious).

The demo that the team did (presented by scrum master Dan and tester Andy) involved the wider team in the demo also.

One product owner played the role of Mary (a tech savvy stay at home mum awaiting a call back about an issue)

Andy played the role of Claire (a call centre agent who was new to the role and was making the return call to Mary)

Another team’s scrum master/dev played Clive (a senior call centre agent to whom the call would be transferred in order to solve the problem)

Getting people playing roles in the demo proved to be a good way of articulating interactions and stories too. This approach might not be suitable for all, but it went down well with the team yesterday.

Can personas be helpful for testing?

Absolutely.

We’ve started to dabble with personas for testing also.

By having a core set of user interactions/capabilities we can start to apply different personas to each interaction, shedding a different light on it.

For example, one of the most fundamental aspects of our product is the ability to connect callers to agents.

We could break this capability out into many different scenarios for testing, for example:

There is no agent available immediately, the call is queued, then an agent becomes available and takes the call. (testing – queuing, dequeuing and call allocation, stats, call recording, etc)

The agent can deal with the call immediately. (testing – straight forward two party call, stats, call recording, etc)

The agent cannot deal with the call at all and has to transfer to a colleague (testing – hold, transfer, retrieve (by second agent), call recording, stats etc)

I could go on with many different scenarios…

Now let’s apply personas.

For each scenario we could approach the interactions with different personas.

We could have agents who are experienced, stressed, new to the job, coming to the end of a shift, working under strict call handling times, etc.

We could have callers who are relaxed, angry, frustrated, fed up of being transferred, knowledgeable in the subject of the call, or not knowledgeable in it at all.

We could be in a large consumer driven call centre, or a small inbound support desk.

I could go on.
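To make that combinatorial idea concrete: crossing the scenario list with the persona lists yields one exploratory charter per combination. A minimal Python sketch, with scenario and persona wording taken from the examples above and a charter format of my own invention:

```python
from itertools import product

# Scenarios and personas paraphrased from the examples above.
scenarios = [
    "caller queued, then connected to an agent",
    "agent takes the call immediately",
    "agent transfers the call to a colleague",
]
agent_personas = ["experienced", "stressed", "new to the job", "end of shift"]
caller_personas = ["relaxed", "angry", "fed up of being transferred"]

def persona_charters(scenarios, agents, callers):
    """Pair every scenario with every agent/caller persona combination,
    producing a one-line exploratory charter for each."""
    return [
        f"Explore '{s}' with agent persona '{a}' and caller persona '{c}'"
        for s, a, c in product(scenarios, agents, callers)
    ]

# 3 scenarios x 4 agent personas x 3 caller personas = 36 charters.
charters = persona_charters(scenarios, agent_personas, caller_personas)
```

Thirty-six charters is clearly more than you would ever run; the value is in scanning the list and picking the combinations that shed new light.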

What do personas bring to the testing?

Well, personas allow us to understand how and why someone is using the system.

They allow us to seek empathy with the agent, caller, supervisor and any other persona in the mix. They let us think clearly about waiting times, queues, effective use of routing, call quality, expectations and the sub-systems that support the interaction (stats, call recordings, audit), and they help us fine-tune our own exploration to look for aspects of the interaction we might not have considered previously.

There are some fundamental expectations that the system must meet.

The expectations, though, will vary in scope depending a lot on the context. Personas allow us to look at the product differently and see if it’s still meeting expectations from that view.

Elisabeth Hendrickson, in her excellent book Explore IT! (What? You’ve not read it yet?), refers to personas as a great tool for exploring the product and gives a really neat insight:

Just as personas are useful for designing systems, they’re also useful for exploring systems. Adopting the mantle of a persona prompts you to interact with the software with a self-consistent set of assumptions, expectations, desires, and quirks.

We’re finding personas useful for design, for testing and now for presenting work back to the team.

As with anything though, there is no silver bullet solution; personas are another tool and technique we can use to achieve our goals, not the only one.

Some things are best presented and tested in other ways, but sometimes personas can be quite helpful.

Could Evernote be used for test cases? That’s the question I asked myself the other month after finishing writing an eBook on Evernote for one of my other blogs. I’d become convinced that Evernote could be used for almost any requirement (within reason).

I did a quick proof of concept the other week looking at whether or not we could use Evernote as a Test Case Management tool with some interesting outcomes, and a great deal of learning along the way. I’ll share this with you here.

I floated the idea with my team after the initial spike and a few of us did a quick brainstorming session to talk through the process flow and major problems with the model. We drew it out and talked about some of the pros and cons. We concluded it was possible, but not without some potentially major workarounds (metrics, reporting, master copies etc). I’ll talk more about that in this post.

Why Evernote?
Evernote, if you’ve ever used it, is a truly awesome way of capturing information.

There are several ways of getting information into Evernote, and once it’s in there it’s super easy to search for stuff.

It’s a perfect companion for me when exploring.

We also have a number of mobile devices which we use for testing. As Evernote is available on almost any device, it seemed like a natural fit.

A great feature of Evernote is that you can share a notebook, which meant we could create a series of NewVoiceMedia testing notebooks and share them across the entire team. Any device, any tester, anywhere.

Requirements of a Test Case Management Tool

Despite being in an agile environment and heavily automating tests, we will always have a number of test cases: for legacy bits of the product, for tests which are not valuable to automate, and for compliance/reporting reasons.

We also have some very lightweight (I actually consider them checklists) test cases that we run to check over each kit, as well as regression tests.

All of the team make copious amounts of exploratory testing notes in their favourite system (rapid reporter, notepad++, Evernote) which ultimately end up as txt files in SVN too.

How Evernote might work
I created notebooks which I populated with notes. As a basic guide, a “note” is a test case and a notebook could be considered a container or “suite”. I tagged each note with the relevant functional area and type of test (i.e. regression, call control etc.). I created the following notebooks:

MasterCopy (contained the master copy of each test case)

ToBeRun (containing a copy of each test case needed to be run)

Complete (containing all completed test cases – each moved from ToBeRun after it was executed)

Exploratory (containing all exploratory testing session notes)

Areas to explore (a notebook containing any area we deemed was worthy of further exploration)
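The flow between these notebooks is essentially a copy/move workflow, and it helps to see it spelled out. A toy Python sketch of that flow, with hypothetical note titles standing in for full test cases (Evernote itself would handle the copy/move by hand or via its API):

```python
# Toy model of the notebook workflow described above. Notebook names match
# the post; the note titles are made up for illustration.
notebooks = {
    "MasterCopy": ["Queueing checklist", "Transfer checklist"],
    "ToBeRun": [],
    "Complete": [],
}

def schedule_run(nb):
    """Copy every master test case into ToBeRun, leaving masters untouched."""
    nb["ToBeRun"].extend(nb["MasterCopy"])

def mark_complete(nb, title):
    """Move an executed test case from ToBeRun to Complete."""
    nb["ToBeRun"].remove(title)
    nb["Complete"].append(title)

schedule_run(notebooks)
mark_complete(notebooks, "Queueing checklist")
```

The key property is that MasterCopy is never touched by a run, so each kit or regression cycle starts from a clean set of masters.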

Creating Test Cases

I like to encourage exploration with the tester’s knowledge and insight leading the testing, so where possible we all tend to write our tests in a very checklist orientated fashion. This means we don’t have “step by step” instructions and we assume that the person running the test case has some prior knowledge of the system and a good grip of testing.

Therefore our tests are very much like checklists and we leave it to the tester (with their own skill, experience, knowledge and insight) to test against the checklist where relevant. I don’t mandate that each checklist item is tested against either. It could be that we only run 30% of the test case. That might be ok. The Tester is in charge based on their insights and local knowledge.

Evernote supports checklists in a very simple form. It needn’t be more complicated than a checkbox against each item so Evernote worked well for creating and storing test cases.

There are exceptions though. Some of the areas we test against are incredibly complex and require absolute precision to check for known outcomes. Therefore we do have some test cases that detail each step someone must take and the expected outcome. These are mainly focused around statistics, percentage reports and other numbers based outcomes that require precise timing and action. Automation helps greatly here, but sometimes a legacy code stack can hinder that automation.

These types of detailed tests can still be accommodated in Evernote easily. I used something called Kustom Note to create a template for these types of test ensuring I captured all relevant information.

The only thing Evernote clearly does not do is report on metrics of completion or coverage. I knew this going into the experiment. That was ok when I started out.

Automatic Creation of Tests In Evernote

So far, so good. Evernote clearly does allow for basic test management where metrics are not the sole output. This suits us nicely. Our metrics are taken at an automation level which is dealt with by other systems and tools.

One of the really great uses I found when exploring Evernote was how I could trigger a new “test note” to be created when a new feature/story/workitem was created in a different system.

I configured IFTTT to grab the RSS feed from Pivotal Tracker and automatically create a new note in the Evernote notebook.

Kapow. A new Exploratory Testing placeholder for a new story. Sweet.

I got giddy with where this could go now.

I then hooked Evernote up to the RSS output of a social media aggregation tool I use. I could therefore collect social media mentions of a product and create exploratory (or investigatory) sessions from it. Interesting.

For example, if there is a mention of something like a bug, or slow down, or any other query about how something might work we can automatically create a note in the relevant notebook for a tester to explore, or even for someone to respond to it.

But it gets more interesting still.

Let’s look at some other potential sources.

Case/Support management tools could generate a session when a new case is raised.

Bug tracking tools could trigger a session and include bug details when a new bug is fixed.

Texts, emails, Facebook updates, tweets… all used to create a new session/test.

Evernote changes themselves could also trigger further new sessions.

Delicious feeds could create new sessions based on new ideas/techniques or approaches that someone else has written about. Bookmark the page (and therefore idea) and create a new session based around that idea.

Dropbox (or SkyDrive etc) updates could trigger a new session. New screenshots, files or shared resources that need exploring for example.

If you use Chatter (or Yammer) you could automate a new session based on a post to chatter from someone in your company.

There are many uses I can think of where we would want to create a new test based on the content or changes in another system. IFTTT can help you with this. RSS feeds from other systems greatly increase the ease with which you can do this, especially if the other system is not directly supported by IFTTT.
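Even without IFTTT, the same trick is only a few lines of scripting: poll a feed and create a placeholder note per item. A stdlib-only Python sketch, where the feed content and notebook name are invented and the “note” is just a dict (a real version would push each note to the Evernote API, or write a file the Evernote client syncs):

```python
import xml.etree.ElementTree as ET

# A made-up RSS snippet standing in for a tracker or support-tool feed.
SAMPLE_RSS = """<rss version="2.0"><channel>
  <item><title>Story 101: queue stats drift</title>
        <link>https://example.invalid/story/101</link></item>
  <item><title>Story 102: transfer drops recording</title>
        <link>https://example.invalid/story/102</link></item>
</channel></rss>"""

def sessions_from_rss(rss_text, notebook="Areas to explore"):
    """Create one placeholder exploratory-session note per RSS item."""
    root = ET.fromstring(rss_text)
    return [
        {
            "notebook": notebook,
            "title": f"Explore: {item.findtext('title')}",
            "source": item.findtext("link"),
        }
        for item in root.iter("item")
    ]

notes = sessions_from_rss(SAMPLE_RSS)
```

Run it on a schedule, de-duplicate on the source link, and you have a home-made version of the IFTTT recipe above.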

Or you could step further into the realms of beta tools and check out Zapier. It looks truly awesome.

The options are plentiful, and not just for creating sessions from other work streams and items. How about the other way around? How about updating another system on completion of a session/test?

Or how about stepping outside of the test case world?

How about using Evernote and IFTTT to create an amazing notebook of testing ideas and content? Each time someone in your team shares something, bookmarks something or creates some other content it could be collated in one notebook for the rest of the team.

With a little time and some deep creative thinking I suspect there is very little a test management tool can do that you couldn’t do with some hacking and mashing of apps like Evernote… with the possible exception of metrics.

But why would you?

Why not? Why not use the tools and systems that are naturally good at their specific niche and make your testing process more fluid and contextual?

If you have a test case management tool that suits your needs then that’s cool. Stick with it. If you don’t, and can’t find one, then why not get creative? Most of the tools that you can hook together are free and could solve your problem.

You’ll also probably learn a lot about your own needs and requirements through the process of exploring, have a good laugh hacking systems together and probably learn about a load of cool tools that could help you in ways you’d never imagined before.

It’s good to explore the new tools on the market, even if they aren’t strictly test management tools. The tech world is moving fast and sometimes it’s worth exploring new ways of doing things. Sometimes it’s best to stick with what you’ve got, but only you will know what the best course of action is.

After all of this exploring and trialling, though, we are sticking with what we have right now.

After investigating these tools I gained deep insight into our needs and requirements and realised that we do need some metrics around our testing for compliance and for reporting coverage.

I’m not done with the exploration of Evernote as a test management tool though. I think it has massive potential. What do you think?

I know many readers of my blog would suggest I’m a hater of certifications for Testers.

That’s simply not true. Despite my ardent fight against them I am pragmatic enough to realise that getting a job often requires getting a certification. And putting a roof over your head often trumps principles and ideals.

I also believe that a certification course, delivered by a competent tutor who has bucket loads of skills and experience, can be very valuable.

I just don’t like what they have come to symbolise in the marketplace. I don’t like how you DO NEED A CERTIFICATE to get a job (in most cases).

Where did it all go wrong?

I’m not here to bash Certification schemes. Use your own judgment and experience on whether you think they give you insights and learnings or not.

Instead I’m going to ask you a question:

Are certifications still relevant?

I don’t believe they have succeeded in making people competent Testers. This is evident from the number of certified people on forums and LinkedIn asking “What is Testing?” or “Tell me how many Tests I should have for X feature!” or “Why is testing so boring?”.

I don’t believe they have succeeded in creating a universal language with which to talk about Testing. This is evident from the fact that most Testers don’t know what “action word driven testing” is or what a “Software Failure Mode and Effect Analysis” is, or the fact that I call it a Test Case and you call it a Test Script. The big question here is “Do most Testers care outside of their own company and context?”.

I don’t believe they have succeeded in promoting the value of software testing to organisations and business. I still come in to contact with a vast array of companies who don’t test, don’t appreciate testing and don’t understand what value testing can bring.

So are they still relevant?

There was a time before the Internet when you had very few places to go to obtain Testing knowledge, training or awareness. When I started out I went to the British Computer Society, a few well known books and the ISEB foundation. The ISEB crowd certified me. I still kept Testing as I had before. I just felt slightly more hire-able.

Only when I reached out to the wider community online did I find a place to soak up information and ideas about testing. I started sharing ideas. I started to meet people who thought the same way that I did. I started to feel like Testing was actually interesting. I started to find people who didn’t talk about standards, didn’t speak in platitudes and marketing pitches and didn’t push certifications at me from all angles.

When access to information is restricted or impossible those that hold the information have the power. If you wanted that information you had to pay. If you wanted to see what the “industry” thought was a good standard, you had to pay to find out, and then pay even more to be accepted.

Social networks and the “digital revolution” have made that information (and a much broader selection of ideas too) available to the masses. Having to pay for access to information is becoming rare.

Yet we still continue to pay for certifications.

We’re no longer paying for the content; almost all of that is available online, for free.

We are no longer paying for the training as it’s possible to sit the course and pass without in person training. (There are also a vast selection of excellent paid and free courses available online and in person outside of the certification schemes.)

I believe the masses* are paying for the right to say “I have a certificate!!!!!”

In a sea of people all shouting “I have a certificate!!!!!” why would anyone pick you?

* There are some people I meet who sit the certification courses as just one part of their continued learning…not the only part of their learning.