The Satir Interaction Model describes what happens inside us as we communicate — the process we go through as we take in information, interpret it, and decide how to respond [2]. In the model above, there are four fundamental steps in going from stimulus to reply: intake, meaning, significance, then response [3].
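As a loose illustration only (this sketch is mine, not part of Satir's or Weinberg's work), the four steps can be thought of as a pipeline, where each stage transforms the output of the one before it and only the final response is visible to the other person:

```python
# A hypothetical sketch of the four Satir Interaction Model steps as a
# pipeline: intake -> meaning -> significance -> response.
# The stage functions and their contents are illustrative placeholders.

def intake(stimulus):
    # What was actually seen or heard, before any interpretation.
    return {"observed": stimulus}

def meaning(observation):
    # The interpretation this person assigns to what they observed.
    observation["interpretation"] = "a friendly seasonal greeting"
    return observation

def significance(interpreted):
    # The personal weight or feeling attached to that interpretation.
    interpreted["feeling"] = "pleased"
    return interpreted

def respond(assessed):
    # The reply - the only step the other person ever sees.
    return f"{assessed['observed']} to you too!"

reply = respond(significance(meaning(intake("Happy Holidays"))))
```

Because the middle two stages are hidden, two people can take in the same stimulus and produce wildly different replies; swapping in a different `meaning` or `significance` function changes the response without changing the intake.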

These four steps are Gerald Weinberg's simplification of the original work of Virginia Satir. Weinberg has written about the Satir Interaction Model in the context of technical leadership, while Satir wrote for an audience of family therapists [4]. When compared side-by-side, the original work includes some additional steps:

Resolving communication problems

The Satir Interaction Model can be used to dissect communication problems. It can help us to identify what went wrong in an interaction and provides an approach to resolve the issue immediately [5].

Many communication problems occur when a response is received that is beyond the bounds of what was expected. Because the steps in the model between intake and response are hidden, the end result of the process that assigns meaning and significance can be quite surprising to the recipient, which can be a catalyst for conflict.

I like the J. B. Rainsberger example of applying the Satir Interaction Model to a conversation where someone is wished a "Happy Holidays". The responses to this intake may vary wildly based on the meaning and significance that people assign to this phrase. [3]

I focus first on my own process, because the errors I can most easily correct are the ones that I make. When I see and hear clearly, interpret accurately, assign the right significance, and accept my feelings, I understand other people's messages better, and my responses are more effective and appropriate. And when I understand well, I am better able to notice when other people misinterpret my messages, and to correct the misunderstanding. [2]

Finally, Judy Bamberger offers a very useful companion resource for adopting the Satir Interaction Model in practice [5]. She provides ideas about what could go wrong at each step in the model and offers practical suggestions for how to recover from errors, problems, or misunderstandings.

Association to Myers-Briggs

Weinberg drew a link between the Myers-Briggs leadership styles and the Satir Interaction Model that may be useful for adapting communication styles with different types of people. He suggests that:

The NT visionaries and NF Catalysts, both being Intuitive, skip quickly over the Intake step. … NTs tend to go instantly to Meaning, while the NFs tend to jump immediately to Significance. … The SJ Organizers stay in Intake mode too long … The SP Troubleshooters actually use the whole process rather well … (pp 108 & 109.) [6]

Weinberg then offers the following questions to prompt each type of person to apply each step of the Satir Interaction Model:

For NTs/NFs ask, “What did you see or hear that led you to that conclusion?”

For SJs ask, “What can we conclude from the data we have so far?”

For SPs, appeal to their desire to be clever and ask them to teach you how they did it. [7]

Distinguish reaction from response

Willem van den Ende draws the Satir Interaction Model so that both sides of the interaction are shown:

Using this illustration he specifically differentiates between a reaction and a response. A reaction happens when a person skips the meaning and significance stages, and simply jumps straight from intake to response. When both people in an interaction become reactive instead of responsive, a fight is the likely result [8]. Understanding that these missing steps may be the cause of a misunderstanding could help resolve the situation.

Unacceptable Behaviour

The Satir Interaction Model may also be useful in structuring conversations to address unacceptable behaviour. Esther Derby suggests that these conversations should begin by getting agreement that the behaviour happened, followed by discussion about the impact of the behaviour, then conclude by allowing the recipient of the message to realise that their behaviour is counter-productive [9].

Sunday, 12 October 2014

I talk a lot about visual test coverage modelling. Sometimes I worry that people will think I am advocating it as a testing practice that can be used in the same way across any scenario; a "best practice". This is not the case.

A tester may apply the same practice in different contexts, yet still be context-driven. The crux is whether they have the ability to decompose and recompose the practice to fit the situation at hand [1].

To help illustrate this, here are three examples of how I have adapted visual test coverage modelling to my context, and one example where I chose not to adopt this practice at all.

Integrated with Automation

I was the only tester in an agile team within a large organisation. There were two agile teams operating in this organisation; everybody else followed a waterfall process.

The other agile team had a strong focus on test automation. When I joined, there was an expectation that I would follow the process pioneered in that team. I was asked to create an automated test suite on a shared continuous integration server.

Having previously worked in an organisation that put a high value on continuous integration, I wanted to be sure that its worth was properly understood. I decided to integrate visual test coverage models into the reports generated by the automated suites to illustrate the difference between testing and checking.

At this client I used FreeMind to create mind maps that showed the scope of testing. Icons were used against branches in the map to show where automated checks had been implemented, and where they had not. They also showed problems and where testing had been descoped.

The primary purpose of the visual test coverage model in this context was to provide a high-level electronic testing dashboard for the Product Owner that was integrated into the reporting from our automation.

Paper-based Thinking

I was a test consultant in an agile team where my role was to teach the developers and business analysts how to test. This team had no specialist testers. The existing team members wanted to become cross-functional to cover testing tasks.

The team had been working together for a while, with an attitude that testing and checking were synonymous. They were attempting "100% automation" and had started to require test sprints in their scrum process to achieve this.

At this client I wanted to separate the thinking about testing from the limits of what is possible with an automation tool. I wanted to encourage people in the team to think broadly about test ideas before they identified which were candidates for automation.

I asked the team to create their visual test coverage models on large pieces of paper; one per story. They would use one colour pen to note down the functionality, then another colour for test ideas. There was a process for review at several stages within the construction of the model.

The primary purpose of the visual test coverage model in this context was to encourage collaborative lateral thinking about testing. By using paper-based models, each person was forced to think without first considering the capabilities of a tool.

Reporting Session Based Testing

I was one of two testers in an agile team, working in a department of many agile teams. I was involved in the project part time, with the other tester working full time.

The team was very small as most of the work in this particular project was testing. We had a lot of freedom in determining our test approach and the other tester was interested in implementing session based testing.

We used xMind to create a mind map that captured our test ideas and showed how the scope of our testing was separated into sessions. We updated the model to reflect progress of testing and issues discovered. It was also clear where there were still sessions to be executed, so the model also helped us to divide up the work.

The primary purpose of the visual test coverage model in this context was for quick communication of session based test progress between two testers in a team. As someone who was only involved part time, the model served as a simple way to rapidly identify how the status of testing had changed while I was away from client site.

A Quiet Team

I was a test consultant in an organisation where my role was to coach the existing testers to improve their confidence. The testers worked together in a single agile team. One of their doubts was that production issues were due to tests that they had missed; they weren't sure about their test coverage.

The testers were very quiet. The team in general conversed at a low volume and with reluctance. There were only a couple of people who were confident communicators and, as a result, they tended to dominate conversation, both informal and in meetings.

I wanted to shift ownership of testing from the testers to the team. I felt that scoping of test ideas and decisions about priority should be shared. It seemed that the lack of confidence was due, in part, to the testers often working in isolation.

Though I wanted to implement a collaborative practice for brainstorming, I felt that visual test coverage models wouldn't work in this team dynamic. I could imagine the dominant team members would have disproportionate input, while the testers may have ideas that were never voiced.

Instead I thought that the team could adopt a time-boxed, silent brainstorm where test ideas were written out on post-it notes. This would allow every person to share their ideas. Decisions about the priority of tasks, and candidates for automated checks, could then be discussed once everyone had contributed, using the collective output of the group to guide conversation.

*****

Before anything else, ask yourself why the practice you'd like to implement is a good fit for your situation. Being able to articulate the underlying purpose is important, both for communicating to the wider team and for knowing what aspects of the practice must remain to meet that need.

I have found visual test coverage modelling useful in many scenarios, but have yet to implement it in exactly the same way twice. I hope that this illustrates how a tester who aims to be context-driven might adapt a practice they know to suit the specific environment in which they are operating.

Sunday, 5 October 2014

My third and final Let's Test Oz post; three experiences that left a lasting impression.

Cognitive Dissonance

Margaret Dineen gave a presentation where she spoke about cognitive dissonance*, the mental stress that people experience when they encounter things that contradict their beliefs.

As an example, you might believe that your test environments have been configured correctly and continue to believe this despite repeated evidence to the contrary, perhaps because your test environment administrator is usually so reliable. However, observing the signs of poor configuration will create cognitive dissonance; a cerebral discomfort that tells you something is not quite right.

Margaret shared how she had learned to acknowledge her distress signals, defocus, and complete a self-assessment. She writes an entry in her "Notebook of Woe" to locate the source of cognitive dissonance that she is experiencing. This notebook is a tool to ask herself a series of questions. How do I feel about my work? What deviations am I experiencing from my model?

I love the idea of this notebook, and the message that Margaret delivered alongside it. "Failure provides an opportunity for learning and growth, but it's not comfortable and it's not easy". This constant and deliberate self-assessment is a method for continuous personal improvement, capturing our discomfort before failure has an opportunity to take root.

Bugs in Brains

I ate lunch with Anna Royzman and Aaron Hodder one day. Our conversation meandered through many testing topics, then Anna said something that really struck me.

"I keep finding bugs in people's brains, where do I report those?"

She was speaking about the way in which we, as testers, start to learn how to interrogate a piece of software based on the developer who coded it. When we've worked in a team for a long time, our heuristics may become incredibly specialised.

Aaron concurred and provided an example. In a previous workplace, he knew that the root cause analysis would lead in very different directions dependent on the developer that had introduced a bug. One developer would usually introduce bugs that were resolved by a configuration change for an edge case scenario. Another developer would usually introduce bugs that were resolved by a complete functional rewrite for a core system flow.

This was something that I could also relate to, but had never considered as anything unique. I'm now thinking more about whether testers should raise these underlying differences between developers and how we might do so.

Talking to Developers

Keith Klain gave a keynote address in which he spoke about the ways to successfully speak to management about testing. I found his advice just as applicable to the interactions between testers and developers.

Enter conversations with an outcome in mind. Manage your own expectations by questioning whether the outcome you are seeking is a reasonable one. There is a common misconception that testers are wasting developers' time. Having one specific goal for every interaction is likely to keep your conversations focused, succinct and valuable, which will help to build rapport.

Know your audience and target your terminology accordingly. Even if you don't have the same skills that they have, you can still learn their language; interactional expertise. You can talk like a developer without actually being able to do development. For example, learn what third party libraries are used by the developers, and for what purpose, so that you can question their function as a tester.

Work to build credibility with people who matter. If you don't join a team with instant status, remember that this can be built by both direct and indirect means. Cultivating a good reputation with people in other roles may help create respect with those developers who are initially dismissive of you as a tester.

* By the way, Margaret has an excellent article on this topic in the upcoming October edition of Testing Trapeze.