As the resident TestRetreat social butterfly, I made sure to introduce myself to all the new faces, although some of us already knew each other from the internet. It’s always nice to put a face to all the Twitter handles in the tester world. After a leisurely breakfast, we began to settle into business mode, which is actually pretty casual for a group of this size.

As usual, I came with some ideas rolling around in my head, but I didn’t have a formal plan until I got up in front of the group to pitch them. I settled on a couple of topics: Doing What You’re Told and Building Community in Testing. After hearing the other attendees’ pitches, we decided to combine forces and pair these topics with two related ideas: What is limiting your agency? and Personal branding, respectively.

Agency vs Doing What You’re Told

Jesse Alford mentioned that he has often heard people say they cannot follow up on a particular suggestion he made when discussing real world problems in testing on the job. He is interested in what limits people’s sense of agency, or being able to be the change they want to see in the world. I felt this related strongly to my interest in the balance between testing as you are requested to work and using our professional judgment to recommend or simply execute appropriate testing. We had several other collaborators join us to explore these topics.

Although I often prefer live-tweeting sessions, I wasn’t sure how we would structure this conversation. We all gathered around a table to discuss these ideas as peers, bringing our diverse experiences. When I lost wifi early in the conversation, I switched to drawing a mindmap on a large piece of easel paper. I find this technique very helpful for visualizing connections as well as helping me to focus on the conversation as it flows. While that may sound strange to some, my own research into teaching and facilitation approaches indicates that other learners find it helpful to keep their hands busy so their minds can be clear.

Hand-drawn mindmap for Agency vs Doing What You’re Told

First, Doing What You’re Told. If we view testing as a service provided to a business, then a business may request various types of testing, effectively buying a requested unit of testing work. The request will vary with contextual variables such as product scope, release cadence, and release purpose. A business wanting to release a minimum viable product (MVP) version of a feature or application has different concerns from a business that has built up an inventory of value ready to deliver that is not yet deployed. In the case of an MVP, the business is looking to explore the market for a particular solution in a problem space. When the concern is idle inventory, the long feedback loop may be related to cost of delay or lack of value realized in a system flow. These motivators are quite different although each has the same desired outcome: deploy a tested set of software features. These requests may address different risk profiles, including the need for both internal and external feedback on quality and value. (We could do a Mary Had a Little Lamb on MVP… but let’s stay focused on this session.)

What do you do when your professional judgment is that a business request doesn’t make sense? For example, some industries are regulated with standards and compliance concerns. While these definitions are often vague, the way a company chooses to satisfy them is very concrete. Auditors may have particular requirements or expectations that influence what testers do to provide early feedback about the viability of software implementations. However, I have heard from testers in the regulated space that an audit can be a negotiation about how to satisfy a regulation (problem) rather than a mandate of using a particular set of processes and metrics (solution). Sometimes the mandate comes from within a company. In that case, what can a tester do to provide valuable information? When the pressure is focused on counting some form of testing work, one option is to use session-based test management (SBTM) rather than manual step-by-step test cases.
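As a sketch of what SBTM-style accounting might look like, here is a minimal Python session sheet. The T/B/S split (test, bug, setup percentages) follows common SBTM practice; the field names, charter text, and bug text are my own invented illustration, not any specific tool’s format:

```python
from dataclasses import dataclass, field

@dataclass
class TestSession:
    """One time-boxed exploratory testing session, SBTM-style."""
    charter: str            # the mission for this session
    duration_minutes: int   # the time box, e.g. 60 or 90
    test_pct: int           # % of time on test design and execution
    bug_pct: int            # % of time investigating and reporting bugs
    setup_pct: int          # % of time on setup and obstacles
    notes: list = field(default_factory=list)
    bugs: list = field(default_factory=list)

    def summary(self) -> str:
        """A one-line report a tester could hand to a manager or auditor."""
        return (f"Charter: {self.charter} | {self.duration_minutes} min | "
                f"T/B/S: {self.test_pct}/{self.bug_pct}/{self.setup_pct} | "
                f"{len(self.bugs)} bug(s)")

session = TestSession(
    charter="Explore checkout flow for payment error handling",
    duration_minutes=90, test_pct=60, bug_pct=30, setup_pct=10,
    bugs=["Declined card shows blank page"],
)
print(session.summary())
```

The point is that a session sheet gives the business something countable (sessions, time, T/B/S breakdown) without reducing the testing itself to scripted steps.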

What constrains your agency?

Jesse’s question about agency dovetailed nicely here. Many testers have reasons that a particular approach cannot work for them in their particular situation. Some broad categories of concerns include inexperience with the proposed way of working, organizational hierarchy control, and motivation.

Inexperience can affect perception of a situation in both the problem statement and in the proposed solution. Sometimes the way we frame a problem limits the solution options we can see, i.e. “Why don’t you just …” For example, if we frame a testing problem as visual validation of a feature and then insist that Behavior-Driven Development (BDD) automation is the way to go, we may box ourselves into the corner of heavily imperative Gherkin scenarios. Alternatively, if we stated the problem of visual validation as automation-tool-supported, we could consider approval testing as a way to quickly detect changes while preserving the element of human judgment that helps us to make progress toward quality without maintaining brittle automation scripts. This may satisfy an organizational constraint such as “100% automation” in a way that empowers people while automating the boring stuff (i.e. visual inspection of every screen component).
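To make the approval-testing idea concrete, here is a minimal sketch in Python. The `verify` helper and the file names are my own invention for illustration, not any particular tool’s API; libraries such as ApprovalTests implement this pattern far more fully:

```python
import tempfile
from pathlib import Path

def verify(received: str, approved_path: Path) -> bool:
    """Compare output against an approved 'golden' file.

    On a mismatch (or when no approved file exists yet), write a
    .received file so a human can review it and approve if correct.
    """
    if approved_path.exists() and approved_path.read_text() == received:
        return True
    approved_path.with_suffix(".received").write_text(received)
    return False

workdir = Path(tempfile.mkdtemp())
approved = workdir / "nav_menu.approved.txt"
rendered = "Home | Products | Cart | Checkout"

first_run = verify(rendered, approved)   # False: nothing approved yet
approved.write_text(rendered)            # a human reviews and approves
second_run = verify(rendered, approved)  # True: matches the golden file
```

The automation does the boring comparison on every run; the human judgment lives in the one-time act of approving the received output, which is exactly the split the paragraph above describes.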

Some testers work in an environment of strong command and control from the organizational hierarchy. These testers may live with concerns of being fussed at, e.g. you signed off on this release yet we discovered bugs in production. People higher up the organizational ladder may use their power in negative ways (e.g. sociopathic games) or in positive ways (e.g. sponsoring junior team members to develop talent). An official “open door” policy may indicate that employees were told to trust one another rather than earning trust through their behavior. Let’s say you buy into the policy and speak to someone above your boss’ level. Although you may be simply sharing ideas or asking for information, this activity can be misconstrued as undermining your boss.

Dependency on others could take many forms. This may disproportionately affect the “frozen middle” levels of management who may not feel in control. These managers may have the ability to remove obstacles to providing testing value but not recognize the opportunities. When we form the problem statement in a way that doesn’t make us feel safe to act, we can lose motivation to solve it. Our emotions heavily influence our perception of what we can do. If we feel threatened or fearful, we may spend energy on resisting change. However, when we are willing to be self-questioning, we can recognize when we really can make a difference and choose how to act. Through reflection, we can act effectively with integrity.

One way we can try to reconcile what we’re told to do and what we may choose to do is pairing with our colleagues. This provides a dedicated period of time to ask about the intent of the request. In some contexts, no one tells you what to do, so you may pair with someone else motivated to solve this particular problem. When you choose to work this way all the time (i.e. 100% of your work hours), you can overcome physical separation, whether with colleagues in the same location or working remotely. Pairs can achieve a high level of flow through constant exchange of information and quick feedback on ideas as well as solutions. Some of these solutions may be non-testing mitigation of risks.

We only had an hour together to dig into this rich topic, but it definitely has me thinking. In the end, we should remember that software development is a relatively young industry. Software testing as a specialization is even younger. Making room for good testing work involves both hearing what you are told and using your judgment about what you can do in your context to accomplish the goal. We can try small experiments in how we work to see what improvements we can make without asking permission. #SorryNotSorry

Well this has turned out to be a longer and more serious post than usual… I’ll tackle community and personal branding in a follow-up post.

I heard that Gerry Weinberg has an exercise called “Mary had a little lamb,” in which you analyze each word in the sentence to elicit implicit meaning that might be important. This sounded interesting enough to try, so when the opportunity came to propose a topic at Test Retreat 2013 I went for it. My topic “Is testing for me?” didn’t end up formally scheduled but made a nice interstitial topic to discuss with those milling about in the main room.

I chopped the sentence into separate words and wrote them top-to-bottom on a large sticky note. Then, instead of giving some sort of prepared remarks, I elicited brainstorming from the gathered participants. Having received interesting feedback on my professional and personal strengths at Agile2013 that had left me questioning how best to use my evil powers for good, I wanted to hear how others were thinking about the testing field and how it fit them.

The resulting scrawled notes ended up a mindmap, the path of least resistance for me. I won’t say the discussion solved all my problems, but it did give me some direction for future exploration – exploration that might also be helpful to a newbie wondering whether to pursue a career in testing.

I started composing a list of things I’d recommend to people just starting out as testers to help them to evaluate whether to continue. I wanted to encourage them to jump right in but also think big, not wanting them to wait 5 years to reach out to the wider world of testing (like I did).

Here’s my current list. I’ve blogged about the various experiments I tried, so you can read about them for yourself and decide which would make a good starting point for you.

No matter how many times I think I’ve found all the meaning in my testing career, suddenly I realize there are more layers… but like a parfait, not an onion.

Donkey: Oh, you both have LAYERS. Oh. You know, not everybody like onions. What about cake? Everybody loves cake!
Shrek: I don’t care what everyone else likes! Ogres are not like cakes.
Donkey: You know what ELSE everybody likes? Parfaits! Have you ever met a person, you say, “Let’s get some parfait,” they say, “Hell no, I don’t like no parfait”? Parfaits are delicious!
Shrek: NO! You dense, irritating, miniature beast of burden! Ogres are like onions! End of story! Bye-bye! See ya later.
Donkey: Parfait’s gotta be the most delicious thing on the whole damn planet! – Shrek

Abstract: Software development is a beautiful thing. We often create amazing ideas and features that would be wonderful… if only those pesky humans that end up using, abusing, and misunderstanding our brilliant code weren’t part of the equation. Unfortunately, we’re all in the business of developing software for people (yes, even when it’s machine-to-machine communication, it serves human beings somewhere). What are some ways that we can approach testing software outside of the theoretical ideals, and actually come to grips with the fact that real human beings will be using these products? How can we really represent them, test for and on behalf of them, and actually bring in a feature that will not just make them happy, but represent the way they really and actually work, think and act?

Expected Deliverables: An excellent debate, some solid strategies we can all take home, and some “really good” practices that will be helpful in a variety of contexts.

My take-aways were:

People are imperfect so ideal users aren’t enough for testing.

By definition, a composite of many people (e.g. user persona) is a model.

Maintaining too many user descriptions based on small differences is overwhelming and not practical for testing.

On Wednesday night of this week, I joined Christin Wiedemann‘s regularly scheduled Skype test chat with some other lovely wonderful tester folks and we focused on empathy in testing. We wrestled our way to some working definitions of empathy and sympathy, which was much better than shallow agreement though it took a bit of time to establish. We agreed that testers need to observe, focus on, and understand users in order to serve them better. We find that empathy for our users and passion for our testing work go hand-in-hand since we care about helping people by producing good software.

Then we struggled with whether empathy is an innate trait of a person testing or whether empathy is a learnable skill that testers can develop through deliberate practice. (Go watch the video behind that link and come back to me.) We concluded that knowing what others are thinking and feeling, getting inside their skins, in the context of using the software is essential to good testing, though this might require a bit of perseverance. This can go a long way toward avoiding thinking we have enough information just because it’s all we know right now.

I immediately took to (end) user personas as a natural progression from user stories. After all, user stories are focused on value to and outcomes for a particular group of users. Describing those users more specifically in a user persona dovetailed nicely. Rather than some sterile requirements that never name the user, identifying a role – or, even better, a rich symbol such as a named primary persona – focuses the product team’s efforts on serving someone by helping us to understand the purpose of the work we do.

It’s fair to say I’m a UX-infected tester. More than fair. I identify with the curiosity I see in the UX profession and I admire the courage to kill their darlings (carefully crafted designs) when evidence shows it is time to move on. After all, we’re not building this product to marvel at our own cleverness but instead to serve humans.