This blog is about thinking of things past, present and future in testing. As much as I'd like to see clearly, my crystal ball is quite dim. Learning is essential and this is my tool for that.
A sister blog in Finnish: http://testauskirja.blogspot.com

Tuesday, April 26, 2016

There's recently been a significant transformation on a piece of software my team and I work on. In the last month, we've done more releases of that piece than we managed in the whole year before. There's a clear feeling of progress. The progress has also included the amount of code dropping to a quarter of what we started the transformation with. The final frontier of siloed development has joined team ownership and our drive to clean things up so they can be understood.

Looking at this piece of software, I'm observing an interesting aura of mysticism in how users approach it. It's like moving into the past: four years ago I surprised our product manager with the idea that software can work when he tries it out - something that should be close to obvious in my books.

The worst piece of code my team delivered probably also had just one or two bugs reported from production over the last year. But the reason was that the users would seek workarounds rather than ask for fixes, or that the bugs just did not bug the users enough for them to speak up.

Since we moved the piece of software into frequent release cycles, we've fixed a LOT of bugs users never complained about. A lot that users had not even run into. Looking at the bugs we've fixed, I find it nearly impossible to believe that they could have been using the software all this time. But they did - they found their way around everything that bugs us (and slows them down).

When you get little to none in bug reports from production, do you assume all is well? My sample says I shouldn't. Even asking the users directly might not give you a realistic picture.

And in the other areas where feedback is more frequent: if it bugs a user, it's a bug. And we have loads of things that bug the user that we rather frame as feature requests, some of which we have no intention of acting upon. Counting them seems counterproductive. We could count how often a quick request to change something in production interrupts us and reorders our immediate priorities, but most of those wouldn't count as bugs in any traditional sense.

This post is inspired by two sessions of mob exploratory testing. I did one at the Agile Serbia conference on Saturday, and Llewellyn Falco did one at a pre-Craftconf meetup in Hungary, reusing my session description.

In discussing his experience of facilitating an exploratory testing mob without me, it was interesting to notice what he had chosen to use from me and what he seemed to do differently. Here's what I learned:

We both emphasize the private role of the mindmaps created while exploring. It's not a final deliverable or an external document; it's something intended to help you (your mob) while you're testing.

He emphasizes tracking concepts on the mindmap more than I do. It seems I treat the product as my external imagination, so the mindmap is more of a secondary tool for me, whereas he treats the mindmap as a necessary model that needs to be built right from the start. I speculate that the different roles we see are related to me primarily thinking of all testing as performance (practice before making pretty much any documents), while he still has the strong developer background that might drive him towards testing as artifact creation. More discussion on the differences in our belief systems is clearly needed to understand this.

We both emphasize paying attention to emotions while you test, but there is a clear difference in how we try to communicate that. He taught his group to pay attention to trigger words revealing negative emotions: "This is confusing", "I wasn't expecting that". Basically any negative sentiment, frustration and confusion being the most common ones. Trigger words should, as he explained, lead to making a note that drives a discussion about a possible bug with the developers. When I lead a teaching mob, I alert the groups to emotions in general. We both note that there's a lot of uncertainty among testers about what to speak up about, leading teams to lose information they could have readily available. The encouragement to speak up through feelings is necessary.

I find Llewellyn makes an excellent case for developers picking up the thinking patterns around exploring. Learning from various mob exploratory sessions with testers around the world is probably the quickest way to get practical ideas from people's heads. You should try it too. Let me know if I can help!

There's all sorts of production monitoring tools we've been using, but I recently ran into something different that I've been looking into tonight. The tool is called Hotjar, and a friend introduced it to me as a tool for usability testing. With the tool, you can see each user's mouse movements and clicks in video format, and you can build more fine-grained ways of analyzing when your users lose engagement on your pages.

How much of a spying tool this is became clear to me today as I went and checked for the first time the recorded uses of my personal landing page.

The red line traces the mouse movements. The red dots indicate clicks. I see what devices and browsers my visitors have used, and how long they've stayed.

Following what my users saw, I can test different screen sizes and devices through the eyes of my users, without setting the environments up myself. Doing this early on (and fixing what I find), I could prune out problems through testing in production, annoying a limited number of users but coping with my limited ability to cover different combinations.
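The core idea behind these session-recording tools can be sketched in a few lines. This is a toy illustration only, assuming a made-up event format (it is not Hotjar's actual data model or API): given a recorded stream of timestamped interaction events, find the long idle gaps where a visitor's engagement trailed off.

```python
# Toy sketch of session-recording analytics: find where a visitor went idle.
# The event format here is hypothetical, invented for illustration.

def engagement_gaps(events, idle_threshold=10.0):
    """Return (start_time, gap_length) pairs where the visitor went idle.

    events: list of dicts like {"t": seconds_since_start, "type": "move"|"click"},
            sorted by time.
    """
    gaps = []
    for prev, curr in zip(events, events[1:]):
        gap = curr["t"] - prev["t"]
        if gap >= idle_threshold:
            gaps.append((prev["t"], gap))
    return gaps

session = [
    {"t": 0.0, "type": "move"},
    {"t": 1.2, "type": "click"},
    {"t": 2.0, "type": "move"},
    {"t": 30.5, "type": "move"},   # a long pause: did the page lose them here?
    {"t": 31.0, "type": "click"},
]

print(engagement_gaps(session))  # [(2.0, 28.5)]
```

The real tools do far more (DOM snapshots, scroll depth, heatmaps), but the drop-off analysis mentioned above is essentially this: pairing consecutive events and flagging suspicious silences.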

For now, I'm just blown away with this. And needed to share. I reserve my right to change my mind as always, but for now, I'm just excited.

Monday, April 25, 2016

On a remote day of work, there was very little discussion. I was focused on the application when a discussion started over the team's Flowdock channel.

Four developers at the office had started to wonder about a user interface design element in something one of them was working on at the time. The question on the channel was directed at the user interface specialist, with a picture: "what if it was this way instead?".

The discussion continued on defining relationships of concepts to be selected: should you be able to select two at once out of the list or just one? Should there be a "no selection" option too?

The user interface specialist concludes that the choice between radio buttons and a combo box boils down to one principle: "There's no reason to hide the selections from the user".

A developer comes back with a consistency argument: there's another element, conceptually just like this one, right above it, and it uses a combo box. Why shouldn't the two be the same?

I enter the discussion, repeating the consistency argument. But I also add a piece of data: in 90+ percent of the cases, the user does not want to make a selection of this. Selecting anything but "no selection" is a special case.

The conclusion changes with the added data. In this case it's natural to have a combo box over the radio buttons. A design is agreed upon.

I share this story to emphasize that it does not matter whose role is what, or what contribution you could expect based on the role. As a tester in the team, I have a lot of empirical data and a keen eye on the real use cases through listening to end users.

Settling this through a discussion, rather than addressing it through hands-on testing, just made everyone's life a little easier. Hat tip to my developers for initiating a discussion that took extra effort due to remoteness, and for their persistence in caring how it would turn out.

Thursday, April 21, 2016

A remark in an online lean coffee caught my attention: "I don't write automation. I really don't like that work. And there's N developers and just one of me, so I leave the code for the developers".

It could have been me speaking. But it wasn't. It wasn't even someone who has been around me long enough to learn to speak like me. But hearing something I recognized so strongly made me feel the urge to share my story of how things changed.

Rewind back a little over a year, and I would have sworn I would never be a programmer. "I really don't like that work" would have been exactly my words. Someone suggesting I could try that would be met with skepticism. You can probably find me saying that in public in this blog, which is intended to keep me honest with myself - if you get something out of it, that's a plus.

I started mobbing not because I wanted to learn programming, but because I wanted my team to work better together. I wanted to feel my social needs met at work. So in hindsight, little did I know about how the human mind works.

When thinking of changing jobs, I had an interview with a psychologist. Me mobbing with my team was one of the stories I shared with him, and he labelled what might have happened: Cognitive Dissonance.

Cognitive dissonance is a state in which our beliefs and actions are not aligned. People have this habit of needing to feel whole and consistent, and we do that to the extent of rewriting our history and perceptions.

If my foundational belief is that I don't really enjoy programming, I seek to not do it - after all, I believe I don't like it. Theories around cognitive dissonance seem to be saying that we change our beliefs from dissonant actions - not the other way around. When I do something that is against my belief system, it starts a process of rewriting parts of my belief system.

With mobbing, this happened slowly. After all, we were mobbing at most once a week in the office. The extra sessions to learn the teaching technique I organized with the local tech community added some. I always told myself I had other motives to join the mobs, I never really cared for the programming part of it. I wanted developers to learn testing. I wanted to see if I could think fast enough to spot problems without the product as my external imagination. I wanted to see if I could use mobbing to teach my exploratory testing skills to other testers. The coding was just something I had to endure.

But over time, my feelings changed. As I did more of the programming in the mob, it wasn't just that I became more confident with the things I already knew as technical non-programmer or that I picked up pieces while we were doing it. My attitudes started to change, impacting my belief system that I used to define my place and role in the system of creating software.

I went from "I don't write automation. I really don't like that work." to "Let's write some automation. But why focus only on test automation, we have a bigger problem to solve with thinking around code".

My claim is that convincing me on this in advance would have been close to impossible.

So think about it: how could you use cognitive dissonance to give yourself a chance to change your fundamental beliefs by engaging in work you feel isn't yours? Or even more: could you use cognitive dissonance to change the mind of that difficult developer who never wants to create any unit tests?

Wednesday, April 20, 2016

"I could add this new reporting feature in less than a week", says the developer. "But that would not include the cleaning up and changing of components we should do in the same area", he continues. "We probably should just say the feature takes a month, because we have to be able to change the components too", he concludes.

Ever heard this monologue? The idea that you need to come up with a different story for your product owner, because they understand only features and not technical maintenance work. The idea that all the technical maintenance work should be baked into the feature work, even if it transforms the scope of the work?

We ended up reframing this discussion. Let's be nice to the product owners, and give them the feature in production sooner. It gets real feedback sooner, and it starts paying itself back sooner. And let's trust that it is ok to do the needed cleanup afterwards, without baking it in.

Sometimes, the feature-orientation and control of the product owners causes development teams some peculiar behaviors. So this post is a start of my new thing to focus on: what could we do so that the relationship of the perspectives wouldn't be based on rules (who gets to decide what) but on trust and collaboration.

Tuesday, April 19, 2016

Zero bugs. Stop coding bugs. I can't help but smile a little whenever I hear this but often choose to step away from the argument. This was a topic, however, that Arlo Belshee addressed at Agile Alliance Technical Conference.

I don't really care much for zero. But I care for numbers becoming much smaller than what we're used to. And that is the perspective from which I listened to the discussion around the fancy marketing term.

The core idea I picked up was that when your code reads well, you make fewer mistakes. And that your ability to read code is more important than the executable tests keeping your code harnessed. Even more, the amount of code comments could be a negative indicator of the ability to code with fewer mistakes, as good code reads almost like English.

I've been speaking about the fact that my team does continuous (daily) delivery without test automation around. It's nowadays so normal for me that I don't really put much effort anymore into explaining it to others. But when I still talked about it, I heard we're irresponsible. I heard it is not possible even if I've lived it for almost two years now. And it is possible with a team of 10 developers and one tester.

The zero bugs discussion gave me new tools to look at what we do and what might make us successful in what we do.

We focus on code readability

We're cleaning up and rewriting regularly. I encourage this behavior - even if it means I get to help test the same things over and over again, as the amount of test automation is ridiculously narrow.

Developers do exploratory testing

The developers test quite extensively themselves. This is actually the main reason I'm still driving forward test automation in my team, as I feel the manual testing must make us slower on this part. We're careful. But with care, we also succeed in introducing very little regression while delivering features at a steady pace.

Tester adds perspectives to exploratory testing

When I test, I still find problems, so we're down to small numbers only after addressing the internal feedback. Over the last year, the developers' testing skill has increased significantly, and a part of succeeding with this is my new discipline of avoiding written bug reports (replacing them with discussions).

There's a special approach to how I find my problems, and I still try to open it up better with my team. I read the shape of code commits to know what I will cover. I compare the shape I see to the shape I expect based on discussions. And the shape of discussions is formed by what is said a lot, a little, and not at all. The shapes are models, and I overlay a lot of different models to make up my mind about what to cover.

I see value in what I do to keep things great for production, but I also love the fact that I'm increasingly less needed - I call that progress. But listening to the zero bugs discussions, I get the idea that I might have been discounting the value of my other activity as a tester: making sure my developers are empowered and supported in their need to keep the code readable, to their best current knowledge. I love how they pick up new things and ideas and hate their old stuff - that's how it is supposed to be.

The more we practice, the better we get. How often are we allowed to fix our past mistakes, made at times we knew the least? I would hope the answer is: every day.

Sunday, April 17, 2016

I've been around conferences a while. A little over a year ago, at one of the conferences, I had a chat with one of the DEWTs (a peer conference in the Netherlands) and expressed my interest in joining. I don't remember the exact response, but I remember my feeling: the remark on why I would be included was that they needed more women. Not more awesome people, but women. It turned into a laugh as these things usually do, but the feeling remained.

When I then got an email inviting me, I was happy and joined. I enjoyed the conference. I met some awesome people there, and the new people I added on my list of people to appreciate were mostly women. The men I already knew and the new men did not get through my limited bandwidth.

Late in the night, I remember hoping more people would talk testing. Instead, the discussions I ended up in were either about beer (I'm amazed how much there is to say about beers, and I learned a lot even if I'm not particularly into the subject) or board games. My warmest memories are of talking about changing the world of testing with one of the DEWT fellows who, I felt, extended out of his usual circles to keep me company. Most of the time I felt alone in a crowd. And I was invited once only; for the next ones I could see the same men there but knew that I was excluded for lack of space. New women (and men) occupied that space, which is great. The spaces are limited.

I have social awkwardness around small talk (non-testing talk). In the last few years, I've found a perfect way for me to work through it. I go and talk to people about what they are into in testing, and find out if they would be interested in speaking in public. And a lot of people are, after I get to the point of sharing my honest excitement about something we've just discussed. I can do that because I have perspective on what is special, and everyone has something special they could share.

Recently, I've been counting numbers. At AATC (Agile Alliance Technical Conference), I can easily remember meeting 10+ new women (including the amazing Laurie Williams - highlight of my conference was to teach her about strong-style pairing she then mentioned in her talk on pair programming) and 3 new men. And quite an even distribution of old friends in both genders.

It seems this happens to me a lot. I go to a conference, and I come back with a list of names, primarily women. I'm biased, I know. It's easier to approach a lone stranger of my own gender. It's not an active decision, it just happens. So when I organize conferences and need to name speakers, I have to work not to create 90% female speaker rosters.

This experience makes me think that the same might happen the other way around. That the men naturally get to meet more men. And that when the bandwidth is limited, it would require a lot of effort to go talk to the women. And if the men tried this, who knows who would take that as an unwanted advance.

I also help women submit to conferences. I'm proud to say that there are 7 women on the AgileTD list who are somehow connected to me. Some I encouraged to reuse their old talks (you'd be amazed: it took me 14 years to learn I can and SHOULD reuse my talks instead of always writing a new one - a lot of women submit only new talks). Some I helped write their abstracts, listening to their stories of excitement they could not quite get on paper. Some I asked to share their stories, knowing they had one. Some I reminded of the fact that the conference pays for travel. I even suggested organizing a scholarship to compensate for time away from work (got declined, that was not an issue this time). Some I reminded that they can submit now and decide later whether they want to go if they get accepted, knowing they are just afraid of the no - and postponing the decision helps with that.

When the conference organizers say "We've tried everything" they mean "We haven't actively left women out and even reached out to a few". Let me show you a few things the recent people who did "everything" did not do:

They did not invite me. I have no idea which other women who could have joined they excluded. Not only did they not invite me, they made me aware of the event only through "we had no women" messaging. Also, being invited once does not really count when others get to be regulars.

They did not ask me or Lisa Crispin for our extensive lists of amazing women around the world

They did not ask Speak Easy for their contacts

They did not spend 5 years in conferences meeting primarily women they could encourage, invite or promote

There are a few special aspects to women in testing conferences that are hard to express over Twitter:

Speaking about lack of women makes this worse. It makes every woman a representative of their gender, over being a representative of personal awesomeness.

Women group up so that they don't have to represent their gender. There is a reason why some IT work places have loads of women whereas others have none. It's safer in a group.

Peer conferences with their debate culture (let's attack your ideas) are more often a turn-off for women. Why would I volunteer to be attacked just because that's a mechanism the men in the field found useful? I speak for a dialog culture, the idea that we'd actively build a safe environment to talk about ideas rather than attacking them so that only the strongest survive. The stories from past peer conferences where people felt attacked (!!) aren't helping.

The "women and weekends can't be a problem, men have families too" argument feels ridiculous. Families where the work distribution (especially the meta work - organizing the work) is equal are rare. I'm one of the lucky women who only feels extreme guilt leaving for weekends; I still get to go. Many don't get the chance.

An average woman at conferences (or in IT in general) is better than an average man. They all feel they need to represent their gender. They get heavily filtered just from that. Helping women get over all the obstacles they will face when they decide to speak in public is a support effort. More women help. And making room for women isn't going to make your conferences worse.

An added note on someone who does well on this: James Lyndsay with LEWT. I was at LEWT #1. I've been invited to every single one since. I've only made it twice. But I feel I'm a LEWT. First come, first served. With enough women that I never end up as a representative of my gender.

Friday, April 15, 2016

I tried it again: taking my style of mob exploratory testing into unknown, uncharted waters. I delivered a training day where we learn through testing the client organization's own software. Software I've never seen before we start testing it.

I know I need more practice to get to the point where I no longer feel afraid of not being able to do it. Moving around between different software takes my quick learning ability out for a stretch, which often feels a little uncomfortable. But it brings such insights for my trainees that any discomfort of mine is justified.

I asked the organization to bring in a laptop with a test environment with their software on it, and to select an area out of their documentation we could work on. The documentation is often a test case or a piece of a requirements document. We start with identifying an area and exploring it. We extend to using the documentation only a little later; first impressions are built without the constraint of a document.

There were a few insightful moments for me that I wanted to make note of:

Less than 15 minutes to first relevant bug

We were testing a feature I'll label here as "meters". I quickly learned something that was obvious to everyone else: there are two kinds of meters, different enough to justify having their own create functionalities. Yet the forms that opened seemed very much the same, and I still seemed to have options for changing my mind, not only between the two types we started with but a dozen others. The concepts were taking shape in front of me, as I could map them to physical entities brought into the software world. I did not agree with the design, but that was nothing to mention in the beginning since more info would be needed.

We created both kinds of meters, making notes of the functionalities we saw within them. As I asked the group to note a particular piece of functionality by giving it a name, the group also tried selecting the options. The end result was serendipity: out of the two meter types, we could only create one, as the other consistently failed with a database error message. And it was not even illegal data, just a combination of something legal deeper than the first level.

A "caring for code" discussion

Later in the day, a discussion took place that kind of surprised me. I shared how I test our daily releases by looking at what the developer says vs. what the code / check-in says, to assess risks and to build a per-change strategy. A developer in my training looked puzzled and voiced his question: "Is that what testers usually do?".

I explained that I look at the code to build a model of how well we understand the whole. Inconsistencies in what I hear and see lead me to insights on focus areas. Overemphasis of something does the same. I look for patterns that lead me to add things that are missing. And shape of code has proven very useful for me for that.

The puzzled look continues, and we end up asking the audience of 20 people - from management, testing, programming and support if anyone does anything of this sort. None came forward.

This left me wondering if it's something more special than I had given it credit for. Because even when doing that, I don't "review code" or "read code". I look at size and shape rather than contents. And there often is an idea the size and shape leads me to, one that makes the software speak to me and reveal more potential problems.

"This is not the testing we do here, but should"

My last note is on a comment given to me behind the scenes by the friend who brought me in to do the training. As he checked on how we were doing, the comment was that I was teaching testing they don't really do, but should. I suspect they do, but don't talk about it enough.

Talking about how we do intelligent manual exploratory testing happens way too rarely. And sharing more about it with anyone who wants to hear (and some who don't) would make life as testers better.

My takeaway was that a lot of this stuff boils down to learning to talk about it. Communication. Helping people see what I do. And it left me with the idea that mobbing is indeed powerful. It allowed me to show testing in a way people think doesn't happen.

Thursday, April 14, 2016

There's a lot of advice flying around about how to interview testers. I get that people are concerned about finding the right person to join their team, and recognizing what makes a good / great tester is relevant, and that with the modern team work approaches the person needs to gel into the team. But let's look at the recruiting process from another angle: the one being (potentially) hired.

I enjoy my current work a lot. Things are better because of me. Things keep getting better. We try some great stuff, and I adore my developers for the intellectual challenges they pose me (when they need my help) and for the great stuff they do (making me feel unnecessary). There's two things I'm regularly missing: other testers to grow with me and harder problems to solve. And I'd love to relocate to California for private reasons for the right job, if that would present itself.

I don't actively look for new work, but every now and then there's something I can't quite resist taking a better look at. The positions I've looked into recently would serve my need of other testers to work with and harder problems (being significantly behind in agile adoption and quality-related practices) and I would have a lot to offer.

I've just gone through one recruiting process to the point of having to decide whether I accept the job or not. I'm still undecided, and wanted to share a concern these recruitment processes raise from my perspective.

As is the trend of the era, there can be a lot of work without compensation as part of recruiting. There's the manager, team and HR interviews (or even more). There's a full day of psychological tests. And there might be a day of working with the team. All unpaid. Things that happen while I should still be at my day job earning my monthly salary. Things that make me take unpaid vacation or give up weekends to compensate for the lost hours.

The feeling these processes give me is that I'm a liability. The processes bring out all of my weaknesses (which I'm well aware of, having been through these tests and discussions many times before) and mention my strengths in passing. I feel I'm being heavily filtered for possible inappropriateness. And this filtering is expensive for me. I don't have corporate resources behind me. This is significant money out of my personal pocket, and I refrained from writing about this until I knew this was not a bitter rant about being rejected, but a consideration at the point of being accepted after all the filtering.

I think back to the time I was about to join Granlund. How my manager back then used a bit of the interview time to confirm I was the tester they were looking for, but the majority to make me feel wanted and welcome. And I remember discussions with one of my smartest old colleagues, who mentioned that the really good candidates need to be approached differently, to make them choose your company over all the other options.

I'm sure I'm not the only one out there who has a lot to give (and expects to be paid for that work fairly). I'm sure I'm not the only one who feels many companies could use my skills and expertise. I'm sure I'm not the only one who feels that I'm interviewing the company and reflecting on their value system just as much as they are interviewing me.

An interview shouldn't be just a filtering process. It's also a sales process. Why would you want to give your capabilities to this job and this organization? Money is not the defining characteristic of quality of life around work. For me, sharing some of my values is a defining characteristic. And making me invest days into a recruiting process on the assumption that that is how these processes work just feels wrong and outdated.

Monday, April 11, 2016

There are some discussions that make me both happy and sad at the same time. I had one exemplary version of these chats at the Agile Alliance Technical Conference, right after my session, that I wanted to share with you.

My session was about exploratory testing an API. I wanted to create (and succeeded in creating) an experience in mob format where people would apply exploratory testing as a group on something without a user interface. So we were testing ApprovalTests. I like ApprovalTests for many reasons. First of all, testing a testing framework is so wonderfully meta in many ways that I feel a lot of testers would intuitively have ideas about what could be valuable in that domain. Second, it has extensive unit tests and a large number of users (or downloads, rather). It could work. I would love to test something that mostly works, to not get bogged down with the simple bugs but to actually show where the value of exploration lies even with "well-tested software" like ApprovalTests.
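For readers who haven't met approval testing, the idea the framework is built on can be sketched in a few lines. This is a toy illustration only, assuming invented names and file layout; it is NOT ApprovalTests' actual API. The received output is compared against a previously approved baseline file, and on a mismatch the new output is written out for a human to inspect and, if correct, promote to the new baseline.

```python
# Toy sketch of the approval-testing idea (not ApprovalTests' real API):
# compare received output to an approved baseline; on mismatch, save the
# received output so a human can diff and approve it.

import os
import tempfile

def verify(name, received, approved_dir):
    """Return True if `received` matches `<name>.approved.txt`.

    On a mismatch (or a missing baseline), write `<name>.received.txt`
    for a human to inspect, and return False.
    """
    approved_path = os.path.join(approved_dir, f"{name}.approved.txt")
    received_path = os.path.join(approved_dir, f"{name}.received.txt")
    if os.path.exists(approved_path):
        with open(approved_path) as f:
            if f.read() == received:
                return True
    with open(received_path, "w") as f:
        f.write(received)
    return False

workdir = tempfile.mkdtemp()

# First run: no baseline yet, so verification fails and leaves a .received file.
assert verify("greeting", "hello world", workdir) is False
# A human "approves" by promoting the received file to the approved baseline.
os.replace(os.path.join(workdir, "greeting.received.txt"),
           os.path.join(workdir, "greeting.approved.txt"))
# Second run: the output matches the approved baseline.
assert verify("greeting", "hello world", workdir) is True
```

That human-in-the-loop approval step is exactly what makes such a framework an interesting exploratory testing target: the interesting behaviors live at the boundary between automation and judgment.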

I felt pretty successful delivering my message, and especially enjoyed the insightful and deep end-of-session questions. There were a few testers in the room, but my perception was that the majority of my audience was developers.

One of the developers approached me after my session, with the experience of what they might be missing by focusing on testing as artifacts (automation) over testing as performance (exploration). He shared that his company had let go of all of their "QA folks", and the developers had been told they would do all of the testing. And now he could identify that they had a skills gap.

The genuine interest he showed in learning these skills and becoming a better developer was what made me happy. There's nothing about exploratory testing that developers couldn't learn if they realized that it was a set of skills in its own right. While the little mob had just shown that people with a tester background were more function & value focused in exploring and dared to ask for better usability of the API, the difference is a learned behavior in both roles. What I do as an exploratory tester in my team comes down to two things: perseverance and serendipity. I stay with the software longer, giving it more chances for the positive surprise of failing in ways I can't even expect. And it helps me stay around longer that I too "test until bored" but don't get bored very easily, as there are so many dimensions, so many viewpoints and so much variation that I never need to approach a problem the exact same way.

What made me sad, though, was the idea that there are organizations who will throw away skilled people in my craft for missing some part of the skill set (often, explaining what they do). Instead, they retrain other people to do that work, while complaining that 1) there's a lack of people with the right mix of skills available and 2) there are so few women in the industry.

I believe I can train developers to explore. But I could also train the testers to work better with the developers, and grow into their full potential. The first is a positive opportunity, and I love how many developers are actively volunteering to learn that - fast-tracking them to seniority. But the second is a lost opportunity that shows how little people mean to us and how little we're willing to help them.

The world is full of "commodity testers" who are that way because that is what they think is asked of them. Those of us who have fought our way out of that caste keep fighting regularly, as there are managers who would love to put us back into the stupid old ways. But I have so far met very few people who, given the chance, wouldn't grow fast.

The environment often makes us what we are. Change the environment. It's not an overnight thing to change, but shifting all software work toward the assumption of high value is so worth it. For both organizations and individuals.

Something really interesting has happened, and I wish I knew what caused it. My team feels more courageous, more active in taking a stand about design of features and we do so much better!

This started happening around the time we started Mob Programming together, so I would say that practical, hands-on work amid fun and laughter has a lot to do with the increased courage. We also started discussing more about our values and what we'd consider our role to be, encouraging initiative, so explicitly asking for this might also be a contributing factor.

The ideas of what needs to be improved are now constant. The discussions around concerns of our designs are now shared. And it's rewarding to see how those discussions morph the features we're implementing.

There have been three very different yet similar experiences recently.

The first one is a lesson about the power of mockups with business users: understanding whether we are building the right thing without building it. In areas where the users had found it hard to contribute early on, we're now getting actual discussions around use cases. A user experience specialist has added some of the needed focus in that area, bringing developers into the discussions too.

The second one is a lesson about the power of conversations. We identified a need, and two opposing views on how to approach it. The first idea seemed implementation/risk-intensive, so we analyzed the second, seemingly easy option, only to learn it was implementation/risk-intensive in a different, more under-the-hood way. Connecting the lessons learned from the two, a third option emerged. It would never have been on the table without my hands-on pairing session with a real customer, but it seemed blatantly obvious after understanding how our customers' use differed from our own.

The third one is a lesson about initiative. We've found many places, most recently today, with relics of old ideas of how the software was supposed to be extended that never became reality. Instead, they create complexity and places for errors. These come out actively as people work on the areas.

Where there might have been a feeling of inability to do anything about these before, now there's hope. And the hope gets stronger through the shared actions we're taking.

Experiencing the awakening from within is wonderful. I love seeing how developers take on the full potential they have in creating great products. And looking at this reminds me of how fragile it is, how an environment focused on separation of roles could take us back to the past.

Monday, April 4, 2016

I've been having discussions about teaching and coaching, as part of being coached and as part of mentoring that I think of as a form of coaching. With this post, I wanted to share a model that helps me make some sense of the world: situational leadership, or something of that sort.

Many years back, I read a post somewhere on an idea that seems to map to the concept of situational leadership (I picked up the term by googling what I remembered - the dimension labels). When working with people at different places on the map, you do things differently.

For people who are unable and unwilling, the help they need is career coaching, disciplinary actions, detailed delegated tasks, well-run meetings, making decisions for them, focusing extra effort on testing as a feedback mechanism, encouraging good behaviors (working code over lengthy designs, prototyping and unit testing, small stories and tasks, team-shared responsibility) and using peer and manager pressure to get the person out of the place they're in.

When you're at the point of unable but willing, you're in a really fruitful place to learn. The help needed could be highlighting successes, identifying learning potential and steps, assisting with difficulties, and checking in on status and feelings. Great things to do close to the work are helping people take on tasks through delegation, giving guidance on new types of decisions, giving feedback, and encouraging people to teach others so they learn from each other.

When you are at the point of able but unwilling, the help needed is attitude coaching, selling options as opportunities, and focusing on responsibilities as an employee. This is a time to step back and let the person plan more, while still enforcing small stories and tasks, with a focus on personal accountability and deadlines.

We'd love to see people who are both willing and able, and whatever we do when they are only one or neither is meant to help them get to a point where they are both. At this point, people still appreciate having successes highlighted and rewarded (and being reminded that failing is learning, and when learning, that is a success!). Encouragement could be directed towards cross-training others and continuing to learn, and offering advice on decision making (while making sure the decisions are made by the people, not handed down from management). And even with people at this stage, there's often stuff outside the team that needs attention, be it highlighting the successes of these people as internal marketing, or removing roadblocks in cross-departmental work.

People who are unwilling are hard to work with. For the unwilling and unable, the two might be connected: it's hard to be willing to do things you fail at. However, some of the trickiest problems I get to deal with are about lost motivation. When you've been kept in a weird place long enough, you start thinking the cave you're in is what the world looks like. You might not like it, but you know of nothing else.

I find that sending people outside is critical. Quoting a friend who chose to stay anonymous:

An average person at a conference is above an average worker at a company. Insight of the day.

Sunday, April 3, 2016

We've been working on a set of features in our product that we've labeled around a customer. They're features beneficial for others too, but their existence is particularly important for this one.

At some point of this, we had already learned that closing the distance between the programmer & tester and the customer was a great idea, helping us a lot in solving the right problem. As often happens, the second-hand information had been distorted, turning a need into a feature, when the best way to address the need was a different feature altogether.

With that past lesson on direct contact behind us, product management had no issue with the idea that I would be trusted with getting the customer to use the product too. I was told to write a guideline, and in addition I scheduled a workshop with the customer.

Armed with the goal of knowledge transfer, I asked the customer to share his screen on Skype with me. I walked him through the login, guided him to the library he was tasked to fill out, and had him do the work, instructing with my words. I created for him an experience of use, and at the end we retrospected for a while on what he had learned and found complicated. While observing him do what I guided him to do, I learned of a missing feature he had never realized to ask for, as even now he did not see that it was his need. And after our session, he could do the things that had previously been abstract. The guideline I wrote might be a helpful future reference, but it is much less necessary now that he has the experience (not a demo) of use under his belt.

Later, I talked with the product manager, who assumed I had "demoed the features" and seemed very pleased with the idea that the customer had been my hands in this demo, saying he wouldn't have thought to do that. I labelled what I did: strong-style pairing - for an idea from my head to reach the computer, it must go through the other's hands. The label gave it a place among the ways of doing things in the product manager's head, and he said he'd try it too. "Never thought of that", he said. I could feel smug about having thought of it, but the truth is I was told about it 1.5 years ago. I too had NEVER thought of that before - that the roles in how you pair matter a lot.

I'm from a family with six children, so being around others has been my natural state. Two of us ended up in the software industry: I became a tester and my brother became a programmer. I always thought it came down to what we were good at - so very fixed-mindset of me. Looking back, I remember limited access to a computer at home. I remember the LAN parties for boys only. And I remember being quite happy and content, avoiding the weirdness by not trying to join anything I did not feel welcome in.

I chose to do something where I would be welcome. Where I would not stand out too much. And where I would not get special attention and help because I'm a girl, without giving up my interests. And software testing is perfect - both genders are equally looked down upon in most places.

I told myself I don't really like programming. I told myself I had (which is true!) enough to learn in other stuff. But for the other stuff (improving team collaboration), I volunteered to work as a mob with my team - one computer for getting the code in, and me taking my fair share of the keyboard.

I learned that learning this coding stuff is easier than most stuff I've dealt with under the label "testing". Stuff that programmers say is programming, I say is testing: the thinking and analyzing, the understanding of the problem and solution. While my programmers may at first dictate "var" to me letter by letter, a moment later I know that and learn more as we go. And with my presence, we avoid many costly mistakes, have a better big picture and a voice of conscience reminding us that we really have not tested enough yet.

In a mob, the code isn't mine. It's ours, but I can see my fingerprints all over it. I don't have to deal with the "since women don't code well, you'll need special attention on feedback so that you learn this stuff". All this special attention makes me very uneasy, just like pushing myself into places I'm not welcome in. And the mob gives me the social aspect of work I'm craving.

I would love to see people with my type of background - and any background - at the Mob Programming Conference. With Kindness, Consideration and Respect as guidelines for the work in mobs, I feel safe and welcome. And I welcome you to join the experience. An experience it is: two workshop- and reflection-heavy days, only opened and closed by a talk.

I've been thinking about safety today. Perhaps it's somehow related to talking about SAFe, the agile framework, in a less-than-admiring fashion last night, but the word has come to my mind often. Feeling safe. And in particular, lessons I've learned on what it takes to make people feel truly safe at work.

Some years back, I had a manager who believed he needed to see me mark test cases passed / failed. He was honest enough not to wrap this in any methodology, but stated his true feeling: how do I know you're working if you don't do this?

We found other ways. For us, the simple thing that worked was to establish trust through the fact that our software at the time was so buggy that he knew I was working when I was logging on average 8 bugs a day over consecutive years. It took away the need to worry, and established a trust that was very important to me.

With my latest manager, the trust extends even further. There's no cadence of me having to check in, and I've been intentionally testing the limits. Feeling trusted and safe, I'm my natural self: excited about stuff, wanting to share, being active and figuring things out together, so that he doesn't need me to check in - I just do. And I feel happy; I generate new ideas and act on them.

But the safety goes even further. It's the foundation of feeling that nothing you do could destroy it all.

Imagine an employee who did not provide value. Or if he provided value, the value could be perceived as negative: interrupting others' value creation, breaking things, creating artifacts that make further work harder. I used to believe these people should be fired. But I've now also experienced what I feel is extraordinary safety: supporting these people. Helping them over long periods of time. Treating firing as almost an option out of reach, even when it wasn't.

I'm realizing that action sends a message stronger than anything I've experienced before. It's safe. Safe to fail. Safe to be incomplete. Safe to try things out and learn.

That's a level of safety I'd now like to aspire to. Safety in a sense that the current results-oriented world seems to have little room for.