Friday, 3 January 2014

The theme of the weekend was a test-related topic that had interested each of us during the past year.

We gathered as a mix of experienced and less-experienced testers and peer conference attendees - a number of us knowing each other well - a good mix.

Talk 1 - Simon - Online Coaching

I started with some experiences from the online test coaching I had been doing during the year, raising some of the patterns I had noticed. Some of these were about the dynamics of the coaching - whether the session was a “challenge”, a tutorial or pair-work. I reflected on how I had seen behaviours change between sessions, partly based (my hypothesis) on the previous session (a feedback loop).

The open season covered a number of threads:

Student-Coach relationship

Growing coaching sessions to have more students

Differences between coaching and mentoring

How did the coach affect the start-up time for the student?

Teaching testing is a great learning experience

How do you know when you are done coaching a student?

Discussion around “tricking” students and ethics

The topic was a tricky one to tackle in a general form, but it produced a wide range of interesting threads and the discussion was fruitful. On this last thread James evolved a new suggested model for the ethics problem - something that deserves a separate write-up, along with coaching in general.

Several in the group weren’t familiar with how a coaching session works, so I brought up an example of a coaching session I’d done with James - which he was willing to demonstrate for the rest of the group. This, combined with the debrief, was a good demo of what a coaching session looks like “up close” - including the learnings and the power of using an apparently “simple topic”.

Talk 2 - Henrik - On a short/intense test period he’d been involved in during the year.

Henrik picked a topic where he’d returned to “grass roots testing” - not teaching, not talking about it, but actually doing it. And that’s one mark of a good tester - getting into the details of a test discussion, with “why I did it this way and not that…”

The discussion was about searching for the “important bugs”, and the strategy and risk considerations that went into this. Some of my comments/reflections during this talk were:

Q: What was the value of the information in the report?

C: Bugs don’t matter, it’s the information about the product that matters.

Q: How do you estimate how many unimportant things you don’t know about?

Puzzle

We had a short gap between the end of the 2nd discussion and the scheduled end for the day, so we threw it open to puzzles. James gave one that we weren’t familiar with.

I enjoy being fooled by puzzles (or falling into the traps) - it’s good ammunition when examining what assumptions you made and why. Good personal retrospective material, quite simply!

Lightning Talks

A number of lightning talks on different topics - I didn’t take notes for this section, so other participants should feel free to add to the comment section.

My own talk was a brief walk through some of the research/meanderings I’ve been doing in the last year connected with reframing testing as a hypothesis/experiment-driven activity - leading through the works of Gigerenzer (risk and number illiteracy) and the prosecutor’s fallacy, DiCarlo (How to be a really good pain in the ass), Collins (Rethinking Expertise) and Popper (Logic of Scientific Discovery).

Talk 3 - James - A Testing Challenge and subsequent report

James talked about an experience of a test challenge he had gotten earlier in the year, which had been quite involved and whose reporting had been drawn out. He had subsequently decided to turn the report into an exemplar of the process he had gone through: the techniques and analysis applied, the re-visiting, re-working, and documentation of mistakes and thoughts, showing the many non-apparent activities that go into producing the final report.

A very interesting taster of the challenge, the formal report and the extended report.

Some comments/reflections I had at the time:

C: The FORM of the reporting AND the testing contributed to the result, i.e. the pauses AND reflection (and maybe discussion/consultation with colleagues) contributed to the quality of the work.

At the time I declared this as a heuristic: the “pause and reflect” (and, optionally, consult) heuristic; or the “System 2 insertion heuristic” (in deference to Kahneman & Torkel Klingberg).

Q: Did you use all the testing techniques you know about?

At the time I commented that a heuristic (I used) for noticing heuristics or patterns was “blink comparator testing”.

Checkout

All were happy: we had spent a weekend surrounded by sharp minds, had learnt something and had achieved the goals of the weekend. I personally had material to expand on further… Quite simply a good weekend!