Sunday, 10 January 2016

So it's 2016 and I have been reflecting on some of the challenges I see for Software Development, with an emphasis on Software Testing.

Continuous Integration

Everyone knows what this is, right? The concept has been around a while* - everyone has been there and done that, if you read the hype. But who is innovating?

A lot has been written about CI and its place in support of testing... Or has it?

Some Challenges

Massively parallel script execution against the same target drives a re-think of test framework design, modelling and creation, impacting data modelling and the need for flexibility in frameworks and harnesses.

This is a move away from single, isolated and independent "tests" on a stateful application. It will trigger a change in test script approaches. Where is the work on that? Pointers gratefully received...

I have seen some research on "multi-session testing of stateful services", but more is needed.
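To make the stateful-target problem concrete, here is a minimal sketch (all names hypothetical, not from any particular framework) of one common mitigation: give each parallel script its own data partition, so concurrent runs against the same stateful service don't collide on shared records.

```python
import concurrent.futures
import uuid

def make_fixture(worker_id):
    # Unique key per worker: two scripts never mutate the same record,
    # even though they hit the same stateful target.
    return {"account_id": f"test-{worker_id}-{uuid.uuid4().hex[:8]}"}

def run_script(worker_id):
    fixture = make_fixture(worker_id)
    # ... here the script would drive the system under test
    #     using fixture["account_id"] as its isolated data ...
    return fixture["account_id"]

if __name__ == "__main__":
    with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(run_script, range(4)))
    # Every worker used distinct data, so no script stepped on another's state.
    assert len(set(results)) == 4
```

Data partitioning is only one answer, of course - it trades away the ability to test contention on shared state, which is exactly what the multi-session research above is trying to address.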

CI script execution needs studies showing the effectiveness of dynamic test-script selection strategies for team or testing support.

I see that as a rule-driven approach to setting a series of checks on commits, e.g. (1) which checks cover my updated code base (execute them and report the results), (2) which white spots (uncovered areas) are in my code base now (report them).
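The two rules above can be sketched in a few lines. This is a hypothetical illustration (the coverage map, file names and test names are all invented, not from any real tool): in practice the mapping from files to covering checks would be mined from coverage tooling.

```python
# Assumed file-to-checks coverage map, mined from coverage tooling in practice.
COVERAGE = {
    "billing.py": {"test_billing", "test_invoices"},
    "auth.py": {"test_login"},
}

def select_checks(changed_files):
    """Rule (1): collect the checks covering the changed files.
    Rule (2): report 'white spots' - changed files no check covers."""
    to_run = set()
    white_spots = []
    for path in changed_files:
        tests = COVERAGE.get(path, set())
        if tests:
            to_run |= tests
        else:
            white_spots.append(path)  # nothing covers this change
    return sorted(to_run), white_spots

checks, gaps = select_checks(["billing.py", "reporting.py"])
# checks -> ['test_billing', 'test_invoices'], gaps -> ['reporting.py']
```

Even a toy version like this shows why the selection logic must be dynamic: the coverage map goes stale with every commit, which is precisely where the empirical studies are missing.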

Where are the studies or experience reports, where is the work?

There are socio-technical challenges with CI use and implementation. Technology is the easy part; the socio-technical part comes in when organisational issues and preferences distort the technology choices. This might range from "we have always done it this way" to "the language or framework of choice here is X, so everyone must adapt to that".

CI is a development approach, and is distinct from testing. It's like an extension to compiler checks**. Thinking around selecting and adding to those "compiler checks" needs to be dynamic. Where are the experience reports and empirical studies for this?

There is a danger that "testing" is driven into a TDD/Acceptance Test-only mode.

I would like to see more research on organisational and socio-technical challenges around software development...

Are people really going all-in on cloud and virtualization technologies to solve their CI related bottlenecks? Mmmm...

Software Testing

Some Challenges

Detachment from Software Development

This can be seen in various forms:

Distillation down to a process of "testing" only - the ISO 29119 papers are a classic example of this. This is the "reductionist" approach to a wicked organisational problem - not very sophisticated, and with a high risk of solving the wrong problem.

Other examples are some/many software-testing-only books - yes, it can be good to highlight testing, the tester's role and the special challenges there, but unless you start from software development as a whole (a systems thinking approach) there is a high risk that you are making a local optimisation. Another reductionist approach, liable to solve the wrong problem.

Misunderstanding of the software testing challenge - how to link a creative process (software creation, with positive and confirmatory tests and checks) to a challenging process (testing for issues, highlighting problems, testing for risks).

Many organisations focus on confirmatory tests - a proxy for customer Acceptance Tests - as an MVP (minimum viable process), i.e. a proxy "get out of gaol" card. Myers' [2] description of testing in an XP approach is an example here.

Myers [2] first wrote about the psychology of software testing. However, Martin et al [4] make the case for reframing this as an organisational approach/problem. Rooksby et al [5] observe the cooperative nature of software testing.

More studies on satisficing organisational needs, please!

Lack of socio-technical studies and research into software testing and its part in software development. Rooksby and Martin et al [4] [5] performed ethnographic studies of software testing to highlight its cooperative and satisficing nature. That work called for further research.

Sommerville et al [6]:

"An over-simplification that has hindered software engineering for many years is that it is possible to consider technical software-intensive systems that are intended to support work in an organization in isolation. The so-called ‘system requirements’ are the interface between the technical system and the wider socio-technical system yet we know that requirements are inevitably incomplete, incorrect and out of date."

The sooner we stop treating software development, and especially testing, with reductionist approaches, and consider the socio-technical aspects - especially for large and complex systems - the better. And, today, most systems are inevitably complex.

Saturday, 9 January 2016

Many readers here will be familiar with a number of the concepts, but the course was useful to me in helping to structure some concepts around statements and arguments, strategies for analysing them, and eventually trying to understand the viewpoint of the person making the statements (arguments).

The course was good and something I'd recommend as an exercise in helping to understand and categorise one's own approach to argument analysis and deconstruction.

One element that was reinforced, and stood out early on in the course, was to treat all arguments and statements sympathetically. This is like a safety valve when you see or hear a statement that might infuriate, irritate or wind you up.

It's an approach to help one get to the root meaning of a statement and understand the motives of the person (or group).

I used this approach when first looking at the ISO 29119 documents and supporting material, ref [2] [3].

Challenges & Trolling

I often get challenged about my reasoning and thinking. I welcome this, it's usually very good feedback not just on my thinking but also the way I communicate an idea. So, I try to treat all challenges and criticism sympathetically.

But, when does it become "trolling"?

I saw a post this week from Mike Talks (@TestSheepNZ) - I liked the post - but I also recognised the source that triggered the post.

Troll?

Well, when it comes to software development - and especially software testing (due to expert-itis, amongst other things - I need to update that post) - there might be some tells to help judge:

Does the person claim to be an expert/authority in the field, but without evidence or a catalogue of work? (this is a form of avoidance)

Twitter - due to the 140 char limit - can make people appear to be buzzword, soundbite or even BS generators. Do they resort to soundbites or other indirect responses when questioned or challenged? (this is a form of avoidance)

Essentially, the keyword in the above is avoidance. If so, you might have a hopeless case on your hands.

Treatment?

You can google how to deal with internet trolls, but my approach:

Start with sympathetic treatment. If this doesn't help you understand the statements, arguments or motives (and see the list above), then

Detach, ignore, unfollow.

Give yourself a retrospective. Was there a potential feedback element - can you communicate your message differently, is there some fundamental insight that you could use? (this is a topic for a different post). I.e. "trolls" can have their limited use even when their interpersonal skills let them down...

Learn and move on.

I'm also a fan of humorous treatments and critique. I'm reminded of the Not The Nine O'Clock News approach to changing attitudes in world darts (you can google it...). Sometimes these are forms of reductio ad absurdum.

If you have other perspectives on understanding arguments I'm all ears!