In Part II of our latest Testing the Limits interview with Michael Larsen, Michael talks about why test team leads should take a “hands-off” approach, and why testers should be taken out of their comfort zones.

Get to know Michael on his blog at TESTHEAD and on Twitter at @mkltesthead. Also check out Part I of our interview, if you haven’t already.

uTest: In a recent post on your blog, you talked about the concept of how silence can be powerful, especially when leading teams. Do you think there isn’t enough of this on testing teams?

Michael Larsen: I think that we often strive to be efficient in our work, and in our efforts. That often causes us to encourage other testers to do things “our way.” As a senior software tester, I can often convince people to do what I suggest, but that presupposes that I actually know the best way to do something. In truth, I may not.

Also, by handing other testers the procedures they need to do, I may unintentionally be encouraging them to disengage, which is the last thing I want them to do. As a Boy Scout leader, I frequently have to go through this process week after week. I finally realized that I was providing too much information, and what I should be doing is stepping back and letting them try to figure out what they should do.

Michael Larsen is a software tester based out of San Francisco. In addition to a decade of testing at Cisco, he has had an extremely varied rock star career (quite literally…more on that later) touching upon several industries and technologies, including virtual machine software and video game development.

In Part I of our two-part Testing the Limits interview, we talk with Michael about the most rewarding parts of his career, and how most testers are unaware of a major “movement” around them.

uTest: This is your first time on Testing the Limits. Could you tell our testers a little bit about your path into testing?

Michael Larsen: My path to testing was pure serendipity. I initially had plans to become a rock star in my younger years. I sang with several San Francisco Bay Area bands during the mid-to-late 80s and early 90s. Not the most financially stable life, to say the least. While I was trying to keep my head above water, I went to a temp agency and asked if they could help me get a more stable “day job.” They sent me to Cisco Systems in 1991, right at the time that they were gearing up to launch for the stratosphere.

I was assigned to the Release Engineering group to help them with whatever I could, and in the process, I learned how to burn EEPROMs, run network cables, and wire up and configure machines, and I became a lab administrator for the group. Since I had developed a good rapport with the team, I was hired full-time and worked as their lab administrator. I came to realize that Release Engineering was the software test team for Cisco, and over the next couple of years, they encouraged me to join their testing team. The rest, as they say, is history.

In Part II of our latest Testing the Limits interview with James Bach, we tried something a bit different, crowdsourcing some of the questions from our uTest Community members. Additionally, James shows us his lighter side and reveals which of his picks won the World Cup — of his heart.

What is the biggest hurdle to testing you see testers struggle with? (Jeff S.)

JB: The hurdles that come with having no credibility. Gain credibility, and every external hurdle gets a lot smaller. If you ever find yourself saying, “I want to do good work, but my manager insists that I test in a stupid way, instead,” then probably the issue is that your manager thinks you are incompetent. Fix that. Then when you politely tell your manager to mind his own business, he will let you get on with your work in the way you see fit.

Do you see the tide changing for development teams modernizing their testing philosophy? Or is entrenched thought winning the day? (Jeff S.)

JB: I don’t know, really. I don’t do polls or anything. I can say that business is good for me and my colleagues, right at the moment.

Which area or skill is best to focus on first as a tester to build a solid foundation or understanding of testing? (Frank B.)

JB: I would say: general systems thinking (GST). See the book Introduction to General Systems Thinking by Jerry Weinberg. Within the realm of GST, I suggest: modeling. It’s vital to gain control over your mental models of products. Models are a prison from within which you test.

James Bach is synonymous with testing, and has been disrupting the industry and influencing and mentoring testers since he got his start in testing over 25 years ago at Apple. Always a great interview, James is one of our most popular guests and we’re happy to have him back for his first Testing the Limits since 2011. For more on James’ background, his body of work and his testing philosophy, you can check out his blog, website or follow him on Twitter.

In Part One of our latest talk with James, he talks about a future that involves a ‘leaner’ testing world, the state of context-driven testing outside of the United States, and why you’re “dopey” if you’re a manager using certain criteria in hiring your testers.

uTest: We know you don’t enjoy certifications when it comes to testers. In fact, in a recent blog post, you mentioned that ‘The ISTQB and similar programs require your stupidity and your fear in order to survive.’ Do you feel like certifications are picking up steam when it comes to hiring, and becoming even more of a pervasive issue?

JB: I don’t have any statistics to cite, but my impression from my travels is that certifications have no more steam today than they did 10 years ago. Dopey, frightened, lazy people will continue to use them in hiring, just as they have for years.

uTest: Speaking of pervasive problems, what in your opinion has changed the most – for better or for worse – in the testing industry as a whole since we last talked with you almost three years ago?

JB: For the better: the rise of the Let’s Test conference. That makes two solidly Context-Driven conference franchises in the world. This is related to the general rise of a spirited European Context-Driven testing community.

Nothing much else big seems to have changed in the industry, from my perspective. I and my colleagues continue to evolve our work, of course.

uTest: In a recent interview, you mentioned that you see the future of testing, in 2020 for instance, as being made up just of a small group of testing “masters” that jump into testing projects and oversee the testing getting done…by people that aren’t necessarily “testers.” Do you see QA departments going completely by the wayside in this new reality of a leaner testing world? Wouldn’t this be a threat to the industry in general?

JB: I’m not sure whether you mean QA groups, per se, or testing groups (which are often called QA). I don’t see testing groups completely going away across all the sectors of the industry, but for some sectors, maybe. For instance, it wouldn’t surprise me if Google got rid of all its “testers” and absorbed that activity into its development groups, who would then pursue it with the ruthless efficiency of bored teenagers mopping floors at McDonald’s (a company as powerful as Google can do a lot of silly things for a very long time without really suffering. Look at how stupidly HP has been managed for the last 20 years, and they are still, amazingly, in business).

Henrik Andersson and David Greenlees are two well-known contributors to the context-driven testing community and together co-founded the Let’s Test conferences, which celebrate the context-driven school of thought. Let’s Test Oz is slated for September 15-17 just outside Sydney, Australia, and uTest has secured an exclusive 10% discount off new registrations. Be sure to email testers@utest.com for this special discount code if you plan on attending.

In this interview, we talk with Henrik and David on trends in the context-driven community, and get a sense of what testers can expect at Let’s Test Oz.

uTest: Like James Bach, you’re both members of the ‘context-driven’ testing community. What drove each of you to context-driven testing?

HA: Actually, James did. I had close to no awareness of the context-driven testing (CDT) community before I hosted James’ RST class in Sweden in spring of 2007. During my discussions with James, I found that we shared lots of fundamental views on testing, and he insisted that I should meet more people in the CDT community.

James told me about the CAST conference that took place in the States, and that just before it, there would be a small peer conference called WHET 4 that his brother Jon hosted. A few days later, I got an invitation from Jon Bach to attend. At this workshop, where we spent a weekend discussing Boundary Testing, I met testers like Cem Kaner, Ross Collard, Scott Barber, Rob Sabourin, Michael Bolton, Doug Hoffman, Keith Stobie, Tim Coulter, Dawn Haynes, Paul Holland, Karen Johnson, Sam Kalman, David Gilbert, Mike Kelly, and, of course, Jon and James Bach. From then on, I was hooked!

DG: Difficult question to answer without writing a novel! I wrote about my testing journey some time back; however, that doesn’t really touch on my drivers toward the CDT community. If I were to pinpoint one thing, it would be the book Lessons Learned in Software Testing (Bach, Kaner, Pettichord). This was my first introduction to the community and to what I believe is a better way to test…in fact…the only way to test.

What keeps me here is the fantastic people I come across each and every day. We challenge each other, we’re passionate, and we’re not afraid to put our opinions out there for the world to hear and critique. This all adds to the betterment of our craft, which is our ultimate goal. I’m a firm believer that there is no ‘one-size-fits-all’ approach to testing, and when you add that to my natural tendency to explore rather than confirm, I find that the CDT community is a great fit for me.

uTest: And speaking of James Bach, he’s one of the keynote speakers at Let’s Test Oz in the Fall. Can you tell us a little bit about the idea behind the show, and why you felt it was time for context-driven conferences in Europe and Australia?

HA: Let’s Test is all about building, growing and strengthening the CDT community. We have successfully arranged Let’s Test three years in a row in Europe, but the attendees are coming from all over the world. The idea behind Let’s Test is to create a meeting place for testers to learn, share experiences, grow, meet other testers, do some real testing, and, of course, to have a whole lot of fun.

When David Greenlees and Ann-Marie Charrett told me about what they were looking to achieve, I immediately felt that it was in line with Let’s Test, and believe Let’s Test can be a great vehicle to grow the CDT community in Australia.

Last year, we did a one-day tasting of Let’s Test in Sydney, and this year, we did one in the Netherlands. In November, we will be hosting one in Johannesburg, South Africa. The purpose of these small tastings of Let’s Test is for testers to get a glimpse of the Let’s Test experience at a really low cost. If you can’t come to the real Let’s Test, this is a great alternative to check out what it is all about.

DG: From the Australian point of view, it’s fair to say that the CDT community is very small. We refer to the area as ‘Downunder’ — this is our way of saying Australia and New Zealand. I felt it was time to change that, and one way to help the CDT community thrive is to hold a CDT conference.

For quite a few years now, I’ve felt that Downunder needed a different style of software testing conference, one where conferring is the ultimate goal, and so I emailed Henrik, and he was extremely positive and encouraging…so here we are.

In the second part of this two-part interview, application security expert Dave Ferguson talks about the security testing landscape, top security tools and the job market for AppSec professionals. Be sure to follow Dave on Twitter @dferguson_usa or his blog, and get to know him along with the first part of our interview.

uTest: You tend to hear about breaches and security the most when they hit consumers’ wallets (i.e. Target). Is retail, for instance, more vulnerable than another industry right now?

DF: Higher education has a constant stream of data breaches as well, but retailers are definitely a huge target (no pun intended). Retailers process payments and handle personally identifiable information, but they don’t often have a culture of security like a financial services company, government, or defense contractor. They also don’t have big security budgets or vast resources like those other types of organizations. I have a feeling retailers are starting to devote more attention to security now, though.

uTest: Do you think that something as huge as Heartbleed awakened some organizations that may have otherwise been lax in certain areas of their security strategies?

DF: Absolutely. The Target data breach and the Heartbleed flaw in the OpenSSL library have spurred action within many organizations. Company executives and boards of directors want some assurance that they are not vulnerable. Increased security testing of applications, especially Internet-facing apps, is going to be a major component of that.

uTest: What’s changed the most in the security testing landscape just in the past couple of years?

DF: The most dramatic change is that formal bug bounty programs are now being rolled out by many organizations. This would have been a very radical idea just a few years ago. A bug bounty program defines rules of engagement and offers cash rewards to security researchers who find vulnerabilities and disclose them in a responsible manner. Bug bounties are a welcome change. I wish the streaming media company I had contacted had such a program back in 2006!

Two other changes I’ve seen are a dramatic increase in the need for security testing of mobile applications, and a realization that the security of third-party software components needs to be verified.

Our guest in this installment of Testing the Limits is Dave Ferguson, a former software developer and specialist in Application Security since 2006. As a consultant, he tested for security holes in countless web applications. Dave also taught developers about security in a formal classroom setting to help them understand how to write secure code. For three years, he held QSA and PA-QSA qualifications from the Payment Card Industry Security Standards Council (PCI-SSC).

Dave currently serves as the Application Security Lead at a multibillion dollar travel technology company in the USA. You can find him on Twitter or over at his blog.

In the first part of this two-part interview, Dave talks about where organizations’ apps are most vulnerable today, and how he contacted a top-tier streaming media company about a major hole in their security.

uTest: You’re a web application security professional. How and why did you break into this subset of security?

DF: I was an application developer and manager for over a decade, and didn’t give much thought to security at all. In fact, I’m sure I coded my share of vulnerabilities over the years. Eventually I discovered this knack for finding unexpected bugs in our software, such as URL manipulation to view another person’s data. It wasn’t my job to test for security bugs. It just came from being a curious fellow and wanting to understand how the application would behave if I tried to do “X” as an end user. The QA teams were certainly not finding these types of bugs. In 2004, I decided to pursue a career in the field of Application Security and by 2006, had a full-time job doing penetration testing of web applications.

In the second part of this two-part interview, usability expert Craig Tomlin talks about the best user experiences and the future of usability. Be sure to follow Craig on Twitter @ctomlin, and get to know him along with the first part of our interview.

uTest: When functional testing, you’ll probably start with the mission-critical functionality. When an end-user is given an app or site to provide Usability feedback without direction, which areas are most critical to look at first?

CT: Usability feedback without direction is just opinion, which is not valuable. Usability testing is always conducted on critical tasks. What are critical tasks? Those are the tasks that must be accomplished for the user to be successful. Note that it’s the ‘user successful’ part of that definition that is so important. Many times, businesses seek to optimize tasks that are important for their business, but not at all important to the user. That’s a waste of resources. Yes, businesses must be successful, but ultimately that success only comes from providing a valuable service or product to the target audience. Making sure the user experience is maximized for the user is the best way for a business to be successful.

So usability should never be based on feedback with no direction. Usability work, done properly, identifies critical tasks, the personas of representative users who need to accomplish those tasks, a protocol that tests each task in an unbiased manner, and the metrics that provide insight into whether the task is being accomplished as efficiently and effectively as possible, and with the best possible satisfaction.

uTest: Give us your definition of ‘the best user experience.’

CT: First, it’s important to define what we mean when we say ‘user experience.’ I’ve been known to rant about this subject, because it’s rare to find two people that have the exact same definition of what ‘user experience’ actually means. The original concept that was popularized by Don Norman was a broad or holistic viewpoint on how design and humans connect, and incorporated much more than just a UI or specific functionality.

In that broad context, the user experience you have with a brand includes your experiences with the brand’s product, with their education, marketing and sales communications, with their customer service be it in store, online or via phone, with the pleasure and satisfaction you receive from using the product or service, and with your interaction with others involved with usage of that product or service.

If we use that definition of user experience, then the best user experience is something that was designed with our needs in mind to give us satisfaction and pleasure from using it again and again. Consider an iPhone or iPad, or even an exit door that opens automatically when you approach it. All are examples of the best user experience.

I think it’s sometimes easy for us to forget that definition, to become so wrapped up in our own unique smaller piece of the broader user experience, that we miss opportunities to truly make a best user experience for our customers and clients. Taking a step back and remembering the bigger, broader definition of user experience can help to reinforce how what we do is making a difference for the people that use our products or services.

Our guest in this installment of Testing the Limits is Craig Tomlin, an award-winning digital marketing and User Experience (UX) consultant with over 20 years’ experience in B2B and B2C demand generation and eCommerce. He leads marketing and UX strategies and tactics for firms like Blue Cross Blue Shield, BMC Software, Disney, IBM, Kodak, Prudential, WellPoint and more. You can follow Craig on Twitter @ctomlin.

In the first part of this two-part interview, Craig talks about the differences between Usability and functional testing, and why Usability often doesn’t get the spotlight it deserves.

uTest: What initially drew you into exploring usability?

CT: I was drawn to usability in the mid-1990s, when preparing to redesign 22 websites for WellPoint Health Networks, a major health insurance corporation in the U.S. My earlier attempts at redesigning sites sometimes failed miserably. I didn’t understand why. After all, we had brilliant I.T. teams, the best website design vendors, internal stakeholders that knew their products and subject matter backwards and forwards. So why the failures? I realized the missing ingredient was the users.

Without user-centered design, we were just guessing about important usability issues when designing new experiences. I took it upon myself to become certified in usability and apply user-centered design principles to the 22 website redesign project. We conducted extensive usability testing, and created mental maps, information architectures and taxonomies based on user-defined requirements. Because of the inclusion of users as part of the process, we were very successful with our new designs. I’ve been a big fan of usability and user-centered design ever since.

uTest: When folks think of usability, it can often get muddied or confused with the functionality of a site. What goes into a Usability Audit?

CT: The International Organization for Standardization (ISO) defines usability as “The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use.” Therefore, usability includes both functionality (the effectiveness and efficiency portion) and satisfaction, both of which can be measured using quantifiable data. Usability testing, done properly, includes evaluations of the critical tasks associated with a website or app in conjunction with the overall satisfaction of that experience.

As to a usability audit, it is an evaluation of effectiveness, efficiency and satisfaction using a well-defined set of criteria that can be reproduced over any number of websites, apps or other objects. I think that strictly speaking, functional testing only evaluates that effectiveness portion of the definition, in that a function either works as specified, or does not work as specified. Any data on efficiency and satisfaction would typically not be part of a functional test.

uTest: You’re one of 4,900 certified usability analysts in the world. How does one get certified, and what differentiates a certified usability analyst from an ordinary tester who’s providing their off-the-cuff feedback about a site or app?

CT: Becoming a certified usability analyst was very beneficial to me, my clients and the people who use the websites and applications I create. It’s interesting that accountants, doctors, lawyers and even hair stylists all have to become qualified through passing exams before they start dealing with real clients. Isn’t it sad that anyone can call themselves a ‘usability expert’ even though they have absolutely no training at all in usability? It’s a real case of ‘buyer beware’ for anyone seeking a knowledgeable usability expert to help them improve their website or app user experience. I’ve seen plenty of examples of untrained usability advisors providing bad advice to clients that caused them harm.

Being certified means that a person has taken the time to learn the appropriate skills to properly test usability principles, and it is a mark of someone who not only talks the talk, but also walks the walk. There are multiple ways to become educated in usability, including taking usability and human-computer interaction courses at universities, or doing what I did and becoming certified through Human Factors International’s CUA program. The 4,900 or so CUAs in the HFI program all completed an extensive set of educational programs and then passed the test (it took me two and a half hours to complete). I highly recommend either going through a university program or taking the CUA course to become a proficient usability analyst.

This month, we revisit Cem Kaner. Cem recently published The Domain Testing Workbook and is working on a collection of other workbooks and projects in addition to teaching several courses at Florida Tech.

In today’s interview, Cem explains his new workbook, discusses why it’s important for experienced testers to keep studying and improving, tells us what’s wrong with the testing culture today and hints at maybe having a solution for the QA credentials battle.

*****

uTest: You’ve written quite a few books on software testing and you had a new book – The Domain Testing Workbook – come out in the past few months. Why do you enjoy writing these books and who are you trying to help?

Cem Kaner: My overall goal is to improve the state of the practice in software testing. How can we improve what working testers actually DO so that they are more effective and happier in their work?

The Domain Testing Workbook is the first in a series that focuses on individual test-design techniques. Our intent is to help a tester who has some experience develop their skills so that they can apply the technique competently.

uTest: What’s wrong with the way domain testing is currently taught?

CK: There’s nothing wrong with the way domain testing is taught. Teachers introduce students to the two basic ideas: (a) subdivide a large set of possible values of a variable into a small number of equivalence classes and sample only one or a few values from each class. This reduces the number of tests to run. (b) When possible, select boundary values as your samples from each class because programs tend to fail more often at boundaries.

In general, students understand these introductions and can explain them to others.

This level of analysis works perfectly when you test Integer-valued variables one at a time. There are lots of Integer-valued variables, and it makes a lot of sense to test every variable on its own, if you can, before you design tests that vary several variables at the same time. So, I think many courses do a fine job of introducing a useful idea to students in a way that helps them use it.
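The two basic ideas above can be made concrete with a minimal Python sketch. The field, its range, and the validation function are all hypothetical, chosen just to illustrate partitioning and boundary sampling:

```python
# Hypothetical input field: accepts integers from 1 to 100 inclusive.
LOWER, UPPER = 1, 100

def is_accepted(value):
    """Stand-in for the system under test's input validation."""
    return LOWER <= value <= UPPER

# (a) Equivalence classes: below range, in range, above range.
# (b) Sample at the boundaries of each class, where failures cluster.
test_values = {
    "below range (reject)": [LOWER - 1],     # 0
    "in range (accept)":    [LOWER, UPPER],  # 1 and 100
    "above range (reject)": [UPPER + 1],     # 101
}

for label, values in test_values.items():
    for v in values:
        expected = "accept" if "accept" in label else "reject"
        actual = "accept" if is_accepted(v) else "reject"
        print(f"{v}: expected {expected}, got {actual}")
```

Four values stand in for an infinite input space, which is the whole point of the technique.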

However, there is much more depth to the technique than that. Here are four examples:

It is common to look only at the input values and decide pass/fail based on whether the program accepts “good” inputs and rejects “bad” ones. A stronger approach goes past the input filter. For example, enter the largest valid value. The program should accept this. Suppose it does. Now continue testing by considering how the program uses this value. What calculations is it used in? Where is it displayed, stored, or compared to something else? Is this largest-acceptable value actually too large for some of these later uses?
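As a sketch of that "go past the filter" idea, suppose a hypothetical filter accepts values up to a 32-bit maximum (an assumed limit), but a later calculation doubles the value and truncates it to 32-bit signed, the way a C program might:

```python
# Sketch: the input filter accepts the largest valid value, but a later
# calculation stores the doubled value in a simulated 32-bit signed field.
MAX_INPUT = 2**31 - 1  # largest value the filter accepts (assumed limit)

def input_filter(value):
    return 0 <= value <= MAX_INPUT

def later_calculation(value):
    """Doubles the value, then truncates to 32-bit signed two's complement."""
    doubled = value * 2
    return ((doubled + 2**31) % 2**32) - 2**31

v = MAX_INPUT
assert input_filter(v)       # the filter happily accepts it...
print(later_calculation(v))  # ...but the later use wraps around to -2
```

A test that stops at "the filter accepted the largest valid value" would call this a pass; following the value into its later uses reveals the failure.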

There are other types of variables, not just Integers. Different risks apply when you are testing floating-point numbers or strings. Dividing them into equivalence classes is a little trickier.

We usually test variables together. Any test of a real use of the program will involve many variables. Even if you leave most of them at their default values, the program considers the values of lots of variables when you ask it to do something meaningful. We can manage the number of combinations to test using techniques like all-pairs. In all-pairs testing, the tester chooses a set of maybe 10 variables to test, then chooses a few values of each variable to test, then uses a tool like ACTS or PICT to create a relatively small set of tests (maybe as few as 30) that will combine these values of these 10 variables in an optimized way. (ACTS and PICT are free tools from the National Institute of Standards and Technology and from Microsoft, respectively.) One of the challenges of this type of testing is picking the best values for the individual variables—and that brings us back to domain testing.
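To illustrate the combinatorial payoff, here is a naive greedy sketch of pairwise selection over four hypothetical variables with three values each. This is not how ACTS or PICT work internally; it only shows the idea that a small test set can still cover every pair of values:

```python
from itertools import combinations, product

# Hypothetical test variables and values.
params = {
    "browser": ["Chrome", "Firefox", "Safari"],
    "os":      ["Windows", "macOS", "Linux"],
    "locale":  ["en", "de", "ja"],
    "network": ["wifi", "4g", "offline"],
}
names = list(params)

# Every pair of values (across every pair of variables) must appear
# in at least one test.
uncovered = set()
for a, b in combinations(names, 2):
    for va, vb in product(params[a], params[b]):
        uncovered.add(((a, va), (b, vb)))

tests = []
while uncovered:
    # Greedily pick the full-combination candidate covering the most
    # still-uncovered pairs.
    best, best_covered = None, set()
    for combo in product(*params.values()):
        assignment = dict(zip(names, combo))
        covered = {p for p in uncovered
                   if all(assignment[k] == v for k, v in p)}
        if len(covered) > len(best_covered):
            best, best_covered = assignment, covered
    tests.append(best)
    uncovered -= best_covered

print(f"exhaustive: {3**4} tests, pairwise: {len(tests)} tests")
```

Exhaustive testing of these four variables needs 81 tests; the pairwise set is a small fraction of that while still exercising every two-way interaction.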

We often test variables together that are related to each other. How do you choose boundary values when the boundary (such as largest valid value) of one variable depends on the value of the other? This particular issue appears often in university textbooks but there hasn’t been enough practical advice for working testers.
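A minimal sketch of that dependent-boundary situation, assuming two hypothetical integer fields constrained so that `low <= high` and both lie in [0, 100]. The interesting values of `high` cannot be chosen in isolation; they sit at a boundary defined by the chosen value of `low`:

```python
# Two related fields: FLOOR <= low <= high <= CEIL.
FLOOR, CEIL = 0, 100

def is_valid(low, high):
    return FLOOR <= low <= high <= CEIL

def boundary_cases(low):
    """For a fixed `low`, probe `high` just below, at, and just above
    its dependent lower boundary (low itself), plus the fixed ceiling."""
    return [low - 1, low, low + 1, CEIL]

low = 40
for high in boundary_cases(low):
    print(low, high, "valid" if is_valid(low, high) else "invalid")
```

Change `low` and the valid/invalid split of the same probes shifts with it, which is exactly what makes related variables harder to partition than independent ones.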

The Domain Testing Workbook goes beyond the perfectly-good introductory presentations that appear in many books and courses. We want to help testers apply the technique to situations that are a little more complex but still commonplace in day-to-day testing.