"Some birds aren't meant to be caged; their feathers are just too bright." – Red (Morgan Freeman), The Shawshank Redemption. This blog is from one such bird who couldn't be caged by organizations that mandate scripted software testing. Pradeep Soundararajan welcomes you to this blog and wishes you a good time here and even otherwise.

Never heard about it? I googled the term and found one helpful link, http://www.phadkeassociates.com/index_files/robusttesting.htm, but Wikipedia has no results for it. As far as I can tell, it's not as concrete as other testing techniques.

And then XXX, PMP, Associate QA Manager from some_organization, responds:

Robust testing means the degree to which a software system or component can function correctly in the presence of invalid inputs or stressful environmental conditions.
Sr. Software QA Engineer from elsewhere responds

Is it something like the stability Testing? I never heard of this terminology.

Pradeep Soundararajan : Independent Software Tester with an experience of 7 million and 4 hundred mistakes in testing responds

Same as Black Viper testing.

Senior Software Engineer - Qualty Assurance responds // Note the spelling of Quality as per the Quality Assurance Engineer. No offense. All humans are fallible. //
Black Viper Testing ... now what kind of testing is it?

Pradeep Soundararajan : Independent Software Tester with an experience of 7 million and 4 hundred mistakes in testing responds

@ Senior Software Engineer - Qualty Assurance,

What? You don't know Black Viper testing? I think if you had read about the Selphar box of techniques for testers, you wouldn't have asked about the Black Viper testing technique.

Senior Software Engineer - Quality Assurance again responds

@ Pradeep

Seriously no. Never got a chance to read the same.
Can you please pass over any link for the same?

Pradeep Soundararajan: Independent Software Tester with an experience of 7 million and 4 hundred mistakes in testing again says something

@ Senior Software Engineer - Qualty Assurance

Cool. Time to update you that none of the terms I used exist or are meaningful. I just made them up.

The lesson is: don't get fooled by terms people make up on the fly; instead, focus on your skills as a tester and learn to ask the question, "What do you mean by that?"

I see terms like "Guerrilla testing" and "Smart monkey testing" mostly being asked in interviews by dumb testers, who are usually successful in intimidating candidates into believing such things exist.

Those candidates who couldn't give convincing answers to such questions try looking for answers after the interview and start asking questions in all the forums they can get their hands on. Of course, we humans don't say "I don't know" (just like me) and hence try to give any answer we think might make sense.

The candidates believe the answers provided by someone who appears to be experienced are true. When those candidates turn into interviewers themselves, they are tempted to ask the same question, "What is guerrilla testing?", to a new generation of candidates.

That solves the puzzle of where these great terminologies come from.

Plus:

In the Orkut software testing group, long ago (just about a year back), I saw someone asking the difference between Monkey and Guerrilla testing techniques (which, as per their claim, was asked of them in an interview).

Someone responded to it: If you do aggressive monkey testing, it is called Guerrilla testing.

(If that didn't frustrate you, the response to the above will)

Here was the response: "Thanks!"

_ huh _

I am back to implementing the Black Viper testing technique with a hissing sound. Be careful: don't go near anyone when they are using the Black Viper testing technique; it could be poisonous :))

I was once interviewed by , one of the largest investment banks in Gurgaon.

I was asked "what is noun-adverb testing?"

Good for me that I never cared for it and still don't know the answer. And good for the testing community that I did not become a part of spreading one more bad interview question. Bad for me that I said "I don't know" and did not ask him for the answer.

Nobody should say they do some "kind of testing" without explaining what they mean by it. For example, when someone writes in a test plan that they will do stress testing, they should write what they mean by it.

But it is useful to have a list of different techniques. 1) You can give it to somebody who is learning to be a tester, because if every learner brainstormed technique names themselves, there would be even more different names for the same testing technique. A list helps establish a naming convention. 2) If you have a list of descriptions of techniques, you do not need to think up a description whenever you need one quickly, say for a test plan or a presentation. I would rather spend the time devising testing exercises than writing descriptions. If a description works, reuse it.

1) We put names to things to improve communication (to shorten sentences); we shouldn't do it to confuse people, to put ourselves at a higher status, or to intimidate test job candidates (though it is funny ;-)).

2) Most of the terms coined for different types of testing are relatively new; we don't bother to find out whether the concept is already known, and we develop the terms within our internal circle of co-workers.

3) Due to 1) and 2), we should always try to explain what we are referring to when we mention testing "types" (or at least provide a source link to the complete category listing where it is explained).

Nevertheless, "types" and taxonomies do have benefits when doing a task: they allow us to split the universe into (equivalence) partitions of things, to make sure that later we don't forget any "type" of testing and we cover them all. The problem with types is that anything, including testing activities, can be classified from different points of view, or "dimensions" (sets of types): according to the objective pursued during the testing, according to the knowledge of the underlying product, according to the level of integration with the rest of the system, and so on. We will have a different set of orthogonal types for each of these. And we always forget that there are different classifications for each of these orthogonal attributes, and that we should honor that when we use or invent new test "types", making sure the type name corresponds to one value of the corresponding dimension (e.g., if the dimension is "the objective pursued with testing", then all the type names should be defined in those terms).

Descartes figured this out many years ago: each point (test case) has a value (type) in every dimension that is present. A test case can be "unit testing" done using "white box" techniques and also be a "regression" test to be run on each check-in to the versioned source repository. Furthermore, tests can change their type during their lifecycle: what was once a test to check product maturity can later become a regression test.
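The multi-dimensional view above can be sketched as data: one field per classification dimension, so a single test case holds a value in each dimension independently. This is only an illustration; the `TestCase` class and its field names are hypothetical.

```python
# Sketch of orthogonal test-type dimensions: each test case carries one
# value per classification dimension, rather than one mixed-up "type".
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    objective: str          # e.g. "regression", "maturity", "acceptance"
    knowledge: str          # e.g. "white box", "black box"
    integration_level: str  # e.g. "unit", "integration", "system"

t = TestCase(
    name="check_invoice_totals",
    objective="maturity",
    knowledge="white box",
    integration_level="unit",
)

# The type along one dimension can change over the lifecycle
# without touching the others:
t.objective = "regression"  # was "maturity"
assert (t.knowledge, t.integration_level) == ("white box", "unit")
```

Naming a test type then means picking a value from exactly one of these dimensions, which is the discipline the paragraph above argues for.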

Some examples (by the way, these samples use the types of testing as defined in the SWEBOK: http://www.computer.org/portal/web/swebok/html/ch5#Ref2):

"Are you doing white box testing?"

"No, I'm doing integration testing"

Wrong! You can do white box testing (knowing how the thing was coded) while doing integration testing between different product components.

Wrong! Functional and Acceptance Testing belong to the "what's your objective when testing" classification, while unit, integration, and system belong to the "what level of integration are you testing at" classification, and the two should not be mixed up. You can perfectly well do functional testing in both integration and system testing, for instance.

Because of the sheer number of terms that exist in the testing world, it is at times very difficult to decide which terms one should know and which not.

Another angle to the topic you have started: many a time, testers I have interviewed do not even know what ECP (Equivalence Class Partitioning) is. Now, ECP is just another term, but one would certainly expect a tester to know it. If, as an interviewer, you start elaborating on ECP and explaining its meaning, it would be equivalent to giving away the answer to the question.
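For readers meeting the term cold, this is roughly what ECP asks for: partition the input space into classes the program should treat alike, and test one representative per class instead of every value. A minimal sketch, where the age rule and the function are made up for illustration:

```python
# Equivalence Class Partitioning for a hypothetical rule:
# "valid ages are 18 to 60, inclusive".
def is_valid_age(age):
    """Accepts ages 18-60 inclusive; everything else is invalid."""
    return 18 <= age <= 60

# One representative value per equivalence class:
partitions = {
    "below range (invalid)": 10,
    "within range (valid)": 35,
    "above range (invalid)": 70,
}

for name, representative in partitions.items():
    expected = name.endswith("(valid)")
    assert is_valid_age(representative) == expected, name
print("all equivalence classes behaved as expected")
```

In practice one would add boundary values (17, 18, 60, 61) as well, which is the closely related Boundary Value Analysis.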

Similarly, load/stress/soak etc. are widely used terms. There's also a possibility (and I have encountered this on many occasions) that these terms are used differently by different organisations/people. That doesn't make the terms themselves meaningless, as long as people in communication have the same understanding.

"Robustness" is an attribute of the system, tested under the umbrella of performance testing. We as testers usually form terms around what we test for. So testing for robustness becomes "robustness testing", and probably this is the reason you heard about "robust testing". "Robust testing" as such may not have any meaning at all, apart from for the person(s) who were using it in their context and were the source of the term. If we start drawing meanings, it could mean anything: one I already drew; another could be "testing carried out taking care of all dependencies and variables"; another could be used in the context of automation, where the testing code itself should be robust. So it would certainly be very difficult to answer "What is.." or "How would you do .." questions for such terms.

Another example is "False positive testing", essentially meaning testing for false positives. Without a context in place, this term may not mean anything, but the moment you ask someone from the security industry, even if he hasn't heard the term before, he would be able to explain the purpose of this form of testing. This means, contextual terms that are not widely used can also be meaningful.

Even famous authors float terms, with official definitions and reasoning, that do not become widely used. One example is "Self Satisfaction Testing (SST)" by the author of a performance testing book. Now, this term is very funny, and if we start drawing meanings, there could be many offensive interpretations as well :-). Until we relate it to "Smoke Testing", which in turn should be associated with the context of performance testing, test environment validation, etc., we wouldn't reach the meaning the author intended. Now, does that make the term meaningless? For me: "Yes". For all: "Probably no". I am sure many of the author's peers or his testing group must already be using this term (and in turn asking questions about it in interviews :-)).

Terms should be used to make conversation precise, not complicated and confusing. So this becomes more a responsibility of the people who are communicating than a fault of the term as such.

Various discussions happen around "this is not an official testing term". To this day I do not know what "official" means in this context. I feel pretty comfortable using terms as I like, hearing new terms, and trying to understand their meaning, as long as the conversation progresses and work gets done.

That's a very nice post, and thanks for spreading awareness against such bad practices (hoping bad practices like these really do exist). To add my experience: in an interview, I was given a problem and asked to give an out-of-the-box solution for it :) -- Dhanasekar S

Amazing piece of writing! I wish the monkeys and guerrillas were given their credit... :) Sincere thanks, Pradeep. This should enlighten fellow testers who don't think and reason, and who accept terminologies, intimidated by some wise men. ;)

These days, you just put any word before "testing" and it becomes a form of testing, and if you post it as an interview question on a site, many testers will readily believe it is an authentic form of testing. In an interview, I was asked if I knew Black-box, White-box, and Grey-box testing. I promptly said yes. But the interviewer didn't want to know that. He asked me about Yellow-box testing, Green-box testing, Pink-box testing, Red-box testing, and a few more colors! He said these forms exist. What was he thinking? Does any of this have relevance to the real testing work we testers do?

Mr. Pradeep, I am new to software testing. What is your advice for me? What should I focus on, and how should I manage my career so that I can become an expert and, on the other hand, help other people become one too? Thank you, and I really liked your post; it's so realistic.


Copyrights

Tester Tested! by Pradeep Soundararajan is licensed under Creative Commons. You must give credit to Pradeep Soundararajan when you copy anything from here, by mentioning the name and properly linking to the post. You are not allowed to edit any of the posts without permission. For permissions, write to pradeep.srajan@gmail.com