Rajiv is an Associate Director at Perceptive Informatics with 15 years of experience in development, architectural road maps, automation strategy, and more.

This was my first appearance at a MeetUp group, and I can say I am satisfied with my decision.
I already knew some hands-on Groovy and a few things about JBehave, like writing stories.

I would like to share my takeaways from the session.

In Rajiv's words:

BDD (Behavior Driven Development) is a way to let the 3 Amigos (BAs, Developers, and Testers) collaborate.
Software artifacts and documents like requirement specs, use cases, functional specs, flow charts, and stories can describe the content of the software, but none of them answers the question: "How will the software behave under a specific condition?"

BDD gives an answer to "How should the software work in a specific situation?"

BDD offers a template for defining behavior.

Template :-
++++++++

Given some precondition (Pre-conditions)

When some action is performed by the actor (Steps)

Then some testable outcome is achieved (Expected Behavior)

In a similar way, we can have multiple conditions in a scenario.

A use case can be defined as a set of scenarios, and easyB scripts let us write stories that implement those scenarios in a beautiful way, giving the Business Analyst and the Tester a common platform to work on the same ground.

Writing stories in easyB


description "As simple as it gets"

scenario "Testing easyB setup", {

given "there is some precondition", {
    myList = new ArrayList()
}
when "some steps are performed", {
    myList.add('Hello easyB')
}
then "results can be validated", {
    myList.size().shouldBe 1
}

}

description, scenario, given, when, and then are KEYWORDS. A business analyst can understand the statements written in quotes, e.g. "there is some precondition", while an automation tester writes the actual automation code/test script within the braces, e.g. { myList = new ArrayList() }.

Greater Boston Selenium Users Group

Another example:

ThisIsSampleStory

description ""

scenario "search on google", {
    given "machine is connected to internet"
}

So here a layer is written that interprets this story file and executes it, giving the execution results in the same simple, story-based format.
These results can be generated in HTML or any other format.
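The interpretation layer is conceptually simple: each keyword pairs a human-readable string (for the BA) with an executable block (for the tester), and the runner executes the blocks in order while printing the story text as the report. Below is a minimal, hypothetical sketch of such a layer in plain Java; the `MiniBdd` class, its method names, and the report format are my own assumptions for illustration, not easyB's actual internals.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a BDD interpretation layer (not easyB's real code):
// each step pairs a readable description with an executable block.
public class MiniBdd {
    // One step of a scenario: keyword + description + code block.
    record Step(String keyword, String description, Runnable body) {}

    private final List<Step> steps = new ArrayList<>();

    public MiniBdd given(String d, Runnable r) { steps.add(new Step("Given", d, r)); return this; }
    public MiniBdd when(String d, Runnable r)  { steps.add(new Step("When", d, r));  return this; }
    public MiniBdd then(String d, Runnable r)  { steps.add(new Step("Then", d, r));  return this; }

    // Run each step in order and print a story-style result line,
    // much like the plain-text report a BDD runner produces.
    public boolean run() {
        for (Step s : steps) {
            try {
                s.body().run();
                System.out.println(s.keyword() + " " + s.description() + "  [PASSED]");
            } catch (AssertionError | RuntimeException e) {
                System.out.println(s.keyword() + " " + s.description() + "  [FAILED]");
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        List<String> myList = new ArrayList<>();
        boolean ok = new MiniBdd()
            .given("there is some precondition", myList::clear)
            .when("some steps are performed", () -> myList.add("Hello easyB"))
            .then("results can be validated", () -> {
                if (myList.size() != 1) throw new AssertionError("expected size 1");
            })
            .run();
        System.out.println(ok ? "Scenario PASSED" : "Scenario FAILED");
    }
}
```

The key design point is the same one easyB exploits: the strings stay readable on their own, so a failing report reads like the original story.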

There are many things we can achieve with easyB, like prioritizing test cases using tags, parallel script execution, and more.
To be frank, I would like to learn more about BDD and tools like JBehave with Selenium, or RSpec and Cucumber.

Personally, I feel it is a better way to let a BA and a Tester work together, and a way through which application behavior can be tested against the requirement specifications.

Can we have a layer that takes care of all the automation in a way smart enough to take the manual regression test cases as commands to execute on the AUT?
Anyone can write dumb scripts; the times need innovation, and I will think about it.

Digging into it more, I came to know they are using FLAC (Free Lossless Audio Codec); here is the link: http://en.wikipedia.org/wiki/Free_Lossless_Audio_Codec
Checking more about FLAC, I learned it compresses audio by about 50%, but the good thing is that it does not lose a single bit in the process.

My sincere advice to them: please, please, please do sincere alpha testing. Get people on board from various countries and see if they can extend support for
various phonetics. Before that, see what improvements are required in the original APIs.

Now that Google Chrome is out with voice recognition support, I am curious about many things, e.g. how they are handling accents around the world. The biggest challenge (they must have assessed it already) could be Indian, Chinese, Spanish, and German English accents.

Google Chrome Voice Recognition Support

I tested various words with my Beats earphones/microphone on a Lenovo T410 with a genuine audio driver (I am not sure if they care about all this), but the results were not good. The few words Chrome detected clearly on the first go were "Hello" and "John".

One thing I am sure about is that the auto search suggestions below the original search it gives are absolutely vague: below "Hello" it suggests "I will"; below "Dance" it gives "Jazz", and below that "Dancing".
One thing is really freaky... please check the image.

I would like to know about the APIs and algorithms they are using for voice recognition. The weekend is coming and I will be after it. That gives me ultimate pleasure wor