Continuous Testing and Service Virtualization at StarWest 2015

At StarWest all day Wednesday - from the standing-room-only 7:15am Service Virtualization session (more on this later), to the lively Continuous Testing lunch table hosted by Adam Auerbach, to the conflicting 3pm Continuous Testing and Service Virtualization sessions that forced attendees to make a tough tradeoff - it was clear that the adoption of Agile and DevOps is driving a surge of interest in Continuous Testing... which is, in turn, increasing the demand for Service Virtualization.

Although we can't give you on-demand access to the impressive lunches and chocolate-dipped Mickey Mouse treats that everyone was raving about, we were able to round up a set of resources that give you a taste of the show's hottest Continuous Testing and Service Virtualization discussions - whether or not you made it to the Magic Kingdom...

"It takes more than guts to hold a late addition "bonus" session at 7:15 in the morning; it also takes having an outstanding story to tell so you're not telling it to an empty room while everyone else catches up on their sleep.

Parasoft and Alaska Airlines had both this morning, and they were rewarded with a standing-room-only crowd that was hardly ready to leave when it was over. "Service Virtualization in Action: How Alaska Airlines tests for snowstorms in July" was a huge hit.

We've actually covered this story before, as Parasoft and Alaska Airlines teamed up for a similar webinar earlier this summer, but the impressive crowd size proved that people in every industry imaginable are still trying to figure out how to remove the constraints and timeboxes around software testing - and what exactly service virtualization really is.

Making sure the audience knew that this is very different from server virtualization, Parasoft Chief Strategy Officer Wayne Ariola defined service virtualization as "delivering simulated test environments to enable earlier, faster, and more complete testing."

Ariola credits this shift toward testing continuously throughout the software delivery lifecycle to testers moving away from asking, "Are we done testing?" and toward the far more quality-focused, "Does the release candidate have an acceptable level of risk?"

Alaska Airlines test automation engineer Ryan Papineau then took over the session and detailed how service virtualization helps his team test incredibly complex flight operations software that is loaded with dependencies around passengers, cargo, fuel, staff, check-in/boarding times, and more. Maintaining a high level of quality in a system this complex is incredibly difficult, and Ryan broke down the three areas that caused the biggest headaches for his team before adopting service virtualization:

- Environments continuously changing and having to be shared between dev/test teams

- Integrated data that was often inconsistent or unavailable to test

- Impactful events that do not exist at the time of testing, and a lack of resources to make them happen

While the airline industry certainly has unique dependencies to test against, constraints around environments, test data, and resources are all too common in every industry. By virtualizing complete dev/test environments - along with the services, resources, and events that testers need access to early and throughout the software lifecycle - enterprises like Alaska Airlines are able to keep risk at a consistently "acceptable" level far more easily than before."
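The core idea behind the Alaska Airlines story, swapping a simulated dependency in for a real one so tests can exercise conditions the live system cannot produce on demand, can be sketched in a few lines. This is a minimal stdlib-only illustration, not Alaska Airlines' actual interfaces; the weather service, its response shape, and the departure check are all hypothetical stand-ins:

```python
# Service virtualization in miniature: the code under test talks to a
# "weather service" dependency, and a virtual stand-in lets us simulate
# an impactful event (a snowstorm in July) that the live service and the
# real world cannot produce at testing time.

class LiveWeatherService:
    """The real dependency: shared, constrained, and never snowing in July."""
    def current_conditions(self, airport: str) -> dict:
        raise RuntimeError("live service unavailable in the test environment")

class VirtualWeatherService:
    """A virtual service: returns configurable, repeatable canned responses."""
    def __init__(self, canned: dict):
        self._canned = canned

    def current_conditions(self, airport: str) -> dict:
        return self._canned[airport]

def can_depart(weather_service, airport: str) -> bool:
    """Code under test: departure logic that depends on the weather service."""
    conditions = weather_service.current_conditions(airport)
    return conditions["visibility_miles"] >= 0.5 and not conditions["runway_closed"]

# Simulate a snowstorm at ANC, regardless of the season outside.
snowstorm = VirtualWeatherService(
    {"ANC": {"visibility_miles": 0.25, "runway_closed": True}}
)
assert can_depart(snowstorm, "ANC") is False
```

Commercial service virtualization tools operate at the network boundary (virtualizing HTTP, JMS, and other protocols) rather than in-process, but the testing payoff is the same: the event you need is always available, on demand.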

"Continuous testing is about fast and continuous feedback. Specifically, it is the practice in which tests are run as part the build pipeline so that every check-in and deployment is validated. This includes all types of testing, across all non-production environments. This does not mean that all tests are run all the time, but they are all executed at some point, thus providing the necessary gates to know that the deployment package(s) can move into production with high quality.

If you are doing test automation today, you most likely have some type of keyword or hybrid framework that makes automating regression tests easier. Perhaps you have moved to one of the open-source tools, but your tests still depend on some amount of working code in the testing environment. That approach to automation is no longer effective in today's world of continuous delivery; you must transition to continuous testing.

With continuous testing, tests have to be atomic, which means they are small, independent units. They cannot have dependencies on other tests; otherwise, small changes will force large amounts of refactoring, and large tests also increase debugging time. With smaller tests, you are able to categorize them, easily determine when and where to run them, and run them in parallel.

All testing has to be part of the pipeline, which means that automation, performance, and security engineers have to be familiar with tools such as Maven, Nexus, and Jenkins. They have to ensure that their tests can be kicked off with these tools, and that when a test fails, the results are fed back into the pipeline and trigger a failed build.
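What "atomic" means in practice: each test builds its own state and assumes nothing about ordering, so any subset can run alone or in parallel. A minimal sketch, with a hypothetical Cart class standing in for the unit under test:

```python
# Atomic tests: each test constructs its own fixture data, shares no state
# with other tests, and can therefore run in any order or in parallel.

class Cart:
    """Hypothetical unit under test."""
    def __init__(self):
        self.items = []

    def add(self, sku: str, qty: int = 1):
        self.items.extend([sku] * qty)

    def total_items(self) -> int:
        return len(self.items)

def test_add_single_item():
    cart = Cart()            # fresh state: no shared fixtures, no ordering
    cart.add("SKU-1")
    assert cart.total_items() == 1

def test_add_multiple_quantities():
    cart = Cart()            # fully independent of the test above
    cart.add("SKU-2", qty=3)
    assert cart.total_items() == 3

# Because neither test depends on the other, a parallel runner (for example
# pytest with the pytest-xdist plugin) can execute them in any order.
test_add_single_item()
test_add_multiple_quantities()
```

The moment one test reads state left behind by another, parallel execution and selective runs both break, which is why atomicity is the first prerequisite for pipeline integration.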

Pipeline integration requires that test creation becomes more of a design effort. For example, if we run all our regression tests early on, we delay feedback to the team; if we instead tag some tests as a smoke test and run them within seconds, we know the build results immediately.

The other design aspect that has to be accounted for is the focus of the tests. Are you following the testing pyramid (from high to low: Unit > Service Layer > UI), or are you more like an ice cream cone (from high to low: UI > Service Layer > Unit)?..."

At the 7:15am session, those brave enough to ask (or respond to) speaker questions were given their choice between a Continuous Testing book or a beach ball. To the surprise of our Southern California staff, the book was the unanimous favorite.

Getting Started With Service Virtualization: Implementation Strategies and Best Practices

At the sessions and across the exhibit floor, many attendees were excited by the prospect of service virtualization and wanted to educate themselves on best practices for getting the (beach?) ball rolling within their own organizations. Two vendor-neutral resources specifically designed to help people get started with service virtualization include:

Service Virtualization Implementation Strategies: How you begin your service virtualization initiative can make or break its success, and there's no easy answer to the question "Where should I start?" This paper outlines several critical considerations for developing the strategy best suited to your organization's specific needs. Read it to learn:

- General decision criteria you need to think through before you can determine where and how to get started.

- How to assess which service virtualization implementation focus (environment-based, project-based, demand-based, or hybrid) is best suited to your team's specific goals.

- The pros and cons of the three fundamental service virtualization team structures being adopted across the industry.

Service Virtualization Best Practices Guide: A compilation of vendor-agnostic service virtualization best practices, stories, insights, and advice, in which experts from a broad cross-section of industries address a wide range of topics.

Cynthia Dunlop, Lead Content Strategist/Writer at Tricentis, writes about software testing and the SDLC, specializing in continuous testing, functional/API testing, DevOps, Agile, and service virtualization. She has written articles for publications including SD Times, StickyMinds, InfoQ, ComputerWorld, IEEE Computer, and Dr. Dobb's Journal. She has also co-authored and ghostwritten several books on software development and testing for Wiley and Wiley-IEEE Press. Dunlop holds a BA from UCLA and an MA from Washington State University.
