QUEST Conference
Workshops

The “Data-Driven Automation Test Architecture,” or D-DATA, is a test management system that leverages Selenium for automated testing in a dynamic corporate setting. D-DATA offers several advantages over more basic Selenium test architectures such as plain JUnit suites, and will serve as a platform for further innovation in quality assurance automation as it continues to evolve. While JUnit is a common harness for Selenium tests, it is a relatively simplistic architecture that does not scale well and is rigid and inflexible in how it can be used. The D-DATA approach overcomes these limitations by starting from a purer object-oriented design, with separation and encapsulation of responsibilities. This design lends itself to flexibility in defining and implementing tests, unlocking configuration and customization options that JUnit cannot match. Join Jacqueline to learn about this new approach.

Discover why the D-DATA architecture is preferable to JUnit for automation

Learn how to extend your automation capabilities using D-DATA architecture
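D-DATA itself is proprietary to the presenter, so its internals are not shown here. The following is only a minimal sketch of the general data-driven idea the abstract describes: test cases live as data rows, separate from a page object that encapsulates UI interaction, so adding coverage means adding rows rather than writing new test methods. `FakeLoginPage` is a hypothetical stand-in for a Selenium page object wrapping a WebDriver; all names are illustrative.

```python
from dataclasses import dataclass

@dataclass
class LoginCase:
    """One row of test data; cases live apart from the test logic."""
    username: str
    password: str
    expect_success: bool

class FakeLoginPage:
    """Stand-in for a Selenium page object; a real one would wrap a WebDriver."""
    VALID = {"alice": "s3cret"}

    def log_in(self, username, password):
        return self.VALID.get(username) == password

def run_suite(page, cases):
    """The runner knows nothing about individual cases: new coverage
    is new data rows, not new test methods."""
    results = []
    for case in cases:
        outcome = page.log_in(case.username, case.password)
        results.append(outcome == case.expect_success)
    return results

cases = [
    LoginCase("alice", "s3cret", True),
    LoginCase("alice", "wrong", False),
    LoginCase("mallory", "s3cret", False),
]
results = run_suite(FakeLoginPage(), cases)
```

The separation of responsibilities (data, page interaction, execution) is what gives a data-driven design its configurability compared with one JUnit method per scenario.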

A well-groomed backlog is critical to the success of any agile project. In this workshop, you’ll work with a sample backlog, with the instructor acting as the Product Owner. You’ll groom the backlog and estimate Story Points using a variation of Planning Poker, focusing on the unique perspective the QA professional brings to the discussion. You’ll practice verifying that each Story is testable and has valid acceptance criteria, and you’ll use risk-based testing to determine the test estimates that are supplied.

Gain practical experience participating in backlog grooming as a QA professional

Exercise your testing expertise to verify the Story is properly groomed

Use your testing experience to ensure the team develops a valid estimate
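The workshop’s specific Planning Poker variation is not described in the abstract, so the sketch below shows only the common convergence rule many teams use: if the highest and lowest cards are far apart on the Fibonacci scale, the outliers explain their reasoning and the team re-votes; otherwise the round produces an estimate. The "take the high card" rule is one conventional choice, not necessarily the instructor’s.

```python
FIBONACCI = [1, 2, 3, 5, 8, 13, 21]

def poker_round(estimates):
    """One Planning Poker round: if the highest and lowest cards are more
    than two Fibonacci steps apart, there is no consensus and the team
    discusses and re-votes; otherwise take the highest remaining card."""
    lo, hi = min(estimates), max(estimates)
    if FIBONACCI.index(hi) - FIBONACCI.index(lo) > 2:
        return None  # no consensus yet; discuss and re-vote
    return hi

estimate = poker_round([3, 5, 5, 8])  # cards close enough: round resolves
```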

There are numerous benefits for companies that move to Service-Oriented Architectures (SOA), including reuse of functionality, decreased maintenance, ease of sharing with external applications, and increased application consistency. As reliance on services increases, thoroughly testing services and APIs becomes paramount. A single defect in a web service can have a widespread impact on every application that consumes it. With such an extensive potential impact, QA teams must use an approach and process for testing services and APIs that ensures full coverage. Attend this workshop to learn the level of testing required and both manual and automated approaches for testing services and APIs.

Understand the types of services and APIs and the benefits of using them
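As a taste of what automated service testing involves, here is a minimal, tool-agnostic sketch of a contract check: assert the status code plus the presence and type of required body fields. The `/customers/42` endpoint and the response shape are hypothetical; real suites would issue live HTTP calls via a client library rather than use a simulated response.

```python
def check_response(resp, expected_status=200, required_fields=()):
    """Minimal contract check for a service response: status code plus
    presence and type of required body fields."""
    problems = []
    if resp["status"] != expected_status:
        problems.append(f"status {resp['status']} != {expected_status}")
    for name, typ in required_fields:
        if name not in resp["body"]:
            problems.append(f"missing field: {name}")
        elif not isinstance(resp["body"][name], typ):
            problems.append(f"{name} has wrong type")
    return problems

# Simulated response from a hypothetical GET /customers/42 endpoint.
resp = {"status": 200, "body": {"id": 42, "name": "Acme"}}
problems = check_response(resp, 200, [("id", int), ("name", str)])
```

Because one defective service affects every consumer, even a small automated contract check like this, run against each service build, catches breakage before it spreads.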

Do you call yourself a context-driven tester? Would you like to, but haven’t been able to implement context-driven testing at your organization? Have you read about things like session-based exploratory testing, coverage mind maps, and low-tech dashboards but haven’t been able to make them work for you? This workshop will highlight some key aspects of context-driven testing (CDT), discuss its benefits, and explain how to make it work for you and how to sell it to your organization. Real-world examples based on actual experience using CDT at multiple organizations of varying sizes will be presented. In this workshop, Jason will share tips and tricks for successfully taking a context-driven approach to testing at your organization.

Receive an overview of context-driven testing

Discuss the benefits of CDT for testers, test managers and project teams

Test teams come in all shapes and sizes. Some are highly technical, some are filled with subject matter experts, and others are a blend. Driving up team performance and productivity requires recognition of team strengths, an understanding of organizational need, and, most importantly, a common language. In this session, Anne will share essential steps she has taken to transform teams from meeting expectations to raising the bar. Building high-performing teams, whether in testing or other parts of the organization, results in better customer experience, higher productivity, and better management insight. Arm yourself with practical ideas and tools to take your team to the next level, and discover the value of earning and maintaining professional certifications and contributing to certification groups.

Learn how to capture a current state of capability, engagement, and performance

Identify the factors in selecting the professional certification and common language the team will use

Define targets and metrics for measuring the impact of team performance

Defects are a time-consuming part of every company’s development cycle, and how we deal with them varies greatly from company to company and department to department. Defect review meetings are costly in both time and dollars, and moving a defect from customer report to development team in a timely fashion is critical. In this tutorial, a new way of dealing with defects, from prioritization to resolution, is presented and explained. You will learn about the new agile defect resolution process in detail: its origins, its history, and how it relates to risk management and FMEA (Failure Mode and Effects Analysis). Each defect rating will be explained, as well as how the process can be integrated with tools such as TFS or VersionOne. Finally, resolution and response will be covered. Join John and Jeff for a fun learning session and learn an agile method for handling defects.

Participate in a detailed overview of the new agile defect resolution process

Learn how each defect rating is applied and how to integrate the ratings with process tools
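The tutorial’s own rating scheme is not spelled out in the abstract; the sketch below shows only the standard FMEA calculation it builds on, where a Risk Priority Number (severity × occurrence × detection, each rated 1–10) drives the order in which defects are resolved. The example defects and ratings are invented for illustration.

```python
def risk_priority(severity, occurrence, detection):
    """Classic FMEA Risk Priority Number: each factor is rated 1-10,
    higher meaning worse; the product drives resolution order."""
    for v in (severity, occurrence, detection):
        if not 1 <= v <= 10:
            raise ValueError("ratings must be 1-10")
    return severity * occurrence * detection

# Hypothetical defects rated by the team during review.
defects = {
    "login crash": risk_priority(9, 6, 2),      # severe, frequent, easy to detect
    "typo on help page": risk_priority(2, 3, 1),  # minor, occasional, obvious
}
ordered = sorted(defects, key=defects.get, reverse=True)
```

Scoring like this turns a subjective defect review meeting into a short, repeatable calculation, and the resulting numbers can be stored as fields in tools such as TFS or VersionOne.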

People frequently talk about “product improvements” or “product innovations,” and even more abstractly about “process improvements” or “process innovations,” yet there is little discussion of the differences between improvements and innovations. Because these terms are not synonyms, this workshop examines the similarities and differences between improvements and innovations as they relate to processes and methods for testing and evaluating software and software-intensive products and systems. Come learn strategies that will help you identify when process change is essential and distinguish whether it is better to pursue improvements or innovations. The TRIZ (Theory of Inventive Problem Solving) system will be introduced, and its application to process improvements and innovations in system and software testing will be described. Special emphasis will be given to combining innovative thinking and solution techniques with your other process improvement efforts.

In this age of offshore, nearshore, and outsourced testing, who is monitoring performance outcomes? How do you assess whether you are getting the service results you paid for? Is there a “best” or “better” way to measure performance in service contracts? Contractor performance is a new area for many testing and quality managers, but supplier management is not. This workshop provides test managers with best practices that can lead to more effective measurement of expected and actual performance outcomes. Quality assessment in a performance-based environment represents a significant shift from the traditional scrutiny of process compliance to the measurement of outcomes. This workshop addresses issues related to measuring supplier efforts in a performance-based environment, determining what a “good job” looks like, and identifying key problem areas and best practices for assessing whether outcomes are being achieved.

Discover cornerstones of performance measurement and surveillance

Explore issues in measurement in a performance-based environment

Learn what to measure, how to measure, as well as tips on identifying outcomes
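The workshop does not prescribe specific metrics, so the sketch below only illustrates the general shift it describes: instead of auditing process compliance, you record contracted outcome targets and check measured results against them. The metric names and thresholds are hypothetical examples, not a recommended contract.

```python
def outcome_report(targets, actuals):
    """Compare measured outcomes against contracted targets.
    Each target is (metric, threshold, 'min' or 'max')."""
    report = {}
    for metric, threshold, kind in targets:
        value = actuals[metric]
        report[metric] = value >= threshold if kind == "min" else value <= threshold
    return report

# Hypothetical contracted outcomes for an outsourced test supplier.
targets = [
    ("defect_escape_rate", 0.05, "max"),  # at most 5% of defects escape to production
    ("on_time_delivery", 0.95, "min"),    # at least 95% of milestones on time
]
actuals = {"defect_escape_rate": 0.03, "on_time_delivery": 0.90}
report = outcome_report(targets, actuals)
```

A report like this makes “what a good job looks like” explicit per metric, rather than leaving it to judgment after the fact.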

Mobile apps have brought a whole new set of testing challenges: fast-paced development cycles with multiple releases per week, multiple app technologies and development platforms to support, a huge range of devices and form factors, and added pressure from enterprises and consumers with little patience for low-quality apps. With these new challenges comes a new set of mistakes testers can make. Fred has worked with dozens of mobile test teams to help them avoid common traps when building test automation for mobile apps, and he will share best practices for getting started with mobile test automation. Fred will bring real stories and struggles and show you how small changes in process made these mobile apps ten times more reliable. Mobile test automation can quickly become a nightmare if you don’t get it right from the start; these industry best practices will help you get started quickly and correctly.
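Fred’s specific fixes are not detailed in the abstract, but one widely cited reliability practice in mobile automation is replacing fixed sleeps with polling waits, so a test tolerates variable device and network timing instead of failing intermittently. The sketch below is a generic, framework-free version of that pattern; the simulated element is invented for illustration.

```python
import time

def wait_until(condition, timeout=5.0, interval=0.1):
    """Poll a condition instead of using fixed sleeps: one of the most
    common fixes for flaky mobile UI tests."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError("condition not met within %.1fs" % timeout)

# Simulated element that becomes "visible" after a short delay.
state = {"visible_at": time.monotonic() + 0.3}
element = wait_until(lambda: time.monotonic() >= state["visible_at"] and "element")
```

A polling wait finishes as soon as the condition holds, so it is both faster than a pessimistic `sleep()` and far less flaky than an optimistic one.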

Performance engineering has become increasingly critical to the success and user adoption of web applications, especially with increasing market competition and the demand to operate at internet scale. It is well known that site performance directly impacts the bottom line of online businesses. But not every performance testing effort is implemented in a valuable fashion, nor does it fulfill the needs of the business; its failures and successes depend on its foundational blocks. Performance engineers can no longer linger in the comfort zone and the techniques of the past. Not every performance testing strategy needs to be equally elaborate, nor does it need to leverage similar tools and techniques. To deliver effectively on performance testing, we need to be adaptive and flexible, riding the wave of change and the pressure for faster delivery. We need to better understand the technologies, project drivers, and constraints in order to assess and design our testing approach and implement the appropriate flavor of performance testing.

Learn about technology trends and their implications on performance testing

Explore the different shades of performance testing and the scenarios where each applies

Understand the critical steps to delivering an actionable performance testing solution to drive faster and more scalable applications
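One concrete step toward an actionable performance testing solution, whatever tool produces the raw numbers, is reporting latency percentiles rather than averages, since a handful of slow responses can hide behind a healthy mean. The sketch below uses a simple nearest-rank percentile; the sample latencies are invented for illustration.

```python
def percentile(samples, p):
    """Nearest-rank percentile of a list of response times."""
    ordered = sorted(samples)
    k = max(0, round(p / 100 * len(ordered)) - 1)
    return ordered[k]

# Response times in milliseconds from a hypothetical load run.
latencies = [12, 15, 14, 90, 13, 16, 240, 14, 15, 13]
p50 = percentile(latencies, 50)   # typical user experience
p95 = percentile(latencies, 95)   # the slow tail users actually notice
```

Here the median is healthy while the 95th percentile exposes the outliers, which is exactly the kind of signal a performance test should surface before users do.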