Quality assurance (QA) used to be a compliance activity. You were releasing a product and needed to test it and stamp it “approved.” QA was about testing that the code worked. You might manually test the code. You might even have tried some automation — coding a set of test scripts that would try to catch regressions or errors you had eradicated in the past, but which somehow crept back in. All in all, you were reasonably satisfied that you achieved a level of test coverage that met your goals. Then, you put your code into production and crossed your fingers that nothing went wrong. And if it did, you tried to fix it as quickly as humanly possible.

But the world has changed. Now, users expect ever-higher levels of user experience (UX) and demand to be delighted. If you make a mistake, you could lose the user forever. And you need to constantly improve. With continuous delivery, you’re probably releasing your app weekly — or even daily — dealing with more and more requirements as well as shrinking timeframes in which to test. UX, performance, functionality … your app must delight, it must respond in time, it must work. At the same time, your competition is growing. They’re moving fast, and you need to outperform them on revenue, or whatever positive business outcome you measure yourself by. Simply put, your app is your business. So really, you need to test your app to ensure it will meet and exceed your business goals — and monitor it to ensure you stay there.

Let’s consider an example. Say you’re in retail and the goal for your e-commerce website is to sell $10 million in merchandise per month, but you’re only selling $5 million. How do you figure out how you’re doing against your competitors and why you aren’t meeting your goals? Is it that senior citizens are not checking out because they can’t read the text? Or that millennials find the delay under loads of greater than 5,000 users so frustrating that they sign off? Or, simply, that an undetected error is causing a failure for 1 in 3 users who try to check out? The answer is smart software testing that doesn’t stop pre-production but continues in production to see if we’re meeting the business outcomes — and even identifies why not and how we could fix the issue back in the development process.
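At its core, that kind of diagnosis comes down to segmenting real session data and comparing funnel conversion by cohort. Here’s a minimal sketch of the idea; the data, field names, and segments are illustrative assumptions, not Eggplant’s implementation:

```python
# Each session records the user's demographic segment and how far
# they got through the purchase funnel (illustrative data only).
sessions = [
    {"segment": "senior",        "reached_checkout": True, "completed": False},
    {"segment": "senior",        "reached_checkout": True, "completed": False},
    {"segment": "millennial",    "reached_checkout": True, "completed": True},
    {"segment": "millennial",    "reached_checkout": True, "completed": False},
    {"segment": "price_shopper", "reached_checkout": True, "completed": True},
    {"segment": "price_shopper", "reached_checkout": True, "completed": True},
]

def checkout_failure_rate(sessions, segment=None):
    """Share of sessions that reached checkout but never completed it."""
    pool = [s for s in sessions
            if s["reached_checkout"]
            and (segment is None or s["segment"] == segment)]
    if not pool:
        return 0.0
    failures = sum(1 for s in pool if not s["completed"])
    return failures / len(pool)

overall = checkout_failure_rate(sessions)  # half of all checkouts fail
by_segment = {seg: checkout_failure_rate(sessions, seg)
              for seg in {s["segment"] for s in sessions}}
# Seniors fail every time while price shoppers never do -- a strong hint
# the problem is segment-specific (e.g., unreadable text), not a global outage.
```

The overall rate alone would only tell you something is wrong; slicing the same data by cohort is what points to *who* is affected and, therefore, *why*.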

Smart software testing is about using artificial intelligence (AI) and analytics to continuously test and monitor your end-to-end digital UX; analyze your apps and real data to create a model of system and user journeys; and automatically create test cases that provide strong coverage of the UX, as well as of system performance and functionality. Through feedback loops, you can zoom in on problems quickly and address them to ensure that your product delights customers and that you’re meeting your business objectives. From there, you can base your testing strategy around a learning model that can go even further, building the model itself by watching and understanding the system.

Getting back to the e-commerce scenario presented above, we can see how real users are using the website by tracking each of their sessions against predefined user journeys. We might note that something is going wrong in the checkout process and need to zoom in to diagnose it. In this situation, synthetic users can play a key role in recreating the problems — operating as if they were real users utilizing the app or website in production under real load. You can give synthetic users different personalities and profiles — like the 30-something price shopper in Cincinnati, the 22-year-old graduate student in Boston looking for trendy items, and the senior citizen in North Dakota who likes colorful sweaters. By deploying synthetic users of each demographic to zoom in on the area where we’re not hitting our key business outcomes, we can find out, for example, that seniors aren’t buying because the checkout process is confusing, or that the grad student is looking and leaving, unimpressed with the product selection. But the price shopper is buying loads from the site’s sale section. Armed with that information, you can fix what’s wrong and take a step closer to realizing your $10 million monthly merchandise goal.
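Conceptually, a synthetic user is just a scripted persona that replays a journey against the app and reports where it gave up. The sketch below shows the shape of that idea; the personas, fields, and load times are hypothetical examples, not Eggplant’s API:

```python
from dataclasses import dataclass

@dataclass
class SyntheticUser:
    """A scripted 'persona' that replays a user journey against the app."""
    name: str
    age: int
    location: str
    patience_s: float  # abandons if a page takes longer than this (seconds)
    journey: list      # ordered page names the persona tries to visit

    def run(self, page_load_times):
        """Walk the journey; return the last page reached before abandoning."""
        last = None
        for page in self.journey:
            if page_load_times.get(page, 0.0) > self.patience_s:
                return last  # gave up waiting on this page
            last = page
        return last

# Hypothetical personas mirroring the examples in the text.
grad_student = SyntheticUser("grad_student", 22, "Boston", 2.0,
                             ["home", "trending", "checkout"])
senior = SyntheticUser("senior", 70, "North Dakota", 8.0,
                       ["home", "sweaters", "checkout"])

# Assumed per-page load times under heavy load (e.g., 5,000+ users).
load_times = {"home": 1.0, "trending": 3.5, "sweaters": 2.0, "checkout": 1.5}

print(grad_student.run(load_times))  # "home" -- bailed on the slow trending page
print(senior.run(load_times))        # "checkout" -- completed the journey
```

Running many such personas against production-like load surfaces exactly the segment-specific drop-off points the article describes: the impatient grad student abandons on a slow page that the patient senior sails past.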

Eggplant customers are doing all of this using our Digital Automation Intelligence approach and suite. Our integrated solutions use AI, machine learning, and analytics to recommend tests to carry out, learn continuously, and perform intelligent monitoring that can predict business impacts and enable development teams to fix issues before they occur. Our solutions help companies radically improve automation, time to market, and test coverage to delight customers and drive revenue.

Dr. John Bates is the CEO of Eggplant, a visionary technologist and highly accomplished business leader. He's also the author of the book “Thingalytics: Smart Big Data Analytics for the Internet of Things.”


About Us

Eggplant provides user-centric, Digital Automation Intelligence solutions that enhance the quality and performance of the digital experience. Only Eggplant enables organizations to test, monitor, analyze, and report on the quality and responsiveness of software applications across different interfaces, platforms, browsers, and devices, including mobile, IoT, desktop, and mainframe.