
Continuous Mobile Application Testing

Fingers and eyeballs. Given the onslaught of mobile devices and apps into the SDLC, fingers and eyeballs seem to be the only way apps can be tested right now. This translates to manual testing through the pre- and post-release lifecycle. But manual testing is problematic for many reasons: it drastically slows down the development process, it leaves a huge margin for error, and it ultimately lowers confidence in a team’s ability to release quality software in a short amount of time.

Today’s mobile app testing solutions must provide for continuous integration testing of web and native applications. They should allow teams to quickly create, modify, and execute tests. The testing solution should also offer plugin integration with Jenkins, or integrate as an Ant task in any CI server, and ultimately deliver test results in the standard JUnitXML format back to the CI server. With their test product integrated into a continuous integration server, companies can release multiple builds per day with unit, functional, and performance tests all conducted automatically.
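As a rough illustration of what "delivering results in JUnitXML" means in practice, the sketch below builds such a report with only the Python standard library. The suite name, test names, and timings are invented for the example, not taken from any real run.

```python
# Sketch: emitting test results in JUnitXML so a CI server such as
# Jenkins can consume them. Names and timings here are illustrative.
import xml.etree.ElementTree as ET

def to_junit_xml(suite_name, results):
    """results: list of (test_name, passed, seconds, message) tuples."""
    suite = ET.Element("testsuite", {
        "name": suite_name,
        "tests": str(len(results)),
        "failures": str(sum(1 for _, ok, _, _ in results if not ok)),
    })
    for name, ok, seconds, message in results:
        case = ET.SubElement(suite, "testcase",
                             {"name": name, "time": f"{seconds:.3f}"})
        if not ok:
            ET.SubElement(case, "failure", {"message": message})
    return ET.tostring(suite, encoding="unicode")

report = to_junit_xml("login-smoke", [
    ("test_login_box_visible", True, 0.42, ""),
    ("test_swipe_to_dashboard", False, 1.07, "element not found"),
])
```

A CI server only needs to find a file in this shape on disk after the job runs to render pass/fail trends and link to individual failures.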

Here’s the SOASTA perspective for continuous mobile app testing execution in the SDLC.

First, the requirements for a solution

Record gestures and interactions with 100% accuracy and at the same speed, velocity, and precision the mobile OS sees. This is especially important for gaming, a hot area in mobile apps. When a gesture, for example a finger swipe, is performed on a mobile device, it carries a great deal of metadata. It might seem like a simple swipe of a finger, but what the mobile OS sees is a large array of data, including the speed and trajectory of the gesture sampled many times per second. Simply testing with X/Y coordinates for the start and end of the swipe would not provide the precision needed to test many apps accurately, and virtually none in the gaming arena.
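A minimal sketch of the difference between endpoints and the data the OS actually records: a swipe as a stream of time-stamped samples, from which speed along the real trajectory can be recovered. The sample rate and coordinate values are hypothetical.

```python
# Sketch: a swipe as the mobile OS sees it -- not two X/Y endpoints but
# a stream of time-stamped samples. Values here are illustrative.
from dataclasses import dataclass

@dataclass
class TouchSample:
    t: float  # seconds since the gesture started
    x: float
    y: float

def average_speed(samples):
    """Pixels per second along the sampled path, not the straight line."""
    path = sum(
        ((b.x - a.x) ** 2 + (b.y - a.y) ** 2) ** 0.5
        for a, b in zip(samples, samples[1:])
    )
    return path / (samples[-1].t - samples[0].t)

# A swipe sampled at ~60 Hz would yield dozens of points; three shown here.
swipe = [TouchSample(0.00, 10, 300),
         TouchSample(0.05, 120, 295),
         TouchSample(0.10, 260, 290)]
```

Replaying only `swipe[0]` and `swipe[-1]` would discard the intermediate samples, and with them the speed and trajectory that gesture-sensitive apps (games especially) react to.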

Test on the real device. This requirement cannot be stressed enough. Even Apple has said testing on the iOS device simulator that ships with Xcode is not the recommended way to test an app before shipping it. Apps must be tested on the real device to understand the real-world customer experience, performance, and overall quality. Solutions based on simulators or virtual devices will always be a few steps away from the real world and leave coverage gaps in quality.

Perform validations based on the actual objects and application states present on the device. Early mobile testing products used optical recognition for both test creation and playback. To validate that something was present on the screen, such as a login box, the product would take a screen capture of the device and attempt to locate regions of the image. This approach is very fragile; the margin for error is around 4 pixels. As a result, if a developer changed the color of the login box or moved it around on the screen, the tests would need to be rewritten or modified. Being able to access the application objects at a native level is critical: a viable testing solution in today’s mobile SDLC must be able to access the real objects in the app at a native level to perform tests and validations.
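To make the contrast concrete, here is a toy sketch of object-level validation: a lookup in the app's native view hierarchy by identifier, so the check survives a moved or recolored login box. The tree shape and the `loginBox` identifier are hypothetical, not any particular platform's API.

```python
# Sketch: validating against the native object tree rather than pixels.
# The view-hierarchy structure and identifiers here are hypothetical.
def find_element(node, element_id):
    """Depth-first search of a view hierarchy by identifier."""
    if node.get("id") == element_id:
        return node
    for child in node.get("children", []):
        found = find_element(child, element_id)
        if found:
            return found
    return None

screen = {"id": "root", "children": [
    {"id": "loginBox", "x": 40, "y": 220, "color": "#0055AA"},
]}

# The validation keys off identity, not pixels: position and color can
# change in a later build without breaking the test.
assert find_element(screen, "loginBox") is not None
```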

Second, the blueprint for mobile application automation

Several challenges in mobile test automation must be solved before a mobile testing solution can be considered complete. One is getting a testable version of the application deployed to the mobile device with no manual intervention. Having to manually deploy the application in order to kick off a battery of automated tests defeats the purpose of continuous integration testing.

With a typical web application, a Jenkins CI workflow would check out source code, build, deploy to a test environment, and then execute the compositions. Deploying web applications to test environments is available today as a feature of Jenkins. However, deploying apps to mobile devices without manual involvement from the engineer is a problem not yet solved industry-wide.

CI Architecture

When building iOS applications, your continuous integration slave for mobile testing is a Mac. Code is built and deployed to the mobile device through Xcode. Using Jenkins as the example CI server, the workflow is as follows:

1) Jenkins CI server wakes up the slave and instructs it to check out the source code

2) Code is built on the Mac slave using the Xcode command line utilities

These are the typical steps for automating the build of iOS applications. The following steps are specific to mobile testing products like the one described here:

3) One or more iOS devices are connected to the Mac slave via USB

4) The deployment utility, as a step in the Jenkins job definition, automates deployment of the app to the devices through Xcode

5) Once deployed, devices will automatically connect to the product and be ready to receive and execute tests

After the application is deployed via automation through Jenkins, execution and reporting can begin.

6) Tests are executed

7) Results are fed into Jenkins for pass/fail reporting and post test analysis

Results are fed back into Jenkins in the industry-standard JUnitXML format for viewing. Any time a failure is encountered, the Jenkins plugin lets the engineer view the exact failure from within Jenkins, without having to separately log in to the product, search for the results, and drill down: a link takes you directly to the result and failure for quick analysis.
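The seven steps above can be sketched as the shell commands a Jenkins job definition would run, built here as argv lists. `git` and `xcodebuild` are the real tools; `deploy-to-device` and `run-tests` are stand-ins for whatever vendor deployment and test-execution utilities the job uses, and the scheme, repository, and output paths are invented for the example.

```python
# Sketch: the Jenkins job's shell steps for the iOS workflow above.
# `xcodebuild` is the real Xcode CLI; `deploy-to-device` and `run-tests`
# are hypothetical stand-ins for vendor utilities.
def ios_ci_steps(scheme, repo_url):
    return [
        ["git", "clone", repo_url, "app"],                # 1) check out source
        ["xcodebuild", "-scheme", scheme,                 # 2) build on the Mac slave
         "-configuration", "Debug", "build"],
        ["deploy-to-device", "--all-connected-usb",       # 3-5) push to USB devices
         "build/Debug-iphoneos/MyApp.app"],
        ["run-tests", "--format", "junitxml",             # 6-7) execute, report
         "--output", "results.xml"],
    ]

steps = ios_ci_steps("MyApp", "https://example.com/app.git")
```

The point of modeling the job this way is that every step is a non-interactive command, so the whole chain, including device deployment, can run on every commit with no engineer in the loop.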

The SOASTA CloudTest Methodology

Our methodology, released in 2009, is the body of best practices for testing applications in a production environment, testing in a lab and in production with continuity between them, and testing on a routine schedule between both environments on two separate trains. Using this methodology, we’ve enabled numerous companies to achieve best-in-class results in their application testing operations, including six of the top ten online retailers in the United States.

In the CloudTest Methodology, within the execution phase, CloudTest enables rapid test cycles by accelerating test definition, design, execution, and assessment. The visual development environment allows teams to create tests quickly, as little to no code is written in the process. Execution can be automated via schedulers or continuous integration systems. As with all data in CloudTest, results are delivered in real time, so engineers get immediate feedback by creating and manipulating dashboards as tests run, with instant visibility into application performance and stability.

This methodology suggests two trains, one in the lab environment and one in production, which maintain a schedule of continuous testing in the lab alongside targeted tests in production, with go/no-go decisions made at each tollgate.

Conclusion

Success in today’s fast-moving online world demands that companies have the right testing solution. Creating a poor customer experience can be catastrophic. Poor performance can result in lost revenue, failed customer retention, and a viral spread of poor brand reputation. The traditional players in the marketplace have been slow to respond with solutions that deliver a high level of confidence these problems will be avoided. That confidence comes from having a modern testing approach and the right tools for the job. New technologies are available to deliver an approach and tool set that overcomes the challenges associated with mobile application testing.

About the Author

Dan Bartow is Vice President of Product Management at SOASTA, the leader in cloud testing. You’d be hard-pressed to find someone with more knowledge of these matters than Dan, whose career has included stints at Intuit, TurboTax, Neiman Marcus, ATG, and several start-ups.