Category: Open Source

The majority of organizations are already deep into Agile practices, with a goal of becoming DevOps and continuous delivery (CD) compliant.

While some may say that maximizing the percentage of test automation would bring these organizations toward DevOps, it takes more than just test automation.

To mature DevOps practices, a continuous testing approach needs to be in place, and it involves more than automating functional and non-functional testing. Test automation is obviously a key enabler for being agile, releasing software faster, and responding to market events; however, continuous testing (CT) requires some additional considerations.

“CT is the process of executing automated tests as part of the software delivery pipeline in order to obtain feedback on the business risks associated with a software release candidate as rapidly as possible. It evolves and extends test automation to address the increased complexity and pace of modern application development and delivery”

The above suggests that a CT process would include a high degree of test automation, a risk-based approach, and a fast feedback loop back to the developers upon each product iteration.

How to Implement CT?

A risk-based approach means sufficient coverage of the right platforms (browsers and mobile devices); such platform coverage reduces business risk and assures a high-quality user experience. Note that platform coverage is a continuous maintenance requirement, since the market changes constantly.
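To make the risk-based idea concrete, platform coverage can be derived from usage analytics: sort platforms by usage share and keep the smallest set that reaches a target percentage of traffic. A minimal sketch in Python (the platform names and shares are made-up illustration data, not market figures):

```python
def select_platforms(usage_share, target_coverage=0.8):
    """Pick the smallest set of platforms whose combined usage share
    reaches the target coverage (risk-based selection)."""
    selected, covered = [], 0.0
    # Most-used platforms first: they carry the highest business risk.
    for platform, share in sorted(usage_share.items(), key=lambda kv: -kv[1]):
        if covered >= target_coverage:
            break
        selected.append(platform)
        covered += share
    return selected, covered

# Illustrative usage data (not real market figures).
usage = {
    "Chrome (desktop)": 0.40,
    "Samsung Galaxy S8": 0.20,
    "iPhone 7 / Safari": 0.18,
    "Firefox (desktop)": 0.10,
    "Edge (desktop)": 0.07,
    "Older Android 5.x": 0.05,
}

platforms, covered = select_platforms(usage, target_coverage=0.8)
print(platforms)           # the smallest high-risk platform set
print(round(covered, 2))
```

In practice the shares would come from your web or app analytics, and the coverage target would be set according to the business risk appetite.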

Continuous testing needs automated end-to-end testing that integrates with existing development processes, while excluding errors and enabling continuity throughout the SDLC. That principle can be broken down as follows:

Implement the “right” tests and shift them into the build process, to be executed upon each code commit. Only reliable, stable, and high-value tests would qualify to enter this CT test bucket.

Ensure the CT test bucket runs within a single CI channel; in CT, there is no room for multiple CI channels.

Leverage reporting and analytics dashboards to reach "smart" testing decisions and actionable feedback that supports a continuous testing workflow. As the product matures, tests need maintenance, and some may be retired and replaced with newer ones.

A stable lab and test environment is key to an ongoing CT process. The lab should be at the heart of your CT and should support the platform coverage requirements above, as well as the CT test suite and the test frameworks that were used to develop those tests.

Continuous testing must be seamlessly integrated into the software delivery pipeline and DevOps toolchain. As mentioned above, regardless of the test frameworks, IDEs, and environments (front end, back end, etc.) used within the DevOps pipeline, CT should pick up all relevant tests (unit, functional, regression, performance, monitoring, accessibility, and more) and execute them in parallel, in an unattended fashion, to provide a "single voice" for the go/no-go release decision that happens every 2-3 weeks.

Lastly, for a CT practice to work time after time, the above principles need to be continuously optimized, maintained, and adjusted as things change, whether within the product roadmap or in the market.

As we are about to wrap up 2017, it's the right time to get ready for what's expected next year in the mobile, cross-browser testing, and DevOps landscape.

To categorize this post, I will divide the trends into the following buckets (there may be a few more points, but I believe the ones below are the most significant):

DevOps and Test Automation on Steroids Will Become Key for Digital Winners

Artificial Intelligence (AI) and Machine Learning (ML)/ Tools alignment as part of Smarter Testing throughout the pipeline

IOT and Digital Transformation Moving to Prime Time

DevOps and Automation on Steroids

In 2017, we saw tremendous adoption of more agile methods, ATDD, and BDD, with organizations leaving legacy tools behind in favor of faster, more reliable, agile-ready testing tools that can fit the entire continuous testing effort, whether it is carried out by Dev, BA, Test, or Ops.

In 2018, we will see the above grow to a larger scale, with more manual and legacy-tool skills transforming into modern ones. The growth in continuous testing (CT), continuous integration (CI), and DevOps will also translate into a much shorter release cadence as a bridge toward real continuous delivery (CD).

Related to the above, to be ready for the DevOps and CT trend, engineers need to become more deeply familiar with tools like Espresso, XCUITest, EarlGrey, and Appium on the mobile front, and, on the web front, with open-source frameworks like Google's headless-browser project Puppeteer, Protractor, and other WebDriver-based frameworks.

In addition, teams should optimize their test automation suites to include more API and non-functional testing, as the UX aspect becomes more and more important.

Shifting as many tests as possible left and right is not a new trend, requirement, or buzzword; nothing has changed in my mind about the importance of this practice. The more you can automate and cover earlier, the easier it will be for the entire team to overcome issues, regressions, and unexpected events that occur in the project life cycle.

AI, ML, and Smarter Test Automation

While many organizations are seeking tools that can optimize their test automation suites and shorten overall execution time on the "right" platforms, the two terms AI and ML (or deep learning) are still unclear to many tool vendors, and are used from varying perspectives that do not always actually mean AI or ML 🙂

The end goal of such solutions is very clear, and the problem they aim to solve is real: long testing cycles on plenty of mobile devices, desktop browsers, IoT devices, and more generate a lot of data to analyze, and as a result slow down the DevOps engine. An efficient mechanism or tool that can crawl through the entire test code, understand which tests are the most valuable, and determine which platforms are the most critical to test on (due to customer usage, history of issues, etc.) can clearly address this pain.
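As an illustration of the "most valuable tests" idea, here is a deliberately simple, non-ML sketch in Python that ranks tests by historical failure rate weighted by business criticality and fills a fixed execution-time budget. All names and numbers are hypothetical; a real tool would learn these weights from CI history rather than hard-code them:

```python
def prioritize_tests(stats, budget_minutes):
    """Rank tests by 'value per minute' (historical failure rate weighted
    by business criticality) and greedily fill a fixed time budget."""
    ranked = sorted(
        stats,
        key=lambda t: (t["failure_rate"] * t["criticality"]) / t["minutes"],
        reverse=True,
    )
    picked, used = [], 0
    for test in ranked:
        if used + test["minutes"] <= budget_minutes:
            picked.append(test["name"])
            used += test["minutes"]
    return picked

# Hypothetical per-test statistics gathered from past CI runs.
stats = [
    {"name": "login_flow",   "failure_rate": 0.30, "criticality": 5, "minutes": 3},
    {"name": "checkout",     "failure_rate": 0.20, "criticality": 5, "minutes": 6},
    {"name": "profile_edit", "failure_rate": 0.05, "criticality": 2, "minutes": 4},
    {"name": "search",       "failure_rate": 0.10, "criticality": 4, "minutes": 2},
]

print(prioritize_tests(stats, budget_minutes=10))
```

The point is not the scoring formula itself but the workflow: feed execution history back into the pipeline so the CT bucket stays small, fast, and focused on risk.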

Another angle, or goal, of such tools is to continuously provide more reliable and faster test code generation. Coding takes time, requires skills, and varies across platforms. Having a "working" ML/AI tool that can scan through the app under test and generate a robust page object model and functional test code that runs on all platforms, and that "responds" to changes in the UI, can really speed up time to market (TTM) for many organizations and focus teams on the important SDLC activities, as opposed to forcing Dev and Test to spend precious time on test code maintenance.

IOT and The Digital Transformation

In 2017, Google, Apple, Amazon, and other technology giants announced a few innovations around digital engagement. To name a few: better digital payments, better digital TV, AR and VR development APIs, and new secure authentication through Face ID. IoT hasn't shown a huge leap forward this year; however, what I did notice was that for specific verticals like healthcare and retail, IoT started serving a key role in digital user engagement and digital strategy.

In 2018, I believe the market will see an even more advanced wave in the overall digital landscape, with Android and Apple TV, IoT devices, smartwatches, and other digital interfaces becoming more standard in the industry, requiring enterprises to rethink and rebuild their entire test labs to fit these new devices.

Such a trend will also force test engineers to adapt to the new platforms and re-architect their test frameworks to support more of these screens, whether in one script or several.

Some insights on testing IoT, specifically in the healthcare vertical, were recently presented by my colleague Amir Rozenberg; I recommend reviewing his slides.

Do not immediately change whatever you do today, but validate whether what you have right now is future-ready and can sustain what's coming in the near future, as mentioned above.

If DevOps is already in practice in your organization, fine; make sure you can scale DevOps, shorten release time, increase test and platform automation coverage, and optimize your overall pipeline through smarter techniques.

The AI and ML buzz is really happening; however, the market needs to properly define what it means to introduce these into the SDLC, and what success would look like for teams that consider leveraging them. From a landscape perspective, these tools are not yet mature or ready for prime time, which leaves more time to properly get ready for them.

Nothing new in the land of cross-browser testing: Selenium, as the underlying API layer, serves leading frameworks including WebDriverIO, Protractor (Angular-based testing), NightwatchJS, RobotJS, and many others.

For web application developers who require fast feedback after a code commit or bug fix, there are various testing options. Some will quickly test manually on a set of local or cloud-based VMs, some will develop unit tests (QUnit, etc.), but there are also very mature cross-browser testing solutions that add more layers of coverage and insight in an automated, easy way.

In a recent eBook that I developed, I cover 10 emerging cross-browser testing tools, with a set of considerations around how to choose the right one, or the right mix of them.

As can be seen in the 10 tools shown above, there is a mix of unit as well as E2E functional testing tools, mostly JavaScript-based.

Developers who would like to include, as part of their quick post-commit sanity checks, a validation of the time it takes the site to load can easily add a PhantomJS-based test to their CI post-build acceptance testing and get that visibility after each successful build; they can then match the result against a benchmark and make decisions.
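The benchmark-matching step itself can be sketched as a simple gate (shown in Python for brevity; the measured load time would come from the PhantomJS script, and the benchmark and tolerance values here are illustrative):

```python
def load_time_gate(measured_ms, benchmark_ms, tolerance=0.10):
    """Fail the build step if the measured page load time exceeds the
    benchmark by more than the allowed tolerance."""
    limit = benchmark_ms * (1 + tolerance)
    return measured_ms <= limit

# e.g. a 3 s benchmark with 10% tolerance: anything above 3300 ms fails.
print(load_time_gate(3100, 3000))    # passes
print(load_time_gate(10000, 3000))   # a 10 s load, like the NFL.com case below, fails
```

Wired into CI after the build, a `False` result would mark the acceptance stage as failed and surface the regression immediately.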

In a quick test that I ran on the NFL.com website, I was able not only to detect a slow load time of 10 seconds, but also to identify a long list of errors raised while the page loads.

Another powerful capability that tools like PhantomJS offer is the ability both to capture a specific rendering of a web page at a pre-defined viewport and to generate a page HAR file for network traffic analysis. (I am aware that it is not the newest tool, and that Google already provides a newer alternative, but it is still a valuable free, open-source tool that can add coverage capabilities to any web development team.)

So if, as an example, the load time and errors above turn on a red light for that site, then with two simple tests (which, by the way, PhantomJS provides in its starter kit on GitHub) the developer can address the two use cases above: HAR file generation and page rendering screenshots.

The result of the above snippet is the screenshot below:

The HAR file creation, based on the GitHub code sample, results in the following (I am using the HTTP Archive Viewer Chrome extension, but it can be done just as easily with other HAR viewers):

Bottom line

You can download my latest eBook to learn more, but in general: leverage both powerful unit testing tools and traditional E2E tests, since they complement each other and each adds unique value. And it's free!

In this blog, I want to share some up-to-date trends as I see them in the web landscape.

The web market has shifted a lot over the past years, alongside the mobile space. We see a clear use of specific development languages, development frameworks, and of course specific test frameworks aimed at testing Angular, jQuery, Bootstrap, .NET, and other websites.

From a development language perspective, web front-end developers mostly use the following languages as part of their job:

The clear trend in web development is that JavaScript is the leading language used by web developers. That's actually not a huge surprise: if you look at the top frameworks used by these developers, you will see that quite a few are based on JavaScript.

There is a recent trend of developers shifting to non-Angular web development frameworks like Aurelia, React, and Vue.js, which are seeing growing usage and adoption. With this trend in mind, and as you'll read in my references below, the newer solutions are still not as complete as AngularJS. The considerations driving the shift include the following (a larger list of pros and cons is in source 1 below):

Shorter learning curve

Simple to use, clean

Flexibility

Lightweight compared to others (e.g., less than half the size of AngularJS)

If we examine the visual below (Source: NPM Trends), we see clear market dominance by Selenium and Protractor; Protractor uses Selenium WebDriver under the hood and supports tools like Jasmine and Mocha.

The advantage of tools like Protractor is that they more easily support websites developed with various frameworks like AngularJS, Vue.js, etc. This allows test automation engineers to use them agnostically across multiple websites, regardless of the frameworks they are built with.

It is not quite as easy and rosy as I described above, but it does give a good head start when starting to build the test automation foundation.

There are a few other players in that space aimed at specific unit testing and headless browser testing needs (PhantomJS, CasperJS, jsdom, etc.).

As I blogged in the past, from a test automation strategy perspective, teams might find it beneficial and more complete to leverage a set of test frameworks rather than only one. If the aim is to have non-UI headless browser testing together with unit testing and UI-based testing, then a combination of tools like Protractor, CasperJS, and QUnit might be a valid approach.

I hope you find this post useful and that it helps you "swim" through the hectic tools landscape. As always, it is important to match the tool to the product requirements, the development methodology (BDD, Agile, Waterfall, etc.), the supported languages, and more.

Now that we are a few weeks away from Google I/O, and the already complex Android landscape is becoming even more complex, let's explore how Android teams can optimize and plan their test automation across the different platforms and devices.

In the past, I’ve written about the need to connect the 3 layers:

Application under test

Test code itself

Device/OS under test

I related back to my old patent, which I jointly submitted years ago in the days of J2ME, and also wrote a chapter about it in my newly published book (The Digital Quality Handbook).

Problem Definition

Android OS families support different capabilities, and the gap grows from one Android SDK to the next. As an example, Android devices older than 6.0 cannot support Android Doze for battery usage optimization, or App Shortcuts (see the example below from the Google Photos app). These differences introduce a challenge to Dev and Test teams that innovate and take advantage of these features, since the test code that exercises them needs to be targeted only at devices that can actually support them.

How can teams sustain a test automation suite that runs specifically on the right devices per supported features?

Proposed Approach

While I don’t have a bulletproof, magic pill to address all challenges that may occur as a result of the above problem, I can surely recommend an approach as described below.

It is important to note that being aware of the problem is a step toward resolving it 🙂

Assess Your App and DUT:

Map the different features that your app supports or requires the users to grant permissions for

Examine your device test lab and identify which devices do and do not support these specific features

To manage the above, teams can leverage the following:

A code sample on GitHub that runs against a target Android APK file and extracts from its manifest the required permissions, supported features, and Android SDK/API levels

Use an existing ADB command that extracts the supported features from a connected device (or devices):

adb shell pm list features

After running the above command, you will get an output that looks like the below …

Compare The Outputs

Once you know your DUTs' capabilities, as well as your app's features to be tested, you can run a simple comparison of the outputs and see what can and cannot be tested. From that point, the optimization is mostly manual: you set up your test execution and CI in the lab accordingly. While this is not as simple as it could be, it offers a sustainable approach, plus awareness for both Dev and Test teams, that is useful throughout development, debugging, and testing activities. In the visual below you can see a capabilities diff between a Samsung Note 5 running Android 7.0 (left column) and an older Samsung device running Android 5.x (right column). One immediate diff, out of a larger list that I have, is the fingerprint functionality that is supported on the Note 5 but not on the other Samsung device. Such insight should be used when planning feature testing across these two devices (this is just one example).
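The comparison itself is easy to script. Here is a minimal Python sketch that parses the output of `adb shell pm list features` from two devices and diffs the feature sets; the sample outputs below are shortened, illustrative fragments, not full dumps:

```python
def parse_features(pm_output):
    """Parse the raw output of `adb shell pm list features` into a set of
    feature names (each line looks like `feature:android.hardware.nfc`)."""
    return {
        line.split(":", 1)[1].strip()
        for line in pm_output.splitlines()
        if line.startswith("feature:")
    }

# Shortened, illustrative outputs from two devices.
note5_output = """feature:android.hardware.fingerprint
feature:android.hardware.nfc
feature:android.hardware.camera"""

older_device_output = """feature:android.hardware.nfc
feature:android.hardware.camera"""

note5 = parse_features(note5_output)
older = parse_features(older_device_output)

# Features you can test on the Note 5 but must skip on the older device.
print(sorted(note5 - older))  # ['android.hardware.fingerprint']
```

The resulting diff can drive the CI configuration directly, e.g. by tagging fingerprint tests so they are scheduled only on devices whose feature set contains `android.hardware.fingerprint`.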

Bottom Line

As Google continues to innovate and add more features, existing devices and test frameworks will find it hard to close the gaps; that's a challenge teams need to be aware of, plan for, and optimize around so their release vehicles and velocity remain solid.

To make our results more useful, we’ve begun experiments to make our index mobile-first. Although our search index will continue to be a single index of websites and apps, our algorithms will eventually primarily use the mobile version of a site’s content to rank pages from that site, to understand structured data, and to show snippets from those pages in our results (Source).

More recently, they made the Google Mobile-Friendly Test tool and guidelines available. A very nice interactive version is available here (with example images at the bottom of the post), and there is also an API (which, thanks to Google, users can exercise before they code). Google also offers code snippets in several languages.

Notes:

Google takes a URL and renders it. If you run multiple executions in parallel, there's no point in sending the same URL from every execution, because the result would be the same.

Google basically returns "MOBILE_FRIENDLY" or not. I suggest setting the assert on that.

The current API differs from the UI in that it only provides the mobile-friendliness result (the UI also gives mobile and web page speed). Hopefully, Google adds that to the response 😉

This will probably not work for internal pages as Google probably doesn’t have a site-to-site secure connection with your network.
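Putting the second note into practice, the assertion on the API verdict can be sketched like this (Python for brevity; the `mobileFriendliness` and `mobileFriendlyIssues` field names reflect the API response format as I understood it at the time of writing, so verify them against the current documentation):

```python
def assert_mobile_friendly(api_response):
    """Given the parsed JSON response of the Mobile-Friendly test API,
    raise if the verdict is anything other than MOBILE_FRIENDLY."""
    verdict = api_response.get("mobileFriendliness", "UNKNOWN")
    if verdict != "MOBILE_FRIENDLY":
        issues = [i.get("rule") for i in api_response.get("mobileFriendlyIssues", [])]
        raise AssertionError(f"Page is not mobile friendly ({verdict}): {issues}")
    return True

# Example parsed responses (illustrative, not captured from a live call).
assert_mobile_friendly({"mobileFriendliness": "MOBILE_FRIENDLY"})  # passes silently
```

In CI, the raised `AssertionError` is what fails the step and lists the offending rules, so the developer sees exactly why the page lost its mobile-friendly status.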

For developers and testers who do not have time, repeatedly testing for mobile friendliness will probably simply not happen. That's why I integrated the Google Mobile-Friendly API into Quantum:

Added 2 Gherkin commands

// If you navigate directly to this page
Then I check mobileFriendly URL "http://www.nfl.com"

// If you got to this page through clicks
Then I check mobileFriendly current URL

Added the Gherkin command support (GoogleMobileFriendlyStepsDefs.java)

And the script example is pretty simple:

@Web
Feature: NFL validate

  @SimpleValidation
  Scenario: Validate NFL
    Given I open browser to webpage "http://www.nfl.com"
    Then I check mobileFriendly current URL
    Then I check mobileFriendly URL "http://www.nfl.com"
    Then I wait "5" seconds to see the text "video"

Last night, at a local Boston meetup hosted by BlazeMeter, I presented a session together with my colleague Amir Rozenberg.

The subject was the shift from legacy to open-source frameworks, the motivations behind it, and the challenges of adopting open source without a clear strategy, especially in the digital space, which includes 3 layers:

Open-source connectivity to a lab

Open-source test coverage capabilities (e.g., can an open-source framework support system-level testing, visual analysis, real environment settings, and more?)

Open-source reporting and analysis capabilities

During the session, Amir also presented an open-source BDD/Cucumber based test framework called Quantum (http://projectquantom.io)

In my previous blogs, and over the years, I have already stated how complicated, demanding, and challenging the mobile space is; therefore, it seems obvious that there needs to be a structured method for building test automation and meeting test coverage goals for mobile apps.

While there are various tools and techniques, in this blog I would like to focus on a methodology that has been around for a while but was never adopted in a serious, scalable way by organizations, because it is extremely hard to accomplish and there are not sufficient non-proprietary, open-source tools that support it.

Model-based testing (MBT) is an application of model-based design for designing, and optionally executing, artifacts that perform software testing or system testing. Models can be used to represent the desired behavior of a System Under Test (SUT), or to represent testing strategies and a test environment.

Fig 1: MBT workflow example. Source: Wikipedia

In the context of mobile, if we think about an end-user workflow within an application, it will usually start with logging in to the app, performing an action, going back, performing a secondary action, and often even a third action based on the output of the second. Complex, huh?
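That user workflow can be expressed as a small state graph, and an MBT tool essentially enumerates paths through it, each path being one candidate test case. A toy sketch in Python (the screen names are hypothetical):

```python
def generate_test_paths(model, start, end, path=None):
    """Enumerate every simple path through the app model; each resulting
    path is one candidate end-to-end test case."""
    path = (path or []) + [start]
    if start == end:
        return [path]
    paths = []
    for nxt in model.get(start, []):
        if nxt not in path:  # avoid revisiting screens (e.g. 'back' loops)
            paths += generate_test_paths(model, nxt, end, path)
    return paths

# A toy model of the login -> action -> secondary action flow above.
model = {
    "login":    ["home"],
    "home":     ["action_a", "action_b"],
    "action_a": ["action_b", "done"],
    "action_b": ["done"],
}

for p in generate_test_paths(model, "login", "done"):
    print(" -> ".join(p))
```

Real MBT tools add weights, guards, and data to the edges, but even this toy version shows why model output must be curated: path counts explode quickly, which is exactly the duplication risk discussed below.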

Modeling a mobile application serves a few business goals:

App release velocity

App testing coverage (use cases)

App test automation coverage (%)

Overall app quality

Cross-team synchronization (Dev, Test, Business)

As already covered in an older blog I wrote, mobile apps behave differently and support different functionality depending on the platform they run on (e.g., not every iOS device supports both Touch ID and the 3D Touch gesture). Therefore, being able not only to model the app and generate the right test cases, but also to match these tests across the different platforms, can be key to achieving many of the five goals above.

Looking at an example provided by one of the commercial vendors in the market (Tricentis), we can see a common workflow around MBT. A team aims to test an application; therefore, they scan it using the MBT tool to "learn" its use cases, capabilities, and other artifacts, and stack these into a common repository that can serve the team in building test automation.

Fig 2: Tricentis Tosca MBTA tool

In Fig 2, Tricentis examines a web page to learn all of its options, objects, and other data-related items. Once the app is scanned, it can easily be converted into a flow diagram that serves as the basis for test automation scenarios.

With the above goals in mind, it is important to understand that having an MBT tool that serves the automation team is a good thing, but only if it increases the team's efficiency, its release velocity, and the overall test automation coverage. If the output of such a tool is a long list of test cases that either does not cover the most important user flows or includes many duplicates, then it does not serve the purpose of MBT; rather, it will delay existing processes and add unnecessary work to teams that are already under pressure.

In addition to the above commercial tools, there is an older but free tool that enables Android MBT with Robotium, called MobiGUITAR. This tool not only offers MBT capabilities but also code coverage driven by the generated test scripts.

A best practice in that regard would probably be to use an MBT tool that can generate all the application-related artifacts, including the app object repository and the full set of use cases, and allow all of that to be exported to leading open-source test automation frameworks like Selenium, Appium, and others.

Mobile Specific MBT – Best Practices and Examples

Drilling down into the workflow that CA would recommend around MBT, it would look as follows (in reality, the steps below are easier said than done for a mobile app compared to web and desktop):

The business analysts create the story using tools like CA Agile Requirement Designer or similar (see more examples below)

The story is then passed to an ALM tool (e.g.: CA Agile Central [formerly Rally], Jira, etc.) for project tracking

Teams use the MBT tools to collaborate

The automation engineer adds the automation code snippets to the nodes where needed or adds additional nodes for automation.

The programmer updates the model for technical specs or more technical details.

The way Espresso is easily used from within the Android Studio and IntelliJ IDEA IDEs makes it a powerful tool that differentiates it from open-source cross-platform solutions such as Appium, as well as from other commercial tools.

Before drilling into basic setup and execution of an Espresso simple test, let’s first understand some of the basics:

Espresso is an Android only test automation framework (not cross platform like Appium/Selenium)

Espresso requires a separate APK package running in parallel with the application under test

The Espresso framework is embedded into the entire dev workflow and IDE, and that makes the adoption and leverage higher

Espresso can be used to do a quick post-commit validation of a fix or new code implementation, and also as part of a larger test scale within the CI workflow.

Espresso provides fast feedback to its users which is a big advantage since it is running on the device/emulator side-by-side with the app

Espresso supports annotations to determine the test execution scope (small/medium/large) which organizes the overall testing cycle for both dev and test

Espresso has a unique synchronization mechanism at its core, making tests less flaky and more robust. It proceeds to the next test step in the code only once the target view is available on the device screen, as opposed to other tools that can easily fail without explicit timers, validation points, and more.

Basic Espresso Framework Methods:

The Espresso framework allows the automation developer to build tests using 3 concepts:

View Matchers

View Actions

View Assertions

Combining these concepts: onView() locates a specific object on the app screen, an action is performed on it, and an assertion is made to validate the end result.

Espresso Setup

The setup within Android studio is quite simple, and there is plenty of documentation in the google community around it.

The developer edits the build.gradle file of the application under test to include the Espresso framework dependency, the JUnit version, and the InstrumentationRunner (see the example below).

Once the above is done, it is time to create the test class for the corresponding app.

This class will need to import a few libraries that are required by the Espresso test (example below).

Test Code Implementation

In order to develop an Espresso UI automation, the developer must have the unique object identifiers for the application under test.

To study the app objects (via Hamcrest matchers), the developer can use various methods:

The uiautomatorviewer tool that is included in the Android SDK

All resource IDs, which should automatically be stored in the dynamically generated R.java file

At the end of a test, a basic test report will be provided to the user.

Running Espresso Tests in Parallel – Using Spoon

No test engineer or developer can rest until they have validated the functionality of their app on multiple devices and emulators. For that, there is another widely used tool called Spoon (there are also cloud-based solutions, as mentioned above, that support parallel execution on real devices). Spoon collects the test results from all target devices (those visible via adb devices) and aggregates them into one HTML view that can be easily investigated.

In order to leverage Spoon, download the Spoon Gradle plugin and install it. Post-installation, configure it as follows:

By default, Spoon runs your tests on all ADB-connected devices; however, if you want to run on specific devices and skip others, for example to reproduce a defect on one device, you can configure Spoon accordingly.

It’s no doubt that quality is becoming a joint feature team responsibility, and with that in mind – it is not enough for traditional QA engineers to develop and execute test automation post a successful build, but actually the growing expectations now are from the Dev team to also take part and include as many tests as they can in their build cycles per each code commit.

Tests can be either unit, functional, UI or even small scale performance tests.

With that in mind, Dev teams need a convenient environment that allows them to perform these quality-related activities so they can deliver better code faster!

Developers today are specifically challenged with the following:

Solving issues that come from production or from their QA teams that require a specific device or/and environment that’s usually not available for the dev team

Validation of newly developed apps or features within apps across different environments and devices as part of their dev process

Lack of shared assets for the entire dev team

Ability to get a “long USB cable” that enables full remote device capabilities & debugging

Perfecto just made available, as part of its continuous quality lab in the cloud, a set of new tools and capabilities that address these requirements and enable Dev teams to accomplish their goals.

Perfecto's DevTunnel solution for Android, part of the recent 9.4 release, is the first significant step toward helping developers run more tests as part of the build cycle.

With the above challenges and requirements in mind, Perfecto has developed a unique solution called the “DevTunnel” which enables developers to get enhanced remote access to mobile devices in the cloud and perform any operation that they could have done with these devices if they were locally connected – things like debugging, running unit tests, testing UI at scale from within the IDE and more.

In addition, when it comes to Android dev activities, it's clear that Android Studio and IntelliJ IDEA are the leading IDEs to work in. For that, Perfecto invested in developing a robust plugin that integrates nicely into the development workflow.

Espresso Framework

There is no doubt that the Espresso test automation framework is being adopted more and more by developers, for various reasons:

It is embedded into Android Studio and plays an important role for Android developers.

It is very fast and easy to execute tests and receive feedback on Android devices.

Espresso can be used within the Perfecto lab today in the following 2 modes

In the community series dedicated to DevTunnel, you can learn more about the capabilities and use cases, and get samples to get you started with the new capability.

To see this also in action, please refer to the video playlist that demonstrates how to get started and install DevTunnel, use Perfecto Lab within Android Studio with Espresso for testing and debugging purposes and more.
