Category: Mobile Testing

The Samsung Galaxy Note 8 is one of the best phones in the smartphone manufacturer’s lineup. The Note 8 is revered for its large, gorgeous display, fast performance, and improved S Pen, among other features. Despite its release in the latter half of 2017, the Samsung Galaxy Note 8 has already received gadget-of-the-year awards and other accolades. Here’s what the awarding bodies have to say about the Samsung Galaxy Note 8.

Exhibit Tech Awards

The Note 8 received the Flagship Smartphone of the Year award at the Exhibit Tech Awards in India, recognized for its screen size, battery life, and great camera. The award is the most recent one for the Note 8, having come in late December 2017.

Considering where Samsung stood after the Note 7, the Note 8 is nothing short of an amazing comeback. After the previous iteration’s battery issues, some reports suggested that the company might consider ending the Galaxy Note line with the 7. Coming out with the next-generation 8 proved to be the right move.

India Mobile Congress

Just over one month after the Samsung Galaxy Note 8 came out in August, the India Mobile Congress awarded the device its Gadget of the Year honor. The inaugural event’s awards honor innovations and initiatives designed to better the ICT ecosystem. The ICT ecosystem refers to everything that goes into creating a technological environment for a business or government, including policies, processes, applications, and of course the technologies themselves.

Samsung introduced the Galaxy Note 8 in the Indian marketplace in an effort to give consumers the opportunity to do more with their smartphones. Upon receiving the award, Asim Warsi, Senior VP of Mobile Business at Samsung India, confirmed the effort to improve lives in India. Introducing the phone into the premium sector of the market has only strengthened the company’s position as a leader in that category. It leads the way not only with a great screen and camera system but also with Samsung Pay and Samsung Knox, the phone’s security platform.

MySmartPrice Mobile Of The Year Awards

The MySmartPrice awards also honored Samsung’s Galaxy Note 8 in December 2017. The device received MySmartPrice’s Flagship Smartphone of the Year 2017 award as the most feature-packed phone in its category. Its 6.3-inch AMOLED screen and camera features captured voters’ attention. It was slightly edged out by the iPhone X and Google Pixel 2 in areas such as the screen and camera, but the phone manages to hold its own amid stiff competition in the marketplace.

According to reviews, the Galaxy Note 8’s Snapdragon processor makes it a very good option for day-to-day use, and its camera performs relatively well in low-light conditions. The reduction in price also makes it stand out amongst the competition. In the higher-end phone market, Samsung prices itself quite well for the features it offers.

At a time when Samsung could have ended the Galaxy Note line forever after the Note 7’s battery issue, the company persevered with the Note 8 and the risk paid off. In a highly competitive marketplace, Samsung still produces a Note that is productive, powerful, and innovative, with features and a price that a wide variety of customers can enjoy. With all of the accolades that the Samsung Galaxy Note 8 has received, the outlook is bright for the iterations to come.

Alissa B. Davis is a freelance writer who has a passion for writing about technology and stories about her community. Read her latest stories at http://alissabdavis.com/

The majority of organizations are already deep into Agile practices, with a goal of becoming DevOps and continuous delivery (CD) compliant.

While some may say that a high percentage of test automation would bring these organizations to DevOps, it takes more than just test automation.

To mature DevOps practices, a continuous testing approach needs to be in place, and it involves more than automating functional and non-functional testing. Test automation is obviously a key enabler to being agile, releasing software faster, and addressing market events; however, continuous testing (CT) requires some additional considerations.

“CT is the process of executing automated tests as part of the software delivery pipeline in order to obtain feedback on the business risks associated with a software release candidate as rapidly as possible. It evolves and extends test automation to address the increased complexity and pace of modern application development and delivery”

The above suggests that a CT process includes a high degree of test automation, a risk-based approach, and a fast feedback loop back to the developer upon each product iteration.

How to Implement CT?

A risk-based approach means sufficient coverage of the right platforms (browsers and mobile devices); such platform coverage reduces business risk and assures a high-quality user experience. Such coverage also carries continuous maintenance requirements as the market changes.

Continuous testing needs automated end-to-end testing that integrates with existing development processes while excluding errors and enabling continuity throughout the SDLC. That principle can be broken down as follows:

Implement the “right” tests and shift them into the build process, to be executed upon each code commit. Only reliable, stable, and high-value tests would qualify to enter this CT test bucket.
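As a minimal sketch of that idea (class, test names, and tag values are illustrative; JUnit 5 shown), tag only the qualifying tests and have the build filter on the tag:

import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

class CheckoutSmokeTest {

    // Qualifies for the per-commit CT bucket: fast, stable, high value.
    @Tag("ct")
    @Test
    void userCanCompleteCheckout() {
        // ... short, reliable end-to-end assertion ...
    }

    // Slower or less stable: kept out of the per-commit run.
    @Tag("nightly")
    @Test
    void checkoutSurvivesSlowNetwork() {
        // ...
    }
}

A CI job can then execute only the "ct" group on every commit (for example via a tag/group filter in the build tool), while everything else runs on a slower cadence.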

Assure the CT test bucket runs within a single CI channel; in CT, there is no room for multiple CI channels.

Leverage reporting and analytics dashboards to reach “smart” testing decisions and actionable feedback that supports a continuous testing workflow. As the product matures, tests need maintenance, and some may be retired and replaced with newer ones.

A stable lab and test environment is key to ongoing CT processes. The lab should be at the heart of your CT and should support the platform coverage requirements above, as well as the CT test suite with the test frameworks that were used to develop those tests.

Continuous testing is seamlessly integrated into the software delivery pipeline and DevOps toolchain. As mentioned above, regardless of the test frameworks, IDEs, and environments (front-end, back-end, etc.) used within the DevOps pipeline, CT should pick up all relevant testing (unit, functional, regression, performance, monitoring, accessibility, and more) and execute it in parallel and in an unattended fashion, providing a “single voice” for a Go/No-Go release – one that happens every 2-3 weeks.

Lastly, for a CT practice to work time after time, the above principles need to be continuously optimized, maintained, and adjusted as things change, whether within the product roadmap or in the market.

48 hours ago, Apple revealed its new and futuristic iPhone X. Beyond its design and debatable price tag, this device introduces a whole new set of functionality, display characteristics, and ways of engaging with the end user.

iOS11 is turning out to be quite different from previous releases, both in user adoption, which is still low (~30%), and from a quality perspective – four patch releases in a month and a half is a lot.

Most of the changes are already proving to cause issues on iPhone X for existing apps that work fine on iOS11.x running on earlier iPhones like the iPhone 8, 7, and others.

In this post, I’ll highlight some pitfalls, and the steps that testers as well as developers ought to take immediately, if they haven’t done so already, to make sure their apps are compliant with the latest Apple mobile portfolio.

The post is divided into two areas: mobile testing recommendations and app development recommendations.

Mobile Testing for iPhone X/iOS 11

Test across all supported platforms, as a general statement. iOS11 isn’t available for every device, and devices stuck on iOS10 behave differently than iOS11. Test your apps across iOS9.3.5, iOS 10.3.3, and the latest iOS11.x.
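To make that three-family coverage concrete, here is a minimal sketch of driving the same Appium test against each iOS family. The device names, versions, and app path are illustrative, and an Appium server is assumed at the default local endpoint:

import io.appium.java_client.MobileElement;
import io.appium.java_client.ios.IOSDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import java.net.URL;

public class IosFamilyCoverage {
    // One entry per OS family that must stay covered; values are illustrative.
    private static final String[][] TARGETS = {
            {"iPhone 4S", "9.3.5"},
            {"iPhone 5C", "10.3.3"},
            {"iPhone X", "11.1"},
    };

    public static void main(String[] args) throws Exception {
        for (String[] t : TARGETS) {
            DesiredCapabilities caps = new DesiredCapabilities();
            caps.setCapability("platformName", "iOS");
            caps.setCapability("deviceName", t[0]);
            caps.setCapability("platformVersion", t[1]);
            caps.setCapability("app", "/path/to/YourApp.app"); // placeholder
            IOSDriver<MobileElement> driver =
                    new IOSDriver<>(new URL("http://127.0.0.1:4723/wd/hub"), caps);
            try {
                // ... run the same smoke flow against each OS family ...
            } finally {
                driver.quit();
            }
        }
    }
}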

iPhone X comes from the factory with iOS11.0.1, requesting an update to iOS11.1 – that means this device will never get the intermediate iOS11.0.2/iOS11.0.3 releases. If your customers haven’t yet updated to iOS11.1, you may want to keep one device like an iPhone 8/7 on iOS11.0.3 so you have coverage for iOS11 latest-1.

Display and screen size changed for iPhone X specifically: this device has a 5.8” screen that is different from all other iPhones. Testing UI elements, responsive app layouts, and other graphics on this new device is obviously a must (below is an example taken from the CVS native app, showing UI issues I found while playing with the device). This device is also edge-to-edge, similar to the Samsung S8/Note 8 devices. A lot of tables, text fields, and other UI elements need to be made iOS11/iPhone X ready by the developers.

New gestures and engagement flows impact usability as well as test automation scripts. On iPhone X, unlike previous iPhones, the user has no Home button to work with. That means that to launch the task manager (see below) and switch to or kill a background app, the user needs to follow a different flow. At first, app testing teams need to make sure this new flow is covered in testing; more importantly, if these flows are part of a test automation scenario, the code needs to be adapted to match the new flow.

In addition to the removal of the Home button, which changes the way users engage with background apps, the way to return to the Home screen has also changed. Getting back to the Home screen is a common step in almost every automated test, so these steps need to account for the change and replace the button press with a swipe-up gesture.
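Here is a minimal sketch of such an adaptation in Appium/Java, assuming an XCUITest-based session. The helper name, the device flag, and the gesture coordinates are illustrative (the exact TouchAction API varies slightly by Appium client version):

import io.appium.java_client.TouchAction;
import io.appium.java_client.ios.IOSDriver;
import io.appium.java_client.touch.WaitOptions;
import io.appium.java_client.touch.offset.PointOption;
import org.openqa.selenium.Dimension;
import java.time.Duration;
import java.util.Collections;

public class HomeNavigation {

    // Returns to the Home screen on both button and button-less devices.
    public static void goHome(IOSDriver<?> driver, boolean hasHomeButton) {
        if (hasHomeButton) {
            // Pre-X iPhones: press the physical Home button via the XCUITest helper.
            driver.executeScript("mobile: pressButton",
                    Collections.singletonMap("name", "home"));
            return;
        }
        // iPhone X: swipe up from the bottom edge instead.
        Dimension size = driver.manage().window().getSize();
        new TouchAction(driver)
                .press(PointOption.point(size.width / 2, size.height - 1))
                .waitAction(WaitOptions.waitOptions(Duration.ofMillis(200)))
                .moveTo(PointOption.point(size.width / 2, size.height / 2))
                .release()
                .perform();
    }
}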

Authentication and payment scenarios have also changed with the elimination of Touch ID, which was replaced by Face ID. While iPhone X introduces an innovative form of digital engagement with face recognition technology, the de facto standard today for logging in to apps, making payments, and more is still fingerprint authentication. Testing both methods is now a quality and development requirement. From a scan I ran through the leading apps in the market (see examples below), there is clear unpreparedness for iPhone X: most apps will either show the option to log in via Touch ID on their UI, or, if they support Face ID, allow users to use it while still showing the unsupported option on the UI and in the app settings.
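On iOS simulators, the Appium XCUITest driver exposes helpers to enroll and match biometrics, which makes covering both flows scriptable. The sketch below is illustrative only (it does not apply to real devices, and the navigation step is elided):

import io.appium.java_client.ios.IOSDriver;
import java.util.HashMap;
import java.util.Map;

public class BiometricFlows {

    // Exercises the biometric login path on an iOS *simulator*.
    // type: "touchId" for older simulators, "faceId" for the iPhone X simulator.
    public static void biometricLogin(IOSDriver<?> driver, String type, boolean shouldMatch) {
        Map<String, Object> enroll = new HashMap<>();
        enroll.put("isEnabled", true);
        driver.executeScript("mobile: enrollBiometric", enroll);

        // ... navigate to the login screen and trigger biometric authentication ...

        Map<String, Object> match = new HashMap<>();
        match.put("type", type);
        match.put("match", shouldMatch); // pass false to cover the failure path too
        driver.executeScript("mobile: sendBiometricMatch", match);
    }
}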

Testing mobile web and responsive web apps in both landscape and portrait modes on the unique iPhone X display is also a clear and immediate requirement. I found issues here as well, mostly around text truncation and incorrect use of the full screen when displaying web content.

In addition, trying to work with the Hulu.com website proved to be a challenge. Most menu content is pushed to the bottom of the screen and out of the user’s control, making it simply inaccessible. Obviously, the site is not ready for iPhone X/Safari.

Mobile Apps Development

Optimize existing iOS apps from both a UI and an authentication perspective. As spotted above, there are clear compatibility issues around the removal of the Touch ID option, which needs to be handled on the UI side of apps when launched on iPhone X. In addition, scaling UI elements on the new screen, whether for responsive web apps or native mobile apps, needs refactoring as well. Apple offers app developers UI guidelines to help make the changes quickly.

Leverage advanced capabilities in iOS11 that best suit the new chipset (A11 Bionic) and the camera sensors to introduce digital engagement capabilities around augmented reality (ARKit APIs) and more. Retail apps and games are surely the first and most suitable segments to jump on these innovative capabilities and enrich their end users’ experiences.

Bottom Line

The new iPhone X, together with the Samsung Galaxy Note 8, might be paving the way for a new era of innovation that offers app developers new opportunities to better engage users and increase business value. If quality is not aligned with these innovative opportunities, as shown above, the transformation will be quite challenging, slow, and frustrating for end users.

It is highly recommended that iOS app vendors across verticals get hands-on experience with the new platform, assess the gaps in quality and functionality, and make the required changes so they are not “left behind” when the innovation train moves on.

Six months ago I launched my first book, “The Digital Quality Handbook”. The book aims to address the key challenges in assuring high mobile (as well as web) quality, by avoiding pitfalls that are commonly practiced in the industry.

I have also recently joined the ISTQB working group to influence the material in the mobile certification course, where I plan to include insights from the book as well.

In this book, I host top leaders from the industry, touching on the most important aspects of assuring quality in DevOps.

The above image is taken from Amazon, recommending my book alongside the leading DevOps practitioner books; this is another strong validation of the book’s relevancy and value.

A few highlights from the book are below:

Shifting quality left and right to automatically cover as many tests as possible throughout the release pipeline is key to moving faster and identifying issues earlier in the process (Angie Jones from Twitter, Manish Mathuria from InfoStretch, and others provide practitioner-level insights and tips)

Testing on the right platforms and OSs is key to assuring high quality across different devices (new, legacy, popular) in various locations and environments

Robust automation is achieved through best practices such as building a page object model (POM) and using unique object locators rather than flaky XPaths; see the sketch below. I also refer to a free online tool that can help score your object locators as part of your test automation development: http://xpathvalidator.projectquantum.io/
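As a flavor of that practice, a minimal page object might look like the following; the page, fields, and IDs are illustrative:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class LoginPage {
    private final WebDriver driver;
    // Unique, stable locators kept in one place, instead of long positional XPaths.
    private final By username = By.id("username");
    private final By password = By.id("password");
    private final By submit   = By.id("login-button");

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    public void loginAs(String user, String pass) {
        driver.findElement(username).sendKeys(user);
        driver.findElement(password).sendKeys(pass);
        driver.findElement(submit).click();
    }
}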

Testing not only via the UI is another key to success: complementing UI testing with API-level testing can reduce testing time, provide faster feedback, and deliver other value. This chapter was actually developed by my twin brother Lior Kinbruner 🙂 – worth checking out!

Performance and UX testing is another challenge and key to success. A full section of the book is dedicated to wind tunnel testing and user experience testing (JeanAnn Harrison contributes a lot here, together with Amir Rozenberg).

The book was the #1 new best-selling book in its category on Amazon, and it is still going strong after more than six months. As of today it is #43 among software testing books overall, which is a great validation and honor for me and the contributors.

If you still haven’t got a copy of the book, I really encourage you to do so – I am already planning my next journey, so stay tuned 🙂

As the author of the Factors coverage magazine, I am often asked about the operating system (OS) testing strategy.

Some teams would state that testing on the latest, latest-1, and latest-2 versions for Android is sufficient, and that for iOS, testing on the latest and latest-1 would satisfy their coverage risks. I actually say that it is not about covering specific minor OS versions, but about sufficiently covering the representative OS families. In this post, I will explain my recommended strategy.

Mobile OS Versions Market Share

As of late September 2017, the Android platform is led by its top 5 OS families (4.x, 5.x, 6.x, 7.x, and the latest 8.x).

In fact, Android 4.x includes two families – Jelly Bean and KitKat – with the latter being more widely used and popular.

OEMs are very late in updating their smartphones to the latest OS version, and when they do, they only update the most recent smartphone models, not the legacy ones that in many cases cover a large user base.

For Android, as we can see above, any app developer that supports the global market cannot disregard a market share of 15.1% like KitKat holds – KitKat is far from being a latest-2 or even latest-4 OS platform. Testing an Android app therefore needs to go quite far back into the history of Android OS versions to provide sufficient, low-risk coverage.

iOS isn’t that simple either. This platform, especially after the recent iOS11 release, is now split into three major OS families: iOS9.3.5, iOS 10.3.3, and iOS11.0. Each iOS family supports quite important devices that were left behind and did not receive the most up-to-date iOS update. As an example, the iPhone 4S, iPad 2, and first-generation iPad Mini are stuck on iOS9.3.5, and the iPhone 5C, iPhone 5, and iPad 4th generation are stuck on iOS 10.3.3.

While the above market share still doesn’t reflect iOS11’s new share, there are still 9% of iOS9.x users out there, and there will be at least a similar number for iOS10.x once most users shift to iOS11. As with Android, here also we see three major OS families that need to be included in the testing strategy.

Beta Testing

Many organizations are naive about market innovation and are not taking advantage of the public beta releases and developer previews coming from both Apple and Google. Android Oreo (8.0) and iOS11 were available for testing prior to their general-availability releases; however, many teams didn’t leverage this and are finding defects late in the game, in many cases hearing about them from their customers.

Above is an iOS11 GA bug that was reported, while below is an Android Oreo version-specific defect that impacted many end users.

Recommendations

Each customer has its unique user base, target geographies, and, in many cases, access to user analytics.

Customers should follow these guidelines:

Monitor your user base and build a test lab that addresses your top users’ device/OS permutations (use your own analytics)

Learn and monitor market trends like the above, so that in addition to your own analytics you cover the major OS families (anything with a 5-10% market share and above should be strongly considered)

Testing on public OS betas is a must. Google Pixel devices, iOS simulators, and desktop web beta versions are always available for testing from day one

With the massive innovation that drives the digital market these days, organizations continue to develop new features, as well as new test code to cover those features.

What I’ve learned is that test code developers do not always stop, look back at their existing test suites, and validate whether the new tests being developed are somehow a superset of existing ones. In addition, legacy tests, if not maintained over time, place a continuous load and overhead on your SDLC cycle length.

Many Owners To The Same Problem

Since we live in an agile/DevQAOps world, test code development is not a QA-only problem, but rather everyone’s. Tests are executed throughout the pipeline, from dev to integration to pre/post-production testing.

Use of a smart tagging mechanism for your test scenarios (e.g., login), suites (e.g., App A), and types (e.g., unit, regression) can be a good step toward gaining control over your tests; see the sketch under Recommended Practices below.

Without some context, discipline, and continuous, structured validation of the tests, it will become harder as your SDLC progresses to debug, analyze, and solve defects (it would be like finding the key in the visual mess below).

Recommended Practices

Develop tests with context, tags, and proper annotations that will make sense to you and your team even 12 months after development day. Make sure your execution reports give you a way to filter on these annotations, so you can get a view of a given functional area, platform, etc.
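As a minimal sketch of the tagging idea (TestNG shown; group names are illustrative), each test carries its scenario, suite, and type so reports can be filtered on any of them later:

import org.testng.annotations.Test;

public class LoginTests {

    // Scenario, suite, and type travel with the test itself.
    @Test(groups = {"scenario-login", "suite-appA", "type-regression"})
    public void validCredentialsLandOnDashboard() {
        // ...
    }

    @Test(groups = {"scenario-login", "suite-appA", "type-unit"})
    public void passwordFieldMasksInput() {
        // ...
    }
}

Execution can then be narrowed to any group (for example, only the login scenarios of App A) via the runner’s group filter, and reports can slice on the same names.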

Perform a test code review at an agreed-upon interval. In such a review, group your feature-specific test suites and try to optimize, merge, eliminate flakiness, identify missing coverage areas, etc. This gets harder as time progresses, so depending on your release cadence and test development maturity, set the right goals: more frequent reviews are better than fewer, and each review will be shorter and more efficient that way, since the delta between reviews will be smaller.

Drive joint dev, test, product, and marketing decisions based on data. When you have the ability to get quality analysis from your entire test suite, it is recommended to gather all counterparts and brainstorm on the findings: Which tests are most effective? Based on the data, can we shrink the release cycles? Are we missing tests for specific areas? Are there platforms that are buggier than others? Which tests take longer than others to finish?

Optimize your CI and build-acceptance testing. Based on the above intelligence, teams can also reach data-driven decisions about what to include in their CI. Testing in the build cycle via CI should be fast and reliable, with zero false positives. With quality insights on your tests, you can identify and certify the most valuable and fastest tests to include in this CI testing, and thereby shrink the overall process without risking coverage.

Bottom Line

A test is code, and just as you refactor, maintain, retire, and improve your code, you should do the same for your tests. Make sure you are always in control of your tests, and through that, gain continuous control over the quality of your app.

Now that we are a few weeks away from Google I/O, and we understand that the complex Android landscape is becoming even more complex, let’s explore a way Android teams can optimize and plan their test automation across the different platforms and devices.

In the past, I’ve written about the need to connect the 3 layers:

Application under test

Test code itself

Device/OS under test

I referred back to an old patent that I jointly submitted years ago, in the days of J2ME, and also wrote a chapter about the topic in my newly published book (The Digital Quality Handbook).

Problem Definition

Android OS families support different capabilities, and the gap grows from one Android SDK to the next. As an example, Android devices older than 6.0 cannot support Android Doze for battery usage optimization, and devices older than 7.1 cannot support App Shortcuts (see the example below from the Google Photos app). These differences introduce a challenge for dev and test teams that innovate and take advantage of these features, since the test code that runs against these features needs to be directed only toward devices that can actually support them.

How can teams sustain a test automation suite that runs specifically on the right devices per supported features?

Proposed Approach

While I don’t have a bulletproof magic pill to address every challenge that may result from the above problem, I can certainly recommend the approach described below.

It is important to note that being aware of the problem is a step toward resolving it 🙂

Assess Your App and DUT:

Map the different features that your app supports or requires users to grant permissions for

Examine your device test lab and identify which devices do and do not support these specific features

To manage the above, teams can leverage the following:

A code sample in Git that runs on a target Android APK file and extracts the required permissions, supported features, and Android SDK/API levels from its AndroidManifest.xml

Use an existing ADB command that extracts the supported features from a connected device:

adb shell pm list features

After running the above command, you will get an output that looks like the below …

Compare The Outputs

Once you know your DUT’s capabilities, as well as your app’s features to be tested, you can run a simple output comparison and see what can and can’t be tested. From that point, the optimization is mostly manual: you set up your test execution and CI in the lab accordingly. While it isn’t the simplest approach, it still offers a sustainable method, plus awareness for both dev and test teams, that can be useful throughout development, debugging, and testing activities. In the visual below you can see a capabilities diff between a Samsung Note 5 running Android 7.0 (left column) and an older Samsung device running Android 5.x (right column). An immediate diff out of the larger list I have is the fingerprint functionality, which is supported on the Note 5 but not on the other Samsung device. Such insight should be used when planning feature testing across these two devices (this is just one example).
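A rough sketch of automating that comparison is below. It assumes adb and aapt are on the PATH, filters only android.hardware.* entries for brevity, and uses a placeholder APK path:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.HashSet;
import java.util.Set;

public class FeatureDiff {

    // Runs a command and collects the android.hardware.* feature names it prints.
    static Set<String> features(String... cmd) throws Exception {
        Set<String> out = new HashSet<>();
        Process p = new ProcessBuilder(cmd).start();
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                // adb prints lines like  feature:android.hardware.fingerprint
                // aapt prints lines like uses-feature: name='android.hardware.fingerprint'
                if (line.contains("android.hardware.")) {
                    out.add(line.replaceAll(".*?(android\\.hardware\\.[\\w.]+).*", "$1"));
                }
            }
        }
        return out;
    }

    public static void main(String[] args) throws Exception {
        Set<String> device = features("adb", "shell", "pm", "list", "features");
        Set<String> app = features("aapt", "dump", "badging", "/path/to/app.apk");
        app.removeAll(device); // what the app needs but this DUT lacks
        System.out.println("Not testable on this device: " + app);
    }
}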

Bottom Line

As Google continues to innovate and add more features, existing devices and test frameworks will find it hard to close the gaps, and that’s a challenge teams need to be aware of, plan for, and optimize around so their release vehicles and velocity remain solid.

Yea, I know that my blog’s title is mobiletestingblog, but this post’s title is not a mistake 🙂

When it comes to web apps, there is no longer any distinction around which platform is used to consume content today, whether it’s a smartphone, a tablet, or a desktop browser.

If your company is developing a web app or responsive website, it ought to be tested thoroughly against all of the above platforms. The majority of web traffic today, by the way, comes from mobile devices.

In general, it is good to know that in desktop browser market share there are less familiar players, such as the UC Browser by Alibaba and the Samsung Internet browser, that hold a nice chunk of the market globally – so leaving them out of your test coverage matrix might not be a good strategy.

In general, the formula below is what I would recommend for web testing these days; however, if your web traffic analysis and supported geographies require you to target China, Europe, or other regions, then the metrics above should be added to the mix, either in addition to the recommendation below or as an adjustment to it.

With that in mind, I want to highlight in this post some recent web-specific tools that are out there, free, and can be extremely useful for both developers and testers.

In Google Chrome 59 (the beta is already available today!), Google is introducing a new built-in code coverage tool that allows both developers and testers to record on-screen activities and report back, in a nice dashboard, how much of the site’s content (JavaScript and more) was actually executed, with the aim of optimizing website quality, performance, and much more.

From a user perspective, you only need to enable the Code Coverage option from within Chrome’s developer tools, so that it is added under the Sources menu, as seen below.

Once that is done, simply start capturing code coverage by clicking the Record button to get an output like the one below – simple, valuable, and unfortunately only available as a free, built-in solution within this browser, unlike Firefox/Safari and others 😦

I went and used this new tool on the Geico.com responsive site and nearly completed the most common transaction: getting a quote for new car insurance. At the end of the recording, I received the chart below, which shows that only ~60% of the site’s JavaScript code was used in this journey.

When drilling down into a specific .js source file, you can see the source highlighted in green/red where it is actually used and unused – this is what your web developers need to see in order to optimize wisely.

Let’s also look at a key feature that was recently introduced in Firefox, which can be useful for both devs and testers.

Two weeks ago, Mozilla released Firefox 53, its first step in a new project called Quantum that aims to enhance performance, stability, and more.

Among the innovations in that release are compact themes, usability features like estimated reading time for a page, a new permission model (see below), faster performance, and a few other bug fixes for stability.

In addition to the newly introduced features, in case you’re not aware, Firefox offers quite useful developer tools, including an object inspector, performance monitor, debugger, and network monitor, that can also enhance your overall web dev and test activities (see examples below).

Performance Monitoring Tools From Within FireFox Developer Tools

Network Monitoring Options From Within FireFox Developer Tools

Bottom Line

With Chrome and Firefox being the leading desktop and mobile browsers, it is very important for web teams to continuously monitor the early releases from Google and Mozilla, and as soon as the first beta or dev branches are available to validate against – do it. This can not only reveal regressions earlier, but as shown in this post, it might also offer you new productivity tools that increase the value of your overall dev and test activities.

Last month, Google announced its plans to purge from the Play Store apps that do not include a privacy policy and do not declare the security permissions the app requires upon installation.

Behind that requirement, Google is trying to give its users maximum transparency about what the app requires and what data it collects as users consume the app.

An example of a native app that has already implemented that requirement is the State Farm insurance app; see below.

So, with that simple request to mobile Android app developers come a few quality implications.

Immediate Implications and Requirements

Revise and continuously maintain your test code

The above screen obviously was not planned for in the latest test automation cycle, which means a new cycle will get stuck and fail, since this is a new screen requesting a user action. Teams ought to develop new test steps that, upon initial app installation, verify the following: when the user taps Accept, the app launches successfully, and when the user taps Decline, the app closes.
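A minimal sketch of those two new steps in Appium/Java is below. The element ids and activity-name checks are hypothetical placeholders, not the real app’s locators:

import io.appium.java_client.android.AndroidDriver;
import org.openqa.selenium.By;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

public class PrivacyGateTest {

    // Hypothetical locators: replace with the app's real Accept/Decline ids.
    private static final By ACCEPT = By.id("com.example.app:id/privacy_accept");
    private static final By DECLINE = By.id("com.example.app:id/privacy_decline");

    public void acceptLaunchesApp(AndroidDriver<?> driver) {
        driver.findElement(ACCEPT).click();
        // After accepting, the app should reach its main screen.
        assertTrue(String.valueOf(driver.currentActivity()).contains("Main"));
    }

    public void declineClosesApp(AndroidDriver<?> driver) {
        driver.findElement(DECLINE).click();
        // After declining, the app should no longer be in the foreground.
        assertFalse(String.valueOf(driver.currentActivity()).contains("Main"));
    }
}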

Most apps will require unique permissions that are (hopefully!) actually used and required for the app to function (see the visual below from iUbenda).

Different OS versions of iOS and Android might behave differently and support different security features, such as in Android 6.0 and above (Doze, permission groups, etc.)

Compatibility of device/OS features and permissions

What happens in the above regard once an app supports even a normal-group permission, e.g., USE_FINGERPRINT? Since this permission resides in the normal group, it will be granted automatically; however, what if the DUT (device under test) does not support the feature at all? How are teams differentiating, in an automated way, test execution according to device capability? Matching device features to test cases as part of dynamic test execution can be a powerful agile capability, especially in the increasingly fragmented mobile market.
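One way to gate such tests at runtime is to skip, rather than fail, on devices that lack the capability. A minimal Android instrumentation sketch (support-library era) follows:

import android.content.pm.PackageManager;
import android.support.test.InstrumentationRegistry;
import org.junit.Assume;
import org.junit.Before;
import org.junit.Test;

public class FingerprintLoginTest {

    @Before
    public void requireFingerprintHardware() {
        PackageManager pm =
                InstrumentationRegistry.getTargetContext().getPackageManager();
        // Skip (rather than fail) this test class on devices without a sensor.
        Assume.assumeTrue(pm.hasSystemFeature(PackageManager.FEATURE_FINGERPRINT));
    }

    @Test
    public void loginWithFingerprint() {
        // ... runs only on fingerprint-capable devices ...
    }
}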

As seen earlier, from a testing perspective, Android apps that use “dangerous” permissions require dev/test teams to develop and validate varying use cases or device/OS-behavior test cases when tested on Android 6.0 and above, compared to Android 5.1 and below (i.e., Android API level < 23).

To make our results more useful, we’ve begun experiments to make our index mobile-first. Although our search index will continue to be a single index of websites and apps, our algorithms will eventually primarily use the mobile version of a site’s content to rank pages from that site, to understand structured data, and to show snippets from those pages in our results (Source).

More recently, they made the Google Mobile-Friendly tool and guidelines available. A very nice interactive version is available here, with images at the bottom of the thread, and there is also an API (which, thanks to Google, users can exercise first before they code). Google also offers code snippets in several languages.

Notes:

Google takes a URL and renders it. If you run multiple executions in parallel, there’s no point in sending the same URL from every execution, because the result would be the same

Google basically returns “MOBILE_FRIENDLY” or not; I suggest setting the assert on that

The current API differs from the UI in that it only provides results for mobile friendliness (the UI also gives mobile and web page speed). Hopefully, Google adds that to the response 😉

This will probably not work for internal pages as Google probably doesn’t have a site-to-site secure connection with your network.

For developers and testers who do not have time, repeatedly testing mobile friendliness will probably simply not happen. That’s why I integrated the Google Mobile-Friendly API into Quantum:

Added 2 Gherkin commands

// If you navigate directly to this page
Then I check mobileFriendly URL "http://www.nfl.com"

// If you got to this page through clicks
Then I check mobileFriendly current URL

Added the Gherkin command support (GoogleMobileFriendlyStepsDefs.java)
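Under the hood, such a step definition essentially POSTs the URL to Google’s Mobile-Friendly Test API and asserts on the verdict, per the notes above. The sketch below is an illustrative, standalone approximation (the endpoint is from Google’s URL Testing Tools API; the API key handling and response parsing are simplified assumptions, not the actual Quantum source):

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Scanner;

public class MobileFriendlyCheck {

    public static boolean isMobileFriendly(String pageUrl, String apiKey) throws Exception {
        URL api = new URL("https://searchconsole.googleapis.com/v1/urlTestingTools/"
                + "mobileFriendlyTest:run?key=" + apiKey);
        HttpURLConnection conn = (HttpURLConnection) api.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream os = conn.getOutputStream()) {
            os.write(("{\"url\": \"" + pageUrl + "\"}").getBytes("UTF-8"));
        }
        try (Scanner s = new Scanner(conn.getInputStream(), "UTF-8")) {
            String body = s.useDelimiter("\\A").next();
            // Per the note above, assert on the MOBILE_FRIENDLY verdict.
            return body.contains("MOBILE_FRIENDLY");
        }
    }
}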

And the script example is pretty simple:

@Web
Feature: NFL validate

  @SimpleValidation
  Scenario: Validate NFL
    Given I open browser to webpage "http://www.nfl.com"
    Then I check mobileFriendly current URL
    Then I check mobileFriendly URL "http://www.nfl.com"
    Then I wait "5" seconds to see the text "video"
