The usual answer is that applications behave differently on different web browsers. But that is exactly what I am asking: why do applications behave differently on different web browsers when all web browsers follow industry standards?

I investigated this and learned that every web browser has its own way of reading .css files. But that is a rather layman's answer.

I would really appreciate it if someone could help me understand the exact reason WHY we require compatibility testing.

3 Answers

The short answer is that each browser implements the industry standards based on the implementation team's understanding of those standards.

There are several different base engines that are used by different browsers, including but not limited to WebKit, Gecko, Trident, and Blink. That accounts for the majority of differences in behavior between different browsers in my experience.

In addition, each browser implements the engine in a different way, depending on the operating systems the browser supports, the range of devices the browser supports, the focus of the programming team, and many other factors.

Then, programmers are human and make mistakes. Each browser has its own quirks and bugs as a result, so it makes sense to test in different browsers to ensure your application isn't caught by one of them (for example, Internet Explorer has known memory-leak issues involving AJAX and jQuery garbage collection because its JavaScript implementation is less forgiving than Firefox's or Chrome's; each successive version of Internet Explorer has reduced these issues, but has not eliminated them).

Since most of the major rendering engines are written in C++, they have to be recompiled for different operating systems - which introduces another layer of potential differences, because each compiler optimizes and generates binaries in a slightly different way - and that's assuming the authors of the original code did not include any operating-system-specific code (which is a rather large assumption to make in my experience). The end result is that the same version of the same browser will behave differently in different operating systems (Firefox in Windows as opposed to Firefox in Linux systems is a good example here).

Effectively, the browser running your web application sits at the top of a multi-layered technology stack, and that stack is different for each browser. Even when the browsers all nominally implement the same standards (and until very recently this was not the case - Internet Explorer 10 was the first of the IE family to implement the same standards as Firefox, Chrome, and Safari), they all have different levels of support for JavaScript, jQuery, HTML, CSS, and so forth.

One example that I know (because I've experienced it) is something as simple as when the OnChange() event for a dropdown fires: Firefox, Chrome, and Safari fire the event after the dropdown loses focus, while Internet Explorer (all versions so far) fires the event each time the selected item changes. If your web application enables/disables or shows/hides fields based on the selection in a dropdown, exactly when those fields are enabled will vary depending on the browser - which in turn can cause other differences in behavior.
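To make this concrete, here is a minimal sketch of such a handler (the function name and the mock field object are illustrative, not from any real application). The handler's logic is identical everywhere; what differs by browser is *when* the handler actually runs - IE invokes it on every selection change, the other browsers only once the dropdown loses focus:

```javascript
// Hypothetical onchange handler: enable the extra "details" field
// only while "other" is the selected option.
function onCategoryChange(selectedValue, detailsField) {
  detailsField.disabled = (selectedValue !== "other");
}

// A plain object stands in for the DOM element so the logic runs anywhere.
var details = { disabled: true };
onCategoryChange("other", details);  // details.disabled becomes false
onCategoryChange("books", details);  // details.disabled becomes true again
```

In IE the field would flip enabled/disabled as the user arrows through the list; in Firefox or Chrome the user would see a single change after leaving the dropdown - the kind of difference only cross-browser testing surfaces.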

Implementing the same standards will not result in the same product. There will be differences.

Different teams make different (human) mistakes, use different architectures, and so on. If you search for browser benchmarks you will notice a great difference in the speed of the rendering and JavaScript engines, which shows they are different under the hood - with unknown consequences for compatibility.

If you want to make sure your application works on all browsers (that you support), then you need to test for compatibility issues on these browsers.
Alternatively, you can take the risk that incompatibilities will not interfere with your end users' workflow, and only start this kind of testing after the first critical issues have been reported.

If you do not want to test for compatibility issues in multiple browsers (since it is very time-consuming), you could try to find and use a framework that handles the incompatibilities for you.
Frameworks like jQuery give you a programming interface that should behave the same in all browsers. Putting your trust in others is another risk you have to weigh.
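As a rough sketch of the kind of normalization such a framework performs internally (the `addEvent` helper and the mock element here are illustrative, not jQuery's actual implementation): old versions of IE exposed `attachEvent`, while standards-based browsers expose `addEventListener`, so the library feature-detects and picks whichever the browser supports:

```javascript
// Cross-browser event binding, in the spirit of what jQuery does internally.
function addEvent(el, type, handler) {
  if (el.addEventListener) {
    // Standards-based browsers (and IE9+).
    el.addEventListener(type, handler, false);
  } else if (el.attachEvent) {
    // Old IE: note the event name needs the "on" prefix.
    el.attachEvent("on" + type, handler);
  }
}

// A mock element exposing only the old-IE interface, to show the fallback path.
var calls = [];
var legacyEl = { attachEvent: function (name, h) { calls.push(name); } };
addEvent(legacyEl, "click", function () {});  // calls now contains "onclick"
```

Your application calls one function; the framework absorbs the per-browser differences. The risk is that you are now trusting the framework authors to have gotten every browser's quirks right.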

Keep in mind that a lot of enterprise clients still standardize on old versions of Internet Explorer, like version 7 or 8, because of legacy web apps. The horror... :)

For sure many corporates are on previous IE releases, as Niels said. But a funny one we found the other day was some JavaScript that set exceptions for IE. Unfortunately the code was written long ago, and the routine only worked for IE versions up to 9. It appeared the code was written for IE 4, and I guess the programmer never imagined his code would still be in use when IE 10 arrived.
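The original routine isn't shown, but a hypothetical reconstruction of that class of bug looks like this: a user-agent sniff whose regular expression captures only a single version digit, which worked for IE 4 through 9 and silently stops matching at "MSIE 10.0":

```javascript
// Hypothetical version sniff in the spirit of the broken routine:
// (\d)\. captures exactly ONE digit followed by a dot.
function ieVersion(userAgent) {
  var m = userAgent.match(/MSIE (\d)\./);
  return m ? parseInt(m[1], 10) : null;  // null means "not IE" - wrongly, for IE 10+
}

var ie9  = ieVersion("Mozilla/4.0 (compatible; MSIE 9.0; Windows NT 6.1)");
var ie10 = ieVersion("Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2)");
// ie9 is 9, but ie10 is null: "MSIE 10." never matches "MSIE (\d)\.",
// so IE 10 skips all the IE-specific exception handling.
```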

Just thinking that compatibility testing might reveal this type of bug, which is only indirectly related to browser differences.