Browser engine forking isn't the end of the world

If you’re a fan of Web standards and compatibility, two announcements from last week probably have you clutching your cascading style sheets in terror. First, Firefox maker Mozilla struck a partnership with Samsung in which the handset manufacturer pledged to help bring an experimental Web browser engine called Servo to phones with ARM processors running the Android operating system. (ARM chips power most Android phones—and most mobile devices.) Then Google’s browser team said it would be “forking”—creating a new branch of code, which it named Blink—from the WebKit engine that Apple helped bring into being and develop for its Safari browser.

These days, nobody has enough control of the market to win a battle of broken browsers.

This sparked a torrent of tizzies among Web developers and users with long memories. Surely, Twitter tweeters and Web commenters wailed, having different versions of the engines that browsers rely on to transform the HTML code in pages into graphical layouts means the return of incompatibility! Google (or Samsung or Mozilla) will use its deviation from WebKit to create new properties and options that will either break the display of standards-compliant existing sites or add new options that only users of browsers powered by the new engines can take advantage of! Or both!

Fear not, world. It won’t play out that way. In fact, structurally, it can’t. Nobody has enough control over desktop or mobile browsers—not even Apple with iOS—to win a new battle of the broken browsers. Still, for people who remember the way things were in 2006 and earlier, last week’s browser machinations may have triggered some not-so-pleasant flashbacks. Let me talk them (and you) down from the ledge.

Engine of creation

A browser’s engine is the unseen motive force that, when you visit a webpage, requests the page’s HTML file and all the images and media associated with it, and turns them into something you can interact with graphically. What we think of as a browser comprises the front-end user interface that we poke at and view, and the hidden engine that handles parsing, scripting, formatting, networking, plug-in architecture, and display. (Like any other software, the browser leans on the operating system to hand off tasks such as drawing fonts and processing mouse clicks.)

In the dark days of the Web, between about 1999 and 2006, Microsoft held a dominant share (around 90 percent) of the market; but its Internet Explorer 5 browser was full of fail, and version 6 didn’t improve matters once it arrived in 2001. In that era, to make Web styles (CSS or Cascading Style Sheets) work correctly in layouts with any degree of object placement, you had to break CSS. You had to write a combination of good code (that did what you wanted) and bad code (that smart browsers would ignore but IE 5 and IE 6 would pay attention to). Each version had different broken parts, too.
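To make that “good code, bad code” dance concrete, here is a sketch of one widely used trick of the era, the “star HTML” hack. (The selector name and pixel values are illustrative, not taken from any particular site.) Because `html` is the document’s root element, a standards-compliant browser should never match anything with `* html`—but IE 6 and earlier did, thanks to a phantom parent element in their document model, so rules written that way were applied only by the broken browsers:

```
/* Good code: what standards-compliant browsers apply. */
#sidebar {
  width: 130px;      /* content width, per the CSS spec */
  padding: 10px;
}

/* Bad-on-purpose code: `* html` should match nothing, since html
   is the root element -- but IE 6 and earlier matched it anyway.
   IE 5's box model (and IE 6's in "quirks mode") counted padding
   inside width, so these browsers get a padded-out number. */
* html #sidebar {
  width: 150px;      /* 130px content + 2 x 10px padding */
}
```

Microsoft itself eventually offered a tidier escape hatch, conditional comments such as `<!--[if lte IE 6]> ... <![endif]-->`, which let developers quarantine the IE-only rules in a separate stylesheet that other browsers treated as an ordinary comment.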

Designers and developers who had to create HTML, CSS, and JavaScript that conformed to specifications and that “validated”—that is, passed tests to confirm their compliance against a spec—could be found weeping in the darkest corners of taverns after finding, for the umpteenth time, that IE 6 had barfed up an unrecognizable HTML ball. Some websites gave in to the IE-using majority, adding workarounds for IE 6’s quirks to their carefully constructed standards-compliant pages. Microsoft further locked things in with ActiveX, a proprietary component technology for Web applications that was usable only in Windows, despite feeble attempts to port it to other platforms. (Corporations that developed ActiveX-based apps for in-house use during this period still lean heavily on the terrible troika of ActiveX, IE 6, and Windows XP.)

In 2003, Apple released Safari, built on top of what it called WebKit—a fork of the Linux-oriented KHTML project. In 2004, Mozilla released Firefox, based on a thoroughly overhauled Gecko engine it had inherited from Netscape. Four years later, Google’s Chrome, also built on WebKit, emerged.

Browsers based on WebKit and Gecko were faster, produced better-looking and more-interactive pages, and had fewer security flaws on Windows. More important, Apple put Safari on every Mac; Google promoted Firefox on its ever-more-popular search page; and then Google encouraged visitors to download its own Chrome offering when that became available. These days, though IE still maintains a market share of at least 50 percent, Firefox, Chrome, and Safari split up most of the rest on the desktop. (Specifically, the numbers as of March 2013 are 56, 20, 16, and 5 percent for IE, Firefox, Chrome, and Safari, respectively, according to figures from Net Applications.)

On the mobile side, smartphones had a negligible share of overall browser usage until Apple released the iPhone in 2007 with a version of WebKit-based Safari. Android used WebKit from its first release in 2008. The two platforms together account for more than 90 percent of mobile browsing, according to Akamai. The separately available mobile Chrome (based on WebKit) and Opera Mini take most of the remaining share of usage. Opera, which currently has its own Presto engine, said in February 2013 that it would move to WebKit, but then last week amended that plan to adopt Blink instead.

The world today

We’re in a world now in which Microsoft’s IE (using its little-mentioned Trident engine), Firefox’s Gecko, and the Apple-backed WebKit control substantial fiefdoms. Microsoft’s desktop dominance continues to erode, however, and it has multiple competitors there. Meanwhile, WebKit may own mobile, but Apple’s substantial share (in the form of mobile Safari) doesn’t give it absolute power—as the Blink fork shows. And Apple must contend with versions of IE, Firefox, and Chrome on the desktop, where Safari has only a tiny piece of the market.