Internet Explorer was up 0.63 points at 54.76 percent, its highest level since October 2011. Firefox was up 0.45 points to 20.44 percent, all but erasing the last six months' losses. Chrome, surprisingly, was down a whopping 1.31 points to 17.24 percent, its lowest level since September 2011.

We've asked Net Applications, the source we use for browser market share data, if it has made any change in its data collection that might account for this large Chrome drop. The company attributed this in part to the exclusion of Chrome's pre-rendering data. It estimates that 11.1 percent of all Chrome pageviews are a result of pre-rendering (where Chrome renders pages that aren't currently visible just in case the user wants to see them) and accordingly excluded this from its figures.
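Net Applications hasn't published the mechanics of this adjustment, but its effect can be sketched with a toy calculation (the share numbers below are made up; only the 11.1 percent figure comes from the company):

```python
# Illustrative sketch only -- not Net Applications' actual pipeline.
# Remove the estimated 11.1% of Chrome pageviews attributed to
# pre-rendering, then renormalize so the shares sum to 100 again.

PRERENDER_FRACTION = 0.111  # Net Applications' estimate for Chrome

def adjust_for_prerender(shares):
    """shares: raw pageview shares in percent, summing to ~100."""
    adjusted = dict(shares)
    adjusted["Chrome"] = shares["Chrome"] * (1 - PRERENDER_FRACTION)
    total = sum(adjusted.values())
    return {browser: 100 * s / total for browser, s in adjusted.items()}

raw = {"IE": 54.0, "Firefox": 20.0, "Chrome": 18.5, "Other": 7.5}
adjusted = adjust_for_prerender(raw)
# Chrome drops by about 1.7 points here, while every other browser
# ticks up slightly -- the same qualitative pattern as in the report.
```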

When it comes to mobile, Safari remains in the lead. Chrome for Android is starting to make its presence felt, picking up 1.14 percent of mobile traffic. As devices shipping with Android 4.1 and newer become more widespread, we can expect to see this number grow. After all, Android 4.1 makes Chrome the default browser rather than Android Browser. Internet Explorer is also showing small signs of growth, up 0.09 points to 0.95 percent.

November was the first full month of Windows 8's availability, with Windows 8 machines starting to show up in earnest online. Over November, 1.09 percent of Web users were using Windows 8. The stable, final version of Internet Explorer 10 is currently available only for Windows 8 (though there is a beta for Windows 7), and this has picked up 0.51 percent of the market in its first full month of availability. This discrepancy implies that just 47 percent of Windows 8 users are sticking with the built-in Internet Explorer browser, which compares poorly with the 60.0 percent of Windows users overall that use a version of Internet Explorer.
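The 47 percent figure is simply the ratio of those two shares:

```python
# The article's implied IE10 retention among Windows 8 users: the share
# of all web users on IE10 divided by the share of all web users on
# Windows 8.

windows8_share = 1.09  # percent of all web users on Windows 8
ie10_share = 0.51      # percent of all web users on IE10

retention = 100 * ie10_share / windows8_share
print(f"{retention:.0f}% of Windows 8 users appear to stick with IE10")
```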

We're going to start to see if Firefox's Extended Support Release strategy is truly worthwhile in the coming months. The first ESR branch, based on Firefox 10, is getting phased out and replaced by Firefox 17 ESR. Currently 17.0.1 ESR is undergoing QA and testing, in parallel with the "production" release of 10.0.11 ESR. The next version, 17.0.2, will replace the 10 ESR series. So far there's no strong indication that Firefox 10 users are actually on ESR—update refuseniks are spread pretty evenly across several Firefox versions from version 9 to 14, when they should be concentrated on Firefox 10 if they really care about security and stability. But if we see use of Firefox 10 dry up in the next month or two, that might imply that current Firefox 10 users are in fact using ESR and migrating according to ESR timelines.

I suspect force of habit accounts for the discrepancy in IE usage. Many Windows 8 early adopters are likely people who work in the technology industry or for whom technology is a hobby. This group has tended to use browsers other than IE in recent years, and so its members automatically install a different browser out of habit. I have to believe that as Windows 8 gains traction with mainstream users, IE10 adoption will rise naturally.

IE10 is also a massively improved browser, so even those who explicitly chose not to use IE in the past may come around to reconsider it in the future.

As a trained physicist I am very wary of any numbers given without an error margin, and the numbers for browser market share remain rather unbelievable to me because of the accuracy they seem to imply.

Browser market share quoted to a hundredth of a percent, with the measurement methods we have today, is ridiculous, yet I know every site on the internet follows this habit.

Is there any way to get at least a rough estimate on the error margin? Without this a comparison between Firefox and Chrome seems pointless, as their difference in market share may well be within the margin of error.

But I'm not holding my breath that we'll see such numbers anytime soon.

I remember you guys used to include Ars readers' browser usage, along with a short analysis in the context of the other data; that was my favorite part! Hopefully you'll include it again in the future.

As a trained physicist I am very wary of any numbers given without an error margin, and the numbers for browser market share remain rather unbelievable to me because of the accuracy they seem to imply.

Browser market share quoted to a hundredth of a percent, with the measurement methods we have today, is ridiculous, yet I know every site on the internet follows this habit.

Is there any way to get at least a rough estimate on the error margin? Without this a comparison between Firefox and Chrome seems pointless, as their difference in market share may well be within the margin of error.

But I'm not holding my breath that we'll see such numbers anytime soon.

AFAIK there's no reported margin of error because these aren't polling samples. These are hit counts on certain web sites. The fact that they are reported as representative of the entire web isn't the fault of the people doing the counting.

I'm surprised Chrome for Android hasn't made further inroads. Just playing around with some phones in the store when I was shopping it seemed much nicer than the stock Android browser.

IE10 for WP8 is solid. I've set the mode to 'desktop' but a few sites I've tried to visit so far have still tried to shunt me off to the mobile version, I'm not sure why that's happening. I've also noticed that in IE10 a few pages render with abnormally large font sizes (Fark comments really stick out) but overall it's not a huge deal.

I bet a lot of the stragglers still using Firefox are large enterprises. For instance, I go to a large university, and I do not have admin access on any computer. Various systems run different versions of Firefox. Generally the engineering computers are running at most two versions old, but other departments with much lower IT expenditures are running old versions. The computers in the library were running 11 last I checked. An English lab was running 3.6, and one godforsaken computer in the dorm lab is running 2.0! And since nobody has admin access, everyone gets prompted to update, yet they can't. I found a solution that works great for me and ensures I'm using an up-to-date Firefox on any computer: I installed Firefox on my personal drive, which I can update myself. In fact, I'm running the beta channel now and get updates frequently without a hitch.

I wonder how many Firefox users consciously refuse to update due to add-on incompatibility?

I did that for a few days because the LiveClick RSS add-on had been deemed incompatible, even though it works just fine. I ended up creating a batch file that copies a modified chrome.manifest file to my Firefox profile to make the browser see LiveClick as compatible.

As a trained physicist I am very wary of any numbers given without an error margin, and the numbers for browser market share remain rather unbelievable to me because of the accuracy they seem to imply.

Browser market share quoted to a hundredth of a percent, with the measurement methods we have today, is ridiculous, yet I know every site on the internet follows this habit.

Is there any way to get at least a rough estimate on the error margin? Without this a comparison between Firefox and Chrome seems pointless, as their difference in market share may well be within the margin of error.

But I'm not holding my breath that we'll see such numbers anytime soon.

AFAIK there's no reported margin of error because these aren't polling samples. These are hit counts on certain web sites. The fact that they are reported as representative of the entire web isn't the fault of the people doing the counting.

It's not quite a simple count. It's also weighted according to estimated Internet population of various regions, etc. As a statistical measure, the percentages are subject to margins of error.

But back to the original post, we can at least conclude Firefox usage is above Chrome. If it was only due to margin of error, then you'd expect Chrome to sometimes surpass Firefox due to random fluctuations. That's not the case. The probability of Firefox consistently coming above Chrome if their shares were equal is very low.
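A minimal sketch of that argument, assuming each month's ordering were an independent 50/50 coin flip under the equal-share hypothesis (an assumption the same-users-every-month reality doesn't strictly justify):

```python
# Sign-test sketch: if Firefox and Chrome truly had equal share, and
# each month's ordering were an independent coin flip, the probability
# of Firefox leading n months in a row is 0.5**n.

def p_consistent_lead(n_months):
    return 0.5 ** n_months

for n in (6, 12, 24):
    print(f"{n} straight months: p = {p_consistent_lead(n):.6f}")
# Twelve straight months of Firefox > Chrome would have probability
# 1/4096, i.e. about 0.0002, under the equal-share hypothesis.
```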

As a trained physicist I am very wary of any numbers given without an error margin, and the numbers for browser market share remain rather unbelievable to me because of the accuracy they seem to imply.

Browser market share quoted to a hundredth of a percent, with the measurement methods we have today, is ridiculous, yet I know every site on the internet follows this habit.

Is there any way to get at least a rough estimate on the error margin? Without this a comparison between Firefox and Chrome seems pointless, as their difference in market share may well be within the margin of error.

But I'm not holding my breath that we'll see such numbers anytime soon.

AFAIK there's no reported margin of error because these aren't polling samples. These are hit counts on certain web sites. The fact that they are reported as representative of the entire web isn't the fault of the people doing the counting.

Going by this statement in the article:

Quote:

We've asked Net Applications, the source we use for browser market share data, if it has made any change in its data collection that might account for this large Chrome drop. The company attributed this in part to the exclusion of Chrome's pre-rendering data. It estimates that 11.1 percent of all Chrome pageviews are a result of pre-rendering (where Chrome renders pages that aren't currently visible just in case the user wants to see them) and accordingly excluded this from its figures.

I got the impression that there was indeed some data trimming and processing going on with the raw data used for this article. How do they "estimate" that 11.1% of Chrome pageviews need to be excluded? (I don't object to excluding them, but I would like to see how they arrive at the 11.1% figure.)

To me it seems as if they take the results of site hits and then work some guessing mojo. And then they present the result of the guessing mojo with hundredth-of-a-percent accuracy. I don't buy it, and I think it's really not methodologically sound to do so.

I usually download Chrome right away after a new install. This time, however, I noticed how much better IE 10 was while I was using it. I wasn't immediately repulsed; it was very fast, with a simple UI. I may even run with it a while to see what's up. I give MS props for the initial impression, at least. After a week with this "Modern" UI on the desktop, I still don't like it. I do like Live Tiles though.

Is there any way to get at least a rough estimate on the error margin?

On what would you base it? Even if you could gather a gargantuan sample of the internet, like, say, capturing statistics on websites that accounted for 50% of all page views, you couldn't say much about the other 50%, because website visitors are not likely to be demographically similar from one site to the next.

You really have to start with the question of why you care about browser stats. If you're planning to build a website, the only numbers that matter are the users that represent your target audience. Figure out what they're likely to be using and support that. Rough estimates will be good enough.

If you're a browser vendor trying to A/B test your way to larger marketshare, you've got a hell of a data collection and analysis problem in front of you. Better you than me.

If you're just some fanboy forum wanker who is carrying water for major corporations, then get stuffed; your favourite browser is doing poorly and it's going to get worse. And it's all your fault.

Sorry, I'm just depressed to see IE 8 and below still represents 30% of whatever is being measured here. Good luck using interesting tools like SVG in that environment.

After a week with this "Modern" UI on the desktop, I still don't like it. I do like Live Tiles though.

The 'Modern' or 'Metro' interface seems great for a touch-based mobile device like a phone or a tablet. For a PC, though, I'm still not convinced. The nested-folder desktop GUI is so deeply ingrained that I can't see how obscuring it is a good thing.

As a trained physicist I am very wary of any numbers given without an error margin, and the numbers for browser market share remain rather unbelievable to me because of the accuracy they seem to imply. [..] Is there any way to get at least a rough estimate on the error margin?

Error margins make the assumption of random, independent, identically distributed (uniform) samples taken from the whole population you are investigating. No source of browser market share data is collected that way.

In the worst case, the error margin could be literally ~100%. Imagine if we were sampling a totally nonrepresentative subset of the whole population, like if we sampled only Belarus (the country where Opera is the majority).

In the best case, the error margin could be close to 0. Remember that a random, representative sample of ~1,000 people is enough to get within about 3 percentage points for any population size, even, for example, the US population in election season. Net Applications samples 40,000 websites, and many millions of users, so the error margin could be far lower than that.

But in practice, we don't know how representative any of the data sources on browser market share are.
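For the idealized random-sample case described above, the standard margin-of-error formula makes the scale of the problem concrete (a sketch only; real tracker data is not a simple random sample, so these figures are lower bounds at best):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error, in percentage points, for a proportion p
    estimated from a simple random sample of size n."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

for n in (1_000, 100_000, 1_000_000):
    print(f"n = {n:>9,}: +/- {margin_of_error(n):.3f} points")
# Even a million truly random samples only gets you to about +/-0.1
# points; hundredth-of-a-point precision would need a random sample in
# the hundreds of millions.
```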

After a week with this "Modern" UI on the desktop, I still don't like it. I do like Live Tiles though.

The 'Modern' or 'Metro' interface seems great for a touch-based mobile device like a phone or a tablet. For a PC, though, I'm still not convinced. The nested-folder desktop GUI is so deeply ingrained that I can't see how obscuring it is a good thing.

For some weird reason I adjusted quite easily to the "Modern" interface. I only use it as a Start menu replacement. In that light it isn't that dissimilar from GNOME Shell or Ubuntu's Unity (GNOME Shell is much more radical, in my opinion).

But in practice, we don't know how representative any of the data sources on browser market share are.

That's the whole point I am trying to make. Percent figures to a hundredth of a percent imply a very high degree of accuracy which I think does not exist in reality.

Also, don't get me wrong: I do not have any stake in the browser market and can easily live without exact figures. It is just an example of numbers whose stated accuracy is not really justified by the methods used to derive them. That is done all too often, and it is exactly what my experimental physics lessons hammered into my brain not to do :-)

We've asked Net Applications, the source we use for browser market share data, if it has made any change in its data collection that might account for this large Chrome drop. The company attributed this in part to the exclusion of Chrome's pre-rendering data. It estimates that 11.1 percent of all Chrome pageviews are a result of pre-rendering (where Chrome renders pages that aren't currently visible just in case the user wants to see them) and accordingly excluded this from its figures.

I got the impression that there was indeed some data trimming and processing going on with the raw data used for this article. How do they "estimate" that 11.1% of Chrome pageviews need to be excluded? (I don't object to excluding them, but I would like to see how they arrive at the 11.1% figure.)

To me it seems as if they take the results of site hits and then work some guessing mojo. And then they present the result of the guessing mojo with hundredth-of-a-percent accuracy. I don't buy it, and I think it's really not methodologically sound to do so.

Among other things, they claim to count only unique visitors, which they then weight by years-old CIA population counts and the percentage of online users per country. The fact that the most populous countries are the ones with the biggest recent changes in online usage, the changing face of tech all over the world in the last few years, and the fact that the CIA explicitly says its numbers are only rough estimates couldn't possibly affect those hundredths of a percentage point, could it?
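The kind of country-level weighting described above works roughly like this sketch (all counts and populations below are made up for illustration; this is the general approach Net Applications describes, not its actual code or data):

```python
# Hypothetical country-level weighting: scale each country's observed
# unique visitors so countries count in proportion to their estimated
# online population, then pool. Note how a small sample from a populous
# country gets a huge weight -- amplifying whatever noise it carries.

observed = {  # made-up unique-visitor counts per country and browser
    "US":    {"IE": 500_000, "Chrome": 300_000, "Firefox": 200_000},
    "China": {"IE": 40_000,  "Chrome": 15_000,  "Firefox": 5_000},
}
online_population = {"US": 245_000_000, "China": 538_000_000}  # estimates

def weighted_shares(observed, online_population):
    totals = {}
    for country, browsers in observed.items():
        sampled = sum(browsers.values())
        weight = online_population[country] / sampled
        for browser, count in browsers.items():
            totals[browser] = totals.get(browser, 0.0) + count * weight
    grand = sum(totals.values())
    return {b: 100 * v / grand for b, v in totals.items()}

print(weighted_shares(observed, online_population))
```

In this toy example, China's 60,000 sampled visitors stand in for 538 million people, so each one carries roughly 37 times the weight of a US visitor; any error in either the sample or the population estimate is magnified accordingly.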

The fact that they say they count only "unique visitors" and yet had to adjust Chrome's numbers by such a large amount (relative to their reported precision) for pre-rendering *impressions* should give anyone pause. Even if you attribute only a fraction of this month's change to the adjustment, the fact that a change in impression counts produces a large change in unique-visitor counts means their numbers were wrong: those pre-rendered views represented *someone* using Chrome, so a pre-rendered view had to equal at least one user, and since they adjusted the numbers down, they were overcounting even while claiming to count only unique visitors. There's no reason to believe their numbers are now right.

Also, just forget the fact that this is at least the second time this year they've claimed to have fixed their methodology to account for pre-rendering:

I'm sure this time they got it right; never mind all the numbers before March, and, er, all the numbers between March and November. After this, it's totally right, and let's get back to reporting on what a change of 0.04% really portends.

This is why we always acknowledge that error exists, do our best to calculate error margins, and *always* report them.

I wonder how many Firefox users consciously refuse to update due to add-on incompatibility?

I did that for a few days because the LiveClick RSS add-on had been deemed incompatible, even though it works just fine. I ended up creating a batch file that copies a modified chrome.manifest file to my Firefox profile to make the browser see LiveClick as compatible.

That used to be an issue, but since version... 11? 12? Firefox has been much better about how it handles add-ons during updates, and I haven't had any problems. Previously it was an annoyance, but now I think anyone who feels that way probably just hasn't tried it recently.

As a trained physicist I am very wary of any numbers given without an error margin, and the numbers for browser market share remain rather unbelievable to me because of the accuracy they seem to imply.

Browser market share quoted to a hundredth of a percent, with the measurement methods we have today, is ridiculous, yet I know every site on the internet follows this habit.

Is there any way to get at least a rough estimate on the error margin? Without this a comparison between Firefox and Chrome seems pointless, as their difference in market share may well be within the margin of error.

But I'm not holding my breath that we'll see such numbers anytime soon.

AFAIK there's no reported margin of error because these aren't polling samples. These are hit counts on certain web sites. The fact that they are reported as representative of the entire web isn't the fault of the people doing the counting.

It's not quite a simple count. It's also weighted according to estimated Internet population of various regions, etc. As a statistical measure, the percentages are subject to margins of error.

But back to the original post, we can at least conclude Firefox usage is above Chrome. If it was only due to margin of error, then you'd expect Chrome to sometimes surpass Firefox due to random fluctuations. That's not the case. The probability of Firefox consistently coming above Chrome if their shares were equal is very low.

This isn't correct, sadly. We can't even conclude that about visitors to Net Applications' monitoring network (even if they didn't muddy it by weighting with hand-wavy population estimates). Your statement could only be true if we knew (or could arrange things so) that the only source of error was *actually* random fluctuation, not biased sampling or the way they process the data, and we could show that the difference was significant at some agreeable level of significance.

As a trained physicist I am very wary of any numbers given without an error margin, and the numbers for browser market share remain rather unbelievable to me because of the accuracy they seem to imply. [..] Is there any way to get at least a rough estimate on the error margin?

Error margins make the assumption of random, independent, identically distributed (uniform) samples taken from the whole population you are investigating. No source of browser market share data is collected that way.

In the worst case, the error margin could be literally ~100%. Imagine if we were sampling a totally nonrepresentative subset of the whole population, like if we sampled only Belarus (the country where Opera is the majority).

In the best case, the error margin could be close to 0. Remember that a random, representative sample of ~1,000 people is enough to get within about 3 percentage points for any population size, even, for example, the US population in election season. Net Applications samples 40,000 websites, and many millions of users, so the error margin could be far lower than that.

But in practice, we don't know how representative any of the data sources on browser market share are.

You could probably treat the results from different web sites as separate samples and use the differences between them to calculate error ranges. I'm sure you would have to do a little more correction than simply taking the means from different sites, but it would provide information about how the results vary between sampled sites and you could use that information to draw conclusions about how variable it is between sites in general. That estimate of variability is what you would need to calculate confidence intervals.
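A sketch of that idea, with made-up per-site Firefox shares (a z-value of 1.96 is used for simplicity; with this few sites a t-distribution would be more appropriate):

```python
import math
import statistics

# Treat each tracked site's Firefox share as one sample; use the spread
# across sites to put a rough confidence interval on the pooled
# estimate. Site shares here are invented for illustration.

site_shares = [22.1, 18.7, 19.9, 25.3, 17.2, 21.8, 20.4, 23.0]  # percent

mean = statistics.mean(site_shares)
se = statistics.stdev(site_shares) / math.sqrt(len(site_shares))
ci_low, ci_high = mean - 1.96 * se, mean + 1.96 * se
print(f"Firefox share ~ {mean:.1f}% "
      f"(95% CI roughly {ci_low:.1f}-{ci_high:.1f}%)")
# Caveat: this only measures spread among the sites you already track;
# it says nothing about bias toward sites you never sample.
```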

Error margins make the assumption of random, independent, identically distributed (uniform) samples taken from the whole population you are investigating.

No, they don't. That is only the simplest case, as taught in an introductory statistics course.

Sure, you can assume a nonuniform population distribution too, etc. But in general in such cases, iid is the norm. Anyhow, the basic requirement is that we sample randomly from a distribution - that's the absolute minimum. We don't have that in this sample or any other. That's not to say it is worthless - just that we can't easily estimate how valuable it is.

As a trained physicist I am very wary of any numbers given without an error margin, and the numbers for browser market share remain rather unbelievable to me because of the accuracy they seem to imply. [..] Is there any way to get at least a rough estimate on the error margin?

Error margins make the assumption of random, independent, identically distributed (uniform) samples taken from the whole population you are investigating. No source of browser market share data is collected that way.

In the worst case, the error margin could be literally ~100%. Imagine if we were sampling a totally nonrepresentative subset of the whole population, like if we sampled only Belarus (the country where Opera is the majority).

In the best case, the error margin could be close to 0. Remember that a random, representative sample of ~1,000 people is enough to get within about 3 percentage points for any population size, even, for example, the US population in election season. Net Applications samples 40,000 websites, and many millions of users, so the error margin could be far lower than that.

But in practice, we don't know how representative any of the data sources on browser market share are.

You could probably treat the results from different web sites as separate samples and use the differences between them to calculate error ranges. I'm sure you would have to do a little more correction than simply taking the means from different sites, but it would provide information about how the results vary between sampled sites and you could use that information to draw conclusions about how variable it is between sites in general. That estimate of variability is what you would need to calculate confidence intervals.

You can use some sites in your sample to sanity-check others, but that still doesn't get you to a valid estimate of how close you are to the true population distribution. I agree it helps though - if you see crazy amounts of inter-sample variance, something is wrong. Likely in a sample this large though, that isn't the case.

After a week with this "Modern" UI on the desktop, I still don't like it. I do like Live Tiles though.

The 'Modern' or 'Metro' interface seems great for a touch-based mobile device like a phone or a tablet. For a PC, though, I'm still not convinced. The nested-folder desktop GUI is so deeply ingrained that I can't see how obscuring it is a good thing.

A tile is what happens a few months after a window and an icon have had sex

All joking aside, the tiled interface of WP7 and W8 got me hooked. What took them so long to come up with such an elegant compromise? Fluid tiles for phones are a stroke of genius.

On the desktop, after much trying, I decided to stick with the traditional UI. Peter, you were right about this: on the desktop, especially with large non-touch displays, the classic UI is more efficient than a tiled one. Hopefully W9 will offer a mixture of both user interfaces for the desktop.

After upgrading to W8 I use IE10 about 3/4 of the time and FF the other 1/4. I have only good things to say about the two browsers. FF may have better add-ons. Both are a joy to work with.

For the same reason people give every single month when someone asks this question:

Other trackers don't weight their stats by region to account for the number of people in each country. If StatCounter gathers statistics from 10,000 US websites but only 10 Chinese ones, and counts only raw hits, its data will be skewed toward wherever its coverage is strongest.

This is the same reason Nate Silver tends to down-weight polls that don't include cell phones. If your measuring tool is unable to reach a market segment that skews heavily one way, your numbers will be off. People whose only phones are cell phones tend to be younger and more urban, and generally speaking, these groups lean toward Democrats. So if you cold-call a random sample of household numbers but fail to call cell phones, you have zero contact with a sizable, and growing, group. Thus, polls that don't include cell phones, especially in contentious states, misrepresent the views of the whole by ignoring a particular group.

If you try to call 10,000 people to see whether they're voting Democrat or Republican, but the South is experiencing a hurricane affecting many states and you don't account for that, your poll is going to be wrong. If you try to measure worldwide browser share, and you have extremely poor penetration in China and don't account for that, your poll is going to be wrong.

Thus: while I agree with many that there's no way this is accurate to the hundredth of a point, it's still much more accurate than the numbers from a company that does no weighting.

Google had the right idea in making updates something users don't have to think about. Yes, enterprises, legacy applications, and security policies may call for one long-term stable version. But most people just want the most up-to-date browser, and they don't want to think about it.

I'm glad Firefox is now pushing updates more seamlessly too. If only we could get Microsoft to update IE silently and often as well, we could put an end to this "Designed for Internet Explorer 6 ONLY" mentality. Developers would HAVE to keep to standards in order to ensure compatibility with the new version coming in six weeks.

IE10 is a significant step forward over IE9, just as IE9 was over IE8. Trouble is, IE9 was annihilated by Chrome and Firefox in a matter of weeks, and it stayed consistently behind in features and performance for the vast majority of its lifetime. I have a strong suspicion the same will happen with IE10; there's no reason to think otherwise.

Then why bother? Because it is (arguably) the best browser today, and it will be the best browser next week? So that I'll need to switch back in a month when it falls behind again? And seriously: is there anyone today who feels slowed down by their browser? Speed is great, and I'm all for it, but I have used Firefox and Chrome for many years, and I have them customized to make my browsing safe, quick, and efficient. Why bother?

I am an IE skeptic: after having been burned in the past, I have stayed clear and never had an issue.