This is a wonderful feature that makes the reading of PDFs on websites much less disruptive. However, pdf.js unfortunately suffers at times from high memory consumption. Enough, in fact, that it is currently the #5 item on the MemShrink project’s “big ticket items” list.

Recently, I made four improvements to pdf.js, each of which reduces its memory consumption greatly on certain kinds of PDF documents.

Image masks

The first improvement involved documents that use image masks, which are bitmaps that augment an image and dictate which pixels of the image should be drawn. Previously, the 1-bit-per-pixel (a.k.a. 1bpp) image mask data was being expanded into 32bpp RGBA form (a typed array) in a web worker, such that every RGB element was 0 and the A element was either 0 or 255. This typed array was then passed to the main thread, which copied the data into an ImageData object and then put that data to a canvas.

The change was simple: instead of expanding the bitmap in the worker, just transfer it as-is to the main thread, and expand its contents directly into the ImageData object. This removes the RGBA typed array entirely.
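The core of the change can be sketched as follows. This is a simplified, hypothetical helper rather than the actual pdf.js code, and the mapping of bit values to opacity is an assumption (in real PDFs it depends on the mask’s Decode array).

```javascript
// Expand a 1bpp mask directly into a caller-supplied RGBA buffer (such
// as an ImageData's `data` array), so no intermediate 32bpp typed array
// is ever created in the worker.
function expandMaskIntoRGBA(maskBytes, width, height, dest) {
  const rowBytes = Math.ceil(width / 8);  // mask rows are byte-aligned
  let destPos = 0;
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const byte = maskBytes[y * rowBytes + (x >> 3)];
      const bit = (byte >> (7 - (x & 7))) & 1;
      // The RGB elements stay 0; only the alpha element is 0 or 255.
      // (Here bit 1 means opaque; the real polarity depends on the mask.)
      dest[destPos + 3] = bit ? 255 : 0;
      destPos += 4;
    }
  }
}
```

On the main thread, `dest` would be the `data` array of an ImageData created with `createImageData(width, height)`, which is then drawn with `putImageData()`.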

I tested two documents on my Linux desktop, using a 64-bit trunk build of Firefox. Initially, when loading and then scrolling through the documents, physical memory consumption peaked at about 650 MiB for one document and about 800 MiB for the other. (The measurements varied somewhat from run to run, but were typically within 10 or 20 MiB of those numbers.) After making the improvement, the peak for both documents was about 400 MiB.

Image copies

The second improvement involved documents that use images. This includes scanned documents, which consist purely of one image per page.

Previously, we would make five copies of the 32bpp RGBA data for every image.

1. The web worker would decode the image’s colour data (which can be in several different colour forms: RGB, grayscale, CMYK, etc.) from the PDF file into a 24bpp RGB typed array, and the opacity (a.k.a. alpha) data into an 8bpp A array.

2. The web worker then combined the RGB and A arrays into a new 32bpp RGBA typed array, and transferred this copy to the main thread. (This was a true transfer, not a copy, which is possible because it’s a typed array.)

3. The main thread then created an ImageData object of the same dimensions as the typed array, and copied the typed array’s contents into it.

4. The main thread then called putImageData() on the ImageData object. The C++ code within Gecko that implements putImageData() created a new gfxImageSurface object and copied the data into it.

5. Finally, the C++ code also created a Cairo surface from the gfxImageSurface.

Copies 4 and 5 were in C++ code and were both very short-lived. Copies 1, 2 and 3 were in JavaScript code and so lived longer, at least until the next garbage collection occurred.

The change was in two parts. The first part involved putting the image data to the canvas in tiny strips, rather than doing the whole image at once. This was a fairly simple change, and it allowed copies 3, 4 and 5 to be reduced to a tiny fraction of their former size (typically 100x or more smaller). Fortunately, this caused no slow-down.
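The strip-wise approach can be sketched like this, using a hypothetical helper (the real pdf.js code differs); only one strip-sized ImageData is live at a time instead of a full-image copy.

```javascript
// Put a full image's RGBA pixel data to a canvas in thin horizontal
// strips, so the ImageData copy is only ever strip-sized.
function putImageInStrips(ctx, rgba, width, height, stripHeight) {
  for (let y = 0; y < height; y += stripHeight) {
    const rows = Math.min(stripHeight, height - y);
    const strip = ctx.createImageData(width, rows);
    // Copy just this strip's worth of pixels (4 bytes per pixel).
    strip.data.set(rgba.subarray(y * width * 4, (y + rows) * width * 4));
    ctx.putImageData(strip, 0, y);
  }
}
```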

The second part involved decoding the colour and opacity data directly into a 32bpp RGBA array in simple cases (e.g. when no resizing is involved), skipping the creation of the intermediate RGB and A arrays. This was fiddly, but not too difficult.

If you scan a US letter document at 300 dpi, you get about 8.4 million pixels, which is about 1 MiB of data. (A4 paper is slightly larger.) If you expand this 1bpp data to 32bpp, you get about 32 MiB per page. So if you reduce five copies of this data to one, you avoid about 128 MiB of allocations per page.

The third improvement was to optimize the common case of simple (e.g. unmasked, with no resizing) black and white images. Instead of expanding the 1bpp image data to 32bpp RGBA form in the web worker and passing that to the main thread, the code now just passes the 1bpp form directly. (Yep, that’s the same optimization that I used for the image masks.) The main thread can now handle both forms, and for the 1bpp form the expansion to the 32bpp form also only happens in tiny strips.

I used a 226 page scanned document to test this. At about 32 MiB per page, that’s over 7,200 MiB of pixel data when expanded to 32bpp RGBA form. And sure enough, prior to my change, scrolling quickly through the whole document caused Firefox’s physical memory consumption to reach about 7,800 MiB. With the fix applied, this number reduced to about 700 MiB. Furthermore, the time taken to render the final page dropped from about 200 seconds to about 25 seconds. Big wins!

The same optimization could be done for some non-black and white images (though the improvement will be smaller). But all the examples from bug reports were black and white, so that’s all I’ve done for now.

Parsing

The fourth and final improvement was unrelated to images. It involved the parsing of the PDF files. The parsing code reads files one byte at a time, and constructs lots of JavaScript strings by appending one character at a time. SpiderMonkey’s string implementation has an optimization that handles this kind of string construction efficiently, but the optimization doesn’t kick in until the strings have reached a certain length; on 64-bit platforms, this length is 24 characters. Unfortunately, many of the strings constructed during PDF parsing are shorter than this, so in order to create a string of length 20, for example, we would also create intermediate strings of length 1, 2, 3, …, 19.

It’s possible to change the threshold at which the optimization applies, but this would hurt the performance of some other workloads. The easier thing to do was to modify pdf.js itself. My change was to build up strings by appending single-char strings to an array, and then using Array.join to concatenate them together once the token’s end is reached. This works because JavaScript arrays are mutable (unlike strings which are immutable) and Array.join is efficient because it knows exactly how long the final string will be.
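The pattern can be sketched like this; `readToken` and its arguments are hypothetical, simplified from what parser.js actually does.

```javascript
// Build a token by pushing single-character strings onto an array and
// joining once at the end, instead of growing a string with `str += ch`.
// Array.join knows the final length up front, so the result string is
// allocated once.
function readToken(bytes, start, isDelimiter) {
  const chars = [];
  let pos = start;
  while (pos < bytes.length && !isDelimiter(bytes[pos])) {
    chars.push(String.fromCharCode(bytes[pos++]));
  }
  return chars.join("");
}
```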

On a 4,155 page PDF, this change reduced the peak memory consumption during file loading from about 1130 MiB to about 800 MiB.

Profiling

The fact that I was able to make a number of large improvements in a short time indicates that pdf.js’s memory consumption has not previously been closely looked at. I think the main reason for this is that Firefox currently doesn’t have much in the way of tools for profiling the memory consumption of JavaScript code (though the devtools team is working right now to rectify this). So I will explain the tricks I used to find the places that needed optimization.

Choosing test cases

First I had to choose some test cases. Fortunately, this was easy, because we had numerous bug reports about high memory consumption which included test files. So I just used them.

Debugging print statements, part 1

For each test case, I looked first at about:memory. There were some very large “objects/malloc-heap/elements/non-asm.js” entries, which indicate that lots of memory is being used by JavaScript array elements. And looking at pdf.js code, typed arrays are used heavily, especially Uint8Array. The question is then: which typed arrays are taking up space?

So I replaced the typed array allocations with calls to a small wrapper function that printed the length of each allocation, using a different second argument to identify each call site. With this in place, when the code ran, I got a line printed for every allocation, identifying its length and location. With a small amount of post-processing, it was easy to identify which parts of the code were allocating large typed arrays. (This technique provides cumulative allocation measurements, not live data measurements, because it doesn’t know when these arrays are freed. Nonetheless, it was good enough.) I used this data in the first three optimizations.
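A sketch of that trick; the wrapper name and the site tags are hypothetical, not the code I actually used.

```javascript
// Route typed array creation through a helper that logs each
// allocation's length plus a tag identifying the call site.
function newUint8Array(length, site) {
  console.log("Uint8Array(" + length + ") at " + site);
  return new Uint8Array(length);
}

// Then replace e.g. `new Uint8Array(w * h * 4)` with:
// const rgba = newUint8Array(w * h * 4, "drawImage-rgba");
```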

Debugging print statements, part 2

Another trick involved modifying jemalloc, the heap allocator that Firefox uses. I instrumented jemalloc’s huge_malloc() function, which is responsible for allocations greater than 1 MiB. I printed the sizes of allocations, and at one point I also used gdb to break on every call to huge_malloc(). It was by doing this that I was able to work out that we were making five copies of the RGBA pixel data for each image. In particular, I wouldn’t have known about the C++ copies of that data if I hadn’t done this.

Notable strings

Finally, while looking again at about:memory, I saw some entries found by the “notable strings” detection: numerous strings, each of which was a slightly longer prefix of the same text.

It doesn’t take much imagination to realize that strings were being built up one character at a time. This looked like the kind of thing that would happen during tokenization, so I found a file called parser.js and looked there. I also knew about SpiderMonkey’s optimization of string concatenation, and when I asked on IRC why it might not be kicking in, Shu-yu Guo was able to tell me about the threshold. Once I knew that, switching to use Array.join wasn’t difficult.

What about Chrome’s heap profiler?

I’ve heard good things in the past about Chrome/Chromium’s heap profiling tools. And because pdf.js is just HTML and JavaScript, you can run it in other modern browsers. So I tried using Chromium’s tools, but the results were very disappointing.

Remember the 226 page scanned document I mentioned earlier, where over 7,200 MiB of pixel data was created? I loaded that document into Chromium and took a snapshot with the “Take Heap Snapshot” tool.

The snapshot claimed that the heap was just over 50 MiB in size, and that 225 Uint8Array objects had a “shallow” size of 19,608 bytes and a “retained” size of 26,840 bytes. This seemed bizarre, so I double-checked. Sure enough, the operating system (via top) reported that the relevant chromium-browser process was using over 8 GiB of physical memory at this point.

So why the tiny measurements? I suspect what’s happening is that typed arrays are represented by a small header struct which is allocated on the GC heap, and it points to the (much larger) element data which is allocated on the malloc heap. So if the snapshot is just measuring the GC heap, in this case it’s accurate but not useful. (I’d love to hear if anyone can confirm or refute this hypothesis.) I also tried the “Record Heap Allocations” tool but it gave much the same results.

Status

These optimizations have landed in the master pdf.js repository, and were imported into Firefox 29, which is currently on the Aurora branch, and is on track to be released on April 29.

The optimizations are also on track to be imported into the Firefox OS 1.3 and 1.3T branches. I had hoped to show that some PDFs that were previously unloadable on Firefox OS would now be loadable. Unfortunately, I am unable to load even the simplest PDFs on my Buri (a.k.a. Alcatel OneTouch), because the PDF viewer app appears to consistently run out of gralloc memory just before the first page is displayed. Ben Kelly suggested that Async pan zoom (APZ) might be responsible, but disabling it didn’t help. If anybody knows more about this please contact me.

Finally, I’ve fixed most of the major memory consumption problems with the PDFs that I’m aware of. If you know of other PDFs that still cause pdf.js to consume large amounts of memory, please let me know. Thanks.

TL;DR: Cross-browser comparisons of memory consumption should be avoided. If you want to evaluate how efficiently browsers use memory, you should do cross-browser comparisons of performance across several machines featuring a range of memory configurations.

Cross-browser Memory Comparisons are Bad

Various tech sites periodically compare the performance of browsers. These often involve some cross-browser comparisons of memory efficiency. A typical one would be this: open a bunch of pages in tabs, measure memory consumption, then close all of them except one and wait two minutes, and then measure memory consumption again. Users sometimes do similar testing.

I think comparisons of memory consumption like these are (a) very difficult to make correctly, and (b) very difficult to interpret meaningfully. I have suggestions below for alternative ways to measure memory efficiency of browsers, but first I’ll explain why I think these comparisons are a bad idea.

Cross-browser Memory Comparisons are Difficult to Make

Getting apples-to-apples comparisons is really difficult.

1. Browser memory measurements aren’t easy. In particular, all browsers use multiple processes, and accounting for shared memory is difficult.

2. Browsers are non-deterministic programs, and this can cause wide variation in memory consumption results. In particular, running the JavaScript garbage collector can greatly reduce memory consumption. If you get unlucky and the garbage collector runs just after you measure, you’ll get an unfairly high number.

3. Browsers can exhibit adaptive memory behaviour. If running on a machine with lots of free RAM, a browser may choose to take advantage of it; if running on a machine with little free RAM, a browser may choose to discard regenerable data more aggressively.

If you are comparing two versions of the same browser, problems (1) and (3) are avoided, and so if you are careful with problem (2) you can get reasonable results. But comparing different browsers hits all three problems.

Cross-browser Memory Comparisons are Difficult to Interpret

Even if you measure memory consumption correctly, it is still hard to draw meaningful conclusions, because memory consumption is a secondary metric, unlike primary metrics such as speed and crash rate that users experience directly. Secondary metrics are important because they can affect primary metrics: memory consumption can affect performance and crash rate; the L2 cache miss rate can affect performance.

Secondary metrics are also difficult to interpret. They can certainly be suggestive, but there are lots of secondary metrics that affect each primary metric of interest, so focusing too strongly on any single secondary metric is not a good idea. For example, if browser A has a higher L2 cache miss rate than browser B, that’s suggestive, but you’d be unwise to draw any strong conclusions from it.

Furthermore, memory consumption is harder to interpret than many other secondary metrics. If all else is equal, a higher L2 cache miss rate is worse than a lower one. But that’s not true for memory consumption. There are all sorts of time/space trade-offs that can be made, and there are many cases where using more memory can make browsers faster; JavaScript JITs are a great example.

And I haven’t even discussed which memory consumption metric you should use. Physical memory consumption is an obvious choice, but I’ll discuss this more below.

A Better Methodology

So, I’ve explained why I think you shouldn’t do cross-browser memory comparisons. That doesn’t mean that efficient usage of memory isn’t important! However, instead of directly measuring memory consumption — a secondary metric — it’s far better to measure the effect of memory consumption on primary metrics such as performance.

In particular, I think people often use memory consumption measurements as a proxy for performance on machines that don’t have much RAM. If you care about performance on machines that don’t have much RAM, you should measure performance on a machine that doesn’t have much RAM instead of trying to infer it from another measurement.

Experimental Setup

I did exactly this by doing something I call memory sensitivity testing, which involves measuring browser performance across a range of memory configurations. My test machine had the following characteristics.

CPU: Intel i7-2600 3.4GHz (quad core with hyperthreading)

RAM: 16GB DDR3

OS: Ubuntu 11.10, Linux kernel version 3.0.0.

I used a Linux machine because Linux has a feature called cgroups that allows you to restrict the machine resources available to one or more processes. I followed Justin Lebar’s instructions to create the following configurations that limited the amount of physical memory available: 1024MiB, 768MiB, 512MiB, 448MiB, 384MiB, 320MiB, 256MiB, 192MiB, 160MiB, 128MiB, 96MiB, 64MiB, 48MiB, 32MiB.

(The more obvious way to do this is to use ulimit, but as far as I can tell it doesn’t work on recent versions of Linux or on Mac. And I don’t know of any way to do this on Windows. So my experiments had to be on Linux.)

I used the following browsers.

Firefox 12 Nightly, from 2012-01-10 (64-bit)

Firefox 9.0.1 (64-bit)

Chrome 16.0.912.75 (64-bit)

Opera 11.60 (64-bit)

IE and Safari aren’t represented because they don’t run on Linux. Firefox is over-represented because that’s the browser I work on and care about the most. The versions are a bit old because I did this testing about six months ago.

I used the following benchmark suites: Sunspider v0.9.1, V8 v6, Kraken v1.1. These are all JavaScript benchmarks and are all awful for gauging a browser’s memory efficiency; but they have the key advantage that they run quite quickly. I thought about using Dromaeo and Peacekeeper to benchmark other aspects of browser performance, but they take several minutes each to run and I didn’t have the patience to run them a large number of times. This isn’t ideal, but I did this exercise to test-drive a benchmarking methodology, not make a definitive statement about each browser’s memory efficiency, so please forgive me.

Experimental Results

The following graph shows the Sunspider results.

As the lines move from right to left, the amount of physical memory available drops. Firefox was clearly the fastest in most configurations, with only minor differences between Firefox 9 and Firefox 12pre, but it slowed down drastically below 160MiB; this is exactly the kind of curve I was expecting. Opera was next fastest in most configurations, and then Chrome; neither of them showed any noticeable degradation at any memory size, which was surprising and impressive.

All the browsers crashed/aborted if memory was reduced enough. The point at which each line stops on the left-hand side indicates the lowest size that that browser successfully handled. None of the browsers ran Sunspider with 48MiB available, and FF12pre failed to run it with 64MiB available.

The next graph shows the V8 results.

The curves go the opposite way because V8 produces a score rather than a time, and bigger is better. Chrome easily got the best scores. Both Firefox versions degraded significantly. Chrome and Opera degraded somewhat, and only at lower sizes. Oddly enough, FF9 was the only browser that managed to run V8 with 128MiB available; the other three only ran it with 160MiB or more available.

I don’t particularly like V8 as a benchmark. I’ve always found that it doesn’t give consistent results if you run it multiple times, and these results concur with that observation. Furthermore, I don’t like that it gives a score rather than a time or inverse-time (such as runs per second), because it’s unclear how different scores relate.

The final graph shows the Kraken results.

As with Sunspider, Chrome barely degraded and both Firefoxes degraded significantly. Opera was easily the slowest to begin with and degraded massively; nonetheless, it managed to run with 128MiB available (as did Chrome), which neither Firefox managed.

Experimental Conclusions

Overall, Chrome did well, and Opera and the two Firefoxes had mixed results. But I did this experiment to test a methodology, not to crown a winner. (And don’t forget that these experiments were done with browser versions that are now over six months old.) My main conclusion is that Sunspider, V8 and Kraken are not good benchmarks when it comes to gauging how efficiently browsers use memory. For example, none of the browsers slowed down on Sunspider until memory was restricted to 128MiB, which is a ridiculously small amount of memory for a desktop or laptop machine; it’s small even for a smartphone. V8 clearly stresses memory consumption more, but it’s still not great.

What would a better benchmark look like? I’m not completely sure, but it would certainly involve opening multiple tabs and simulating real-world browsing. Something like Membench (see here and here) might be a reasonable starting point. To test the impact of memory consumption on performance, a clear performance measure would be required, because Membench currently lacks one. To test the impact of memory consumption on crash rate, Membench could be modified to just keep opening pages until the browser crashes. (The trouble with that is that you’d lose your count when the browser crashed! You’d need to log the current count to a file or something like that.)

BTW, if you are thinking “you’ve just measured the working set size”, you’re exactly right! I think working set size is probably the best metric to use when evaluating memory consumption of a browser. Unfortunately it’s hard to measure (as we’ve seen) and it is best measured via a curve rather than a single number.

A Simpler Methodology

I think memory sensitivity testing is an excellent way to gauge the memory efficiency of different browsers. (In fact, the same methodology can be used for any kind of program, not just browsers.)

But the above experiment wasn’t easy: it required a Linux machine, some non-trivial configuration of that machine that took me a while to get working, and at least 13 runs of each benchmark suite for each browser. I understand that tech sites would be reluctant to do this kind of testing, especially when longer-running benchmark suites such as Dromaeo and Peacekeeper are involved.

A simpler alternative that would still be quite good would be to perform all the performance tests on several machines with different memory configurations. For example, a good experimental setup might involve the following machines.

A fast desktop with 8GB or 16GB of RAM.

A mid-range laptop with 4GB of RAM.

A low-end netbook with 1GB or even 512MB of RAM.

This wouldn’t require nearly as many runs as full-scale memory sensitivity testing would. It would avoid all the problems of cross-browser memory consumption comparisons: difficult measurements, non-determinism, and adaptive behaviour. It would avoid secondary metrics in favour of primary metrics. And it would give results that are easy for anyone to understand.

(In some ways it’s even better than memory sensitivity testing because it involves real machines — a machine with a 3.4GHz i7-2600 CPU and only 128MiB of RAM isn’t a realistic configuration!)

Necko Buffer Cache

I removed the Necko buffer cache. This cache used nsRecyclingAllocator to delay the freeing of up to 24 short-lived 32KB chunks (16KB chunks on Mobile/B2G) in the hope that they could be reused soon. The cache would fill up very quickly and the chunks wouldn’t be freed unless zero re-allocations occurred for 15 minutes. The idea was to avoid malloc/free churn, but the re-allocations weren’t that frequent, and heap allocators like jemalloc tend to handle cases like this pretty well, so performance wasn’t affected noticeably. Removing the cache has the following additional benefits.

nsRecyclingAllocator added a clownshoe-ish extra word to each chunk, which meant that the 32KB (or 16KB) allocations were being rounded up by jemalloc to 36KB (or 20KB), resulting in 96KB (24 x 4KB) of wasted memory.

This cache wasn’t covered by memory reporters, so about:memory’s “heap-unclassified” number has dropped by 864KB (24 x 36KB) on desktop and 480KB (24 x 20KB) on mobile.

I also removed the one other use of nsRecyclingAllocator in libjar, for which this recycling behaviour also had no noticeable effect. This meant I could then remove nsRecyclingAllocator itself. Taras Glek summarized things nicely: “I’m happy to see placebos go away”.

Last week I mentioned bug 703427, which held up the possibility of a simple, big reduction in SQLite memory usage. Marco Bonardo did some analysis, and unfortunately the patch caused large increases in the number of disk accesses, and so it won’t work. A shame.

Quote of the week

a year ago, FF’s memory usage was about 10x what chrome was using in respect to the sites we were running…

so we have switched to chrome…

i just tested FF 9.0.1 against chrome, and it actually is running better than chrome in the memory department, which is good. but, it’s not good enough to make us switch back (running maybe 20% better in terms of memory). a tenfold difference would warrant a switch. in our instance, it was too little, too late.

I wrote yesterday about how I converted a relative of mine from a Chrome user to a Firefox user. The post was picked up by the tech press including Tom’s Hardware and Conceivably Tech. (Tom’s said I “described why getting users back from Chrome may nearly be impossible”, which I think is a laughable exaggeration of what I wrote.)

The good news is that the two major stumbling blocks I faced during the conversion are well on the way to being addressed.

The situation with third-party add-ons will be greatly improved in Firefox 8, which is scheduled for release in just a few days (2011-11-08). The first time it runs, Firefox 8 will present the user with a window that lists all the installed add-ons, distinguishing between add-ons the user installed explicitly and third-party add-ons that they didn’t install explicitly. The user will be asked to confirm which add-ons they want, and the third-party add-ons will be disabled by default. The bug for this feature is here, and there is a follow-up bug here.

An “import history from Chrome” feature is being worked on right now. Here is the feature page, a tracking bug, and two dependent bugs. The aim was for it to make Firefox 10, which is scheduled for release on 2012-01-31, but it looks like it might not make that release. Hopefully it will make Firefox 11, which is scheduled for release on 2012-03-13.

Another point raised by commenters was that Chrome has a version of AdBlock Plus. However, the Chrome version still has major shortcomings compared to the Firefox version. This page states “We are currently working on providing the same experience for Google Chrome as what you are used to from Firefox. Please keep in mind that we are not there yet and much work still needs to be done. There are also known Google Chrome bugs and limitations that need to be resolved.” This page lists the major shortcomings. Maybe those shortcomings will be overcome in the future, but until they are, it’s not a reasonable comparison.

In conclusion, the point of my post yesterday was not to say “OMG Firefox is crap the sky is falling in”, but rather “here’s what happened when I tried this”. I knew that the two obstacles I listed were being worked on, though I didn’t know the details. (Many thanks to the commenters who filled me in.) The fact that they are being worked on and/or have been fixed is rather encouraging. It’s also worth noting that Firefox’s new rapid release calendar means that improvements like these make it into the hands of ordinary users only 2 to 3 months after they are implemented. With the old release calendar, these two improvements wouldn’t have made it into a release until mid-2012 or later.

[Update: before commenting, you should probably read this follow-up post that clarifies certain things about this post.]

On my recent vacation I was staying with a family member, let’s call her Karen (not her real name). She was a Google Chrome user, and I managed to convert her to a satisfied Firefox user. Here’s what I learnt along the way.

tl;dr:

Bad things about the experience:

The third-party add-ons situation on Windows is awful.

We need an “import history from Chrome” feature.

Good things about the experience:

Mozilla’s non-profit nature is compelling, if you know about it.

AdBlock Plus is great.

The Initial Situation

Karen is a moderately sophisticated computer user. She knows what a browser is, but didn’t know how many there were, who made them, or any notable differences between them.

Her machine is probably 2 or 3 years old, and runs Windows Vista. She had used IE in the past (not sure which version) but didn’t like it, switched to Chrome at some point — she didn’t remember how or why — and found it to be much better. She was running Chrome 14.0.835.202 (no, that’s not an IP address!) which was the latest stable version.

She also had Firefox 3.6.17 installed, but judging from the profile she hadn’t used it much — there was very little history. She had several Firefox add-ons installed, including multiple Java Consoles and the .NET Framework Assistant.

(What is the Java Console? What is the .NET Framework Assistant? As far as I can tell they are (a) very common and (b) useless.)

I told her that I worked on Firefox and suggested that she try it and she was open to the idea. I talked about the differences between Firefox and Chrome and some of my work on Firefox. The thing that caught her attention most was that Mozilla is a non-profit organization. She hadn’t known this and it appealed to her greatly — she said that browser speed and the non-profit nature were the two most important things to her. She was also somewhat interested when I said that Firefox had an ad-blocking add-on. At the end of the conversation, she agreed to let me install Firefox and make it the default browser.

Installing Firefox

I removed the existing Firefox profile manually — I wasn’t sure if this was necessary, but I definitely wanted a fresh profile — and then uninstalled Firefox through the Control Panel. I then installed Firefox 7.0.1. (BTW, I was staying with Karen for two weeks, and I had deliberately waited until Firefox 7 was out before doing this because I knew it had much lower memory usage than Firefox 6.)

An unexpected thing was that the Firefox installer asked me to close all the other running programs; it explained that this would mean that I wouldn’t have to reboot. I’m used to running Firefox on Mac and Linux so I’m not used to this, but I’m familiar enough with Windows that I wasn’t totally surprised. Still, it was annoying; Karen had MS Word and some other programs open and I had to go ask her if I needed to save anything before closing them. I realize this is Windows’ fault, not Firefox’s, but it was an obstacle.

Starting Firefox

When I started Firefox it asked me if I wanted to make it the default browser and I said yes. (I explained to Karen how to switch the default browser back to Chrome if she was unhappy with Firefox.)

It also asked me if I wanted to import history/bookmarks/etc from IE. But there was no equivalent for Chrome! Karen had a ton of bookmarks in Chrome, but fortunately she said she only used a handful of them so I was able to copy them manually into Firefox’s bookmarks toolbar (which I had to make visible). I’ve heard that someone is working on adding an “import from Chrome” feature to Firefox but I don’t know what the status is. We need it badly.

Once Firefox started, another unexpected thing was the state of the add-ons in the new profile. The Symantec add-ons (including the ugly Norton toolbar) were present and enabled. I had to disable them in about:addons; I wanted to completely uninstall them but I couldn’t: the “uninstall” button just wasn’t present. The Java Consoles and the media player were disabled because they were incompatible with 7.0.1, but I was also not able to uninstall them. This horrified me. Is it a Windows-only behaviour? Whatever the explanation, the default situation in a fresh install was that Firefox had several unnecessary, ugly additions, and it took some effort to remove them. I’m really hoping that the add-on confirmation screen that has been added to Firefox 8 will help improve this situation, because this was the single worst part of the process.

I then tweaked the location of the home and reload buttons so they were in exactly the same position as in Chrome. I probably didn’t need to do that, but with those changes made Firefox’s UI looked very similar to Chrome’s, and I wanted things to be as comfortable for her as possible.

The best part of the process was when I installed AdBlock Plus. With Karen watching, I visited nytimes.com in Chrome, and I had to skip past a video advertisement before even getting to the front page, and then the front page had heaps of ads. Then I visited it in Firefox — no video, no ads. It was great!

Follow-up

A week or two later I sent Karen a follow-up email to check that everything was ok. She said “All is well! The ad blocking is a great feature.”

So, Firefox has a new and happy user, but there were some obstacles along the way, and the outcome probably wouldn’t have been so good if Karen hadn’t had a Firefox expert to help her.