Posted
by
Zonk
on Tuesday February 14, 2006 @06:42PM
from the that's-what-they-all-say dept.

SenseOfHumor writes "The Firefox memory leak is not a bug. It's a feature! The 'feature' is how the pages are cached in a tabbed environment." From the article: "To improve performance when navigating (studies show that 39% of all page navigations are renavigations to pages visited less than 10 pages ago, usually using the back button), Firefox 1.5 implements a Back-Forward cache that retains the rendered document for the last five session history entries for each tab. This is a lot of data. If you have a lot of tabs, Firefox's memory usage can climb dramatically. It's a trade-off. What you get out of it is faster performance as you navigate the web."

1. set browser.sessionhistory.max_total_viewers to "0"
2. It does (try opening a huge Fark Photoshop thread, huge as in multiple hundreds of pictures; watch Firefox ramp up to 600 or 700MB of RAM consumption; close the Fark tab; and watch Firefox's RAM usage drop dramatically back to regular levels)

Type "about:config" (no quotes) into your address bar, then scroll down to the browser.sessionhistory.max_total_viewers setting, double-click on it, then change the number to 0 and hit OK. Any sort of Firefox setting like this is found in about:config.

An operating systems class might help you understand why memory usage meters are completely unresponsive in the down direction.

See, here's what happens: Firefox allocates memory for a rendered page. It already has 20MB allocated, all in 3 chunks. None of them has enough room for the single large allocation it needs, so the OS sets aside a new chunk of memory for the Firefox process. Now it's using, say, 28MB of memory, of which only 22MB is actually in use. Then it does a couple more allocations, some fairly permanent ones, and these get put in the newest block of memory. Then you close the tab. Firefox frees the associated memory, and the OS changes its tables for that block to reflect it. But there's still some live data in that block. So guess what? Firefox's memory usage remains exactly the same.

The solution? Use a GC system. Some garbage collectors (most) actually move objects to condense them in memory. This is one of the things that makes garbage collections noticeable if a lot has happened since the last one (it's gotta move a lot of RAM and change a bunch of links to said RAM). It becomes especially bad when you move into swap space ;). The downside? While GC advocates will often amaze you with the fact that malloc is not an atomic operation (it has a lot of work to do to allocate, more or less depending on the current situation of your memory chunks and the free memory on the system), malloc is still not nearly as costly as a garbage collection cycle. And free is atomic (at least, TMK, all a good implementation does is remove something from a data structure, unless it's the last part, in which case it also needs to mark that memory as free).

So, you see, no matter how few memory leaks Firefox has, it still won't drop in RAM usage every time you click close. If you want to prove memory leaks in Firefox, you can. Get yourself a memory debugger (such as valgrind) and run Firefox under it. Now, I'll warn you that this is harder than it sounds:
1.) Memory debuggers are about 100x to 1000x slower than your machine natively.
2.) Firefox is a script, not a binary; it sets up a bunch of stuff for the binary to run.
3.) Everything you see in the memory debugger is not necessarily a leak. Some of the leaks aren't even really leaks (it's generally no big deal to leak when you're exiting, because the kernel cleans that up for you).
4.) To get any useful information on the leaks (other than size) you'll need to have compiled with debug symbols, and you'll need the source code.

Go ahead, post your list of Firefox memory leaks. Then post your list of IE memory leaks. I bet both have some, but neither has anything major. And I bet it takes you a week to find them ;).

People have written garbage collectors for C++, and they work just fine. But they do not help with fragmentation, which is the problem you're describing. That requires a heap-compacting allocator (aka a "handle" allocator). Many languages with garbage collection also use a heap-compacting allocator. C++ does not, because of a low-level language "feature": pointers, and specifically, pointer arithmetic.

If an object moves in memory, then people have to be notified that it has moved, or they won't know where to access it. Languages like Java handle this behind the scenes; the system library tracks objects for you, and your program never knows (or cares) where an individual object is.

C++ allows direct access to system memory, and it tells you precisely where your objects are located. Programmers are then free to do all kinds of things like compute distance to other objects, or convert the location to a number and do arbitrary math operations on it.

When an object moves, anything that refers to it needs to be updated. Well, good luck figuring that out in a language with pointer arithmetic! The system would need to magically determine whether or not a numeric value was actually a memory location. And what if a program computed the distance between two objects, and later on used that distance to get from one object to the other? The system has no idea of what can be safely moved and what has to stay put. So nothing can ever be moved.

There are workarounds of course -- if you write a program with heap-compaction in mind, then you can use a "handle" system, where every object has an ID. You remember the ID, and to access the object, you ask the system for a temporary memory location. And as soon as you're done, you "forget" the memory location and let the system shuffle things around in memory. The next time you give that ID to the system, you might get back a different memory location, but you were already expecting that so your program doesn't mind.

But handle allocation is slower, less efficient, and more annoying to use than a traditional fixed-location allocator. You have to start your project with it in mind; retrofitting existing code to use a handle allocator is a giant timesink and prone to conversion errors. And if you don't mind the loss of performance due to using a handle allocator, why are you using C++ in the first place?

You use double-pointers rather than handles. No need to notify anyone.

I used to use this years ago on machines like the Archimedes that had little memory. In a modern paged system it's a nearly useless technique: the most you'll lose is a single page, even if there are large 'gaps' in the virtual address space.

A handle can be defined as a double pointer, and the locking operation could just be dereferencing it (assuming a single-threaded model of operation). I was speaking metaphorically about notification, not describing an actual event model ;). The technique is still very useful if you have a program that follows one of a few very specific allocation patterns. For example, consider a program that allocates many small blocks at once and then frees a large (but non-consecutive) percentage. The ability to compac

C++ does not, because of a low-level language "feature": pointers, and specifically, pointer arithmetic.

Pointer arithmetic is irrelevant here, because it's undefined behaviour to move a pointer outside the object through pointer arithmetic (exception: the one-past-the-end pointer, but that can be easily resolved by just allocating an extra byte at the end). Now pointers per se are relevant, because they are usually implemented as the direct address of the object pointed to. There is nothing in the C++ standard

I was not necessarily talking about Firefox, just addressing (hah) a misconception in the parent post: fragmentation is not related to garbage collection. I didn't want to get into a lengthy discussion of virtual addressing, page faults, etc. as well ;). But you're correct. Thanks to the rise of "smart" binned allocators like dlmalloc [oswego.edu], fragmentation is no longer the huge concern that it used to be with (for example) the basic Win32 heap API. Modern allocators are now reasonably smart about reusing best-fit

FTFA: "For those who remain concerned, here's how the feature works. Firefox has a preference browser.sessionhistory.max_total_viewers which by default is set to -1. [...] If you set this preference to another value, e.g. 25, 25 pages will be cached for every tab. You can set it to 0 to disable the feature, but your page load performance will suffer."

Thank you thank you thank you (sorry, no mod points, but you're already up to 5 anyway). I browse with a lot of tabs in Firefox, and with Firefox 1.5 the performance when a lot of those tabs are loading has been beyond horrible. Like several seconds just to switch tabs, and then actually trying to scroll...

If you are feeling generous, perhaps you also know how to shut off the new tab thumbnail "feature" when you've got images. 16x16 thumbnails of 4000x4000 images are nothing but a waste of CPU time and a vi

Thank you. My machine has 512MB of RAM (planning to upgrade soon) and I multi-task a lot, so when I'm running FF I'm almost always running 1 or 2 other programs at the same time. That's why its very high memory use becomes a problem. And with a fast connection like I have now, I can't tell that my page load performance is suffering at all.

But... what they should do is put this in the regular options menu instead of about:config. Lots of users don't even know to use about:config. I really like FF, but the "we

According to someone else that replied on the site, the bfcache is actually global and shared, so it's _not_ per tab. But they could also be wrong. It would seem ridiculous not to share the cache where possible. Often I find myself with multiple tabs open that share pages in history.

Or even better, limit the amount of memory FF uses. We can place a hard limit (100MB) and even have FF malloc the entire amount from the get-go for speed. It then uses the memory with no collisions or fragmentation while other heavy processes are running, and the other processes won't take a hit when you have 15 tabs open... especially on production Solaris and RS/6000 machines. (i know i know bad idea).

For those who remain concerned, here's how the feature works. Firefox has a preference browser.sessionhistory.max_total_viewers which by default is set to -1. [...] You can set it to 0 to disable the feature, but your page load performance will suffer.

"(studies show that 39% of all page navigations are renavigations to pages visited less than 10 pages ago, usually using the back button)"

So why is it that when I open a new tab I have to manually cut/paste the same address into it? For example, when replying to an article on /., if I want to quote the summary, I need to hit the back button to copy the text I want, then forward again to paste and type. Why can't I hit Ctrl-T and get a new tab with the same page I'm currently on, then hit reply, and anything I want

It's not the most ideal solution, but I can drag and drop the favicon (the icon in between the Home button and address in the address bar, with default toolbar settings) to my tabbar to effectively get a duplicate of the current page. (Tab Mix Plus might be the cause of this feature). I don't have a Firefox that isn't loaded with Tab Mix Plus around, but I don't think you need the extension to do this.

Tab Mix Plus also has an option to always open the current page in a new tab.

"Why can't I hit ctrl-T and get a new tab with the same page I'm currently on, then hit reply and anything I want to quote I can just switch tabs instead of screwing around with back/forward and scrolling."

I just replied to your post using Firefox. I middle-clicked on "Reply to This," which brought up your post by itself in its own tab. I copied and pasted some of your post into my reply, hit "Submit," and went on my merry way. Isn't that simple enough? Although I would like to see a /. eq

At least in the OS X version, command-clicking (and probably middle-clicking as well, but I haven't got a mouse connected) the back button solves this problem nicely. I would guess that middle-clicking the back button works under other operating systems as well. So just click "reply" first, then middle-click the back button. :)

I regularly use middle-click to open a link in a new tab in the first place. However, the Mac OS X version of Mozilla lacks this option, expecting me to configure my mouse to do a command-click on middle-click instead to get the same functionality I enjoy on Linux.

Usually the only time I use a browser under Windows is for Windows Update.

And just testing right now, middle-clicking on the Back button does nothing for me under Linux. The button reacts visually, but otherwise nothing happens. Maybe it is another one of those Firefox features not found in Mozilla?

That would be a nice user-configurable option. As it is, I don't find it too much trouble to hit shift-tab a couple of times, ctrl-C, ctrl-T, ctrl-V, enter to get the same page in a new tab, but it would save a few steps to have this choice. But I definitely wouldn't want new tabs opening up with the current page and no option to turn this feature off.

Hard choice, but I'm seriously looking at Opera after seeing this. I have a gig of RAM and it's still laggy. I was wondering why the 'leak' was so high; there's no way you could put in that much bad programming to make a program eat memory like a fat kid in a pie shop.

I could never stand behind a company like that, and refuse to use Opera products until he makes good on his word. You can't just throw statements like that around. Browsers designed by liars are dishonorable browsers.

...scientists have determined that the human appendix is not an evolutionary anomaly as previously thought, but an intelligent design feature aimed at keeping the humans guessing as to its actual function.

And in totally unrelated news, the Mozilla foundation recently announced that their flagship browser Firefox shall soon be renamed to Bigfoot, to reflect the software's large memory footprint.

The heap, where dynamic allocations occur, is only allowed to grow or to be truncated. An application cannot release memory in the middle of the heap without also releasing the memory at the end of the heap.

Those 9 pages worth of memory aren't being used, but it's impossible to release them back to the OS.

Thankfully, there is some good news: when Firefox needs to allocate more memory, it can and will just reuse those 9 unused pages instead of allocating more memory from the OS and growing the heap.

The best solution to this problem is to use a compacting garbage collector. Which is something that Java and C# and other higher-level languages can easily make use of (and many do use them), but which C and C++ can't really make use of given the complete lack of compiler support. That's one reason why a Java or C# app can actually out-perform a similar C/C++ app, especially with a good native-code compiler and a library implementation with a modern GC.

but which C and C++ can't really make use of given the complete lack of compiler support.

I wouldn't know about C, but this statement is utterly false as applied to C++. Replacing the default new and delete routines is perhaps not for the inexperienced C++ programmer, but to say that there's a complete lack of compiler support is simply wrong. It is true that out-of-the-box C++ does not have a compacting garbage collector, but one can certainly be written (and used, of course) with any conformant compiler.

> Wouldn't all pointer references then have to go through some kind of lookup table, so that the objects could be relocated by the runtime without breaking them?

Only on a computer without virtual memory. In a PC (which *has* virtual memory), you just punch holes in the memory.

What happens is a process gets an "address space", into which pointers can point, but any given address may not map onto some real storage. The process asks the operating system to map a range of addresses onto real storage, which the operating system will try to map to real fast memory when it thinks it will be used at any moment. When the OS figures the memory won't be needed for a while, and something else needs some memory, the OS copies the data to disk and redirects the mapping to a proxy that will pull the data back into memory when the process tries to use it again.

When a process knows that it won't need a section of that real storage, it can tell the operating system to unmap it from the address space.

There are various other things that go on, but that's the simple story. From a figure posted in an earlier message, it seems that Opera does pretty damned well (in comparison to most modern programs) with just the simple story, not having to rely much on nasty unreliable heuristics. By that I am impressed.

That's one way, but you'd have to somehow instruct the compiler to use that for every pointer dereference. The easier method is to go in and change the values during compaction. Compaction is also known as stop-and-copy; it starts with a live set, everything you can reference from the stack, then copies over only the live objects while modifying every pointer that uses them. It's messy but it works. Allocation is dead simple and fast. There's no fragmentation. And the runtime is limited by the live set rather than the heap size. There is a huge downside, however.

I wouldn't recommend mixing anything resembling C pointer maths with compaction, since it's incredibly difficult to tell what's a pointer and what isn't (in fact, without modifying the compiler, it can't be done in C or C++). For this reason, the Boehm collector (a collector that replaces new and delete) goes for the mark-and-sweep method instead of compaction. Because you don't move objects, you don't have to worry about figuring out pointers. Boehm's collector is also called conservative, not only because it doesn't modify live objects, but also in that it treats any data on the stack or in the heap as a potential pointer. If the data points inside the heap, the object containing that address is marked. This can lead to false positives on occasion, but there's no helping that without any support from the compiler (again contradicting the grandparent). The good news is that a false positive isn't going to cause direct harm in mark-and-sweep. All that happens is that space that could be used isn't; Boehm claims this is irrelevant in today's operating systems with virtual memory, although I doubt you'd see an entire page's worth of false positives. Certainly, I can't do any better than him.

In language R&D labs, where people are paid quite well to think hard and long about things, they tend to use both approaches in what's called a "generational" collector. Young objects can be copied or collected as needed, while older objects are mark/swept away as needed. This works because old objects are much more likely to stay than new ones. Last I knew, both Java and C# use generational techniques, because it makes sense in nearly every case. However, as I described above, C++ doesn't have that, and even those libraries that replace new and delete have conventions and costs associated with them. I certainly wouldn't try to take Boehm and pigeonhole it into Mozilla. And even if you did, it still wouldn't solve the compaction problem. All you can do is hope the virtual memory manager is doing its job well. Even though the application and garbage collector are more likely to know what's useful than the VM manager.

"I wouldn't know about C, but this statement is utterly false as applied to C++."

No, it's not false; it's true.

Replacing the default new and delete routines is perhaps not for the inexperienced C++ programmer, but to say that there's an complete lack of compiler support is simply wrong.

And? What do you mean by compiler support for heap compacting (or GC)?

Q: What does replacing new and delete with your own implementations have to do with garbage collection?
A: Nothing.

Q: How would new and delete of class A be able to compact a heap by moving allocated instances of class B down?A: Difficult!

Q: So if you now add a class C, do you rewrite A::new and B::delete to also cope with class C instances?
A: I assume you understand that EVERY delete of EVERY class needs to know about EVERY other class to be able to compact the heap, yes?

It is true that out-of-the-box C++ does not have a compacting garbage collector, but one can certainly be written (and used, of course) with any conformant compiler.

Indeed, but not merely by replacing operator new and delete. Existing C++ garbage collectors are limited to more or less conservative garbage collecting; see e.g. Boehm's C/C++ garbage collector. And the whole point of garbage collecting, if you want to start nitpicking, is that you don't ever call delete.

But you pointed out the flaw in the wording of the article - this IS NOT a memory leak, just inefficient use of the heap.

I thought the definition of a memory leak was an application that kept allocating memory from the OS as it ran, not an application that asked for a chunk of memory and just reused it inefficiently? (If I'm wrong, someone please correct me.)

You can decommit the page, but keep it reserved. This frees RAM, decreasing the process's memory usage, but still takes up some of its address space. You MUST coalesce free heap blocks in this case, because all data in those pages is lost. This also requires extra housekeeping.

mmap() can do this, but on many systems [s]brk() cannot. brk() is also a lot faster than mmap().

This is really moot on most systems; don't do a lot of little allocations that you're going to keep around for a while and DO use pooled al

I suspect that some of it may be due to talent. Perhaps Opera programmers are just more talented on average than the typical FF dev. I also suspect that it's simple goal orientation. The Opera company works on building a new browser that's better, with new features. The MozDevs collaborate on who's got which tickets and how they'll be integrated into the source and such.

This is proof positive, I think, that OSS != the best option in all scenarios. Opera consistently beats FF out on features, secur

Well, you just picked the worst reason: Opera is great when it comes to performance, but Firefox plus extensions consistently beats Opera and IE when it comes to features. I can have the features I want, the extensions offer way more features than Opera has, and if I don't want them, I don't need to keep the extra UI involved in those features.

about:config [about], then search for browser.sessionhistory.max_total_viewers and set it to 0. This will be 0 pages in the cache per tab. You will get a reload slowdown, since FF will be going out to the web. You can manually set this to 2 or whatever you want. By default FF will cache up to 8 pages per tab with 1 gig of memory or more.

For those who remain concerned, here's how the feature works. Firefox has a preference browser.sessionhistory.max_total_viewers which by default is set to -1. No more than 8 pages per tab are ever cached in this fashion, by default. If you set this preference to another value, e.g. 25, 25 pages will be cached for every tab. You can set it to 0 to disable the feature, but your page load performance will suffer.

For those that don't know or remember, the preference is accessed by typing about:config

Well would you look at that. After R'ing TFA I found the option right there, taking up half the article: "For those who remain concerned, here's how the feature works. Firefox has a preference browser.sessionhistory.max_total_viewers which by default is set to -1. When set to this value, Firefox calculates the amount of memory in the system, according to this breakdown:"

RAM    Number of Cached Pages
32MB   0
64MB   1
128MB  2
256MB  3
512MB  5
1GB    8
2GB    8
4GB    8

Ben, those numbers are NOT per tab. The bfcache is global; there are never more than 8 pages total in bfcache (and you need to have 1GB of RAM for this to happen). Most users have 3 or 5 pages in bfcache at any given time.

The point of bug 292965 was that the pref should be global, not per-tab. Is that not working correctly?

(Boris and David are back-end developers; they have much more working knowledge of this than Ben does.)

Also, there are actual memory leaks in Firefox. See this weblog post [squarefree.com] about progress on that. However, as that weblog post says as well, most excessive memory usage that people are seeing is entirely due to faulty extensions.

...for a half year or a year. I don't need new features, I'm currently happy with the ones I have and I'd prefer the current features working securely, in a speedy fashion and mostly without bugs. This time period would also give enough time for extensions to mature more.

Before someone jumps at my throat: it's just a description of what I'd like to see, but of course it's all up to the developers; they decide what to code and do with their time. It is just simple user feedback.

The Firefox CPU hogging bug makes a computer unusable until all
Firefox windows and tabs are closed. Basically, Firefox uses first maybe 10%,
then maybe 20% of the CPU, and, as Firefox windows and tabs are opened and
closed, continues taking more of the CPU time until Firefox is closed. This
CPU usage is with NO Firefox activity, or any activity of any program.

This bug is more than 3 years old. It is extremely difficult to
characterize; no one has succeeded yet. Here are some clues:

Somehow Thunderbird and Mozilla share this bug. Sometimes when Firefox
is taking say, 94% of the CPU, and Firefox is closed completely, Thunderbird
or Mozilla will begin using a lot of CPU time. Very weird, but it often
happens.

Firefox 1.5.0.1 is much worse than 1.5, which is worse than earlier
versions. This suggests that there is some resource in Firefox that is being
more overused as features are added.

The CPU hogging bug continues unchanged when Firefox 1.5.0.1 is
installed with a clean profile and no extensions.

Too many mouse clicks too closely spaced will often increase Firefox's
CPU usage, or sometimes cause it to crash.

Opera has none of these problems. So, the quote from the Mozillazine blog
shown below, although it is typical, is not supported by the
facts.

Whatever causes the CPU hogging bug is definitely associated with extreme
memory use. No doubt there are leaks, but this is not a leak, since it is not
necessarily associated with greater use of Firefox.

Users often report that just leaving Firefox open overnight causes CPU
hogging and extreme memory use.

The problems are the same in Mozilla browser.

It's good to test Firefox with a laptop in a quiet environment. When you
hear the laptop fan begin to run while there is no activity, you know Firefox
has begun to suck CPU cycles.

Putting a computer into standby or hibernation often makes the CPU hogging
bug much worse. That's why Firefox users sometimes just leave their computers
on.

When a computer takes a long, long time to start from standby, you know Firefox
is taking CPU cycles. What about coming out of standby makes Firefox unstable? No
other program has that problem.

Quote from the blog linked in this Slashdot story About the Firefox
"memory leak" [mozillazine.org]: "A lot of people complain about the Firefox "memory
leak(s)". All versions of Firefox no doubt leak memory - it is a common
problem with software this complicated."

No other program in common use is so buggy. The problems in Firefox
are not "common".

Another quote from the linked Mozillazine blog: "What I think many
people are talking about however with Firefox 1.5 is not really a memory leak
at all. It is in fact a feature."

Mozilla developers have been denying that there is a serious problem
for more than 3 years. It seems that it would be less work to fix the problem
than to undertake a cottage industry of trying to convince people they aren't
having problems. Mozilla developers have been
impeding characterization by marking Bugzilla bug reports of these problems invalid.

However, it is clear that it would take a serious scientific
investigation; this is not an easy bug to characterize.

I am using: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.7.12), and I cannot duplicate any of those issues. When clicking the forward and back buttons quickly, I managed to spike at 60% for a brief moment. I have had it running all day. I do not use the feature that lets some of it stay resident so opening it up is quicker, so maybe the problem is there. Any clues on other things I can try to duplicate this issue?

to address this specific issue, more memory does not equal a memory leak. Yes, the caching mechani

Assessment: This Firefox outperforms both Opera and Safari in memory usage, and is faster than Opera on challenging pages. However it has the least favorable idling habits, starting at 2% here and would climb to 4% after several days of intensive use. FF 1.5.0.1 memory use would climb to about 100M for the

firefox currently sitting at 0% cpu usage. perhaps you should upgrade your pentium 133 :). Seriously, the only thing i could think of is that if firefox ran out of ram and had to start using the pagefile, that would eat up tonnes of CPU. This would also affect other programs on the system. Are you sure you have enough ram in the machine? I assume you can replicate this bug on more than one machine, right?

I've had some sites crash Firefox repeatedly, but I can't think of any examples offhand.

firefox's memory usage has always been a thorn in my side. I tend to average around 20 to 25 tabs open, usually while I'm running other ram hungry applications. Firefox generally was eating up about 200-250 megs of ram on my machine (and I've seen it go as high as 600 megs). After changing the browser.sessionhistory.max_total_viewers to 0 and running "top" firefox seems to be using about 46 megs of ram right now. It also doesn't feel particularly slower than it did before. I have a feeling that the benefit of caching so much was actually having a negative return after a certain point because the machine was so starved for ram.
On a side note, if anyone is like me and looks in about:config for browser.sessionhistory.max_total_viewers and doesn't see it, you have to actually add the line. Right click and choose "new" then type in "browser.sessionhistory.max_total_viewers" and then 0 (or whatever you like).

Seriously, I think 1024MB of ram doesn't cut it these days. Maybe we should just accept 4gb of ram will be the norm in 2007-2008. I seriously could use it with other memory hungry apps that are sluggish *coughs* Illustrator *coughs* Indesign *coughs*.

I think this submission is confusing two points. First of all, is this really a memory leak? A program that uses a lot of memory is not necessarily a leaking program. A memory leak is a programmatic error where memory is allocated but never freed, even when there's no way to use that object again. As the program continues to allocate memory, the heap size of the process increases until eventually the OS terminates the process (e.g., the OOM killer). Actually, many applications you normally use leak memory, but as long as they don't waste a ridiculous amount of memory most people don't care, especially since most process lifetimes are relatively short (compared to a daemon process like Apache), and after termination the OS reclaims all the program's memory, leaked or not.

What is being described here sounds much more like a cache of recent pages, which in my opinion is perfectly sane for a browser. Sure, maybe the cache is a bit overzealous, but even if that's the case, just disable it; worst-case scenario, you edit the source. But otherwise, this is definitely a feature. I can promise you it's much more programming effort to save old pages for a quick redraw than to free the old page and replace it with the new.

So I guess the real question here is, "is it right for Firefox to use so much memory?" My answer is yes. It is not a memory leak; it seems like a perfectly valid design decision. But if you disagree, old versions of Firefox still work great (I still haven't upgraded myself).

I'm not buying this. Even if I close all of my tabs after several hours of use, firefox.exe can still be taking up 100MB of my RAM. The only way to reclaim the memory completely is to shut down Firefox entirely.

Firefox crashes when two browser windows are making synchronous XMLHttpRequests. I have experienced this under Linux - I have no idea whether it is the same under Windows. Basically, under Linux all Firefox windows run in the same thread, using a scheme of cooperative multitasking.

So far so good. The bug appears when two separate Firefox windows are making periodic synchronous XMLHttpRequests. When such a potentially lengthy task has to be executed synchronously, Firefox creates a new "nested" event queue. If two (or more) browser windows are doing this at the same time, new event queues keep being created and eventually (within 5 minutes) the application core-dumps.

I found this by recompiling Firefox with debug information and stepping through it in a debugger. Even if my interpretation of what happens is not completely correct, the fact remains - a simple piece of JavaScript can crash Firefox, closing all open browser windows.

The solution is to always use asynchronous XMLHttpRequest (which is a better practice anyway) and to hope that the same problem doesn't appear in other places. Still, it is troublesome.

Memory isn't an unlimited resource you just hoard whenever you think you need it. Right now my instance of Firefox is taking up 128 megs! I've seen it go up to 256 megs before. This is simply insane. I've seen people whose computer performance has gone down the tubes because Firefox is taking up all the memory (and these are machines with 512 megs of memory, not exactly tiny). What I'd like to convey to the Firefox devs is this: your application isn't the only one running on the system. Play nice and don't be a hog.

With the number of people complaining about this (and the number of people who don't even KNOW to complain), isn't it a safe bet that you've made a mistake in the number of cached pages?

The cache feature is nice, but why spread it across every tab? If I have 20 tabs open, I'm not going to be constantly clicking the back button in each of them. Why not clear the cache on tabs that haven't been accessed recently and only keep it for tabs actively being used? Often when I open new tabs I just want quick access to that page, or to use it as a temporary bookmark - not to navigate back through the path that got me there.

Firefox 1.5 implements a Back-Forward cache that retains the rendered document for the last five session history entries for each tab. This is a lot of data. If you have a lot of tabs, Firefox's memory usage can climb dramatically. It's a trade-off. What you get out of it is faster performance as you navigate the web.

The only problem is there were bugs filed for memory leaks long before Firefox 1.5 and the Back-Forward cache were implemented. Maybe this feature does contribute to Firefox's large memory footprint, but to say that this feature is the only reason and that there are no leaks is simply false.

If Firefox is caching these pages, why doesn't it cache POST results? When I hit back to return to a page obtained via POST, FF refuses to show it to me, asking me to either cancel the action or resubmit the form.
JUST SHOW ME THE GODDAMN PAGE, DAMMIT! Once the page lands on my machine, regardless of how I obtained it (i.e. via GET, POST or whatever), just show it, or at the very least give me the option of seeing the possibly expired page. Let it be my decision.

Every time I close all the tabs in my browser session except two or three, then check a few hours later and see that Firefox is sluggish and hogging a few hundred megabytes, I go to the police and ask them to take me into protective custody. I'm obviously a danger to myself and others. When I'm not responsible enough to seek psychiatric help, I just stare at my monitor and tell myself, "You only see three tabs there, but that's because you're crazy. You still have all those dozens of porn tabs open. You just can't see them because you went blind masturbating."

Seriously, what's with all the song and dance? Firefox obviously has at least one problem, probably several, that leads to bad performance for many users, under certain circumstances. Call it a UI problem, call it a documentation problem, I don't care, just call it a problem. Don't call it a feature or a misunderstanding. Don't pick a feature that can't account for many of the reported problems and say, "Aha! This is THE Firefox memory leak that's bothering everyone. See? It's a feature!" The denials and talk-arounds on this issue are what you would expect from a political party, not an open-source software project.

Of course, I only know all this because I use Firefox. It's the best. The memory problems would only be a minor annoyance if I didn't have to constantly read about how I'm crazy or stupid.

First off, I like Firefox and it's my primary browser. HOWEVER, calling a bug a feature reminds me more than a little of a certain other company I could mention (and several Dilbert cartoons).

If this feature is for my benefit, then let me decide whether to use it or not. Apart from that, it does not explain why my memory usage goes up over time when I leave Firefox idle with only one window open on a simple HTML page...

Stop hiding behind feeble excuses and actually work on reducing Firefox's footprint... FF is supposed to be a lightweight alternative to the usual browser bloatware - it is failing at the moment (rather like my spelling ;) ).

I find that if I set browser.sessionhistory.max_total_viewers to 2, it uses less memory than if it's left at the default (I have 1GB RAM). If I use Fasterfox's Clear Cache function, it compacts memory usage even more. There is still some leakage, but not as bad as with the default of -1.

First everyone complains that they should be as fast as possible to compete with IE, which is pretty fast.

When they do, they get slapped for using too many resources?

But I see a good point. I would also like to see new development halted for a while to catch bugs and security problems. This would also help plugin developers catch up. New features could be developed as plugins anyway.

It's a non-issue. As explained in a note at the end of the article, it's a per-session setting, not per-tab, so the entire article misrepresented the "feature".

"Edit: In the comments, Boris and David pointed out that I misread the code, and that this is a global preference so that there are no more than 8 cached pages for the entire session, not per tab. My initial posting had claimed that it was per-tab. Oops!"

If Firefox has memory leaks (and I think it does), this is not what is causing them. If the cache were per-tab, as the article originally claimed, then it would have been a problem: with up to 8 history pages cached per tab, memory usage would climb at an alarming rate as you open more tabs.

Why is everyone bitching about this? I hate waiting for any refetch or rerendering when I use the Back button; I want it to be instantaneous. That page was already fetched and rendered, so having the browser keep it around for when I go back is exactly what I'd want it to do.

You've totally missed the point. People aren't bitching because the back and forward buttons are faster. They're bitching because the memory used for the fast back/forward is never released. Because Opera implements the sa

Virtual memory is not a carte blanche for memory hogging. As you should know, memory hogging will result in degraded performance.

Assuming that users have unlimited resources is exactly why Mozilla is barely usable on Windows 95-ME - especially when you have Slashdot Moderator access.

Who cares if Firefox or any app is a bit of a memory hog???

As long as it is just using the memory as cache space and not accessing the memory randomly, it'll be paged out into virtual memory as needed.

In an ideal situation, that would be correct.

However, the operating system does not know which memory is currently "in use" and which is "in cache" - in fact, it's quite easy for an "in use" allocation to be physically sandwiched between two "in cache" entries. Because of this, you will see a sudden loading delay if you do plenty of other tasks in the background and then suddenly switch back to Mozilla.

Small applications, being small, generally don't have to wait one or two seconds to recover from being paged out. Since Mozilla allocates its cache in ordinary heap memory, it will have to wait those couple of seconds.

assuming you're using an OS with decent vmem support.

An OS with decent vmem support lets you map files into memory. This results in no swapping at all - only periodic writes to the hard drive, with the file loaded into memory as required. If another application needs more memory, the mapping is simply discarded, with no need to write the contents of memory out first.

An application that doesn't take advantage of memory maps is no better off than one on an OS with shoddy vmem support. (Of course, it can simply use its disk cache for the same effect.)

Give up and use Opera. Firefox is profoundly broken in combination with (a) the memory leak being discussed and (b) memory leaks in plugins. The second seems to include even Flash, where some Flash pages appear to be cached in an active state and sit there using CPU cycles as well as memory. I think we've now got to the point where Firefox can be seen as a nice try, but no cigar. Opera, on the other hand, just works - and incredibly, it's quick and lean too.