Posted by Unknown Lamer on Wednesday January 25, 2012 @01:10PM from the software-engineers-hate-sharing dept.

An anonymous reader writes with news about work on Mozilla's JavaScript engine. Quoting Mozilla engineer Luke Wagner's blog: "With web workers in separate runtimes, there were no significant multi-threaded runtime uses remaining. Furthermore, to achieve single-threaded compartments, the platform features that allowed JS to easily ship a closure off to another thread had been removed, since closures fundamentally carry with them a reference to their original enclosing scope. Even non-Mozilla SpiderMonkey embeddings had reportedly experienced problems that pushed them toward a similar shared-nothing design. Thus, there was little reason to maintain the non-trivial complexity caused by multi-threading support. There are a lot of things that 'would be nice,' but what pushed us over the edge is that a single-threaded runtime allows us to hoist a lot of data currently stored per-compartment into the runtime. This provides immediate memory savings."

However, memory usage is one of those moot points. Often (not all the time) you face a trade-off between performance and memory usage. Sometimes it is better to keep a bunch of data in memory for quick reference than to calculate it over and over again, or worse, fetch it from slower storage such as a disk. There are things you can do to optimize memory usage so it isn't as wasteful, but if your goal is a low memory footprint, chances are you are going to sacrifice performance to use less memory.
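The trade-off described above can be sketched with a simple memoizer (a hypothetical example, not anything from Firefox itself): you spend memory on a cache so that repeated calls skip the recomputation.

```javascript
// Sketch of the time/space trade-off: memoizing a function spends
// memory on a cache to avoid recomputing results.
function memoize(fn) {
  const cache = new Map();                 // memory cost grows with inputs seen
  return function (x) {
    if (cache.has(x)) return cache.get(x); // fast path: pay memory, save CPU
    const result = fn(x);
    cache.set(x, result);
    return result;
  };
}

// A deliberately slow recursive Fibonacci stands in for "expensive work".
function slowFib(n) {
  return n < 2 ? n : slowFib(n - 1) + slowFib(n - 2);
}

const fastFib = memoize(slowFib);
console.log(fastFib(25)); // 75025 (computed once, then cached)
console.log(fastFib(25)); // 75025 (instant: read back from memory)
```

Cutting the cache shrinks the footprint but makes every call pay the full CPU cost again, which is exactly the tension the comment describes.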

Which is why they always cream the competition in the benchmarks? Seriously, the only time I've ever seen it waste memory was during a session where Silverlight crashed. In general it tends to use very little in the way of memory.

OTOH, given your post, I can only assume that you're using lynx, tons of extensions or are some sort of troll.

Of the 38 bugs listed there, about 5 were not memory leaks according to the subject. How many of those were because of addons?

Then, clicking on the first few bugs:
- ONE GUY had addons and the dev requested a no-addon test. No reply.
- This one looked more promising, but when the dev said "To find out whether it's a real memory leak or not, run the 'prstat' command in a terminal. If the value of the 'RSS' field of the firefox row keeps increasing, it's a memory leak."... No reply.
- ONE GUY had a problem spamming the same URL into the URL bar. "Maybe have to close it" sounds like he wasn't even sure. Not a typical use case. (Don't spam the URL bar with the same URL.)
- Again, one guy.
- "The numbers you quote are not exceptional at all. They're probably caused by the malware-database that is updated in the background"
- "The fix in bug 426236 fixes all the leaks, but it doesn't touch controllers..." Fixed in 2008, but left open "just in case".

"Finally, Firefox 4 had a new HTML5 parser. It had a bug which meant that every time you set an innerHTML property on an element, some memory would leak. And that’s a pretty common operation on many websites. "

Out of curiosity, is that actually a significant memory leak? Yes, it's extremely bad form and the kind of thing that should be fixed ASAP. But did it result in a typical leak rate of a few KB per day, or would it lose 100MB an hour? Without that kind of information there's no way to tell if that's a real problem that affects people, or something more theoretical that should be fixed because it's the right thing to do.

Yes, I would say so. innerHTML is used heavily on many websites; it is especially bad when you have a site with dynamic content being refreshed on an interval. But that is beside the point: all memory leaks are bad. When your browser runs constantly for many months, any memory leak wreaks havoc. In this day and age, memory leaks in your application are a sign of bad development practices and not enough testing before releases. The Mozilla team has a habit of focusing on useless stuff like tab
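The usage pattern in question can be sketched like this (a hypothetical example: a tiny stub object stands in for a real DOM element so the sketch runs outside a browser):

```javascript
// Sketch of the pattern that made the FF4 innerHTML leak bite:
// rewriting a node's contents on a timer. The stub below mimics the
// innerHTML setter; in Firefox 4 each such set leaked a little memory.
const widget = {
  _html: "",
  set innerHTML(v) { this._html = v; },   // the leaky operation in FF4
  get innerHTML() { return this._html; },
};

let ticks = 0;
const refresh = setInterval(() => {
  // Each refresh replaces the subtree; under the bug, every call leaked
  // slightly, so long-running dynamic pages bled memory steadily.
  widget.innerHTML = "<p>refresh #" + ++ticks + "</p>";
  if (ticks === 3) clearInterval(refresh);
}, 10);
```

A page refreshing content every few seconds performs thousands of such sets per day, which is why even a small per-call leak added up on long-lived tabs.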

In all seriousness, they need to do something about the extensions. Refuse to host leaky ones or something. Extensions can't be Firefox's killer feature if they make it eat all of your RAM.

I can't agree more. Things are much better than they used to be, but oofda. So much for 640K -- FF is currently using up almost 1GB with about a dozen tabs open, and with AdBlock Plus, NoScript, IE Tab2, Firebug, User Agent Switcher.

In all seriousness, they need to do something about the extensions. Refuse to host leaky ones or something. Extensions can't be Firefox's killer feature if they make it eat all of your RAM.

We are so on this. In fact, add-ons are the majority of what we talk about at MemShrink these days.

In theory, leak checking is now part of the addons.mozilla.org review process, so new add-ons will all undergo a (very basic) leak check before they're approved. I'm sure we'll have to tweak as time goes on, but it's a start.

Refusing to host obvious leaky ones will help, but I'd like to see some sort of memory/performance meter done well. Make it dead simple to see which extension/plugin is leaking. If one is behaving worse than some threshold value, encourage the user to look at the bad boy. Preferably with a solution available (reload or delete plugin/extension).

We live in a world where 8GB of RAM costs $50. I'm not sure how much I actually care whether Firefox uses 500MB vs. 2GB anymore.

When my FF 3.6.3 gets above 1GB of virtual memory, it becomes a sluggish pig on my 8GB system. Frequent half second pauses. Characters blurt out ten at a time when I'm typing into a simple web form. I've always assumed this was a GC gag of some kind with worse than linear scaling as memory fragments.

If it was using 2GB and never slowing down, I'd write it off as the cost of having a plug-in architecture. I have a lot of plug-ins. That's the whole point.

I'm not sure whether you are aware of it, but there's this cool way of replacing your application with a newer version (new: less than two years old). For free! It's called updating. I'm aware it's a radical concept, but try it out, it's pretty cool.

I've finally figured out the cause of [most of] the pauses. It's the session restore feature that saves the browser state, cookies, and session data every 10 seconds (in JSON format, no less). I've turned it off and the pauses are now almost entirely gone.

browser.sessionstore.interval = 300000

browser.sessionstore.max_tabs_undo = 0

browser.sessionstore.max_windows_undo = 0

Naturally, this will remove the feature that restores your tabs in the event of a crash, but seeing how little Firefox blows up on

The problem is that two of my otherwise favorite applications, Firefox and Crashplan, each gobble up memory until my system shows a spinny beachball of death every time I try to do the simplest thing. Even when your system has plenty of memory, performance suffers when applications start hoarding gigabytes of it.

Sorry, "leaks memory like a BP pipeline" sounds like the best description for a browser which seems to absolutely refuse to free up RAM used by old images loaded by Javascript that have since been kicked off the page. I can set up a timer to reload an image every, say, half hour* (think "weather report precipitation map" or "webcam image") on a machine that should be up 24/7, come in the next day, and have Firefox's "This page has a script that is not responding" popup because the OS was too busy thrashing swap after physical RAM filled up and Firefox thought it was the script's fault. It's not often I see the Mem and Swap meters in GKrellm2 solidly maxed out. For debug purposes, I can have it reload that image every five seconds and watch the memory steadily creep up every five seconds without ever doing anything resembling GC. Of course, if I close that tab, the memory returns instantly.

Now, you might say that for a kiosk that should remain up 24/7 like that, I should consider a different means of presenting the data. And ultimately, I did consider a different means. Because neither Chrome/Chromium nor Opera have this problem. Using the exact same script on both browsers, once it reloaded the image, the old one was booted out of memory immediately, or at least quickly enough that any extra memory use was marginal and incidental, and certainly not to the point where it would suck down all of swap like Firefox did. In fact, this script is still running on a kiosk here, it has been for a couple weeks straight now, and there's no memory wasting in sight. Firefox wouldn't have lasted the first night without manually reloading the entire page.

So yes. It's Firefox. Firefox leaks memory. A lot. It does this due to very poor cache decisions and inferior GC techniques. Period. This has been a known problem for some time; a cursory glance through Stack Overflow will find numerous questions regarding this exact situation and Firefox, none of which have conclusive answers besides "stop using Firefox". And the only common thread in all of them is Firefox. The problem is Firefox. Firefox is the problem. It leaks memory.

*: Note that this is using the trick of appending the image's URL with a dummy timestamp variable to trick Firefox into not just loading the old image from cache despite pragmas and meta tags telling it not to. Point still stands, though: Chrome/Chromium and Opera understand enough to unload the previous image from RAM with the exact same script and usage.
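The footnote's cache-busting trick can be sketched as follows (`bustCache` is a hypothetical helper name, not a real API):

```javascript
// Sketch of the cache-busting trick: append a throwaway timestamp
// query parameter so the browser treats each reload as a brand-new URL
// and cannot serve the stale image from cache.
function bustCache(url) {
  const sep = url.includes("?") ? "&" : "?";
  return url + sep + "_ts=" + Date.now();
}

// In a browser this would drive the kiosk refresh, e.g.:
//   setInterval(() => { img.src = bustCache(baseUrl); }, 30 * 60 * 1000);
console.log(bustCache("http://example.com/radar.png"));
```

Each call yields a distinct URL, which is what forces the fresh fetch; whether the browser then releases the previous image is exactly the behaviour the parent post is complaining about.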

Technically speaking, I don't think that fits the criteria of a memory leak. As I have always understood it, a memory leak is when a piece of memory has been allocated by the program, and then the program drops the reference to it without releasing it. This memory is then 'orphaned' and can't be found or released by the program.

In any case, I think you have accurately connected it to garbage collection and cache management, and it should be fixed if so. So your point still stands. :)

It's only a GC and cache problem IFF the GC actually knows the memory needs to be freed. If the GC doesn't realize that the memory needs to be freed (even though it's not being used), it's a memory leak.

The garbage collector isn't some magical plastic memory cover that makes your program bulletproof against memory leaks. Just because you don't have to manage your memory in Java doesn't mean a shoddy programmer can't write something that'll take up more and more memory, if only because he neglected to design the program in such

Benchmarks are one thing, real life is another. I'm having to restart Firefox on a regular basis on my BRAND NEW EIGHT GIGABYTE LAPTOP. That's right:

In an age in which 1G netbooks are extremely common, I have eight times as much memory

It's a new laptop = fresh install of everything. The only thing carried over from my old machine was via Firefox Sync

So if you're about to whine "But Squiggy! It must be your profile! And you need to get with the program and have 4G of RAM", there's your answer. I don't want to hear it.

I really don't understand people like you who insist there isn't a problem when:

There's a massive groundswell of frustration and anger about the issue

Slashdot posts another "Mozilla reports - we found the problem! Next release will not have those memory problems!" article EVERY MONTH

Usage of CHROME, which in every other respect is an inferior browser to Firefox, is going through the roof

You think we're making it up? You think everyone's just switching to Chrome for the hell of it? Clue: they're not switching because it's more compatible, or more user friendly, or has more features. Because nobody outside of a Google diehard would ever argue such a thing.

We use Firefox for a bit. Some time goes by. Maybe we launch another application. Perhaps we view a PDF. And then it starts. It takes a second or more for Firefox to notice we clicked on something. The scrollbar is no longer real time. Switching a tab causes nothing to happen for ten seconds. We try closing tabs. We go to "about:memory" and hit every button. It seems... slightly faster. Or was that our imagination? Hmmm, it's gone slow again.

I'm giving up. I downloaded Firefox 3.6 from Mozilla's website last night. I'm going to make it my default browser.

I'm giving up. I downloaded Firefox 3.6 from Mozilla's website last night. I'm going to make it my default browser.

You know they're forcibly upgrading FF3.6.x to FF12 in April, right?

And that's when I'll bother to switch to another browser. I consider FF8 to be different enough (in ways I don't particularly like) to essentially be a different browser. If I'm going to go through all that trouble, I may as well switch completely. Chrome seems to be out, as (1) I don't particularly trust Google any longer, and (2) it seems to take a lot of RAM on my system (no, I haven't investigated why, and I don't care to). Safari, on Windows? HAHAHA. IE8? Double fucking HAHAHA. So no, I haven't decided what I'll switch to.

Considering how appalling the memory leaks were for YEARS while the moz folks insisted there weren't any problems, it will probably take at least as many years before any of us believe anything they say about memory usage.

I, for one, won't believe they have any competence in memory management until I have spent 5 years without having to restart Firefox every other day.

You do largely have a point, but during that time there was a workaround. If you, for whatever reason, didn't want to close the browser regularly, there was a memory-trim-on-minimize fix that would force it to trim memory. I found that to work effectively.

But yes, you are correct: from the 2.0 release to the 3.5 release, where it was fixed, was about 3 years. Although I probably should give them some credit for the time during which they were fixing it.

Simply because Firefox devs are some of the most complacent, or downright willfully arrogant folks out there. It took years, literally years, for them to even admit there were massive memory leaks in Firefox. Anyone who suggested it here was branded a troll by them -- but that was back in the day when people liked, believed, and trusted in Firefox, back in the days when it was on its ascendancy. Those days are well and truly over.

So while they may have fixed most of the memory leaks (it still runs like shit on a Mac), let us not allow them to get complacent again. [snip] By not frequently reminding them about memory leaks, you are opening the door to yet more bloat going forward.

I believe there's a parallel with a common fable:

You saw a wolf. I said I'd shot many, and that the wolf situation should be improved, but you kept seeing wolves. This repeated many times and made you angry.

Now lots of others are saying that there really are fewer wolves these days. Indeed, you have no reason to believe that they're wrong.

But because of the offense done to you some time ago, you're going to continue crying wolf, even given no evidence at all?

I don't understand. I guess this is an attempt to punish us? Is it that you feel like we harmed you with our lies, so you should try to harm us with yours? I'm afraid that crying wolf, and encouraging others to do the same, may just cause us to ignore you all, which is exactly the outcome you don't want.

I'm very sorry you feel like Mozilla deceived and harmed you. But the malicious attitude here and elsewhere in this thread is getting old. Use Chrome, if you like! But don't encourage people to waste developers' time with false claims.

I'm very sorry you feel like Mozilla deceived and harmed you. But the malicious attitude here and elsewhere in this thread is getting old. Use Chrome, if you like! But don't encourage people to waste developers' time with false claims.

I feel the need to chime in here, because my original (humorous) post started this flood of responses. I like Firefox a lot, but still, I experience memory leaks all the time, even with the latest versions. And from what I hear, I'm not alone either. It is not mass delusion.

Now, some people respond to this subject passionately, as if their favorite team were involved, which just boggles my mind.

I really appreciated when the FF team started addressing memory usage not too long ago (it really has improved things

My experience: I can't load Yahoo Mail for any length of time in Firefox without it turning to mush. If I leave Firefox open for a couple of days with mostly static tabs, twitter excepting, and make the mistake of using other applications, it'll turn to mush. And by mush I mean swap hell when you try to scroll a web page, that kind of thing.

[snip]

So... look, if we didn't love Firefox, we wouldn't be upset.

If you love Firefox, please file a bug (bugzilla.mozilla.org) and cc me [:jlebar]. Bugs from the community are how most of these problems get identified and fixed -- we simply don't have the testing resources that Google, Apple, or Microsoft has. We rely on you guys as much as you rely on us.

If you file a bug (or if anyone else reading this thread files a bug), I promise I'll take a serious look and try to understand your problem.

"My experience: I can't load Yahoo Mail for any length of time in Firefox without it turning to mush. If I leave Firefox open for a couple of days with mostly static tabs, twitter excepting, and make the mistake of using other applications, it'll turn to mush.

And by mush I mean swap hell when you try to scroll a web page, that kind of thing.

Sweet jesus, man, he just gave you a description right there. Although I'm not on the FF dev team, that sounds like a damn memory leak to me.

This is a common /. misconception: that "bug reports" like this are actionable.

There are so many things missing from this: the reporter's operating system, which version of Firefox is affected, whether Firefox is actually swapping or the disk is spinning doing something else... We'd want to see if the garbage collector is making Firefox slow. We'd want to see if a newer version of Firefox doesn't have this problem. Like another poster said, we'd want to have a look at about:memory before and after the problem occurs.

Firefox has 400 million users. (That's 1/20th the world's population, for those following along at home.) Any time we make a UI change, some of these 400 million people will love it, and some of those 400 million people won't. I'm sorry you didn't like this change.

For me it only counts to blame the leak on a plug-in if they tell me which plug-in to nuke. If I disable 15 plug-ins, it's not even the same browser by the time I'm done. Why do all these extension leaks persist? Because there's no feasible way to push a complaint into the right bug queue. Who is responsible for this sorry state of affairs? FF-core.

In my opinion, any lost memory not attributed to a specific culprit is a leak in Firefox, the product.

Have you told Firefox not to remember all your downloads indefinitely? It gets a little slow once it has remembered a couple of hundred downloads, and that was the default setting a long time ago. If you've been upgrading and never reinstalled your OS, you've probably still got that default setting.

That said, I use Chrome now. Once they decided to start bumping major versions every month or so, and upgrading broke at least one extension for a week, it was time to move to Chrome. Oddly, when I did, I still preferred the FF UI; of course, now they have changed that to be more like Chrome anyway.

Where do you pull that "3X faster" from? Without proper AdBlock, Chrome seems to be only about as fast as Firefox. With AdBlock installed and properly configured (ie, not just the defaults, including "non" (hah) intrusive ads), Firefox runs circles around Chrome. If you can't bother setting it up, try NoScript.

That's just speed. Now try to factor in privacy, features, or configurability. For example, in its default setting, Chrome crashes to desktop if you close the only tab. Someone on the Firefox team had the "brilliant" idea to ape that, and sadly, Firefox now does it by default as well. But fear not: "Close Last Tab" restores sanity. Also, tabs on top: I find myself using the keyboard to access tabs only ~50% of the time -- the tab bar is accessed at least an order of magnitude more often than the URL bar, thus it should be more accessible. Again, the Firefox team aped Chrome's misfeature, but you can restore sanity easily. Get rid of the useless search bar? Here you go. Get rid of Google's typo-jacking? browser.fixup.alternate.enabled=false and keyword.enabled=false (you can type "google goat porn" if you want to search; rename the default keyword to "g" for convenience). Fix the http:// prefix being hidden for the 1/3 of sites that still didn't go SSL? browser.urlbar.trimURLs=false. And so on, and so on.

If I wanted to close the program, there's that button in the corner of the window, or Alt-F4, or Ctrl-Q. That's different from a request to close the tab (ie, the tab close button, or Ctrl-W). No other MDI program does that. It's a failure to conform to commonly agreed upon standards, for no reason whatsoever. Be that a bug or a misfeature, the end result is the same: the program suddenly quit when it shouldn't have.

If we're talking about JavaScript execution, then yeah, the difference is at least that much. Last year, I tried writing a game in JavaScript, since all the cool kids are dropping Flash nowadays. It worked fine on Chrome, but was completely unusable on FF. Running it in WebKit on Android, on hardware that's puny compared to my desktop, it still ran better than in FF.

Which is a real shame, because overall, FF is my favorite browser. I like the configurability, I like the wide selection of plugins, I like the

Well, Chrome is written in C/C++, while Firefox is done almost entirely in JavaScript. Chrome is a real native application, while the native bits of Firefox have only the minimum required to run a JavaScript interpreter

I am going to assume you are aware of how those functions work (specifically that they cannot interrupt the current thread if it is busy; they are only handled if the thread is idle) and that they themselves are not multi-threaded or have anything to do with multi-threading. It's not 100% clear from your post, though.

Either your browser will display the alerts in the proper order despite the 0ms timeout (because timers are only handled when idle, because they are NOT threaded), or your browser will get angry at the 10,000,000-iteration-long loop.

The same way it's always been implemented. setTimeout is event driven; it adds an event to the event queue to be executed at a later time. Once your code returns, the browser can spin the event loop again. The timer event will come up in due course and the browser will reenter the js engine to call your function.
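The event-loop behaviour described above can be sketched in a few lines: a 0 ms timeout cannot interrupt the currently running code, because its callback is just queued and only runs once the call stack is empty.

```javascript
// setTimeout is event driven: the callback is queued, and even with a
// 0 ms delay it only runs after the current synchronous code finishes.
const order = [];

setTimeout(() => {
  order.push("timeout");          // queued; runs only once the stack is empty
  console.log(order.join(","));   // -> "loop-done,timeout"
}, 0);

// A long synchronous loop: the timer expires (in wall-clock terms)
// while it runs, but the callback still has to wait for it to finish.
let sum = 0;
for (let i = 0; i < 10_000_000; i++) sum += i;
order.push("loop-done");
```

This is why a busy loop produces the "unresponsive script" dialog: nothing else, timers included, can run until the loop returns control to the event loop.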

But seriously, if there's no performance gain from multithreading, it can be a really good idea to move away from the complexity of it. There's a lot of traps people can fall into with concurrent code if they don't know what they're doing.

They mean to replace multiple per-thread copies of data with one single copy of it, accessible by all threads. No doubt part of Mozilla's latest push to reduce memory consumption.

One feature of x86 is that, save for a specialized SSE streaming store instruction, any store made by one core is immediately visible to all the other cores—even when the old value was already in a core's cache.

Maintaining such cache coherency involves a lot of overhead, so to get better scalability multi-threaded apps will sometimes adopt a "share-nothing" model: all threads get their own copy of the data, and no other threads will ever get to touch it. You trade memory and complexity for speed.

It sounds like Mozilla has decided this trade-off is no longer worth it, and so has done away with multi-threading altogether. Perhaps they will use green threads [wikipedia.org] instead of native threads, though that brings along its own bag of complexities.

It's not a cop out. You don't gain an advantage from multi-threading if your threads run on different cores. The main advantage of threads was avoiding a context switch during a blocking call. This way, switching to another thread was cheaper. And since they ran on the same processor, it meant that you HAD TO switch to another thread. But on multiple cores they often run simultaneously. So if you have (for example) 2 threads running and one blocks, you won't get the advantage of avoiding a context switch.

main advantage of threads was avoiding a context switch during a blocking call. This way switching to another thread was cheaper.

But that advantage mostly went away before the core count started climbing. There's still a bunch of context to swap on a thread switch - the main difference on a process switch is swapping the memory mapping context, and processors have been optimizing for that specific action for quite some time, and it's no longer particularly expensive in the scheme of things.

I don't see the need for threads in a browser plug-in. I can't imagine living without them in back-end infrastructure code: no one gets taught a

ah, so the "web worker" implementation from HTML5 does a better job for threading in Javascript. got it. If that's what developers find easier and better to use then it makes sense to simplify the threading in their own runtime. Being a published standard helps too so I get it now.

My initial sense of this, is that they are making a huge mistake here. I'll have to do more research, but my feeling is that they are moving in the wrong direction with this decision.

One of the really cool "baked in" things with functional-style languages is their fundamental support for horizontal scaling across CPUs. My hope has been that JavaScript evolves toward this, so that the generic suite of functional methods becomes massively performant at larger scale with map/reduce/fold/each calls.

Closures present a bottleneck here, but it seems like a reasonable runtime could make some intelligent prediction about whether the isolated function is a closure or not, and ship it off to a different CPU/thread depending on optimization strategies, or even estimated closure size. Even better, this could be done at runtime with some runtime optimization based on execution metrics of an anonymous/declared function in-context.

At the point of calling the map/reduce/fold/each function, the runtime should be able to decide whether to parallelize out the call, or even use some language extensions to let the developer specify the threading.

The point is, now that they're making this decision, all of those options are gone from FF. And at a terrible time too. As we move toward CPU architectures that encourage parallelism, Mozilla is taking js off the table as a first-class language able to easily exploit those new architectures. That strikes me as a huge mistake, and I'm struggling to understand the rationale.

The old JavaScript runtime supported multi-threading in the runtime itself. This resulted in the need for complex threading/locking code to make sure that data was being accessed correctly. This is hard to maintain, easy to get wrong, consumes more memory and slows down things like garbage collection.

The new JavaScript runtime is single-threaded. WebWorkers each have their own instance of a (single-threaded) JavaScript runtime that may be running on different threads.

You can still have your map/reduce/fold/... functions running in parallel; they will be implemented on top of WebWorkers. As the representation of each runtime is simpler, the engine as a whole can optimise the work between threads better and perform better code generation.
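A parallel map on top of WebWorkers could be sketched like this (a hedged illustration: `parallelMap` is a hypothetical helper, and a plain async function stands in for the per-chunk worker so the sketch stays self-contained; in a real engine each chunk would be postMessage'd to its own single-threaded runtime):

```javascript
// Hypothetical sketch: split the input into chunks, farm each chunk
// out to a "worker", and concatenate results in order.
function chunk(arr, n) {
  const out = [];
  for (let i = 0; i < arr.length; i += n) out.push(arr.slice(i, i + n));
  return out;
}

async function parallelMap(arr, fn, workers = 4) {
  const size = Math.ceil(arr.length / workers);
  const pieces = chunk(arr, size);
  // Each piece would be sent to a WebWorker; Promise.all preserves
  // chunk order, so the merged result matches a plain arr.map(fn).
  const mapped = await Promise.all(
    pieces.map(async (piece) => piece.map(fn))
  );
  return mapped.flat();
}

parallelMap([1, 2, 3, 4, 5, 6, 7, 8], (x) => x * x)
  .then((r) => console.log(r.join(","))); // 1,4,9,16,25,36,49,64
```

Because the chunks share nothing, no locking is needed anywhere: coordination reduces to sending inputs out and collecting results back.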

It also means that garbage collection is faster as it (a) does not navigate all memory allocated across all threads and (b) only blocks the thread the garbage collector is currently running on.

As an exercise in insanity, I tried to install a modern Linux distro on an old Pentium II system with only 128M of RAM. Used btrfs for the heck of it. It was all going well until I tried to run Firefox 9.0.1. Thrashed that system mercilessly. Never mind actually showing a web page, just starting up blank was extremely slow. Sometimes I saw the "unresponsive script" popups on Firefox's own Javascript. I hacked out everything that used memory, dumping the LXDE desktop environment for a plain old window manager (jwm), and this helped, but it's not enough. Turned off images and disabled Javascript. It still thrashes swap.

Chrome didn't do any better. Firefox 3.5 worked okay on an even smaller system (96M of RAM). Version 3.6.8 + LXDE works fine on a system with 192M of RAM. Here's hoping their MemShrink effort scores more big wins.

Single threaded is the safest way to program. Creating a multithreaded application without a strong multithreaded architecture is asking for trouble.

The only problem with this is the limited performance, and the fact that modern computers are packing more and more CPUs while individual CPU performance has been stagnating. Sooner or later people will have to work out the tools to create safe multithreaded applications without requiring a special degree in parallelization.

Relatively speaking, core performance growth has slowed drastically. Most integer-based logic and math instructions have a throughput and latency of 1 cycle, and it's hard to get much lower than 1 cycle. The only real advancements in speed are in multi-cycle instructions, and most of those are SIMD now.

I was curious about that myself. Multithreading should be better for performance when one has multiple scripts and multiple tabs.

But, I do see your point, if the tools aren't there to make things work together, I kind of wonder if that isn't part of the problem I've been having lately with scripts not responding and the interface freezing randomly.

Sooner or later people will have to work out the tools to create safe multithreaded applications without requiring a special degree in parallelization.

Maybe as soon as they figure out IPCs. Before answering the question "aren't single-thread programs bad", you have to mention the difference between a thread and a process. Once one realizes that the main advantages of the multi-threaded paradigm were in single-processor space (because it allowed you to avoid the context switch necessary for changing memory space to that of a different process), you won't even have to explain the rest. Most people have already come to expect that multi-core is a Good Thing(TM).

As the internet continues to develop and fewer people start out with dial-up, I'm not sure how long that's going to remain the case. I remember having to use a crappy web app for logging things at a previous job; it would freeze or crash so frequently that standard practice was to log everything on paper first and then copy it into the application.

What the article is saying is that each instance of the JavaScript runtime runs in one thread. If you want multi-threaded JavaScript, you need multiple runtime instances (one per thread). This is how WebWorkers work.

That makes a lot more sense than the title suggested. The Firefox devs have been working for quite a while to split the browser up to better make use of multicore processors and it seemed a bit odd that they would be going backwards and cramming all javascript into one process.

Of course, the article said as much: these containers are on a per-tab basis, and hopefully that will help people who have tons of tabs open at once still be able to browse when an unrelated script freezes on a different tab.

Basically, in a dynamically typed language like JavaScript, every property access, function call, or any other thing that can be changed dynamically could be changed at runtime by another thread. So you need locking for every method call, property access, etc., to make sure it isn't changed by one thread while it's being accessed by another.

There are some generally fast locking algorithms for when locks are used mostly by the same thread... for instance in Java locks can be owned by a thread and that thread never has to lock or unlock at all, but instead it periodically checks if another thread has written a flag saying it wants to become the owner, then there is synchronization to pass off ownership. This works ok for Java, where there are fewer things that can change at runtime and they are explicitly listed out (using 'synchronized'), but in a dynamic language it's usually just too much overhead.

Just for comparison V8 is even more extremely single-threaded, with execution that can only be interrupted at some certain points in the JS code.

Basically in a dynamically typed language like JavaScript every property access, function call, or any other thing that can be changed dynamically could be changed at runtime by another thread.

This has little to do with dynamic typing; anytime you share mutable state between threads, anything mutable in the shared state could be changed at any time by any thread (in particular, statically typed languages which allow arbitrary pointer manipulation -- like C/C++ -- have this problem just as much as dynamically typed ones).

for instance in Java locks can be owned by a thread and that thread never has to lock or unlock at all, but instead it periodically checks if another thread has written a flag saying it wants to become the owner, then there is synchronization to pass off ownership. This works ok for Java, where there are fewer things that can change at runtime and they are explicitly listed out (using 'synchronized'), but in a dynamic language it's usually just too much overhead.

So, I write Java a lot (Java/JOGL, actually) and we do quite a bit with multi-threading. Usually we use reentrant locks where anyone that wants to read or modify needs to lock/unlock, even the lock owner. You can't just assume the owner can check whether nobody else holds it because, if it is currently changing something when another object in another thread requests the lock, the owner won't know; it needs to take the lock as well so other threads know to wait.

Just because you have 64 bits doesn't mean you have to use them for everything. I suppose you insist upon using a 64-bit text editor as well. Considering how much time and energy the developers spend trying to minimize memory use, I'm not really sure why they would go and undo all that just to use 64 bits.

Web Browsers are much more sophisticated than a text editor. If all you used them for was rendering html, then 32-bits would probably be fine. But, 64-bit could potentially be nice for things like 64-bit plugins and extensions - true, most plugins/extensions are just fine as 32-bit apps, but there could be some specialized plugins which might benefit from access to full memory.

32-bit Intel assembly is crap: extremely limited registers, and an instruction set full of legacy baggage from the 8-bit days. AMD's add-ons for 64-bit are modern, with lots of registers and a set of sensible instructions to use them.

So it's a trade-off: 32-bit uses less memory (including for the instructions), and is therefore more likely to get processor cache hits, while 64-bit gives you more registers, which allow significantly faster code when (as is normal in non-scientific programming) everything you care about

Not sure why this was modded down. Windows WAS the system in which the thread paradigm dominated. It was a performance gain on a single-processor machine. Now that we are moving away from multi-threading and towards multi-core, I guess most people forget what one has to do with the other.