igrigorik writes "The generality and simplicity of Google's Map-Reduce is what makes it such a powerful tool. However, what if instead of using proprietary protocols we could crowd-source the CPU power of millions of users online every day? Javascript is the most widely deployed language — every browser can run it — and we could use it to push the job to the client. Then, all we would need is a browser and an HTTP server to power our self-assembling supercomputer (proof of concept + code). Imagine if all it took to join a compute job was to open a URL."

1. He places the map/emit/reduce functions in the page itself. This is unnecessary. Since Javascript can easily be passed around in text form, the packet that initializes the job can pass a map/emit/reduce function to run. e.g.:

var myfunc = eval("(function() {/*do stuff*/ })");

In fact, the entire architecture would work more smoothly using AJAX with either JSON or XML rather than passing the data around as HTML content. As a bonus, new types of jobs could be injected into the compute cluster at any time (a rough sketch of what I mean follows below this list).

2. Both Gears and HTML5 have background threads for this sort of thing. Since abusing the primary thread tends to lock the browser, it's much better to make use of one of these facilities whenever possible, especially since multithreading appears to be well supported by the next batch of browser releases [owensperformance.com].

(As an aside, I realize this is just a proof of concept. I'm merely adding my 2 cents worth on a realistic implementation. ;-))
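
To make both points concrete, here's roughly what I'm picturing: the page pulls a job packet from the server over AJAX, hands it to a background worker, and the worker evals the map function out of the packet and posts the emitted pairs back. Completely untested, and the endpoints and field names (/job/next, /job/emit, mapSrc, data) are just placeholders:

    // In the page: fetch a job packet and hand it to a background worker.
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "/job/next", true);              // made-up endpoint
    xhr.onreadystatechange = function () {
      if (xhr.readyState != 4 || xhr.status != 200) return;
      var worker = new Worker("mapworker.js");       // HTML5 worker; Gears' WorkerPool is the same idea
      worker.onmessage = function (event) {
        var post = new XMLHttpRequest();             // ship the emitted key/value pairs back
        post.open("POST", "/job/emit", true);        // made-up endpoint
        post.setRequestHeader("Content-Type", "application/json");
        post.send(event.data);
      };
      worker.postMessage(xhr.responseText);          // pass the whole packet through as a string
    };
    xhr.send(null);

    // In mapworker.js: the packet carries the map function as source text.
    onmessage = function (event) {
      var job = eval("(" + event.data + ")");        // {id: 42, mapSrc: "function(doc, emit) {...}", data: [...]}
      var map = eval("(" + job.mapSrc + ")");
      var emitted = [];
      function emit(key, value) { emitted.push([key, value]); }
      for (var i = 0; i < job.data.length; i++) map(job.data[i], emit);
      postMessage(JSON.stringify({ id: job.id, emitted: emitted }));  // json2.js where JSON isn't native
    };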

I think this approach to MapReduce is a pretty creative angle to take on it. However, there are a number of distributed-systems problems with doing it this way that would need to be solved to make it realistically possible:

1) The dataset size is currently limited by the web server's disk size.
Possible solution: push the data to S3 or some other large store.

2) There is a single bottleneck/point-of-failure in the web server. In theory 10,000 clients could try to emit their map keys all at once to the web server. IIRC, Google's MapReduce elects nodes in the cluster to act as receivers for map keys during the map/sort phase.
Possible solution: Again, if you were using S3, you could assign them temporary tokens to push their data to S3 -- but that would be a large number of S3 PUT requests (one per key).

3) Fault-tolerance -- what happens when a node in the browser compute cluster fails for any of N reasons? How does the web server re-assign that map task? You'd especially want to ensure that computation finishes on a job in an unknown environment such as 1,000,000 random machines on the internet.
Possible solution: If you haven't heard from a node in N seconds, you could reassign their map task to someone else (a rough sketch of this appears below the list). This is a similar idea to the MapReduce paper's description of sending multiple machines on a single map task, and racing them to the finish.

4) Security -- there is no way to deterministically know whether the data emit()ed from a user's browser session is real or not. How do you trust the output of 1,000,000 users' Javascript browser executions? (I think the answer is: you don't.)
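
To flesh out the timeout idea from (3), the server-side bookkeeping could be as simple as this (plain Javascript pseudologic; every name is invented, and whatever actually serves out tasks would call nextTask/taskCompleted):

    var TIMEOUT_MS = 60 * 1000;                  // the "N seconds" from above
    var pendingTasks = [];                       // tasks never handed out yet
    var assignments = {};                        // taskId -> { task: ..., handedOutAt: ... }

    function nextTask() {
      if (pendingTasks.length > 0) {             // prefer brand-new tasks
        var task = pendingTasks.shift();
        assignments[task.id] = { task: task, handedOutAt: new Date().getTime() };
        return task;
      }
      var now = new Date().getTime();
      for (var id in assignments) {              // otherwise re-issue an overdue one
        if (now - assignments[id].handedOutAt > TIMEOUT_MS) {
          assignments[id].handedOutAt = now;     // race a second client against the slow one
          return assignments[id].task;
        }
      }
      return null;                               // nothing to hand out right now
    }

    function taskCompleted(taskId, result) {
      if (!assignments[taskId]) return;          // a slower duplicate already finished it
      delete assignments[taskId];                // first result back wins
      // ...store `result` for the reduce phase...
    }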

Further down in the Slashdot comments, a poster also pointed out that Javascript is a poor platform for computationally intensive work, which I agree with on a general level. The Javascript number system is designed for genericity, not performance.

In the end this is just a cute idea that has any number of practical problems. Many of them reflect the fact that distributed computing is hard, but many of them also reflect the fact that the suggested platform is less than ideal for the job, especially if you're going to be pushing workloads that take more time and resources to transmit back and forth than to simply compute them.

Doesn't stop me from humoring him, though. We all have to dream. ;-)

And besides, this may just inspire the next fellow down the line to use the technology for a more practical purpose.

Unity [unity3d.com] (a game development platform) translates JavaScript (and Python) into .NET CLR opcodes and then runs them via Mono, which ends up being quite a bit faster than just running the JavaScript in a traditional interpreter.

SquirrelFish (a bytecode interpreter in WebKit) and V8 (a native JIT compiler in Chrome) are also available to speed things up.

Looking at it from the .NET CLR bytecode perspective, Silverlight 2.0 is available on Windows platforms and OS X (Intel); once Moonlight hits 2.0, that'll make a

Problems 1 and 2 are based on the faulty assumption that "a web site" is a single piece of hardware, whereas for a system like this it would obviously be a server farm connected to the intarwebz by several high-volume dedicated gateways.

Problem 3 has to be solved in any real-world implementation, so is pretty obvious (and tractable with timeouts etc) imo.

Problem 4 is the tough one - you can either slug your performance hard by running calculations multiple times, or you figure out some way to authentica

Local Storage APIs would probably work better. The entire data set could even be dumped to local storage to allow recovery from browser failures. In addition, using the SQL engine of the Local Storage database can speed up certain sorting and aggregation tasks, thus (potentially) allowing for a faster response than making Javascript do all the heavy lifting.
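
Roughly what I mean, assuming the HTML5 localStorage API (Gears' database would do the same job); the key prefix and helper names are invented, and JSON.stringify needs json2.js where it isn't native:

    // Checkpoint after every chunk of work so a closed or crashed browser can
    // resume where it left off instead of redoing the whole map task.
    function saveCheckpoint(jobId, nextIndex, partialResults) {
      localStorage.setItem("mr-" + jobId, JSON.stringify({
        nextIndex: nextIndex,
        partial: partialResults
      }));
    }

    function loadCheckpoint(jobId) {
      var saved = localStorage.getItem("mr-" + jobId);
      return saved ? JSON.parse(saved) : { nextIndex: 0, partial: [] };
    }

    function clearCheckpoint(jobId) {
      localStorage.removeItem("mr-" + jobId);    // job finished and results shipped
    }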

I think I would be willing to donate some of my cycles, but Javascript has a bad habit of locking up the browser till the process is completed.

And then Firefox (or Epiphany) pops up a dialog after JS maxes out the CPU for 30 seconds asking if you want to permit the execution to continue.

So you have to limit yourself to 20 seconds per page load, and have the overhead of using Javascript as opposed to a better language like Java. Actually, a Java applet seems like what you would really want to do if you were re

This could be a possible way to generate revenue from popular websites... instead of selling something of such dubious quality as "advertising impressions", high-volume sites such as /. could support themselves by taxing, say, 10% of a viewer's CPU with an unobtrusive background thread, and selling the aggregated processing power to customers. I'd certainly be happier donating a percentage of my otherwise totally wasted CPU time to a site than having to read crappy ads for products I don't want.

The same thing that happens when you're standing on a train that's moving at the speed of light and you throw a ball forwards.

*ahem* I mean, the script could set a "last batch accepted" timestamp in a cookie or somesuch when it starts, and delete the cookie when it's done, and only run a processing batch when either it can't find a cookie or it's been 10min since the last task.
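
In code, the cookie dance amounts to about this much (cookie name invented, 10 minutes as suggested above):

    var BATCH_COOKIE = "lastBatchAccepted";
    var TEN_MINUTES = 10 * 60 * 1000;

    function readCookie(name) {
      var match = document.cookie.match(new RegExp("(?:^|; )" + name + "=([^;]*)"));
      return match ? match[1] : null;
    }

    function okToRunBatch() {
      var last = readCookie(BATCH_COOKIE);
      return !last || (new Date().getTime() - parseInt(last, 10) > TEN_MINUTES);
    }

    function startBatch() {
      document.cookie = BATCH_COOKIE + "=" + new Date().getTime() + "; path=/";
    }

    function finishBatch() {
      // expire the cookie so the next page load isn't throttled
      document.cookie = BATCH_COOKIE + "=; expires=Thu, 01 Jan 1970 00:00:00 GMT; path=/";
    }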

Who wants to open up a page that suddenly uses all of their cycles and makes their computer useless for anything else while this is running?

Can you actually do that on a modern OS and a modern CPU? I regularly have my CPUs on 101% load and it stays snappy as ever. Only heavy I/O on the system drive makes it unresponsive. This applies for both Gentoo and Vista. (The default install of Linux Mint, however, sucks horribly.)

Actually, once upon a time, there was a distributed Java applet [archive.org], a lot like BOINC but in a browser. This particular project was about calculating the emission of gamma rays from nuclear waste.

It didn't last long, probably about a year or two, but it did get quite a few results.

Firefox 3 on Vista on my work desktop loads /. pages flawlessly, whereas my very similarly spec'd home computer running Firefox 3 on xubuntu 8.10 freezes up for a couple of seconds when it loads a long comments page. Not long enough to be a pain, but it's definitely noticeable. My eee 900, however, will lock up for 10+ seconds loading a hot /. topic. >.<

With ever-increasing JavaScript performance, there's a lot of CPU power available for cracking passwords and captchas... Just include the code in an ad and you're done. No tricky installs needed, just the idle time of the user's web browser.

I really don't think laptops were designed to run at 100% all the time anyway, so yeah, I'd avoid any distributed computing projects on your computer.

I run it on my two desktops at home though, and there's barely any difference in my electric bill. Idle vs load for me is about 40W difference -- I could save more by turning off a fairly dim bedside lamp.

I configured my desktop machine at home to suspend when I hit the power button. I only use it for games, so it's never fully powered on throughout the day. My electric usage would definitely go up a bit if it was always powered on running compute-intensive software.

With ever-increasing JavaScript performance, there's a lot of CPU power available for cracking passwords and captchas... Just include the code in an ad and you're done. No tricky installs needed, just the idle time of the user's web browser.

This is eerily plausible, but I think there's one thing keeping this from becoming a massive problem:

Anyone running a legitimate site will kick their advertiser to the curb if their ads start sucking down lots of CPU. The only people who'd allow this sort of advertising

I don't get what the big problem is that people have with "[shudder]JavaScript[/shudder]".

It's a Turing-complete language, which means it can be used to do anything from simple form validation to ray tracing and neural net simulations. With AJAX to handle file interactions, I don't understand the problem that people have with it. What is it that you think JavaScript can't do that 'x' language can?

I wish people would get over this childish bias and accept that JavaScript is a/real/ language, and not

A big part of it is the same thing people have against VB: there may not be anything technically wrong with it, but bad programmers are drawn to it because it's easy, so you hardly ever see a good VB program. There's especially nothing wrong with VB now, when writing a program in VB.NET gets you the same result as if you'd written it in C#: you still get CIL code when it's compiled.

However, Javascript gets used for way too much, and historically it's been a huge browser security issue. Even if you use it respons

I'd like to see Javascript elsewhere. In the browser, it's limited by that turd known as DOM... imagine what Javascript could do if it had libraries that weren't utter shite. It could easily take over all the tasks done by Lua now, and possibly most of Python and Ruby as well.

The problem is people get into web development, find out that DOM is crap, then they assume the problem is Javascript and not DOM. JS is fine; DOM would be just as crap if you were working with it in Python.

imagine what Javascript could do if it had libraries that weren't utter shite

It's certainly not the exact opposite of "utter shite", but JavaScript on Windows via Windows Script Host has lots of libraries immediately available, which makes a lot of tasks on Windows (including administration) much easier via the FileSystemObject, WMI, etc.
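
For instance, a throwaway admin-ish script run with cscript.exe (the folder path and size threshold are just examples):

    // listbig.js -- run as: cscript //nologo listbig.js
    // Lists files over 100 MB in a folder via the FileSystemObject.
    var fso = new ActiveXObject("Scripting.FileSystemObject");
    var folder = fso.GetFolder("C:\\Temp");
    for (var e = new Enumerator(folder.Files); !e.atEnd(); e.moveNext()) {
      var file = e.item();
      if (file.Size > 100 * 1024 * 1024)
        WScript.Echo(file.Name + "\t" + file.Size + " bytes");
    }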

Actually it was the '90s, but whatever. The thing is, non-DHTML web pages are actually pretty good for most things... what made those early '90s web pages so awful was no CSS, slow connections, and the fact that people really didn't know how to design for this new medium.

Probably 99% of the web still shouldn't need Javascript or flash, though pages usually do need to be dynamic on the server side.

I know it was the '90s; I was just trying for the double joke of it being an advanced concept/vapourware for the '80s. Won't try that one again.

But the argument against javascript is one that is countered by your own comment: "the fact that people really didn't know how to design for this new medium".

Javascript is a tool just like another on the Internet. It can be used for good or evil depending on who writes the program. And as you mentioned, retreating from javascript means going back to a purely

Actually it was the '90s, but whatever. The thing is, non-DHTML web pages are actually pretty good for most things... what made those early '90s web pages so awful was no CSS, slow connections, and the fact that people really didn't know how to design for this new medium.

Sure, it's fine when you've got a 2GHz processor and a smack of RAM to compile and run an interpreted language -- with the sole purpose of relatively simple data manipulation, validation, and perhaps some light processing to kick a chunk of data back. But when you are talking about serious data crunching, you want code running natively and optimized for that architecture and platform, like SETI@Home, not in a locked-down little box.

People think because you can put it on the web, you should. That is, at bes

I think that opinion, although quite frequently espoused on slashdot, suffers from a problem of framing current technology around past application models. Technology for technology's sake, such as Web 2.0 using AJAX/Flash, is not a wasteful exercise. Technology doesn't only stem from innovation; a good chunk of innovation stems from technology. The efforts with Web2.0 are leading to furthering the refinement of cloud computing and distributed,

Did you read the linked page? It explains how that CSS menu degrades gracefully [wikipedia.org] to be perfectly useful -- just not as flashy -- in browsers without the proper CSS support. Graceful degradation is a key design principle of HTML which many sites choose to ignore (ex. by requiring Flash or JavaScript). A good example is the commenting on Slashdot with the new discussions system: if you do not have JavaScript enabled, the reply button is just a normal link, allowing, for example, opening a reply box in a new ta

Yeah, yeah, and Usenet was the ultimate discussion group and everything's been downhill from there, right? And 25x80 column monitors were plenty (who needs proportional fonts?) and color is way overrated, and...

Why is it that we always need the previous generation who remembers "what it was like before all this newfangled nonsense" to die off before we can make progress?

Just because you're looking for the web to look like a static newspaper doesn't mean the rest of the world wants the same thing.

Just because you're looking for the web to look like a static newspaper doesn't mean the rest of the world wants the same thing.

There are situations where JavaScript is good, but it simply breaks things like the ability to bookmark your page and then restore it as it was from the bookmark. Then you have the sites which really abuse it: for example, you can't book a flight with Ryanair if you have JS disabled (or a browser which doesn't support it: they don't seem to have come across the concept of degrading gracefully).
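
For what it's worth, degrading gracefully doesn't take much. A minimal sketch of the pattern (the markup, class name and inline-form helper are invented, not any real site's code): the reply link is a plain link that works as a normal page load, and the script, if it runs at all, hijacks the click.

    // Markup (works fine with Javascript turned off):
    //   <a href="reply.pl?sid=123" class="reply-link">Reply</a>
    window.onload = function () {
      var links = document.getElementsByTagName("a");
      for (var i = 0; i < links.length; i++) {
        if (links[i].className != "reply-link") continue;
        links[i].onclick = (function (href) {
          return function () {
            openInlineReplyBox(href);   // invented helper: builds the reply form in-page
            return false;               // cancel the normal navigation
          };
        })(links[i].href);
      }
    };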

How come the new kids who come in can't tell the difference between progress and two-steps-forward-two-steps-back? You make a valid point that some people resist change for poor reasons, but I would say an equal or greater problem is people embracing change for poor reasons.

DHTML is fine when it works, and it's just starting to get there. But I'd say that web usability was at an all-time low between 2000 and 2006 when the new kids thought everything should be dynamic without the slightest understanding of

Haha, his complaint that the 1st column's background colour won't stretch to the height of the 2nd column has been solved for quite some time.

The main problem associated with pure table layouts came from good ol' Netscape 4's inability to render very complex tables quickly; thankfully, those days are long gone. These days it's about usability and searchability, which complex table layouts kill dead.

No one is stopping you from doing pure tables, but the solutions are there in CSS that make things so much bet

Javascript really isn't suited for this kind of thing, even with worker threads, for two reasons I can think of. First, web clients are transient... they'd have to report back often in case the user clicks away.

But more importantly, Javascript just isn't a good language for massive computation. It only supports one kind of number (double), has no vectorization or multicore capabilities, has no unboxed arrays, and even for basically scalar code is some 40x slower than C, let alone optimized ASM compute kernels. (This is for crypto on Google Chrome. Other browsers are considerably slower on this benchmark. YMMV.)

Or they could use an Applet or JWS (Java Web Start) and get several times the performance for only a mild reduction in install base. JWS would even be able to run offline or when the browser window's closed and cache some output to a JVM-managed scratchpad file on disk.

If a user downloads, say, folding@home, it's running all day, every day, on all cores of the machine, whenever the computer is on and idle, which is most of the time. The user doesn't have to remember to run it, doesn't have to devote screen real estate, attention and so on, and the program is less annoying because of its low priority and relatively low memory footprint (less boxing).

Additionally, the 40x I cited was in the fastest available browser (Chrome),

Why have you assumed the javascript user ran the site for 5 hours a day for a week, but that the installed .exe user ran it for a year? Even if one accepts your estimate of an installed .exe as being 400x faster than a javascript app, you should at least allow equal running time. And are you sure that modern browsers on multicore machines don't let multiple JS threads run on different cores?

In which case, I would find it easy to believe that for every one slashdotter who would install a distributed comput

I think you are vastly underestimating the JIT engines of Chrome and FF. While these JIT engines still have a way to go, I would expect the execution speed of Javascript to approach the performance of other modern virtual machines like the JVM.

Javascript really isn't suited for this kind of thing, even with worker threads, for two reasons I can think of. First, web clients are transient... they'd have to report back often in case the user clicks away.

I don't see why web clients being transient is a problem. The whole point of the MapReduce algorithm is that each worker (the web clients in this case) doesn't need to know anything about what the other workers are doing, what the system as a whole is doing, or what it has done with any past job.

I don't see why web clients being transient is a problem. The whole point of the MapReduce algorithm is that each worker (the web clients in this case) doesn't need to know anything about what the other workers are doing, what the system as a whole is doing, or what it has done with any past job.

Which is why Map-Reduce is only suitable for "easily" distributed problems. Lucky for Google that almost all their computational problems fit into this mold, but in the rest of the world this just isn't the case. Which is why Map-Reduce is more of an interesting trend than a solid change in how distributed systems are designed.

If the user leaves before a task completes, you don't have anything to reduce.

Google's implementation of MapReduce already takes this into account. Haven't you heard of how they just have a bunch of vanilla x86 boxes networked together, and when one of them fails, they just throw it away and plug in a new one?

Javascript just isn't a good language for massive computation. It only supports one kind of number (double), has no vectorization or multicore capabilities, has no unboxed arrays, and even for basically scalar code is some 40x slower than C, let alone optimized ASM compute kernels. (This is for crypto on Google Chrome. Other browsers are considerably slower on this benchmark. YMMV.)

YMMV is true. I see speed differences of x5-x10 between -O3 C code and V8 - significant, but far from x40.

As for having only doubles, that is true for the language, but not for engines, which can implement an integer type as well. This is a little tricky to do, but certainly possible: Anything that starts out as an integer will remain one over addition, subtraction and multiplication; you need to add checks for overflows and to handle division. In other words, developers have the convenience of only work

Yes, it's a neat technique. It's also very old and obvious. Google's implementation is also good, but this stuff is just not rocket surgery. It's just a simple pattern of how to massively parallelize some types of computational tasks.

But somehow, just because some dudes at Google wrote a paper about it, it's become the second coming of Alan Turing or something among some silly folks. Hell, a couple of weeks ago somebody was saying on the comments here that MapReduce was a good alternative to relational databases. Now that is silly.

There is a huge difference between being able to run javascript apps and running javascript apps well. Not to forget that a lot of the javascript I see out there really only works on PCs with IE or Firefox; Opera and Safari, especially on OS X, seem to have trouble with some sites that aren't coded for compatibility but instead pushed out quickly with little regard for anything other than IE on Windows.

If there were a couple-few or more orgs competing to use my extra cycles, outbidding each other to put money in my account for those cycles, I might trust them to control them. If they sold time on their distributed supercomputer, they'd have money to pay me.

As a variation, I wouldn't be surprised to see Google distribute its own computing load onto the browsers creating that load.

Though both models raise the question of how to protect that distributed computing from being attacked by someone hostile, poisoning the results to damage the central controller's value from it (or its core business).

Just change your prefs: under Index/General, uncheck "Beta Index" and check "simple design" and "low bandwidth." With those prefs Slashdot loads almost instantly on my somewhat aged machine (P4 2.4) and is still usable on a 700MHz P3.

I was getting the same thing with my 2.8GHz P4 under Mozilla, but it seems to have gone away since I recently (week ago) switched to SeaMonkey, so it may have more to do with slow JavaScript implementations than clock speed (although obviously a combination of the two).

This seems to me a self-defeating idea. The obvious goal is to get more processing power. Yet using a scripted language is inefficient, and a waste of processing power.
If you want more processing power, you need to group computers of the same general instruction set, and which can run compiled (or, dare I say it?) assembled machine code.

MapReduce is interesting because at Google (and in Hadoop) it has a distributed filesystem underneath it too. The clever part is how the data is distributed, and how processing is moved to the data rather than data to the processing. I don't really see how this helps matters unless you are going to have data involved too -- which brings in the privacy concerns, yadda yadda yadda.

Sure, some things would work that require huge amounts of processing power on limited data, but why would you use map-reduce for t

MD5CRK used a Java applet that used this Chinese Lottery concept. The applet performed 95% as fast as a pure C implementation of MD5. JavaScript is another matter, however. And assembly code that interleaved MMX/SSE with ALU instructions was much faster.

But, you can just close the browser or type "killall firefox" and the program dies and you have to go to the URL again to get to it. So, though this is bad for other reasons, yours just isn't one of them.

You know what really annoys me? Seeing people (and I'm guilty of this too) mixing hot and cold fluids together. Cold milk out of the fridge into hot coffee, using a hot gas flame to warm up food, that sort of thing. I should imagine that in millions of years' time, when the heat death of the universe is well underway, needless and wasteful increase in entropy will be punishable by permanent deletion.