Posted
by
Unknown Lamer
on Tuesday May 31, 2011 @02:39PM
from the when-can-i-run-firefox-in-firefox-in-firefox dept.

kripkenstein writes "Ever since id Software released the Doom source code under the GPL, it's been ported to platform after platform. Now, you can play Doom compiled to JavaScript on the web, using standard web technologies like Canvas and without any plugins. If your browser has trouble running it, here's a screencast."
The translation was accomplished using Emscripten, a Javascript backend for LLVM. As per the GPL, full source code is available. Pretty neat.

If you want real graphics performance out of a browser, you should be using WebGL (assuming Flash is not an option). WebGL lets you run OpenGL ES-style code directly on the GPU, which will be a ton faster than Canvas.

You have to remember that Canvas is just an image rendering platform. From my understanding of Canvas in Firefox, it actually renders every frame as a PNG and displays it to the screen. There is no GPU acceleration. That's what WebGL is for.
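Whatever the browser does internally, a port like this ends up hand-feeding pixels to the canvas every frame. A minimal sketch of that blit path, expanding Doom's 8-bit indexed framebuffer into the RGBA bytes an ImageData object expects (all names here are illustrative, not taken from the actual Emscripten port):

```javascript
// Expand an 8-bit indexed framebuffer into RGBA bytes for a canvas.
// `frame` holds one palette index per pixel; `palette` maps each
// index to an [r, g, b] triple.
function indexedToRGBA(frame, palette) {
  const out = new Uint8ClampedArray(frame.length * 4);
  for (let i = 0; i < frame.length; i++) {
    const c = palette[frame[i]];  // [r, g, b] for this colour index
    out[i * 4]     = c[0];
    out[i * 4 + 1] = c[1];
    out[i * 4 + 2] = c[2];
    out[i * 4 + 3] = 255;         // fully opaque
  }
  return out;
}

// In a browser you would then push this to the screen with something like:
//   ctx.putImageData(new ImageData(rgba, 320, 200), 0, 0);
```

So even before any game logic runs, JavaScript is touching four bytes per pixel per frame, entirely on the CPU.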

Video card hardly makes any difference, browser does. 35 fps full-screen on Firefox 7.0a1, 34 on Firefox 4.0, slideshow on Firefox 3.5. This is on a cheap-ass 2.8 GHz Phenom II. It uses no OpenGL; just about any graphics card can handle shoving such bitmaps around. It's single-threaded, too, so whatever you're ripping on your other cores doesn't matter.

However, without such basic controls as strafing, this demo is not playable. No mouse input hurts, but the DOS versions had an unusable mouse anyway, so it's just a throwback to the old times. I estimate I clocked around 4000 hours back in those days, so I'd cope :p Heck, even comma/dot might be acceptable if they don't want to allow redefining keys, although I'd really prefer a sane setup like Z/X = strafe, alt = fire, shift = run (assuming no autorun like in the original).

Javascript is fine. It's the 'running inside a browser' that kills performance by 99%, and makes this an impressive feat. If the Javascript engine were ripped out of Firefox 4 or Chrome, and given a decent graphics & sound API, it would have respectable performance (nowhere near C or C++, of course). When you're writing a graphics-heavy app in a browser, though, it's like playing a song on a dot matrix printer. The fact it exists at all is the amazing thing.

No. My 486SX-25 would run Doom at 20-some FPS (mainly dependent on how many monsters were within visual/acting range), and after an upgrade to a DX2-50 Overdrive it would usually be 30-something -- and the original DOS executable had a hard limit of 35 FPS.

Don't forget this is running as interpreted code in what amounts to a virtual machine.

This demo is rendering to an HTML5 *canvas* as a virtualized framebuffer in an interpreted language with no hardware support whatsoever. Are you *seriously* trying to use this as a complaint against Javascript? I remember back when "demo" just meant "write a cool app to show off"...

Of course it's not the way someone would build a real app in Javascript, but it's an excellent demo to help people understand what the platform can do and how it might be improved. And for those of us who understand all of the tools that went into it, it's just pretty damn cool.

A couple weeks ago a story was posted about a demo that booted a real 2.6 Linux kernel image with ramdisk entirely in Javascript: JSLinux [bellard.org].

That takes a whole 6 seconds to boot to a bash prompt *in Javascript* on my machine. SO SLOOOW, Javascript sucks!!! ;)

While this is impressive, it has been done before (and better): GWT Quake [youtube.com]

I think that Quake demo is awesome! I'd just like to mention though that this Doom demo is very different from a technical standpoint, and I think both are interesting:

The Quake demo compiled Java to JavaScript using GWT, the Doom demo compiles C through LLVM into JavaScript using Emscripten.

The Quake demo uses WebGL to render, the Doom demo translates a 100% software renderer. It's much more challenging to get good performance with a software renderer in JS, especially given that the original renderer was heavily optimized for the CPUs of the day (for example, it uses fixed-point math).
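Doom's renderer really is built on 16.16 fixed-point arithmetic (FRACBITS = 16 in the original m_fixed.c). A rough sketch of the idea in JS, with my own helper names; JS numbers are doubles, so for values in Doom's range the product below stays exact:

```javascript
// 16.16 fixed-point: the high 16 bits are the integer part, the low
// 16 bits the fraction, as in Doom's fixed_t.
const FRACBITS = 16;
const FRACUNIT = 1 << FRACBITS;   // 65536

const toFixed   = (x) => Math.round(x * FRACUNIT);
const fromFixed = (x) => x / FRACUNIT;

// Equivalent of Doom's FixedMul: (a * b) >> FRACBITS. Done with a
// division here because JS bitwise operators truncate to 32 bits
// and the intermediate product would overflow them.
function fixedMul(a, b) {
  return Math.floor((a * b) / FRACUNIT);
}
```

For example, `fixedMul(toFixed(1.5), toFixed(2))` gives the same value as `toFixed(3)`. On a 1993 CPU this beat floating point handily; in a JS engine, which stores every number as a double anyway, the trick buys nothing and just adds work.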

The Quake demo was a major effort, with rewriting and fixing. The Doom demo is a straightforward port, no new code (only a few tiny tweaks), took only a week to do. (Btw, speaking of the timetable, sorry for the sound quality - I just spent a few hours on that part, and I had never used the Audio Data API before.)

The original Doom used software rendering, which makes this a fair comparison actually; things like OpenGL and hardware-accelerated 3D graphics certainly didn't exist (in the consumer realm) in 1993. Doom ran at its full 35 fps on a 66 MHz Pentium. Loosely extrapolating, this means the Javascript version is about 200 to 300 times slower than the original.

Indeed, it is a lot slower than the original. But the reasons for the slowness are quite clear: it's a byte-compiled language that must run in a VM, which already incurs a lot of overhead; it cannot use any assembler tricks to speed the game up, it cannot align things in memory on double-word boundaries, and so on. But then on top of that it's running inside a web browser and renders every single frame as a PNG, which involves first compressing the PNG and then decompressing it, just to display it.

The original, of course, was running on bare metal, having the CPU fully to itself, and VGA was actually very fast as a graphics card back in the day; you'd render an image, change a register or two on the VGA card to flip the buffer and voilà, you got a picture on the screen. No compression or decompression or anything. Not to mention that it was all indexed 256-colour (8-bit) graphics, which means the entire screen weighed in at only 62.5 KB, and you could modify the colour index table for fast special effects: no need to touch individual pixels, as the hardware took care of displaying the appropriate colour!
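The numbers check out, and the palette trick is easy to illustrate. A toy version (my own names, assuming a VGA-style 256-entry palette like Doom's red pain flash used):

```javascript
// Mode 13h: 320x200 at one byte per pixel.
const frameBytes = 320 * 200;        // 64000 bytes
console.log(frameBytes / 1024);      // 62.5 (KB), vs 250 KB as 32-bit RGBA

// A "palette flash" rewrites 256 palette entries instead of 64000
// pixels; the hardware maps indices to colours on scan-out, so every
// pixel on screen changes colour at once.
function tintPaletteRed(palette, amount) {
  return palette.map(([r, g, b]) => [Math.min(255, r + amount), g, b]);
}
```

That asymmetry (256 writes versus 64,000) is exactly why effects like Doom's screen-wide damage tint were essentially free on VGA and are per-pixel work in a canvas.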

Comparing it straight-up to the JavaScript version simply isn't fair to either of them.

It's still slower than the C version, simply because native code doesn't mean fast: if it has to do dynamic dispatch over method names, for example (because the compiler couldn't infer types deeply enough), that's still slower than a direct or virtual call.
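Concretely, this is the difference between the two call shapes below (illustrative only): both compute the same thing, but in the second the engine has to resolve a property by name at run time unless its type inference can pin the object's shape down, whereas C would emit a direct or vtable call.

```javascript
// Direct call: the target is known statically.
function incDirect(x) { return x + 1; }

// Name-based dispatch: the method is looked up on the receiver by
// its string name before it can be called.
const counter = { inc: (x) => x + 1 };

function incDynamic(obj, methodName, x) {
  return obj[methodName](x);   // property lookup, then call
}
```

Modern JITs cache these lookups (inline caches keyed on object shape), but a cache miss still falls back to the slow name-based path.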