Posted
by
timothy
on Tuesday December 09, 2008 @10:33AM
from the which-can-then-be-virtualized-ad-infinitum dept.

t3rmin4t0r writes "Google has announced its Google Native Client, which enables x86 native code to be run securely inside a browser. With Java applets already dead and buried, this could mean the end of the new war between browsers and the various JavaScript engines (V8, SquirrelFish, TraceMonkey). The only questions remaining are whether it can be secured (a la ActiveX) and whether the advantages carry over onto non-x86 platforms. The package is available for download from its Google Code site. Hopefully, I can finally write my web apps in asm." Note: the Google Code page description points out that this is not ready for production use: "We've released this project at an early, research stage to get feedback from the security and broader open-source communities." Reader eldavojohn links to a technical paper linked from that Google Code page [PDF] titled "Native Client: A Sandbox for Portable, Untrusted x86 Native Code," and suggests this in-browser Quake demo, which requires the Native Client plug-in.

This is not a good thing: by definition x86 code is not portable across platforms.

Secure or not, it goes against the main founding principle of the web, which is portability. There are other ways to solve the performance issue, I thought just-in-time compilers were getting pretty close anyway (50% according to http://www.mobydisk.com/softdev/techinfo/speedtest/index.html [mobydisk.com]).

On the security side, I'll just quote Google's description: "modules may not contain certain instruction sequences". That doesn't sound like a robust way to detect malicious code.

You could work around that compatibility issue easily, just set up the browser so it runs inside a preset virtual machine or emulator on the host, so that you can just write x86 code for that virtual machine/emulator rather than executing it directly.* (I heard you like programs, so I put a machine in your machine so you can execute while you execute.)

Yeah, but you'd still have the issue where your subnotebook/phone/PS3/Mac/etc runs slower than your desktop because it has to emulate an x86. This on top of the fact that it may be running a much slower chip...

Isn't Java run in an emulated fashion on all platforms? Isn't that part of the 'slow' image that it cultivated in its early years, that it was too slow due to the emulation of the java 'virtual machine'?

Is the problem here that this could mean some machines won't be as slow as others, or just that it's x86?

What exactly is the difference, outside of one having a much larger code base to 'exploit' and the potential for a huge speedup on machines that can natively handle x86 code?

Pretty much all of Sun's offerings have HotSpot built in, which provides JIT compilation for the JVM. IBM's JVM, BEA, etc. all have JIT features. Google's Android has Java-like Dalvik, which is slow as balls and doesn't have JIT functionality.

Some ARM processors are capable of executing Java bytecode natively. The device developers have to pay for that feature though.

Really, it sounds like Google is poorly trying to reinvent Java. They've tried this with Android already and it doesn't work so hot from a performance standpoint.

I also noticed that Google is very aggressively trying *not* to use the Java JRE anywhere, as the Dalvik VM and this x86 nonsense demonstrate. I have no idea why that is, given Java and the JRE are FOSS now. One thing is for sure: their Google Docs and other office-like applications would only start to make much more sense if they used new JavaFX clients (that can be dragged from browser to desktop, becoming standalone apps) alongside improved JRE support in Chrome and Firefox.

2) Sun has slightly different licenses for desktop and mobile use. The desktop license is GPL with a classpath exception (letting you write non-GPL java apps to run on the virtual machine), the mobile license is straight GPL. Google didn't want to force developers to only produce GPL apps for Android, so they could not use this.

First, today there isn't a "first class" platform for Java. It's JIT everywhere (excepting a few devices with chip-based Java execution). So a web developer has to consider performance when using Java or Flash. In contrast, native x86 execution obviously favors x86 machines and creates a web ghetto - similar to ActiveX, though not quite as bad.

The second problem is that while Java, and now even Javascript, have been pretty well optimized on non-x86 hardware, I've yet to see an x86

Seeing how JIT-compilation means that the code is compiled to native code just before being executed, this is clearly rubbish. JIT-compilation results in native code, just like any other compilation; the program is simply stored in some intermediate form (bytecode) rather than in the final compiled form.

...

Yes, this is true. Emulation is horrible for performance. This whole idea is ludicrous.

You contradict yourself here. JIT is one way to perform emulation. Either JIT isn't as fast as native code or emulation isn't horrible for performance, you can't have it both ways!

The problem with Virtual PC on the G5 is that it needs to emulate the X86 application code plus the Windows OS.

Since most apps strike a pretty reasonable balance between application logic and library calls, most emulators only need to emulate a relatively small portion of the code. They can drop down into native implementations as soon as the app calls library code. That's why script languages are viable at all.
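The "drop down into native implementations" idea the parent describes is easy to sketch. Here's a toy interpreter in JavaScript (the opcode names and the `nativeLibs`/`run` structure are made up for illustration, not from any real emulator): guest instructions are interpreted one at a time, but a library call escapes straight into a host function that runs at full native speed.

```javascript
// Minimal interpreter sketch: "guest" opcodes are stepped through one by
// one, but a library call dispatches directly to a native (host) function.
var nativeLibs = {
  strlen: function (s) { return s.length; }   // host-side implementation
};

function run(program) {
  var acc = 0;                                // single accumulator "register"
  for (var i = 0; i < program.length; i++) {
    var ins = program[i];
    switch (ins.op) {
      case "load": acc = ins.value; break;               // emulated instruction
      case "add":  acc += ins.value; break;              // emulated instruction
      case "call": acc = nativeLibs[ins.lib](acc); break; // native escape hatch
    }
  }
  return acc;
}

// Only two "instructions" are emulated; the heavy lifting happens natively.
run([{op: "load", value: "hello"}, {op: "call", lib: "strlen"}]); // 5
```

The more time an app spends inside library code, the less the interpretation overhead matters, which is the parent's point about why scripting languages are viable at all.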

Where I work, we had a large set of applications written in assembler (don't ask) for an '80s v

Today, you could provide this feature using a combination of JavaScript and server side processing. This approach, however, would cause huge amounts of image data to be transferred between browser and the server, leading to an experience that would probably be painfully slow for users who just want to make a few simple changes. With the ability to seamlessly run native code on the user's machine, you

On the security side, I'll just quote Google's description: "modules may not contain certain instruction sequences". That doesn't sound like a robust way to detect malicious code.

Why not? It just means that the permissible instruction sequences are limited to a subset that can be statically analyzed and verified to be safe. The Java VM has similar verification algorithms that are run whenever untrusted code is first loaded.

It's true that this does not allow all x86 code to run; it's at least practically (an

Why not? It just means that the permissible instruction sequences are limited to a subset that can be statically analyzed and verified to be safe. The Java VM has similar verification algorithms that are run whenever untrusted code is first loaded.

One of the key differences is that Java code and data are separated to the point of paranoia. I cannot load a classfile as data and pass through execution to the native system. With the x86 instruction set, I can load a data file and execute a jump to a data segment without the code having passed through any sort of system loader. A VM would have to take this into account. Not to mention common issues of stack smashing, heap overflows, and other common memory tricks to execute unwanted code.

When you're managing native code, it only takes one slip-up to hand over the keys to the kingdom. That slip-up may be as simple as a two byte exploit, but it's a slip-up none the less. One must be VERY careful with native code because there is no way to prove that it is safe to execute natively.

Hypervisor features in modern processors simplify the issue somewhat, but it is still not proven that hypervisors are without exploits. Not to mention the overhead of running dozens of simultaneous hypervisor environments.

Java and Javascript have it right. Java bytecode is provably correct because it targets an ideal machine. Thus the code can be translated into well-behaved native code with the linkage between data and code managed during or after translation. Javascript is just as good because it provides an abstract execution environment that must rely on exposed APIs to accomplish any interaction with the system. It is provably not possible (shy of an underlying flaw in the browser) for Javascript to break through its execution engine into a native runtime.

The two platforms may be paranoid, but when you're dealing with security on the scale of the World Wide Web, "better safe than sorry" is a good motto.

Don't modern processors, Pentium 4 and above, feature protection separating data from application code? Windows XP Service Pack 2 supports it, and I am aware that programs have to be recompiled to take advantage of it.

In a few years it will make sense just to compile the code for the Pentium 4 and above, and you can have the extra protection in your programs like RISC processors have.

When you're managing native code, it only takes one slip-up to hand over the keys to the kingdom. That slip-up may be as simple as a two byte exploit, but it's a slip-up none the less. One must be VERY careful with native code because there is no way to prove that it is safe to execute natively.

Hypervisor features in modern processors simplify the issue somewhat, but it is still not proven that hypervisors are without exploits.

That's not at all true, though, and you certainly don't need any supervisor CPU features. It is quite easy to run native code completely securely -- all you need to do is set up a private virtual memory space for the managed code, and only provide it with a call gate to your own program code through which it can do controlled requests. It's done at large scale -- it's called a "process" in normal OS parlance. You may have heard of the term

Holy crap. AKAImBatman I usually enjoy your posts, but it's painfully clear nobody on this thread - including you - has actually read the paper.

If you had, you'd see that this system is secure. It's simple yet clever at the same time. By using a combination of x86 segmentation (which ironically you say is never used anymore!), alignment rules, static analysis and - crucially - masked jumps, it's possible to ensure that native code cannot synthesize unverified code in memory and then jump into it. If you can prevent arbitrary code synthesis, you can control what the program does. It's as simple as that.

Even though the verifier for this system is microscopic (compared to, say, a JVM), and so much more likely to be correct, NativeClient also includes a ptrace sandbox to provide an additional redundant level of protection.

One must be VERY careful with native code because there is no way to prove that it is safe to execute natively.

I don't blame you, because until I read the paper I also believed this. Once you read it you'll slap your forehead and say, my god, it's so simple. Why didn't I think of that?

that and applets are slow as shit. The National Weather Service still uses a java applet for their single-radar radar loops (uses animated gifs for larger views).. takes forever to load because the JVM is initializing, whereas the animated gif takes exactly as long as it takes to download.

my "platform" would be every Linux and Windows machine I've ever run - all on x86

try this for a "Slow platform"

Athlon 64 x2 6000+, 4GB DDR2-800, 250GB SATA-II drive

JVM initialization is slow because the JVM weighs 9 million metric tons.

and initialization of the VM of any language is an important factor in its effective performance - even if its per-instruction performance once the VM is started is almost as good as native code, it will take a long time for that to outweigh the initial startup time.

On the security side, I'll just quote Google's description: "modules may not contain certain instruction sequences". That doesn't sound like a robust way to detect malicious code.

Why not? It just means that the permissible instruction sequences are limited to a subset that can be statically analyzed and verified to be safe. The Java VM has similar verification algorithms that are run whenever untrusted code is first loaded.

It's true that this does not allow all x86 code to run; it's at least practically (and probably theoretically) impossible to correctly determine whether or not a piece of code is safe, but as long as the VM errs on the side of caution, there shouldn't be any problems with this approach.

I will grant that this makes it unclear what the advantage is over (say) Java applets. What can this technology do that the Java VM couldn't? As far as I'm concerned, the failure of Java in the browser has more to do with the lack of a standard library for high-performance multimedia applications (think: Flash) than with shortcomings in the bytecode language.

All this means is that google have created a VM in which the "bytecodes" happen to be executable on real hardware, but some of these "bytecodes" have to be intercepted and replaced at runtime with substitute code... this ought to sound familiar; this is what a software hypervisor does (e.g. VMware).

In other words every man and his dog has jumped aboard the "I can write an x86-hypervisor" bandwagon, the difference being that google have decided to take theirs and embed it into the browser rather than run as a standalone app.

Interestingly enough, it took the momentum that VMware created to get Intel to correct some of the issues with its ISA to make it much easier to virtualise [wikipedia.org]. Perhaps someone the size of Google can prod Intel into adding a third wave of virtualisation acceleration extensions to their ISA so as to make this idea safer* with low overhead

Thanks for your comment and the links. Every time I run across an article like this and sigh, wishing I had the technical cojones to explain why it is that we were doing things like this on mainframes in the 80s with complete safety... and continuing to wonder why Intel couldn't just COPY the damned concepts if they can't figure out how to implement them from scratch.

Our world continues to be saddled with a half assed operating system running on a third rate architecture and for no other reason that techn

Yes, all new technology is bad, even R&D concepts. Dag Nabbit, I want my ASCII (no freaking colors) 300BPS BBS back, you know, the ones where you need to put your phone headset into the modem. Back then everything was secure. The password was the telephone number that you dialed. Brute force attacks were expensive. And if the BBS had password protection you were secured to no end, and no one you didn't want could get in.

Um, the way things work with software is the program sends opcodes to the CPU whi

x86 code runs natively on 90% of the processors out there. Java or .NET bytecode runs natively on about 0% of them (Sun did have a Java chip once but it is long dead). So it is hardly any worse than the alternatives. There are many x86 emulators and some of them have reasonable performance.

If we were starting from scratch now, nobody would choose the barnacle-encrusted i386 instruction set as a way to distribute programs. But given the hardware and software that exists, it's not such a bad choice.

On the security side, I'll just quote Google's description: "modules may not contain certain instruction sequences". That doesn't sound like a robust way to detect malicious code.

Of course, the way to do it is to define what instruction sequences are safe and allow only those. I assume that's what they are doing and 'modules may not contain certain instruction sequences' is just the one-line summary.

That said, you can make any instruction sequence you like using the assembler and run it on your Linux system, and it cannot break out of the process virtual machine to access hardware or memory belonging to other processes or the kernel. If it can, this would be a bug in Linux. So there is no reason why arbitrary instruction sequences couldn't be allowed in principle, if you let the operating system do the work of sandboxing the process. After all why reinvent the wheel?

x86 code runs natively on 90% of the processors out there. Java or .NET bytecode runs natively on about 0% of them (Sun did have a Java chip once but it is long dead). So it is hardly any worse than the alternatives. There are many x86 emulators and some of them have reasonable performance.

ARM Jazelle (in quite a number of the ARM revisions deployed all over the place) includes DBX for direct bytecode execution of Java. That includes the iphone and loads of other stuff.

I used to think Jazelle meant actual Java bytecode execution on the chip, until I talked to an Android developer about it. It turns out that Jazelle is quite incomplete and traps out to native code quite frequently... for instance, to call a method.

That said, so what? You already have to write special web pages for mobile devices if you want a truly great user experience. The performance/power profiles of mobile devices are so radically different from desktops that being unable to run x86 code natively isn't

Among desktop PCs, maybe. But have you heard that they've started putting the web on devices such as cellular phones and set-top boxes? You're not going to find a lot of x86 CPUs in those.

Of course, the way to do it is to define what instruction sequences are safe and allow only those. I assume that's what they are doing and 'modules may not contain certain instruction sequences' is just the one-line summary.

Native Client is an open-source research technology for running x86 native code in web applications, with the goal of maintaining the browser neutrality, OS portability, and safety that people expect from web apps.

This sounds to me like the Native Client is a virtual machine that will execute x86 code inside a browser, regardless of the underlying OS. It doesn't specifically mention hardwar

The release contains the experimental compilation tools and runtime so that you can write and run portable code modules that will work in Firefox, Safari, Opera, and Google Chrome on any modern Windows, Mac, or Linux system that has an x86 processor. We're working on supporting other CPU architectures (such as ARM and PPC) to make this technology work on the many types of devices that connect to the web today.

Google has announced its Google native client, which enables x86 native code to be run securely inside a browser.

Because all that we need is to further promote an archaic instruction set that won't die because of all the pre-existing code compiled for it. An instruction set that was finally starting to loosen its grip as the industry worked toward more abstract solutions.

With Java applets already dead and buried

And with good reason!!! Plugin engines do not provide a very smooth browsing experience. You must wait for them to download and activate before you can start using the widget. Meanwhile, Javascript is designed for execution as the page is loading.

The heavyweight JVM was probably the worst offender, but look at Flash for another example of an engine that most developers would rather eliminate. While it was hip to create entire websites out of Flash for a while, the platform was very user-unfriendly and almost died out. Thanks to infighting over video standards however, Flash was able to hold on as a video delivery platform and even gained a margin of success as a web-gaming platform. (About the only area where Java Applets really shined back when they were popular.)

My personal opinion* is that this is a step in the wrong direction. Javascript engines are getting good. Damn good. I'd like to see more R&D poured into these engines and the underlying technologies [whatwg.org] rather than reinventing ActiveX and Java. If researchers wanted to invent a more efficient or usable browser language other than JS, I'd be all for it. But I don't run a browser to become a part of a compute farm. I run a browser to access web information and applications. Very little of which is compute-intensive enough to require a new execution engine over a more advanced set of APIs.

*...and 50 cents won't buy you a cup of coffee anymore, so take it for what it's worth.

** As an aside, C/C++ is an incredibly complex build environment. Why anyone would want to continue subjecting developers to the angst of compiler differences, makefiles, configure scripts, and other irritants is beyond me. As is typical with such platforms, I can't even get the examples running on my machine. The run.py script dies with an "Invalid Argument" on line 42 and the nacl-gcc compiler fails with syntax errors a-plenty. I'm sure I'll figure it out eventually, but WHY oh why do we want to promote such a complicated method of compiling code?

What's the problem with JavaScript? I have written JS code with >20k lines already, and it was quite ok. Among the things that irritate me is this "var" nonsense (declaring a variable without var puts it in the global namespace!), but other than that, it was fine. Also, you are wrong, JS has real objects. And, dynamic typing can be a very powerful tool if used properly. Note that Python has dynamic typing too...

Strong typing means 1 + "2" is nonsense, and you have to explicitly convert the types.

That example has nothing to do with strong typing, but rather operator overloading. Both Java and Javascript understand "+" to be a string concatenation symbol when dealing with strings. Thus they attempt to coerce the values into strings. In Java's case, the resulting output looks something like this:

new StringBuffer().append(String.valueOf(1)).append("2");

Javascript does have implicit type casting (e.g. '1' - 1 = 0), but this is a feature that can be found in quite a few strongly typed languages. (e.g. int x = 10; char val = 10; x += val) Javascript is actually STRONGER than C when it comes to typing. When I cast a variable to a new type in C, its original type information is redefined and/or completely lost. This can create problems when programmers start using (void *) pointers for everything. Javascript remembers the underlying type of a value at all times. Values are never modified or destroyed, but can be coerced to create a new value with an implicitly cast type.
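The coercion rules under discussion are easy to check in any JS console:

```javascript
// `+` with a string operand means concatenation, so the number is
// coerced to a string...
console.log(1 + "2");             // "12" (a string)

// ...while `-` has no string meaning, so the string is coerced to a number.
console.log("1" - 1);             // 0 (a number)

// The original value's type is never destroyed; coercion only produces
// a NEW value with the coerced type.
var s = "1";
var n = s - 0;                    // n is the number 1
console.log(typeof s, typeof n);  // "string number"
```

Same operands, opposite coercion direction, purely because of which operator is overloaded for strings.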

I do. And so can you. [yahoo.com] It's the C-based syntax that throws most programmers for a loop. Once you realize that the language is actually of a functional design similar to LISP, everything gets a lot easier.

No real objects, weak typing, etc.

Javascript has one of the most flexible Object systems I have seen in my 20+ years of programming. And its typing system is actually quite strong. Like another poster mentioned, it's dynamically typed not weakly typed. Which is an issue that fades into obscurity once you understand how to properly utilize the language.

It's fine for small bits of code, but for larger apps? Ugh.

Javascript (like most functional languages) is perfect for building large apps out of a massive number of small bits. Look up scoping in Javascript sometime and you'll understand that larger apps get built by having machines within machines within machines to go from simple tasks to ever more complex tasks. It is, in many ways, a more scalable solution than APIs and packaging. But it is different and therein lies the crux of its failure in the minds of many programmers.
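The "machines within machines" style the parent describes rests on JS closures: each function captures its enclosing scope and carries its own private state. A minimal sketch (`makeCounter` is an illustrative name, not from any library):

```javascript
// A closure-based counter: the inner function is a small "machine"
// whose state lives in the outer function's scope, invisible to
// everything else.
function makeCounter() {
  var n = 0;                 // private: unreachable outside makeCounter
  return function () {
    n += 1;
    return n;
  };
}

var c1 = makeCounter();
var c2 = makeCounter();
c1(); c1();                  // c1's state advances independently...
c2();                        // ...from c2's
console.log(c1(), c2());     // 3 2
```

Scale that pattern up - functions returning functions that close over ever-larger assemblies of smaller machines - and you get the composition style being described, without any package system.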

The only problem you seem to have with Java plugins is the load time -- this is only resolved by Javascript because JS is pre-loaded by the browser at all times (in modern browsers at least).

If other plugins were to be marked as 'frequently used' by the plugin engine and loaded at runtime instead of page load-time, they'd obviously be just as responsive as Javascript (or more so, since Java is compiled to native code in many cases).

Making a browser that integrates Java in a reasonable way and makes it work just as seamlessly as Javascript was tried already (by Netscape) but it was before we had computers with enough RAM to handle it IMHO.

The only problem you seem to have with Java plugins is the load time -- this is only resolved by Javascript because JS is pre-loaded by the browser at all times (in modern browsers at least).

The Java runtime was compiled into early browsers like Netscape. So the load time is not caused by the plugin itself. (Though that does play a role in the first activation.) The load time is the time it takes to download the complete application, dearchive the components, load the components into an interpreter or JIT, initialize the environment and/or APIs used, and finally present the application to the user.

Javascript fits in better with the way web browsers are designed in that the browser executes each individual module during the page load. This makes page loading more asynchronous and thus a better experience for the web user. The web developer can still throw up a "loading" progress bar for applications that must preload, but they are the exception rather than the rule.

Making a browser that integrates Java in a reasonable way and makes it work just as seamlessly as Javascript was tried already (by Netscape) but it was before we had computers with enough RAM to handle it IMHO.

There is more to the issue than meets the eye. Besides the synchronous aspect I mentioned, the client Java runtime has also grown to meet the expansion in system memory and complexity. Which is a good thing from the perspective of writing rich applications for deployment on the server or desktop. It's a bad thing when we're talking about the time-sensitive environment of the web browser. If you want an ideal JVM for the browser, Sun is going to have to strip it down again and make the platform a better fit than it has been in the past. (A version that relies heavily on the DOM for APIs would be preferable.)

They're also going to have to work out a good method of solving the load problem. Even Flash allows for partial execution prior to the load being complete. (This is how most Flash games show a LOADING screen.) Java was not designed with this in mind and the platform shows it. There are ways a developer could work around it using dynamic class loading, but this requires a great deal of knowledge, effort, and skill on the part of the developer.

My own feeling is that it's best to let sleeping dogs lie. I love the Java platform, but it currently has a higher calling. Best to let it work where it excels and focus on the aspects of the browser that currently excel. (e.g. Javascript)

No, you're thinking of Java6 (no updates). Java6 update 10 introduced many client-side enhancements, such as preloading of the browser plugin so the first time you hit applets they load instantaneously.

C/C++ are programming languages, not build environments. There's nothing to stop developers using qmake, or Jam, or one of many more user friendly build systems. The fact that the examples don't is more indicative of the intended audience (expert native-code developers) than anything else.

As to "why build this at all"... because they can, they want to and it has the possibility to provide a feature set not currently available. No one who isn'

I don't think the latest trend in RIAs and the associated frameworks is about changing the way we browse the web. I think it's about unifying desktop and web application development. You can sort of see it happening already with Silverlight, where you're effectively writing against the same WPF API, using the same XAML and .NET classes, for desktop apps and web apps intended to be run against the Silverlight runtime.

I think you're right in that all these schemes are effectively a reinvention of ActiveX an

Agreed. The web is not a place for applications. HTML is designed for hypertext.

A "webapp browser" would essentially be the view and one half of the controller in the MVC pattern. An interesting idea would be to have the browser as an environment for Adobe Adam & Eve scripts: http://lambda-the-ultimate.org/node/563 [lambda-the-ultimate.org] .

You seem to be confusing the instruction set with the underlying implementation. Core 2 is awesome. The instruction set is not. So much so that it must be translated into a decent set of instructions by microcode before the processor can pass it through the decoder and ALU.

What is "archaic" about modern x86?

Oh, I don't know. Instructions for 64-bit programming piled upon instructions for 32-bit programming piled upon instructions for 16-bit programming piled upon ins

Yeah, as the title suggests, I can see why this would be attractive. x86s are everywhere, as is code for them, sidesteps the hassle of working it out in javascript, etc, etc. That said, though, it seems really, really gross. For those applications where dealing with an embedded add-on is an acceptable tradeoff for higher performance, we already have java, which is designed for platform independence(JVM), sandboxability, etc. and has had years of development and wide support. Particularly given the increasing popularity of web on embedded(read non-x86) devices, "sorta-kinda-quasi-java-that-only-runs-on-x86s" seems like an enormous step back. Why would you do that?

we already have java, which is designed for platform independence(JVM), sandboxability, etc. and has had years of development and wide support. Particularly given the increasing popularity of web on embedded(read non-x86) devices, "sorta-kinda-quasi-java-that-only-runs-on-x86s" seems like an enormous step back. Why would you do that?

Isn't it obvious? Google standardized on C++, Java, and Python. As you point out, Java is already there. This 'any x86' lets them use their other two languages, C++ and Python. It kills two birds with one stone, and securing x86 is a hell of a lot easier than securing C++.

Of course, if they just straight up told people they wanted to choose the wrong tool for the job just because it's what they know they would have been laughed off the web.

After posting rants about crappy interpreted languages, incompatible HTML/CSS/JS implementations, ridiculous W3C "standards" (that their own browser never supported properly), I'm glad that someone finally did this (as suggested here [slashdot.org] a few days ago).

Security isn't a problem when even safe (in theory) content like PDF is plagued by exploits [theregister.co.uk] regularly. People need to learn to a) switch on such features only on trusted web sites (use Noscript e.g.) and b) distinguish trusted from untrusted web sites (i.e. av

Now let's see some hosted apps with decent performance and good UIs and let's make sure that hitting backspace doesn't destroy all our work.

Amen to that, brother. I can't remember the site, but somehow trying to backspace an error in a form did the old back browse and I lost everything. Like alt-arrow was too hard to do - why the hell did backspace need to be tasked to this function?

but somehow trying to backspace an error in a form did the old back browse and I lost everything.

This happens when the input forms temporarily lose focus because the browser loads something (switches to bee / hourglass icon), then apparently "backspace" gets sent to the browser window and interpreted as "back" instead of just being applied to the text field of the form.
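A page can defend against exactly this. A sketch (browser behavior varies, and the `shouldBlockBackspace` helper is a made-up name for illustration): swallow Backspace whenever focus isn't in an editable field, so a stray keypress can't trigger the browser's "back" action.

```javascript
// Decide whether a keydown should be swallowed: Backspace (keyCode 8)
// outside an editable field would otherwise navigate "back".
function shouldBlockBackspace(keyCode, tagName, isContentEditable) {
  var editable = tagName === "INPUT" || tagName === "TEXTAREA" ||
                 isContentEditable;
  return keyCode === 8 && !editable;
}

// Wire it up when a DOM is present (browser only).
if (typeof document !== "undefined") {
  document.addEventListener("keydown", function (e) {
    if (shouldBlockBackspace(e.keyCode, e.target.tagName,
                             e.target.isContentEditable)) {
      e.preventDefault();     // keep Backspace from acting as "back"
    }
  }, true);
}
```

Of course this only helps on sites that bother to add it; it does nothing about the browser's default behavior elsewhere.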

Browser security notwithstanding... By effectively emulating a CPU, it does open up some interesting experiments in distributed computing - and, yes, I'd like to see a tiny linux distro running in a browser :)

"In Native Client we disallow such [self-modifying code] practices through a set of alignment and structural rules that, when observed, insure that the native code module can be disassembled reliably, such that all reachable instructions are identified during disassembly."

Ok, when I read the post I had to chuckle at the asm joke. I've been programming in asm for 16 years now and there are a few rules of thumb:
- if assembly is allowed then the only real security is executed by hardware.
- malware writers love a challenge like this.

Did you get past page 3? Unmasked jumps are forbidden by static analysis, so you can't create new code and jump into it. Existing code is verified against whitelisted opcode sets. Segmentation is used to prevent self-modifying code. Tricks that prevent accurate disassembly are also forbidden by the verifier.
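The "unmasked jumps are forbidden" rule can be sketched like this (a toy model of mine, not the real verifier, which operates on raw x86 bytes): every indirect jump must be immediately preceded by the address-masking instruction, and the pair is treated as one indivisible pseudo-instruction.

```python
# Toy instruction stream: each item is a (mnemonic, operand) pair.
MASK = ("and", "~31")  # clears the low bits of the jump target

def verify_indirect_jumps(insns):
    """Reject any indirect jump not immediately preceded by the
    masking instruction, so every computed jump can only land on
    a 32-byte bundle start the verifier has already checked."""
    for i, (op, arg) in enumerate(insns):
        if op == "jmp*" and (i == 0 or insns[i - 1] != MASK):
            return False
    return True

ok  = [("mov", "eax"), ("and", "~31"), ("jmp*", "eax")]
bad = [("mov", "eax"), ("jmp*", "eax")]
print(verify_indirect_jumps(ok), verify_indirect_jumps(bad))  # True False
```

Since the mask forces every computed target onto a bundle boundary, and the alignment rules guarantee every bundle boundary is a valid instruction start, there's nowhere for a jump to land that the verifier hasn't already inspected.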

This is really a little operating system, with 44 system calls. Those system calls are the same on Linux, Mac OS X (IA-32 version), and Windows. That could make this very useful - the same executable can run on all major platforms.
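That uniformity amounts to a fixed dispatch table: the module calls a number, and the trusted runtime translates it to whatever the host OS actually needs. A rough sketch (the numbers, names, and return values here are all invented, not NaCl's actual interface):

```python
# Hypothetical syscall table: same numbers on every host OS, with
# the trusted runtime translating to native calls underneath.
SYSCALLS = {
    0: lambda *args: print(*args),  # toy "write to console"
    1: lambda: 1228780800,          # toy "get time" (fixed value here)
}

def syscall(number, *args):
    """Dispatch a sandboxed module's system call by number."""
    handler = SYSCALLS.get(number)
    if handler is None:
        raise OSError(f"syscall {number} not in sandbox interface")
    return handler(*args)

syscall(0, "hello from the sandbox")
print(syscall(1))
```

The portability win is that the executable only ever sees these 44 numbers, never the host's own syscall ABI.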

Note that you can't use existing executables. Code has to be recompiled for this environment. Among other things, the "ret" instruction has to be replaced with a different, safer sequence. Also, there's no access to the GPU, so games in the browser will be very limited. As a demo, they ported Quake, but the rendering is entirely on the main CPU. If they wanted to support graphics cross-platform, they could add OpenGL support.
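Why does "ret" have to go? It jumps to whatever address happens to be on the stack, which static verification can't predict. The replacement pops the return address into a register and masks it before jumping, like any other indirect jump. A toy illustration (mine, with an invented stack model):

```python
BUNDLE = 32

def unsafe_ret(stack):
    # plain x86 "ret": jump to whatever address is on the stack,
    # which the static verifier cannot predict or check
    return stack.pop()

def sandboxed_ret(stack):
    # NaCl-style replacement: pop the return address into a
    # register, mask it to a bundle start, then jump indirectly
    addr = stack.pop()
    return addr & ~(BUNDLE - 1)

stack = [0x2049]  # an attacker-influenced return address
print(hex(sandboxed_ret(stack)))  # forced onto a verified boundary
```

Even if an exploit overwrites the return address, the masked version can only transfer control to a bundle start the verifier has already approved, never into the middle of an instruction.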

Executable code is pre-scanned by the loader, sort of like VMware. Unlike VMware, the hard cases are simply disallowed, rather than being interpreted. Most of the things that are disallowed you wouldn't want to do anyway except in an exploit.

This sandbox system makes heavy use of some protection machinery in IA-32 that's unused by existing operating systems. IA-32 has some elaborate segmentation hardware which allows constraining access at a fine-grained level. I once looked into using that hardware for an interprocess communication system with mutual mistrust, trying to figure out a way to lower the cost of secure IPC. There's a seldom-used "call gate" mechanism in IA-32 that almost, but not quite, does the right thing in doing segment switches at a call across a protection boundary. The Google people got cross-boundary calls to work with a "trampoline code" system that works more like a system call, transferring from untrusted to trusted code. This is more like the classic "rings of protection" from Multics.
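The trampoline idea boils down to this: the only way out of the untrusted region is through a small set of fixed entry addresses, each of which lands in trusted runtime code. A toy model (my own; the addresses and service names are hypothetical, not NaCl's real layout):

```python
# Hypothetical trampoline table: fixed entry points are the only
# way across the untrusted->trusted boundary, like a system call.
TRAMPOLINES = {
    0x10000: "nacl_exit",   # invented entry addresses and names
    0x10020: "nacl_mmap",
}

def cross_boundary_call(target):
    """Model a control transfer out of the sandbox: anything that
    isn't a known trampoline entry faults instead of escaping."""
    service = TRAMPOLINES.get(target)
    if service is None:
        raise MemoryError(f"{hex(target)} is not a trampoline entry")
    return service  # the trusted runtime would dispatch from here

print(cross_boundary_call(0x10020))
```

Combined with the jump-masking rules, untrusted code can compute any address it likes, but the only exits from its segment are these pre-placed, trusted stubs.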

Note that this won't work for 64-bit code. When AMD came up with their 64-bit extension to IA-32, they decided to leave out all the classic x86 segmentation machinery because nobody was using it. (I got that info from the architecture designer when he spoke at Stanford.) 64-bit mode is flat address space only.

Java applets are not "dead and buried". Neither on the Web, nor on mobile phones (with the distinction increasingly meaningless), nor on embedded devices like DVD players and settop boxes (most of which have Java VMs in them, especially Blu-Ray players and other HD players, for menus).

What is "dead and buried" is ActiveX, which is x86 code running in a browser's "sandbox". But even that is clearly no barrier to resurgence, as this story shows.

x86 is a lousy architecture for modern purposes. Its design was shaped by optimizations for executing Pascal programs, the primary programming market when the IBM PC was originally designed. That was a long time ago, and only the huge legacy of existing apps (with their momentum maintained by enormous backwards-compatibility sacrifices) keeps x86 code popular. I'm all for a SW x86 emulator, especially on newer CPUs, so they don't have to be shackled to design compromises just to run the legacy code, and can instead do things newer and better ways with a more modern instruction set. Just like I'm all for the game emulators that will play old Atari 2600 games on Core Duo PCs and ARM mobile phones. Let's just not enslave ourselves to 1980 design priorities, optimized for a really dead language, for yet another decade of programming; it's already going on 30 years, which is 20 generations under Moore's Law.

Actually yes. I've noticed issues with some of the other Google domains such as gmodules.com as well. Will be interesting to find out what the cause is. I thought it was Firefox for a bit there but it turns out one of the domains on TechDirt has a script that was crashing Firefox (hurray for NoScript; didn't have it on my work PC before today).

On topic though: I seriously hope they don't plan on hosting virtual x86 boxes, just provide the code. They seem to be obsessing over the whole cloud computing th