Posted
by
kdawson
on Friday December 10, 2010 @02:25PM
from the for-your-own-good dept.

holy_calamity writes "Google's Chrome OS chiefs explain in Technology Review how most of the web-only OS's features flow from changing one core assumption of previous operating system designs. 'Operating systems today are centered on the idea that applications can be trusted to modify the system, and that users can be trusted to install applications that are trustworthy,' says Google VP Sundar Pichai. Chrome doesn't trust applications, or users — and neither can modify the system. Once users are banned from installing applications or modifying the system, security, usability, and more are improved, the Googlers claim."

The headline's a bit misleading. Users _can_ replace the OS. However, the BIOS will check signatures on the OS, and offer to restore from a known-good backup on boot (without destroying user data). This ensures that if the OS is infected by a virus or something, it's very, very easy to restore.
There are specific points in the design docs [chromium.org] where they make it clear that they do want to support advanced users installing their own OS, to the extent that that does not cause trouble for less advanced users.

And I expect that to carry zero weight with 3rd party hardware vendors, who will undoubtedly lock the platforms down and, if they're like Motorola, they'll sign the kernel so you absolutely can't load other OSes.

Because the manufacturers are who the OS openness is aimed at. They get to choose how they want it, and you get to choose which manufacturer you go to.

It's not looking the other way, it's the agreement.

Plus, you need a wedge to change entrenched practices. Apple wanted to change the way people use smart devices to access the web, and the way voice mails are done. AT&T agreed to make those changes with the agreement that they would be the sole carrier for a certain number of years. Now everyone is changing.

Are you suggesting it's Google's agreement to sell out the developers whose code they rely on? Because it is certainly not the intention of said developers to be locked out of their own code that way. At least, it was certainly not my intention and I believe my opinion is shared by a large segment of the Linux community.

Well, you and any of the others in the Linux community are pretty much boned if you released under GPL V2, because RMS didn't see the TiVo coming and thought folks would obey the "spirit" of the license, which we found out means jack and squat to a corporate lawyer. Of course, since Linus and some of the others won't release under GPL V3 because they think RMS went too far, you now have a divide which just makes things even more confusing and gives ammo to those who want to use the "GPL infection" bit. If an OEM uses GPL V2 code they can just pull a TiVo and lock you out with code signing and eFuses, but if a single line is GPL V3 then that is not allowed and the OEM is boned.

What I see happening is big corps like Google paying for GPL V2 versions of code to be continued and updated, which they will lock down via eFuses and other TiVo tricks, thus screwing the original developers unless they hire them to work for the corp. Meanwhile the GPL V3 code will be less used or fragmented, since you'll be able to use the GPL V2 code in the GPL V3 branch but not the other way around, and... it is probably gonna be nasty. But if you think the handset makers and telcos are actually gonna embrace openness? Well, then I got a really nice bridge you may be interested in. Hell, some of their biggest money makers are screwing their customers with nasty tricks like software lockouts of features which you have to pay to enable, and other dirty tricks.

So if you don't want your code locked then you really don't want it on mobile devices here in the USA, because that is what you're gonna get, like it or not. They have seen the iPhone app store model and have $$$ dancing in their eyes, they sure as hell ain't gonna let you install or do anything they don't get a cut of, sorry.

What I see happening is big corps like Google paying for GPL V2 versions of code to be continued and updated, which they will lock down via eFuses and other TiVo tricks, thus screwing the original developers unless they hire them to work for the corp.

There is basically no reason for a corporation to maintain a fork of GPL code all on their own. Half the point of using OSS is that you can make the changes you need and push them back into the tree without having to maintain your own version of everything in the world. If you're going to maintain it all yourself with no community involvement then you might as well just write the whole thing without using any GPL code. If that was Google's intent then why didn't they start with BSD and then never need to publish the source for their changes?

Meanwhile the GPL V3 code will be less used or fragmented, since you'll be able to use the GPL V2 code in the GPL V3 branch but not the other way around

So you're saying that because the GPL V3 version will have improvements made by certain corporations and the community instead of just the improvements made by those corporations, fewer people are going to use it?

But if you think the handset makers and telcos are actually gonna embrace openness?

Oh, they'll fight it. But right now they control the phones because they subsidize them and people buy their phones from the phone company to get the subsidy. What happens when the price comes down on phones to the point that they don't need a subsidy? They're going to turn away paying customers just because the customer bought their phone on Amazon without the lockdown package?

They have seen the iPhone app store model and have $$$ dancing in their eyes, they sure as hell ain't gonna let you install or do anything they don't get a cut of, sorry.

Someone was just telling me how the app store model doesn't make Apple very much money (they make much more by selling the device), and I'm not sure AT&T is making anything from it directly either. They certainly make more by selling ~$100/month service plans. Sure, AT&T likes that they can "discourage" apps that use cellular bandwidth to make VOIP calls instead of making AT&T voice calls, but all it takes is a wedge. One provider allowing open phones. Then it isn't a matter of losing a few bucks out of a $100/month wireless plan, it's a matter of losing the whole contract to the company that lets their customers save a few percent by using VOIP.

I can see third party ChromeOS device vendors not just kernel signing, efuses, or autoreinstalls, but doing one or more of the following:

It must take a toll on your health to be so paranoid.

If you're saying that mobile phone companies will continue to be mobile phone companies, then I might agree. But none of the things you list are going to happen, and any company that does any one of them will be at a competitive disadvantage. Remember, the people who will buy phones with ChromeOS are by definition not the captive customers those tactics depend on.

ChromeOS is likely different because it isn't a device, but we are definitely going down a slope here. If a device did do the things I mentioned (including blocking the IMEI of the device from ever connecting to cell networks), ordinary news channels would dismiss it as "anti-hacker measures taken to ensure integrity of hardware devices".

This type of shoe has already dropped in the console world. Ask the people whose XBox has been dropped from XBL, or the PS3s which get dropped from PSN. It isn't far-fetched.

And I expect that to carry zero weight with 3rd party hardware vendors

Perhaps not. OTOH, for the same reason there is always an unrestricted Android dev phone available, I expect Google to always have a similar Chrome OS dev device available once Chrome OS is generally available.

Reading the design docs, having an oem-unlock switch is a nice compromise between keeping Joe Sixpack from getting compromised by malware (and then blaming it on Google's or the device maker's lack of security) and allowing a clued user to do what he or she wants.

With this in mind, one thing that would be nice to have are offline apps. This way, a glitch in Internet connectivity would not mean a corrupted term paper.

I just have one concern though -- the fact that everything you do is stored in the cloud. This means zero privacy. Even with the lack of privacy now, if an application started sifting through Word documents and uploading them to an ad agency, there would be Hell to pay. However, one can't have any assurance that someone isn't doing this when all the docs are stored remotely. There is a fundamental rule: "don't put anything on the Internet that you don't want everyone, including your worst enemy, to know." So, trusting a cloud service with everything you do may have negative ramifications later on.

With this in mind, one thing that would be nice to have are offline apps. This way, a glitch in Internet connectivity would not mean a corrupted term paper.

That's what local storage in HTML 5 is for. When I played with Google Gears in 2007, there was a complete Javascript API for an in-browser SQLite database, and I could specify which files would be served locally. Thus, I could make a web application that would work without an internet connection.

Google Gears is now deprecated because many of its lessons were applied to the HTML 5 spec.
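
For a feel of what replaced Gears, here's a rough sketch of the localStorage pattern an offline-capable web app might use to keep a term paper safe through connectivity glitches (illustrative only; in a browser `window.localStorage` provides the storage, the tiny shim here just makes the sketch runnable outside one, and the function names are made up):

```javascript
// Fall back to an in-memory shim when there is no browser localStorage.
const localStorage = (typeof window !== 'undefined' && window.localStorage) || (() => {
  const store = new Map();
  return {
    setItem: (k, v) => store.set(k, String(v)),
    getItem: (k) => (store.has(k) ? store.get(k) : null),
    removeItem: (k) => store.delete(k),
  };
})();

// Save the draft locally on every edit, so a dropped connection loses nothing.
function saveDraft(docId, text) {
  localStorage.setItem('draft:' + docId, JSON.stringify({ text, savedAt: Date.now() }));
}

function loadDraft(docId) {
  const raw = localStorage.getItem('draft:' + docId);
  return raw ? JSON.parse(raw) : null;
}

// When the connection returns, push the draft to the server and clear it.
function syncDraft(docId, upload) {
  const draft = loadDraft(docId);
  if (draft) {
    upload(docId, draft.text); // e.g. a fetch() to the app's backend
    localStorage.removeItem('draft:' + docId);
  }
}

saveDraft('term-paper', 'Chapter 1: ...');
```

The same idea scales up via IndexedDB (or, in that era, Web SQL) for anything bigger than a few strings.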

I can already replace my Windows installation and when the OS is infected by a virus or something, it's very, very easy to restore. Just hit a BIOS switch, reinstall from a truly hidden (and BIOS-protected) partition - or recovery DVD - and reinstall without destroying user data. (All user data is on D:, while reinstall will bomb C:)

It doesn't work that well, let me tell you. User data is there, but programs need to be reinstalled to access it. System comes back squeaky clean, but everything needs to be changed to my personal liking.

What it boils down to is that a computer will be either vulnerable to its users, useless for them, or anything in between these extremes. Can't install programs? Useless but secure. Can install any program? Useful, but vulnerable.

Without settings and mail saved *somewhere*, a mail client is useless. With settings and mail saved *anywhere*, a mail client is potentially vulnerable.

Replacing the OS with a known-good image only works if someone can truly produce an image that is more useful than, say, a default Windows installation and still known to be good. Which gets increasingly doubtful the older the OS image is, the more programs are installed, and the more data/configuration/specifics are kept in program installations somewhere.

>> "User data is there but programs need to be reinstalled to access it. System comes back squeaky clean, but everything needs to be changed to my personal liking."...

That's a defect specific to Windows and its bloated registry. In the *nix world, all your settings are stored in your user data directory. All programs can be reinstalled from your distro's repository with a single package manager command, and their old settings (as well as all your desktop settings) will be just as you left them.

I can already replace my Windows installation and when the OS is infected by a virus or something, it's very, very easy to restore. Just hit a BIOS switch, reinstall from a truly hidden (and BIOS-protected) partition - or recovery DVD - and reinstall without destroying user data. (All user data is on D:, while reinstall will bomb C:)

It doesn't work that well, let me tell you. User data is there, but programs need to be reinstalled to access it. System comes back squeaky clean, but everything needs to be changed to my personal liking.

What it boils down to is that a computer will be either vulnerable to its users, useless for them, or anything in between these extremes. Can't install programs? Useless but secure. Can install any program? Useful, but vulnerable.

Without settings and mail saved *somewhere*, a mail client is useless. With settings and mail saved *anywhere*, a mail client is potentially vulnerable.

Replacing the OS with a known-good image only works if someone can truly produce an image that is more useful than, say, a default Windows installation and still known to be good. Which gets increasingly doubtful the older the OS image is, the more programs are installed, and the more data/configuration/specifics are kept in program installations somewhere.

Aren't you forgetting the main point -- the cloud? Everything (in theory) is saved for you to continue where you left off. Yes, the cloud itself is potentially vulnerable, but arguably more secure than an average home user's PC.

The other week I broke my Android phone and swapped it for a replacement one. I logged into my Google account and within minutes all my apps, contacts, and messages automatically came back. Most of my settings did as well.

The headline's a bit misleading. Users _can_ replace the OS. However, the BIOS will check signatures on the OS, and offer to restore from a known-good backup on boot (without destroying user data). This ensures that if the OS is infected by a virus or something, it's very, very easy to restore.

Isn't this exactly what Microsoft argued when it put forward "Trusted Computing"? And didn't we excoriate them for it?

I was thinking the same thing. If iOS is a walled garden, this is a walled garden hermetically sealed under a Plexiglas dome, with a concrete floor and all the plants in sterilized pots.

But that might not be a bad thing. For the "my phone/computer is an appliance" crowd, this might be perfect. No fiddling around trying to download plugins or extensions, no antivirus overhead, no differences between multiple machines, and, most importantly, almost no tech support required. If I break something like this, I go out and buy a new one, present one username and password to it, and it's exactly like my old one used to be.

If you're selling an OS whose primary purpose is to surf da interwebz, it might not be a terribly bad idea to resurrect the concept of the "dumb terminal" in that context. I presume Google will push updates, so if they keep a current list of plugins and/or extensions that can be enabled/disabled by the user as desired, you have machines that are going to be really, really hard to compromise, and really, really easy to use. And really, really inexpensive.

Well, except by Google, so you'd better trust Google a LOT under this model (much like you have to trust Apple a good deal under the iOS model). If you want your computer to do anything outside what Google had in mind, you're done. If Google gets hacked, your data gets hacked and you might never know about it. And, of course, you'll never be able to do anything without Google knowing about it.

***If you want your computer to do anything outside what Google had in mind, you're done. If Google gets hacked, your data gets hacked and you might never know about it***

Too right. But if you expect this cloud concept to work, maybe it's how things are going to have to work. Realistically, I don't see how one can leave their personal and especially financial data on someone else's server without foolproof encryption and/or operating systems that are far more secure than Windows and Unix are or are ever likely to be.

After reading the article, I can't come to any other conclusion. This is *way* more closed than the iFamily stuff. It's on par with the attitude that Apple took with the initial release of the iPhone, before the App Store. Even then, Apple provided a fair number of local apps that you could use to perform a lot of basic PDA functions. This is literally a computer with one application installed. It has a web browser, that's it.

This is... pretty yucky. I mean... I consider the iPhone's level of lock down to be acceptable on a phone or PDA.

I consider the iPhone's level of lock down to be acceptable on a phone or PDA

I don't, so you can imagine my opinion of blatantly user-hostile systems like this. But make no mistake, this is the larger target for virtually every mobile device manufacturer. Google is just establishing a basis, leaving the final lock down to the vendors. I refer to my prior post regarding that.

Google may be following Jobs's path, though. First only allowing Web apps and getting that locked down, then eventually adding an App Store and a mechanism for apps to run securely. I can see ChromeOS sporting the userID protection that Android has, but also sporting a DroidWall-like mechanism for only allowing apps to communicate with machines specified in their manifest list. For example, a game company's offering would only have access to their servers and AdMob.

So, it's a dumb-terminal that requires me to have constant access to the internet, can't store files, can't have actual programs installed on it.

Please catch up. It is not what you think.

It's not a dumb terminal, it doesn't require you to have constant access to the Internet (some apps require it, others don't), it can store data locally, and you can install programs. They're registered in the cloud, and if you log in and one is missing, it's quickly synchronized to the local device.

Understanding the significance of ChromeOS requires that you abandon some old ways of thinking about how a computer should act. Yes, you're "losing" the desktop and the file folders. You're also losing slow boot times, viruses, the risk of losing your data in hard drive crashes or device theft, and the occasional maddening discovery that you left a critically important file on a hard drive at home|school|work.

This may not be the device for you, but it may be the device for a lot of people. It's worth pointing out that the half million or more people who buy smartphones every day also walk away from a mountain of desktop-computer annoyances.
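
The sync-on-login behavior described above could look something like this (a hypothetical sketch; none of these names are real ChromeOS APIs):

```javascript
// Apps are registered against the account in the cloud; the device keeps a
// local cache, and on login anything missing is pulled back down.
function syncApps(cloudAppList, localCache, download) {
  const fetched = [];
  for (const app of cloudAppList) {
    if (!localCache.has(app)) {
      localCache.set(app, download(app)); // quick re-sync to the local device
      fetched.push(app);
    }
  }
  return fetched;
}

// A fresh device has only one app cached; login restores the rest.
const cloud = ['docs', 'mail', 'calendar'];
const local = new Map([['docs', '<docs bundle>']]);
const fetched = syncApps(cloud, local, (app) => `<${app} bundle>`);
```

This is why replacing a broken device is just "present one username and password": the account, not the hardware, is the source of truth.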

With a reliance on constant connectivity and no hard drive, a Chrome notebook could be described as an overgrown smart phone with a keyboard.

So, unless the article is mistaken (which is possible)... that would be a dumb terminal, with no storage.

This may not be the device for you, but it may be the device for a lot of people. It's worth pointing out that the half million or more people who buy smartphones every day also walk away from a mountain of desktop-computer annoyances.

So, unless the article is mistaken (which is possible)... that would be a dumb terminal, with no storage.

TFA is not merely "mistaken", it is either the product of gross ignorance of the subject matter or deliberate deception.

Chrome OS does not require constant connectivity, contrary to what TFA claims. It does everything through the Chrome browser, of course, and so has requirements that are pretty similar to that -- browser-based applications will require a network connection to the extent that they don't take advantage of the features of HTML5 and other technologies implemented in the Chrome browser for the specific purpose of enabling offline web applications.

And, yes, the Cr-48 at least has no hard drive, but that doesn't mean no local storage: it uses an SSD. Applications can store information locally using the HTML5 local storage APIs.
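
One of those HTML5-era mechanisms was the application cache manifest, which told the browser which resources to keep locally so the app still loads with no connection at all (a made-up example, not taken from any real app):

```
CACHE MANIFEST
# Resources under CACHE: are stored locally and served when offline.
CACHE:
index.html
app.js
style.css
NETWORK:
# Everything else still requires a live connection.
*
```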

I never trusted the "one Mao jacket fits all" paradigm in fashion, and I certainly don't with my machine. Somehow the judgement of the engineers who "friended" my whole Gmail address book is suspect, at least to me, when it comes to designing a total operating system...

Come back in an hour when all those posts have been modded down to -1, Flamebait, and look at the stuff that's been marked up.

There are an awful lot of people here who are going through tortuous mental gymnastics to explain why Google locking down its OS so that the only thing you can do is run web apps is a good thing because you can wipe Chrome OS and install whatever else you want.

By that logic, Windows is the best OS ever, because you can wipe your new system from Dell and install something that's completely different.

Sounds like "as closed" to me. I can see not letting apps modify the system without user consent, but I'm appalled by the idea that the person who pays several hundred dollars for a computing device shouldn't be able to do any damned thing he pleases to it.

Several hundred dollars seems rather high for a Chrome OS device.

The ones that are actually for sale, that I have seen, have been marketed to call centers, on the assumption that they will just connect to the corporate intranet.

It doesn't matter what levels of relative distrust I assign to Google or assign to you personally.

Google can do a lot more damage to me than you can.

Well, that rather depends on what volumes you assign to "you."

Dozens of zombie botnets exist around the world, consisting of millions of compromised machines. All of these exist almost entirely because users are trusted to make the right decision with regard to program installation and access... and they're wrong often enough to get their machines infected.

The fact is these days even relatively knowledgeable users can't be expected to be able to easily vet the source code of every program they use, even when the source is available. When was the last time one of you audited the code for the entirety of your Linux install--or even just the kernel?--plus your Firefox/Chromium browser and Open/Libre Office? Have you manually combed through all the Javascript from every webpage you've browsed today, to make sure there are no exploits hidden in the code? Are you sure you haven't given a virus a backdoor into your system?

Maybe not trusting users by default is the right way to go. It's just an extension of the idea to not have everyone log in as Administrator/Superuser all the time, and instead differentiating between regular users and admins; you're just linking the Admin account to a physical switch on the hardware itself.

Oh, and even if you're using an email client with good spam filtering, it still has to download the spam before it can sort through it and throw it away, wasting bandwidth.

That's why you use a webmail system like Gmail. Let Google deal with the bandwidth problem.

I do see your point with the DDoS, though it can also be argued that DDoSes can be a good thing in many instances, such as when they attack Mastercard, the Swedish prosecutor's office, etc. I can't remember any DDoS ever attacking anything I really care about.

And for those comparing this to Apple's lockdown, that's ridiculous - Apple actively tries to prevent you from jailbreaking, while anyone can mod the Chrome OS.

Anyone can modify Linux, that doesn't mean that if you give me a Linux box with locked down guest account access, no alternate boot methods, and don't tell me the root password that I can modify this *particular* Linux installation. The fact that Chrome is Open Source won't help me install applications on my Chrome device. Unless I go out and install my own custom ChromeOS on the device, at which point why did I buy this thing? I could have just bought a conventional laptop and put Fedora on it.

I've never seen so many people get it and not get it at the same time.

Unless I go out and install my own custom ChromeOS on the device, at which point why did I buy this thing? I could have just bought a conventional laptop and put Fedora on it.

That's both the smartest and dumbest thing said so far. Why yes, if you want a full Linux distro, then don't buy ChromeOS. That's so stupidly obvious that stating it makes us all dumber. Yet so many don't get it.

Really, not letting most users or applications modify the OS is a good thing. Microsoft (and others) have had a TERRIBLE model in permitting this. Third-party stuff has no business altering the foundation of the system's operation.
Now, not letting an application that doesn't want to monkey with the OS get installed is probably going too far. I mean, who's gonna run an OS they can't put an app on? That's broken.

Google doesn't get advertising dollars from you running a local app and disconnecting from the network. They *do* get advertising dollars for every online app you regularly use because that's the only way for you to get anything done.

I spend most of my work day with a couple of browsers, a couple of PuTTY sessions, Outlook, Excel, and a few other apps open. Imagine how many page impressions that would generate if every single one of those apps was based in "the cloud" and had a little section where Google could insert ads?

Still wondering why this is being touted by Google as the most innovative and revolutionary feature ever in OS design?

The whole point of Chrome OS is to shift the application from running natively on the hardware to running in the cloud. You're thinking of the web browser as the application, Google is thinking of GMail as the application.

The whole point of Chrome OS is to shift the application from running natively on the hardware to running in the cloud.

No, the whole point of Chrome OS is to shift applications from targeting the OS to targeting the browser (thereby commoditizing the OS).

This differs from a shift from "running natively" to "running in the cloud" in that one of the major areas where Google has put effort into enabling the browser to be the platform for more robust applications is in allowing browser-based applications to run offline.

"Operating systems today are centered on the idea that applications can be trusted to modify the system" only applies to Microsoft operating systems. Unix and Linux don't trust applications. Application packaging systems are often trusted by users to properly install an app, but Unix/Linux requires the user to have sufficient privileges to allow the app installer to perform the installation. Few Unix/Linux apps are given root privileges.

First off, you're way out of date. Windows has supported the permissions structure you're advocating since NT 3.1 came out (it pre-dates Windows 95, although until XP came out the permission-less 9x systems existed in parallel). The first user created had root permissions, but nothing required that you do everything as that user; my day-to-day XP account had limited permissions. For Vista and Win7, by default even members of the Administrators security group run programs with limited permissions, though they can elevate through a UAC prompt when needed.

Really, not letting most users or applications modify the OS is a good thing. Microsoft (and others) have had a TERRIBLE model in permitting this. Third-party stuff has no business altering the foundation of the system's operation.

Now, not letting an application that doesn't want to monkey with the OS get installed is probably going too far. I mean, who's gonna run an OS they can't put an app on? That's broken.

Define "app".

ChromeOS allows the offline install of webapps like Google Docs, which allows you to use every regular function of google docs offline, with no web connection. You can create, save, and edit documents, including saving them to external media, without an internet connection. You can even print them if you have a network connection, even if there is no internet.

How is that not an app?

ChromeOS is not an operating system like you are used to. That doesn't automatically mean it's a bad idea. -Taylor

Microsoft (and others) have had a TERRIBLE model in permitting this. Third-party stuff has no business altering the foundation of the system's operation.

Microsoft fixed this issue almost ten years ago with .NET. The .NET Framework allows you to grant or deny any permission to any application (or to every application). The default configuration is that applications launched from storage outside the local machine are not trusted to do anything other than display a user interface, regardless of the permissions of the user running the application. It would be trivial to change the configuration so that only Microsoft software could modify the OS.

Once users are banned from installing applications or modifying the system, security, usability, and more are improved.

Keep them from installing the OS and the box will be very secure, though usability may suffer a bit. I've always thought that security wonks are only really happy with a system while it's powered off or still in the box.

I dunno. If your users need to do all their work at the office, this could be great. You either always have connectivity, or nothing would have worked without the net being up anyway. Otherwise, you run into issues. Not only will this take some serious bandwidth, but if your net connection is down, you are out of luck.

Yeah, if only Google had thought about this issue and invested some effort into enabling off-line web applications before deploying an everything-is-done-through-the-browser OS.

If you could install an app, or adjust the system as a user, then maybe you wouldn't provide as much data to Google. Google do not make money from computers or operating systems, they make it from the information they extract from you.

Then it's not the part they care about. A malicious application installed by a naïve user will always be able to send emails (because the user will demand the ability to do that), and therefore send spam. And it'll still be able to delete the user's files.

Now we're just a hop and a skip away from "Once users are banned from browsing non-Google-approved websites or attempting to use non-Google services, security, usability and more are improved."

For those that always say "but you can modify it!" or "well you don't have to use it" (the latter of which is true even for Apple's iEcosphere), that doesn't address the problem. The problem is that a whole lot of people will see the convenience and the stability and they won't modify it and they will use it, making the whole concept of walled gardens and lockin more popular among consumers who want ease (as opposed to choice) and companies who want to make money. Large groups of people will forget that they ever had a choice to begin with. I'm not trying to evoke 1984 here or say that we're all going to be slaves to Google, but in the world of consumer technology right now, the leading idea that is getting the most users and making the most money is "step into the [Apple/Microsoft/Google/Facebook] world and bask in the luxury of having everything work together and not having to make choices."

Just like the old adage about privacy and security, is it worth trading choice for convenience?

Just like the old adage about privacy and security, is it worth trading choice for convenience?

Sounds like that's a question that people can only answer for themselves - and a lot of them are answering "yes" by buying locked-down devices and aren't regretting it. I think the reason for this might be that the choice you see isn't apparent or useful to most people. Only for a very small portion of users are the limitations that an iPhone imposes limitations at all. Using myself as an example: I used to work as a programmer and still have an interest in technology, but moving to an iPhone wouldn't really limit me.

I thought the whole point of Chrome OS was that it was a client for running cloud-based webapps?
Given that, it makes sense to lock down the machine - unless they're saying that it won't even run non-google Web apps?

Sorry, but I don't trust having all my apps run from the web. Just the other day I was on a tight deadline trying to print a document from Docs when it crapped out on me, refusing to print. It was late at night, so it's understandable if they needed to do some server maintenance. Or possibly it wasn't even Google's fault, because there may have been issues with my ISP, but either way I was helpless to do anything. I would prefer having things run locally and automatically sync to the cloud when possible.

The documentation is vague and shallow. The options panes are missing or disable important features that might help me produce charts that don't look like shit. The only way to downgrade to charts that work is to revert to older versions of the document and not to accept the upgrade when making changes in the future.

The user owns the machine, they should be trusted to decide what is done with it. If you think I'm wrong... let me explain...

The reason we don't want to trust users is because they have a demonstrated history of bad choices, which result in a lot of work for the geeks who have to clean up the mess. We have a better track record, so we assume that it must be because we are smarter than they are. This is only true to a limited extent.

The reason the user makes bad choices is that they are given the wrong choice to make. Instead of being asked what extent of permission a program should be granted, the user is given an all-or-nothing choice. It's not possible for them to "try out" a program without risking everything. This is just plain nuts.

Capability based security offers a way to express the wishes of the user in a manner which NEVER trusts an application... but rather places the responsibility for limiting system changes in the operating system, where it belongs.

It is only when we finally get out of our smug, self-congratulatory slumber that it's possible to consider that the typical user is not an idiot prone to randomly pressing OK.

We need to offer sane choices, and a sane security model... Capability Based Security is the only way to go.
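To make the idea concrete, here is a minimal sketch of the capability model the parent is describing. The names and classes are illustrative inventions, not any real OS API: the point is that an untrusted program receives no ambient authority at all, only narrow capability objects the user (or OS) explicitly hands it.

```python
# Minimal sketch of capability-based security (illustrative names only).
# An untrusted program can act ONLY through capability objects it is
# explicitly handed; there is no global "open anything" authority to abuse.

class FileWriteCap:
    """Capability to write to exactly one path, and nothing else."""
    def __init__(self, path):
        self._path = path

    def write(self, data):
        with open(self._path, "w") as f:
            f.write(data)

def untrusted_plugin(log_cap):
    # The plugin never sees the filesystem, the network, or the rest of
    # the system - only the single capability it was granted.
    log_cap.write("plugin ran\n")

# The user grants a narrow capability instead of all-or-nothing trust,
# so "trying out" a program no longer risks everything:
cap = FileWriteCap("/tmp/plugin.log")
untrusted_plugin(cap)
```

In a real capability OS the enforcement would live in the kernel rather than in a Python convention, but the shape of the choice presented to the user is the same: grant this one thing, not the whole machine.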

Google... unfortunately, isn't any wiser and misses the boat here, but by a slightly smaller margin.

"Once users are banned from installing applications, or modifying the system security, usability, and more are improved, the Googlers claim."

No security is perfect; there WILL eventually be a remote execution exploit, and the users will be banned from installing applications, or modifying the system, in order to fix it. I hope it comes with a USB drive I can boot from to wipe the system clean...

However, there WILL also eventually be a remote execution exploit that enables the users to install applications, or modify the system security to provide additional usability, and more functionality than the Googlers intended.

ChromeOS is just begging to be sprung free of the Google jail.

Hint: When the "Attackers" are the folks who purchased the device, their physical access to the device will render all "defenses" useless.

Also: DO NOT WANT, will simply use any other unrestricted laptop or tablet PC available.

Jolicloud, a competitor to Chrome OS, has an app in the Chrome Web Store [thenextweb.com]. Jolicloud decided to integrate its platform inside the Chrome browser, so you can use Jolicloud's services instead of Google's. Though definitely restrictive, Google is not locking you into its services.

Yeah, but they still can't get HTTPS on their own damn cloud products. Here's a quick look at Google's security beyond the local device:

I turn on my laptop, turn on my VPN, surf. In the process I got owned by my buddy running Firesheep. Here's how:

Laptop has tabs open. Wifi connects before VPN kicks in. Chrome tries to refresh a tab containing a PUBLIC Google Doc where I was not logged in, and Chrome sends out my authentication without HTTPS on it. Firesheep grabbed the Google account, which is my reset-password account for everything else. Owned.

Later we learned that Chrome's sync bookmarks tool also sends your Google account authentication without HTTPS. All the time.

So if you're on an open network, Google is spamming your authentication to anyone who's listening, because they can't get their shit together to use HTTPS when they authenticate.
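The mechanics of that leak come down to cookie attributes: a session cookie set without the `Secure` flag gets attached to plain-HTTP requests too, where anyone sniffing the open network can read it. Here is a toy sketch of the browser's decision (purely illustrative logic, not Chrome's actual code):

```python
# Toy model of why the "Secure" cookie attribute matters. A browser
# attaches a cookie to a request unless the cookie is marked Secure
# and the request is going out over plain HTTP.

def cookie_sent(url_scheme, cookie_secure):
    """Return True if the browser would attach the cookie."""
    return not (cookie_secure and url_scheme == "http")

# Session cookie set WITHOUT Secure: it rides along on any HTTP fetch,
# e.g. Chrome refreshing a public Doc before the VPN is up, and a
# Firesheep-style sniffer on the same network captures it.
assert cookie_sent("http", cookie_secure=False)      # leaked in the clear
assert cookie_sent("https", cookie_secure=True)      # sent, but encrypted
assert not cookie_sent("http", cookie_secure=True)   # withheld - safe
```

Marking authentication cookies `Secure` (and serving everything over HTTPS) is the standard fix; the complaint above is that in 2010 Google's own services weren't doing that consistently.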

...and I'm not buying a portable computer that only works when it can talk to Google's servers (though I'll happily beta test one!). Preventing apps from mucking around with system files is a no-brainer, but that doesn't mean they have to live in the cloud. For corn's sake, they make portable apps [portableapps.com] for Windows that work fine without touching the OS.

..it better trust the machine's owner completely, or else these machines are just Trojan Horses. If the machine doesn't ultimately answer to you, then who does it answer to? Someone who isn't you, that's who.

The difference (at least according to the design docs; we'll see what happens on release when we come to that) is that ChromeOS devices give one the (advanced, but non-hack) option to tell the command-and-control system to shove it. Their shipping image, and the one you get if you restore, is built on a no-trust model; but if you wish to put a different one on there (including a modified build of the open portions of ChromeOS), that is your call.

With Apple, by contrast, their portables run their OS or nothing, barring hacks that depend on mistakes Apple did not intend to make and does tend to correct over time. What you see is what you are stuck with.