The Wheel turns, and Ages come and pass. Thin client fades to fat client, and …


On the first day of CES, I dropped by the Qualcomm booth looking for ARM-based smartbooks to try out. As I poked and prodded the Lenovo Skylight, I pulled out my Nexus One and dropped it on top of the unit for a size reference so that we could snap a picture of it. As I stood there looking at the phone lying on top of the smartbook and contemplating the fact that both of these (Android-based) devices had 1GHz, ARM-based Snapdragon processors in them, I glanced across the booth and spotted an ARM-based game console sitting right next to the ARM-based iRex Iliad e-reader. And then there was the portable media player (PMP) positioned not far away... then it really sank in: smartphone, netbook, e-reader, PMP, game console—all popular consumer electronic categories with real computing needs and a huge audience, and all on ARM right now.

Intel, in contrast, is currently in the netbook, is aiming at the smartphone, would've liked to be a game console (they had an internal team pursuing a win with the erstwhile Larrabee GPU), and has yet to signal any interest in the booming and ARM-only e-reader market (though the chipmaker does have a kind of e-reader for the blind).

My time on the show floor of CES 2010 brought home for me in a dramatic way the fact that the Intel vs. ARM war isn't really a hardware war or an instruction set architecture (ISA) war—in fact, it's not even a war at all. It's more like another fundamental turn of the wheel from fat client back to thin client—a redivision of computing labor, brought about by the ubiquity of network bandwidth, the availability of cheap wireless radios, the rise of the app store distribution model, and the cloud infrastructure build-out. But this time around, the turn has a few important twists.

Just enough cycles to run the UI

Take another look at the list of ARM-based products that I gave above: smartphone, netbook, e-reader, PMP, the Zeebo game console. None of these are content production devices. They're all networked, and they're all oriented toward messaging and/or consuming content that comes in over the wire or over the air. This fact points at the key reason that ARM processors—which aren't even in the same ballpark as x86 processors yet in terms of raw performance—have been so successful in this new "app store" and cloud-based messaging and content distribution context. Thanks to Moore's Law, ARM's inexpensive, lightweight CPU designs—not just those based on the A8, but even ARM11-powered parts like NVIDIA's Tegra—now provide enough cycles to run a rich UI. And when you have a network port on the device that lets you trade bandwidth (and some battery life) for access to massive cloud storage and computing power, enough cycles for a UI is all you really need for these new device categories.

The overall effect is that the network has made the larger computing ecosystem more efficient at division of labor, so that work gets done in the optimal location. Instead of filling in what I mean by "optimal" with a list of parameters, I'll give you two examples of this phenomenon in action.

The first and most recent example is Android 2.1's pervasive use of voice input, a compute- and storage-intensive task that the phone can simply offload to Google's servers. There's enough network bandwidth to make the turnaround time for voice recognition fairly snappy, so the phone never needs the local storage or cycles to do this itself.
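The offload pattern behind that feature can be reduced to a skeleton. Everything in the following Python sketch is hypothetical (the function names and payload are illustrative, not Google's actual API); the point is that the handset's share of the work is just capturing bytes, sending them, and displaying the returned text:

```python
# Illustrative only: a stand-in recognizer, not any real speech API.
# The acoustic models, language models, and the storage they need live
# entirely on the server side of this split.

def phone_voice_input(audio: bytes, recognize) -> str:
    """Client side: ship the captured audio up, show whatever text comes back."""
    return recognize(audio)  # in a real client, an HTTPS request

def cloud_recognizer(audio: bytes) -> str:
    """Server side: stands in for the compute- and storage-heavy recognition."""
    return f"({len(audio)} bytes of speech, recognized server-side)"

sample = b"\x01\x02" * 4000  # pretend: a short PCM clip from the microphone
print(phone_voice_input(sample, cloud_recognizer))
```

However the real protocol is shaped, the asymmetry is the same: the phone trades a little bandwidth for the server's models and cycles.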

The next example is the Crysis-on-an-iPhone demo that I saw at an AMD event last year. All of the game rendering is done by a remote server, which uses the network to push compressed frames to the phone and to take player control input from the phone. A very small local client app decompresses the frames and passes along player input—again, the phone needs just enough cycles to run the UI.
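The demo's division of labor is easy to sketch. This Python toy is not AMD's actual pipeline; the frame generator, the compression scheme (plain zlib standing in for a video codec), and all of the names are illustrative. It shows that the client's entire workload is decompressing a frame and sending input back up the wire:

```python
import zlib

FRAME_W, FRAME_H = 320, 480  # toy "phone screen" dimensions

def server_render_frame(tick: int) -> bytes:
    """Stand-in for the remote game engine and GPU: produce one raw frame."""
    pattern = bytes((tick + x) % 256 for x in range(64))
    return pattern * (FRAME_W * FRAME_H // 64)

def server_send(tick: int) -> bytes:
    """Server side: render, then compress the frame for the wire."""
    return zlib.compress(server_render_frame(tick))

def client_receive(wire_bytes: bytes) -> bytes:
    """Client side: the thin client's whole 'rendering' job is decompression."""
    return zlib.decompress(wire_bytes)

def client_send_input(event: str) -> dict:
    """Control input goes back up the wire; the server applies it to game state."""
    return {"type": "input", "event": event}

# One round trip of the loop:
wire = server_send(tick=0)
frame = client_receive(wire)
assert frame == server_render_frame(0)
print(f"raw frame: {len(frame)} bytes, on the wire: {len(wire)} bytes")
```

A real deployment would use a hardware video codec rather than zlib and stream frames continuously, but the asymmetry is the point: all of the expensive work stays on the server.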

Of course, this local-remote division of labor is nothing new—it has been with us since the dawn of networking and the client-server model of computing. It's also the same phenomenon at work in the West's outsourcing of labor to developing nations via telecom infrastructure.

The twist is that on this turn of the wheel back toward the thin client model, the great bulk (by units shipped and aggregate hardware + software revenue) of the commercially available thin-client ecosystem—a growing menagerie of readers, tablets, smartphones, and other networked, special-purpose content consumption and messaging devices—will be consumer-facing. And right now, with the likely exception of Internet-connected TVs (Intel has a great shot at this), ARM has nearly the entire ecosystem locked down by virtue of the simple fact that it's the most efficient way to run a thin-client UI, and one that gives a responsive, fat-client experience without sacrificing battery life.

In sum, Intel isn't doing battle with ARM. It isn't even doing battle with the ARM ecosystem of software makers, fabs, SoC designers, and device makers. If Santa Clara is fighting anything, it's fighting the thin-client model itself. The chipmaker simply does not want the wheel to make another turn—at least, not until it's ready with an SoC that can compete in the fickle, low-margin world of consumer-facing, networked, thin-client devices. And in that respect, it's in a race against time.

Despite the fact that ARM's current advantage is entirely due to its efficiency, there is one place where a small amount of ISA lock-in may help tip the scales toward ARM in the long run: casual games.

Cross-platform gaming: from phone to console (to tablet?)

I should admit that the introduction's implied comparison of the ARM-based Zeebo game console to Intel's Larrabee-based PlayStation 4 ambitions is actually apples-to-oranges, because the Zeebo isn't intended to compete with the latest and greatest from Sony, Microsoft, or Nintendo. Rather, the company bills itself as "family fun and learning for the next billion," and its plan is to target users in the developing world by selling them a 3G-connected computer/console that runs games and educational software.

With an ARM11 core for the main application processor, an ARM9 core for audio, and a Qualcomm-designed GPU core, the console has about as much horsepower as a Nintendo DS, and less horsepower than my Cortex A8-based Nexus One. In fact, ARM11 and ARM9 are very low-performance, in-order cores, and if I were going to build such a console, my first thought would be to use an Intel processor. Atom's performance is far better than anything ARM-based on the market, and an Intel part would give you the advantages of the x86 software stack. So why didn't Zeebo use Atom?

Zeebo went with ARM and not Intel because the Zeebo is intended to run ports of an explosively popular category of casual games that people are playing right now. I'm talking about ports of smartphone games. Simply put, the ARM ecosystem gives Zeebo access to more casual games from independent and big-name studios than the PC ecosystem, and by a wide margin. During the course of a brief demo, I saw games from Activision and other studios that had been ported from their original smartphone platform at a cost to the developer of, in at least one case, as little as $50.

Note that smartphone games are coded in C/C++ for performance reasons. This is true on Android, which otherwise runs interpreted code via the Dalvik runtime engine, and on Palm's webOS, which insists on CSS/HTML/JavaScript for non-game applications. So the casual gaming market is the one corner of the mobile app market where the underlying hardware matters, and it's quite a large and profitable corner.

Ultimately, I'm not at all sold on Zeebo as a business (at close to $200, the console costs way more than a PlayStation 2), but that's not the point. The point is that there's so much money flowing into game development on the iPhone and Android that you can actually make an ARM-based game console and stock it with cheap ports. You could stock an ARM-based iTablet with cheap ports as well; indeed, a wave of ARM-based tablets would provide a great platform for casual games, and tablet gaming could become a trend in itself. (At this past CES, Ars Gaming Editor Ben Kuchera was very impressed with NVIDIA's Tegra-based tablet prototypes, and wondered why the GPU maker doesn't just build and market them under its own brand. Such tablets are ideally suited to casual gaming.)

By the time Intel is able to squeeze the x86 ISA down into a competitive smartphone, the casual gaming market will be that much bigger, and that much harder to pry away from ARM. Of course, it's too early to tell what kind of impact some mild ISA lock-in from games will ultimately have on the direction of the wider hardware/software ecosystem for mobiles, but I suspect it won't be trivial.

Appendix: Confessions of a cloud client skeptic

Yes, in late 2007 I was a cloud client skeptic. I had been talking to ARM and Intel about their mobile plans, and I knew that they were looking to massively increase the number of compute cycles available in the handhelds of 2009 and 2010. But I underestimated the network, and the fact that all of the local cycles would (at least in the near- to medium-term) go to the UI and user experience.

Interestingly enough, in the Nick Carr post that I tried to debunk, Carr was completely out to lunch with his Apple/Google collaboration speculations, but he nailed every last thing about the Chrome OS netbook. Given what we now know after the Chrome OS interview, i.e., that Google didn't really start on Chrome OS until 2009, this is a case of Carr having hit on the idea before Google did (as opposed to him having sources inside Google who tipped him off).

When thinking about the prospects for cloud clients in general and the Chrome OS netbook in particular, Carr's detailed and accurate prediction is worth noting as a data point. It's a general rule among VCs here in Silicon Valley that if an idea is a good one, there are multiple people pursuing it. The fact that Nicholas Carr and Google are both sold on the idea doesn't necessarily mean it's a good one, but it is a strike in its favor. And at this point, I'm sold on it, too.


39 Reader Comments

While the cloud model idea is great, the real question is whether it is sustainable.

There appears to be only one company that has made money off Web 2.0 apps (Google) and the apps are also secondary to their real business. Ads. No one (excepting Google, again) seems to have made money creating a Web App that replaces a desktop app.

The reason behind this is not that I believe desktop apps are better than Web apps, but that they are simply far cheaper to create and maintain. Also, once you have distributed the app, you don't have to pay for server cycles, electricity, or memory, since the consumer is bearing those costs.

Finally, with web consumers trained to expect nearly everything on the Internet to be free (Google is a huge culprit here...), will they be willing to pay for these costs? Not just the direct development costs and the profits, but also the aforementioned maintenance expenses that users currently bear themselves, implicitly, so they aren't factoring them into purchasing decisions...

I think the main argument this article missed is that ARM is way better than x86 at power management right now. That's where even Intel admits it has problems scaling down x86 and competing with ARM on reasonable power efficiency. Given that ARM simply is that much more power-efficient an architecture, it's a wonder it didn't take hold on computers before. Probably because Windows only ran on x86. But now there are alternatives to Windows on low-power devices that can also act as low-capability computers, so I think Intel's reign of virtual monopoly on the computer market is pretty much over, unless it can somehow integrate ARM's architectural benefits into x86.

By the time that Intel is able to squeeze the x86 ISA down into a competitive smartphone, the casual gaming market will be that much bigger, and that much harder to pry away from ARM. Of course, it's too early to tell what kind of impact that some mild ISA lock-in from games will ultimately have on the direction of the wider hardware/software ecosystem for mobiles, but I suspect it won't be trivial.

I'm not convinced they'll ever be able to do it. Intel's key is fabrication technology. But "competitive with ARM" is a moving target. They're at 32nm now and on track for 22nm at the end of 2011. Fabs like TSMC and GF are supposed to start delivering 28nm at the end of 2010. Up until recently, it didn't seem like there was a lot of pressure on these third-party fabs to keep up with process technology. They had to do it for video cards for ATI and Nvidia, but outside of that, power wasn't important because most chips were going into boxes connected to the grid, not handheld devices. Then Apple showed everyone the way (the iPhone as a pocket computer, not just a dumb "smartphone").

Ok, now that I've rambled sufficiently, the point is that third-party fabs want to keep up with fabrication technology because it's a huge competitive advantage - the best devices are going to sell the most, and you want them made at your fab. I don't think Intel can get a significant enough lead in fab tech to overcome the inherent problems in x86 at the smartphone level (decode block, etc.) without trimming and cleaning up the x86 ISA (x86 2.0?). Fab tech seems to be getting more competitive, not less, with AMD spinning off GF.

So unless Intel can get some 2-3 year lead over everyone else, the handheld CPU market will always favor ARM from a fabrication standpoint (it also favors ARM from a structural standpoint - you can license ARM and build your own SoC; Intel only recently allowed this, and you don't get the benefit of using Intel's leading fabs - just TSMC).

Originally posted by arcadium:There appears to be only one company that has made money off Web 2.0 apps (Google) and the apps are also secondary to their real business. Ads. No one (excepting Google, again) seems to have made money creating a Web App that replaces a desktop app.

Isn't Microsoft doing a cloud version of Office?

quote:

Originally posted by arcadium:The reason behind this, is not because I believe that Desktop apps are better than Web apps, but instead that they are simply far cheaper to create and maintain.

I'm not sure about that. It's a nightmare for Microsoft to support Office with all the different things people do to their PCs. On the web, all they need to do is make sure it works on the major browsers. E: They also mitigate the piracy problem by lowering the amount of money they ask for at any given time to something that's a lot cheaper than the one-time cost of Office, and they avoid people buying, say, Office 98 and then never giving them another cent.

There's a lot of advantages to doing it this way.

quote:

Originally posted by arcadium:Finally, with the web consumer being trained to expect nearly everything on the internet to be free (Google is a huge culprit of this...), will consumers be willing to pay for these costs?

Google sells their Apps service.

This isn't just a home consumer thing. If a company can replace thousands of PCs with web clients and cloud services, their support costs go way down.

I wonder what John Sculley (Apple CEO in 1990 when Apple co-founded ARM together with VLSI and Acorn) would have said that fateful year if he had known that 20 years later...

+ Macintosh would run on x86

+ ARM would still be the leader in intelligent hand held devices (The Newton ran on ARM, of course) ... even though Apple sold off its ARM shares in the early 2000s at about the same time it launched a little device known as "iPod" which was powered... by dual embedded 90 MHz ARM processors on a PortalPlayer chip.

With local computing power and storage ever cheaper, I don't see what The Cloud does for me. On the one hand, I want my apps and data available even when I'm not connected. On the same hand, I want my data safe (not lost) and secure (not hacked), which The Cloud has repeatedly shown it doesn't do.

On the other hand, The Cloud gives me... I don't know what. It sure gives Google oodles to datamine from, and various 3rd world subcontractors' interns vast opportunities to mess with my stuff.

I *DO* want my data backed up, if possible online, at all times, and I want all the bugs and vulnerabilities on my system corrected rapidly and transparently. The fact that MS and Linux suck so bad at doing that does not mean The Cloud is the answer. A good OS is, as Apple is kinda showing, not that theirs is perfect.

As for the intel vs arm debate, arm sure seems nice as a platform. it is indeed an ecosystem issue, and arm seems to have a good enough one.

Originally posted by anthonyr:I'm not sure about that. It's a nightmare for Microsoft to support Office with all the different things people do to their PCs. On the web, all they need to do is make sure it works on the major browsers. E: They also mitigate the piracy problem by lowering the amount of money they ask for at any given time to something that's a lot cheaper than the one-time cost of Office, and they avoid people buying like Office 98 and then never giving them another cent.

This isn't just a home consumer thing. If a company can replace thousands of PCs with web clients and cloud services, their support costs go way down.

I'm not so sure:

If MS supplies the cloud services, it means that instead of the one-off cost of developing their software, they now also incur the recurring costs of setting up servers, backups, failovers... for all their users over the duration of their use. That's a major expense and headache, and a liability if anyone from The Cloud ever commits to guaranteeing availability, safety, and security of apps and data. Or, even more complicated, providing tools and support for others to do it...

It still doesn't solve all support issues, though: they still need to make sure their apps will run in a variety of browsers, OSes, networks, configs... and it adds extra issues (transparent network/datacenter failover? ...)

MS could, instead, do a better job of writing software and force a more aggressive upgrade cycle (maybe moving to a yearly subscription model instead of one-off sales...). Training, documentation, and macro compatibility would get in the way, but so would they with a Cloud solution. If MS can't solve those three issues with a 3-4-5 year release cycle, I won't trust them (or anyone) to achieve it on The Cloud.

obarthelemy, one thing you are missing is that whatever additional costs Microsoft might incur hosting cloud versions of Office are going to be offset by savings for Microsoft customers, which Microsoft can then try to capture.

Think of it this way: what is likely to be cheaper overall, a business operating its own Exchange server for 10,000 users, or Microsoft adding 10,000 more users to the 10M (just making up a number here) in its hosted Exchange offering? Who is going to have more buying power for hardware? Who is going to get better equipment utilization? Who is going to have the economies of scale for system administration?

Originally posted by obarthelemy:With local computing power and storage ever cheaper, I don't see what The Cloud does for me. On the one hand, I want my apps and data available even when I'm not connected. On the same hand, I want my data safe (not lost) and secure (not hacked), which The Cloud has repeatedly shown it doesn't do.

Yep. In almost all cases, I reckon I'm better off not being reliant on hardware which isn't mine and mine to maintain. I've very little interest in these cloud-type apps for this reason. It's irritating enough on rare occasions when e.g. your internet connection has a glitch, but that irritation would be greatly magnified if, as a result, you were completely unable to do anything even offline too.

Honestly, I do not believe in the thin client bullshit. Moore's law tells you that really powerful CPUs will be available in mobile devices as well. Come on, they already are. Apple's App Store also tells you that people like to have applications on their devices instead of in the cloud, if the process of downloading and running applications is easy and safe. And why not? Clients are powerful enough to do almost anything and get ever more powerful.

The data will of course reside more and more in the cloud, but I doubt that the data crunching will move mainly from client to server. We will still have some areas like search where the computations happen on the server and most other areas where computations happen on the client.

Originally posted by anthonyr:...This isn't just a home consumer thing. If a company can replace thousands of PCs with web clients and cloud services, their support costs go way down.

But they won't. Do you really want data you're legally obligated to protect, or development data you don't want out in the wild, on the Cloud? Hell no. For this reason many businesses block access to "Cloud" storage, which includes things like Google Docs. Web apps that leave the firewall are NOT efficient for the majority of businesses. I have no doubt that several web companies can take advantage of this, but they are not the mainstream. You cite MS Office as an example of an app going Cloud, but that is to move to subscription pricing, which most businesses are rejecting. You watch: enough big MS customers will bitch up a storm and you'll see a local install option like we've always seen.

Cloud computing and Cloud apps work for home users. Arcadium is correct, there isn't a sustainable business model for most of these companies going Cloud in this iteration of Cloud. Most of the solutions being provided in the Cloud have already been solved in the corporate datacenter, and solved at a low cost. That is why Cloud makes more sense for home users, since there is no hardware investment.

Jon, if you have to be buzzwordy trendy and talk about the "Cloud", please at least capitalize it. Putting the vague thingy in quotation marks is probably too much to ask. Content wise a solid article as usual, though.

Originally posted by obarthelemy:With local computing power and storage ever cheaper, I don't see what The Cloud does for me. On the one hand, I want my apps and data available even when I'm not connected. On the same hand, I want my data safe (not lost) and secure (not hacked), which The Cloud has repeatedly shown it doesn't do.

Yep. In almost all cases, I reckon I'm better off not being reliant on hardware which isn't mine and mine to maintain. I've very little interest in these cloud-type apps for this reason. It's irritating enough on rare occasions when e.g. your internet connection has a glitch, but that irritation would be greatly magnified if, as a result, you were completely unable to do anything even offline too.

On the whole I'm with you; I don't trust the benevolence of the 'Cloud' service vendors. But here's the rub: the majority of folks are rather poor at maintaining their own hardware, and they want something that just works. Once people are hooked, even if Google gets hacked again (and again...), it's hard for them to move to a different paradigm, especially if the vendor is able to keep their pictures from being lost. Now that relatively affordable, powerful handsets are available, all that is needed now is ubiquitous wireless, and that's coming soon.

the majority of folks are rather poor at maintaining their own hardware, they want something that just works

Yeah but I somehow do not see the connection between:

"systems are hard to maintain "

and

"we should move everything into the cloud."

How about simply making it easier to maintain your own hardware? I think the Apple App Store is a great example of this. Installing programs, running them locally, and updating your system software doesn't NEED to be hard. It's just that Windows, and for different reasons Linux, suck at it.

If I could install a program with a click, add programs without slowing down my system (by prohibiting them from adding services, background stuff, and crud to the registry), run them more or less sandboxed so they don't destroy my system, have an authority I trust check the software for me (Apple), and update them easily, it wouldn't be hard anymore.

That's the reason why I think Chrome OS will be a failure. Users and developers do not want to be restricted to programs that have been written in things like HTML, JavaScript, and CSS, which may be great tools for providing content with slight interaction but which are GODDAMN awful for developing full applications, and which are still dog slow. Users want fast and secure applications that look native and are usable. Developers want to develop in programming environments that are suitable. You just need to get rid of the awfulness that is Windows, or the nerdy flexibility that is Linux, when it comes to installing, running, and updating programs.

I'm very skeptical of JavaScript being a better "universal VM" than Java (or .NET) bytecode, except perhaps for very dynamic languages like JS itself. JS's execution model is very flexible and there's a shiny new batch of advanced JS JITs, but these will have to struggle a lot to compensate for its MASSIVE performance disadvantage. I have benchmarked some Java code against JavaScript (example: http://weblogs.java.net/blog/opinali/archive/2009/10/29/programming-bitmapped-graphics-javafx), and Java is still ahead by an order of magnitude, even with V8, TraceMonkey, etc.

JS perf is increasingly "good enough," even on smart devices, especially when the hard work can be offloaded to accelerated libraries, e.g. with the upcoming WebGL. This is remarkably good news for closed platforms like the iPhone -- Apple will eventually be forced to support these standards, because all the competition will have them; one thing is dodging Flash on the iPhone, but a much harder gambit is to not have competitive HTML/CSS/JavaScript. But having said that, nobody seems to really believe that a pure HTML5/CSS/JS-based browser will have enough performance for everything. Even ChromeOS will have NaCl, basically to run stuff that's too slow even in V8 (or Dalvik, if that's included).

Originally posted by JPan:If I could install a program with a click, add programs without slowing down my system. (by prohibiting them from adding services, background stuff, crud to the registry whatever) run them more or less sandboxed so they do not destroy my system or have an authority that I trust that checks the software for me (Apple) and make them easily updatable it wouldn't be hard anymore.

Ironically, the technology already exists, at least for Windows .NET apps, in the form of "ClickOnce."

That's the reason why I think Chrome OS will be a failure. Users/and developers do not want to be restricted on programs that have been written in things like HTML,JavaScript and CSS which may be great tools to provide content with slight interaction but which are GODDAMN awful to develop full applications and which are still dog slow.

HTML 5 is supposedly much better at this. I don't expect ChromeOS to be a huge hit at first but the idea is valid. If an OS is optimized for rendering interactive over-the-web applications, it will be orders of magnitude better than what you get on a PC right now.

There will always be a need for your own desktop, if for nothing more than a backup. But as wireless data coverage -- Verizon is doing a pretty good job -- increases in both speed and area, most people will find that they spend 90% of their time interacting with webapps instead of PCs.

Originally posted by imgod2u:HTML 5 is supposedly much better at this.

Sorry, but much better at what? Hacking HTML with JavaScript is possibly the slowest, most fragile way of building applications since we were doin' it with punch cards. All HTML5 gives us is more tags. Whupee?

Originally posted by arcadium:While the cloud model idea is great, the real question is whether it is sustainable.

Yeah, that's my biggest concern, too. Well, slightly, anyway. I guess *my* biggest concern is the lack of control over software/content changes. While this isn't precisely equivalent, every time I open an application whose help documentation is entirely online, only to receive a broken link, now-irrelevant documentation (Apple, this is you I'm talking about here), or any number of these things, I begin to imagine what it'd be like in a completely online-centric world.

It's this lack of control -- of tangibility -- that worries me about the cloud model. At least with the desktop model, if I walk away from a machine for 5 years and come back to it, nothing will have changed. Not only will this likely *not* be the case in a cloud model, it's also not a guarantee that said company will even exist, and what happens with my data then? What happens when a company decides to no longer "do no evil?"

I can only think of the PlaysForSure fiasco, or WGA server outages, or the examples I cited above, as reasons why this model seems to have fundamental flaws in its attempt to outright replace a desktop model.

quote:

Originally posted by Osvaldo Doederlein:JS perf is increasingly "good enough", even in smart devices and remarkably when the hard work can be offloaded to accelerated libraries, e.g. with the upcoming WebGL. This is remarkably good news for closed platforms like the iPhone -- Apple will eventually be forced to support these standards, because all competition will have them; one thing is dodging Flash in the iPhone, but a much harder gambit is to not have competitive HTML/CSS/JavaScript.

Err, I was with you until you made the assertion that Apple does not support web standards and has "no competitive HTML/CSS/JavaScript." Apple, along with Google, is currently at the forefront of web standards, and to think that they're not helping shape the WebGL standard for their own purposes in their respective smartphone segments seems slightly naive.

Flash on the iPhone is one argument you can make; Java on the iPhone is another; but to assert that the iPhone doesn't support modern web standards, I think, is a bit far-fetched. Unless, of course, you had a clarification about this that I missed, or that I missed what you meant entirely. In which case, feel free to correct me.

"but that is to move to subscription pricing, which most businesses are rejecting."

Umm, subscriptions are already the norm?

I'm not sure why the comments are hammering on the enterprise level when clearly this article only addresses the consumer-facing entertainment/convenience industry of portable connected mobile devices.

But, about the enterprise: Every hire in your IT shop is a used car in the sense that it HAS problems and you don't know what the new problems are until something goes wrong. Major vendors, to a far greater degree, are a known quantity, and Internet downtime is a known quantity and most businesses these days aren't getting much done without the internet, anyway. So eventually the machine replaces humans. At the periphery. For the mission critical stuff, you can't outsource the IT because nowadays, it had better be one of your advantages in the market. Core stuff has to stay in house and that has to include servers, etc. But everything else, you're better off focusing on your core and outsourcing the other stuff to the cloud. And the companies that do this will grow and put the others out of business.

If an OS is optimized for rendering interactive over-the-web applications, it will be orders of magnitude better than what you get on a PC right now.

Why? In the end, the HTML rendering engine is still an HTML rendering engine like WebKit, with some JavaScript execution engine. That will be more or less the same thing you get when you download Chrome or Safari. Why should Chrome OS be an order of magnitude better at rendering webpages than a browser on a normal system? Simpler, yes, but faster or better?

The value proposition of Chrome OS is to take away functionality to add simplicity of use, which is the same approach Apple takes with its App Store: you can no longer do whatever you want with your computer, and in exchange we take care of system maintenance. In the end I think the Apple approach is the better one:

It's like Paul said: building applications with HTML and JavaScript is simply a slow and faulty approach. I would say it is one of the most idiotic approaches to building your whole application infrastructure. Just because millions of web developers have learned to work around this nightmare because they are forced to doesn't mean that it's a good idea.

quote:

Ironically the technology exists in this format at least for Windows .NET apps, called "ClickOnce"

That's the same as Java Web Start, right? I think I never trusted this kind of stuff, for the same reason I don't like applets that are certified to read from your hard disc. They never reached critical mass and widespread use, so I was pretty sceptical on the one or two occasions when I saw a webpage with a Web Start application. I think things like these need to be prominently included in the operating system, as a core way to handle applications, perhaps alongside the more flexible normal approach.

People are arguing about web apps and the only vendor brought up is Google? How about Salesforce? They are a very successful vendor that sells only to companies: close to 70,000 customers, including big companies like Dell, Qualcomm, Starbucks, and Motorola, all trusting their customer relationship management to the cloud.

Even smaller companies like Replicon are successful selling web apps to companies. They have around 1.2 million people filling in their timesheets, submitting expenses, and tracking project information in the cloud. Even companies that are quite privacy conscious like banks have no problem with the cloud.

I think the hand-waving in this thread about companies not trusting web apps is bunk. Web apps targeting home users may get most of the press, but companies are picking up web apps in a big way.

In our company we just switched from Exchange to Google. We save money by not having to maintain Exchange any more, and user feedback has been overwhelmingly positive. The Google infrastructure is way faster than Exchange/Outlook was. Searching mail is orders of magnitude faster, but even little things like writing a new message are faster: the compose message page opens nearly instantly, while Outlook takes almost a second to do it.

The major advantage of ARM over x86 for thin clients, CE devices, etc. is first and foremost cost.

A low-voltage ARM CPU can cost well below $10; a top-of-the-line ARM CPU like the Snapdragon costs $30 (according to iSuppli's recent Nexus One teardown).

An Atom costs $45 for the CPU alone, and drops to $30-$35 only when bundled with an equally expensive Intel chipset (such as the 945GSE). This, of course, was the central theme of the NVIDIA-Intel lawsuit.

ARM will continue to develop and increase in speed: Qualcomm has a 1.5GHz multi-core part in the pipeline, and a four-core 2GHz Cortex-A9 that consumes a mere 0.25W has already been shown. The cost-performance of these chips will keep shifting down; it wouldn't be unreasonable to expect an Atom-class ARM CPU that costs $10 in the near future, and without Intel's draconian (and possibly illegal) chipset pricing model.

ARM is really a democratization of the CPU outside the Wintel paradigm. ARM chips are built by pretty much every company: Samsung, Qualcomm, Cirrus, DEC, Sharp, Texas Instruments, Yamaha, LG, NEC, NVIDIA, and even, formerly, Intel. The list goes on.

There's real competition, and real incentive for ARM makers to keep prices down and performance up. As an example, look at the iPhone: newer iPhone 3GS units use Qualcomm ARM chips instead of the Samsung ARM CPUs in earlier 3GS models. The end user doesn't know or care; they both function as the same iPhone 3GS. But ARM offers real free-market competition between manufacturers and real fungibility between makers.

In the near future, a sub-$50 Chrome OS desktop or a sub-$150 netbook/tablet should be technically and economically feasible with ARM, something x86 just can't do.

ARM and x86 are on a collision course. The rumored Apple tablet is also expected to be ARM, and the next Nintendo DS is expected to use an NVIDIA variant of ARM (the current DS is already ARM). But beyond CE devices, ARM is on a collision course with x86 in the traditional desktop/notebook space.

Who knows? If ARM performance continues to increase and prices continue to decrease, it wouldn't be surprising if Apple/MS produced ARM variants of their desktop OSes.

One problem with the cloud is that it could turn into the new "banking" system. They have your data. They're going to change terms and agreements when they see fit. They'll lure you in with free service, then start tacking on a small fee here and there. You'll probably put up with it, because it's a royal pain to migrate your data somewhere else.

It's great being able to access your data from anywhere using the cloud. But it sucks when someone else is holding onto your data ... making money off your data.

Nice article, but I look at it from the SoC point of view. As other commenters have pointed out, Intel doesn't do SoC well yet. They are clearly trying, but the real battle is internal, inside Intel: can they change their internal corporate culture to do SoC as well as ARM Ltd? Most important, ARM doesn't sell chips; can Intel survive selling only the IP rights? If they can't, then I don't think they can compete; the offering is unattractive to the system builders (look at what happened to XScale).

quote:

watermocassin: Used our Cisco Linksys NSLU2 as a Debian-based server for several years.

I still do (though I run my own version of SlugOS). The NSLU2 has a pretty weedy (XScale) ARM inside it, particularly as Linksys apparently accidentally halved the CPU speed (it's easy to remove the link from the MB ;-), but it still does the job.

Likewise, take apart any netbook, then take apart the *components*: hard drives, flash (SD) memory. ARM CPUs start to pop up their little Acorn-green heads. Surely, from the ARM Ltd point of view, CPU sales are just the icing on the cake?

Intel has done plenty of SoC stuff. You know that XScale processor you're running in your Linksys router? Take a look at it sometime and you'll quickly realize that it says 'Intel'.

The issue with Atom versus ARM is that the ARM is just a much simpler processor. Its design is much more modern, and they are able to do more work with vastly less complexity. The Atom is every bit as modern internally as the ARM, but it has to deal with translating the x86 ISA into something that is runnable on a modern processor. The overhead and transistors needed to do this are relatively minor on a desktop or server processor, but when you try to shrink things down it starts to get significant.
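A toy sketch of that decode overhead (not real decoders; the prefixes and opcode lengths below are entirely made up for illustration): on a fixed-width ISA, finding the next instruction boundary is free, while an x86-style decoder has to scan prefix bytes and look up operand forms just to learn where one instruction ends.

```python
def arm_next_pc(pc):
    """Fixed 4-byte instructions: the next instruction is always at pc + 4."""
    return pc + 4

# Hypothetical variable-length encoding in the spirit of x86: optional
# prefix bytes, then an opcode whose total length depends on its operand form.
PREFIXES = {0x66, 0x67}                    # made-up "size override" prefixes
OPCODE_LEN = {0x90: 1, 0xB8: 5, 0x89: 2}   # made-up opcode -> total byte count

def x86ish_insn_length(code, pc):
    """Scan past prefixes, then consult a table, just to find the
    instruction boundary: work a fixed-width decoder never does."""
    n = 0
    while code[pc + n] in PREFIXES:
        n += 1
    return n + OPCODE_LEN[code[pc + n]]
```

On a big out-of-order core this bookkeeping is a rounding error; in a phone-sized power budget, the extra decode logic and its switching activity stop being negligible.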

In addition to that, it's the licensing model that ARM has: they design the processor and instruction set, and then anybody can take it and run with it. You have dozens of different companies now cranking out ARM processors of all different sizes and purposes.

There also is not a significant amount of backwards compatibility required. Sure, there is an ISA, but it's not like anybody expects code from an OMAP3 to run well, or at all, on an old ARM2 or ARM3. It's an embedded world where developers tend to have full access to all the source code they need. If embedded folks required backwards and forwards compatibility like in x86-land, then there would be no way that ARM could support multimedia applications the way it can now.

Even Microsoft's embedded kernels are 'open source' after a fashion.

In contrast, binary compatibility is the sole reason behind x86's continued existence. There is nothing right or proper about the x86 ISA. It's crusty, expensive to process, and difficult to virtualize. If people did not give a crap about being able to run their old software on newer processors, then x86 would have died back in the 486 days. This is the whole point of Atom.

Originally posted by drag: Intel has done plenty of SoC stuff. You know that XScale processor you're running in your Linksys router? Take a look at it sometime and you'll quickly realize that it says 'Intel'.

Indeed, I've taken apart several NSLU2s. The history of XScale, however, supports my hypothesis. Originally DEC worked closely with ARM Ltd on an ARM variant called StrongARM. IIRC, StrongARM was effectively the ARM Ltd strategy for developing ARM as a main CPU (I can't make authoritative statements on that). Intel acquired StrongARM from DEC and then tried to do SoC with the technology. It developed the processors described on the Wikipedia 'XScale' page. Subsequently, however, Intel sold the whole shebang to Marvell, an established and widely known microcontroller (and therefore SoC) company that already had ARM expertise. (See Marvell's website, or your Mouser catalog ;-)

What I glean from this is that Intel couldn't make a business in SoC. This is backed up by my own experience on the NSLU2 with regard to Intel's licensing of the closed-source software for the XScale MIIs. They failed even though they obtained state-of-the-art technology and some of the people that went with it.

quote:

The Atom is every bit as modern internally as the ARM, but it has to deal with translating the x86 ISA into something that is runnable on a modern processor. The overhead and transistors needed to do this are relatively minor on a desktop or server processor, but when you try to shrink things down it starts to get significant.

Wouldn't it be ironic if the internal CPU of the x86 was an ARM? It's an interesting thought experiment that proves both your point and mine. Back when ARM was developed (mid-1980s), Acorn developed a DOS emulator to run on it. That showed that the performance bottleneck was in translating the iAPX86 memory-addressing arithmetic into the different schemes available on the ARM (from my memory of various "coffee room" discussions).
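That addressing arithmetic is trivial on paper but has to happen on every emulated memory access. A minimal sketch of what a software emulator on a flat-address machine like the ARM must compute, including the 20-bit wraparound of real mode:

```python
def real_mode_linear(segment, offset):
    """8086 real-mode translation: linear = segment * 16 + offset,
    wrapped to the 20-bit physical address space. An emulator pays
    this shift-add-mask cost on every single memory access."""
    return ((segment << 4) + offset) & 0xFFFFF
```

Per-access overhead like this is exactly the kind of thing that either gets a hardware assist or dominates the emulator's profile.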

Any emulation that achieves speed parity will use special-purpose hardware assist, backing your assertion with regard to "overhead and transistors". In my experience, Intel would favor more hardware in such a project, whereas Acorn (at least) favoured software. This shows in Intel's need to invent uses for the hardware it tacked onto the Pentium instruction sets. That hardware was hardly used; indeed, Intel provided still-image decompression acceleration code for both JPEG and PNG that really doesn't help, because the bottleneck in question is not in the CPU.

Meanwhile, Acorn/ARM Ltd time and time again demonstrated minimalist hardware approaches to problems amenable to much more complexity:

1) The approach to supporting big-endian architectures (flip the low bits of the address).
2) The approach to the code size and memory cost issues raised by a 32-bit RISC architecture (produce a 16-bit instruction set that maps onto all the commonly used 32-bit instructions).
3) Java support: another translator, but it's an optional extra, and it's a new instruction set too.
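The trick in 1) fits in a couple of lines. This is a deliberately simplified model (byte loads only, one 32-bit word) of how flipping the low address bits makes a little-endian memory look big-endian without any byte-swapping datapath:

```python
def be_byte_addr(addr):
    """Big-endian view of a little-endian 32-bit memory: for byte loads,
    just XOR the low two address bits. No extra datapath required."""
    return addr ^ 3

# The word 0x11223344 stored little-endian starting at address 0:
memory = bytes([0x44, 0x33, 0x22, 0x11])
```

A big-endian program asking for byte 0 of the word expects the most significant byte, 0x11; XORing the offset redirects the load to where the little-endian layout actually keeps it.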

There are tradeoffs in the different approaches; however, I contend that the ARM approach works in Intel's main-CPU environment, whereas the Intel approach fails dramatically as Intel tries to move to cheaper, lower-power applications. I also argue that this is not about engineering (the engineering is the same in both companies); it's about culture. Most fundamentally, there are two different answers to: "I have a problem, should I start working on a hardware solution or a software one?"

quote:

In addition to that, it's the licensing model that ARM has: they design the processor and instruction set, and then anybody can take it and run with it. You have dozens of different companies now cranking out ARM processors of all different sizes and purposes.

Well, yes: cultural background again. Intel does manufacture; neither Acorn nor ARM Ltd ever have. (Like many modern computer manufacturers, Acorn designed a computer then subcontracted manufacture.) There's a conflict of interest between the developers of SoC solutions within and outside Intel, and if Intel favors its internal customers it loses to competing architectures.

The issue this raises for Intel became very apparent during SlugOS development. That XScale MII came with a great gobbet of binary, closed-source code, but we combined it with a GPL Linux kernel. We regarded that as legal thin ice, and we leapt through hoops to keep the binary download independent of the kernel. That's a problem for commercial manufacturers too: many, many (maybe most) devices use GPL, copyleft kernels. Intel just didn't get it; they didn't even start to address the licensing issues until around the time when they must have been considering the sale to Marvell. The license *was* easy to fix. It was culture again: Intel just didn't see a need to do the work.

Of course, ARM Ltd have another trick up their sleeve: a variety of OSes, including Symbian, that aren't GPL. Not having to deal with legal issues is a real plus for system builders.

Originally posted by idspispopd: People are arguing about web apps and the only vendor brought up is Google? How about Salesforce? They are a very successful vendor that sells only to companies: close to 70,000 customers, including big companies like Dell, Qualcomm, Starbucks, and Motorola, all trusting their customer relationship management to the cloud.

Salesforce.com is a great counter-example. They are a company that is obviously doing very well. There just don't appear to be too many Salesforce.coms around, however. This could be either a consequence of the youth of the medium or a fundamental flaw; I'm not sure which.

quote:

In our company we just switched from Exchange to Google. We save money by not having to maintain Exchange any more, and user feedback has been overwhelmingly positive.

I would be interested in knowing whether Google makes any money off this, which is why I brought up Google in my first post. It subsidizes most of its ventures through its ad profits. I am interested in knowing whether this medium can achieve sustainable profits outside of the Google behemoth (and Salesforce.com, as mentioned, is a great data point on this).

Originally posted by Temple: ARM is really a democratization of the CPU outside the Wintel paradigm. ARM chips are built by pretty much every company: Samsung, Qualcomm, Cirrus, DEC, Sharp, Texas Instruments, Yamaha, LG, NEC, NVIDIA, and even, formerly, Intel. The list goes on.

Good point. Maybe this is why we are seeing the success of ARM now. Earlier, your PC was really the only computer you owned. Now your PC, phone, e-book reader, set-top box, etc. are all computers. Intel has a one-size-fits-all solution (Atom), while ARM allows companies to tailor their solution to fit their particular needs (phone, set-top box, etc.).

Originally posted by Osvaldo Doederlein: JS perf is increasingly "good enough", even in smart devices and remarkably when the hard work can be offloaded to accelerated libraries, e.g. with the upcoming WebGL. This is remarkably good news for closed platforms like the iPhone -- Apple will eventually be forced to support these standards, because all competition will have them; one thing is dodging Flash in the iPhone, but a much harder gambit is to not have competitive HTML/CSS/JavaScript.

Err, I was with you until you made the assertion that Apple does not support web standards, and has "no competitive HTML/CSS/JavaScript." Apple, along with Google, are currently on the forefront of web standards, and to think that they're not helping shape the WebGL standard for their own purposes on their respective smartphone segments seems slightly naive.

Flash on the iPhone is one argument you can make; Java on the iPhone is another; but to assert that the iPhone doesn't support modern web standards, I think, is a bit far-fetched. Unless, of course, you had a clarification about this that I missed, or that I missed what you meant entirely. In which case, feel free to correct me.

Fair enough; I was talking just about the future, but this might not have been clear. Remember that when the iPhone was first released, it only supported web-based applications, a development model that Apple had to actively support until they had the native SDK and App Store ready. So they didn't have much choice but to enable the best web-app support possible. Now it's different. Although the latest iPhone OS updates have at least enhanced existing features (an updated Nitro VM), they didn't introduce anything new worth noticing. I don't think items like WebGL are in the pipeline for the iPhone any time soon. I'd be happy to be proven wrong, but it's just hard to believe that Apple would essentially make its mobile Safari powerful enough that a significant fraction of its apps could be written as web apps.

From the article: "Just enough cycles to run the UI" ... "This fact points at the key reason that ARM processors—which aren't even in the same ballpark with x86 processors yet in terms of raw performance—have been so successful in this new "app store" and cloud-based messaging and content distribution context."

Not sure I can agree with this statement. In fact, I don't think I can agree with how ARM was presented in this article in general. Yes, ARM lags behind in performance, but for some reason it was portrayed as if ARM were incapable of doing anything other than GUI rendering. Nonsense.

Using the ARMv6/7 in the 1st-gen iPhone as an example, I was actually surprised by how much processing capability it had. I dare say the ARMv6 could easily match a Pentium III at 333MHz, probably even 400MHz. And you could do a LOT with a Pentium III 333MHz back then. Now piggyback a decent GPU on top of it, and you have a half-decent system in your hands.

Originally posted by Temple:ARM is really a democratization of the CPU outside the Wintel paradigm.

I agree 100%. And even more: a breakout from this monopoly. But there are many more things to consider. This is A REAL NEW ARM/POSIX PARADIGM:

1. Windows is not portable or compatible in any flavour (XP, Vista/7, CE/Mobile...). Wine, QEMU... *will* be, once a standard ARM platform gets enough horsepower. But there is no extreme need for it; let's say it happens in 2013: too late for the Windows bloathtecture. On the other hand, most of Apple's OSes are POSIX compatible (OS X, iPhone OS, the "iSlate" most probably...), and they could be ported to a new architecture flawlessly. Just remember the "switch" to the Intel platform. As Mr. Stokes said:

quote:

Jon Stokes: "The point is that there's so much money flowing into game development on the iPhone and Android that you can actually make an ARM-based game console and stock it with cheap ports." (...) "games from Activision and other studios that had been ported from their original smartphone platform at a cost to the developer of, in at least one case, as little as $50."

It is almost the same case as Windows (software) standing for the x86 (Intel) platform (hardware). It is possible that we will see netbook/smartbook dual-processor hybrids in the near future (well, they already exist); however, it seems to me that this is just a transition to the new standard.

Cloud computing, low-cost hardware, and software easily portable across the new ARM/POSIX standard will make Wintel abandonware. We could call it the POST-WINTEL AGE.

2.

quote:

Originally posted by Temple: In the near future, a sub-$50 Chrome OS desktop or a sub-$150 netbook/tablet should be technically and economically feasible with ARM, something x86 just can't do.

As a direct result of the above, One Laptop (or Smartbook) Per Child will become true, globally. Also as a result, the spread of information, and the speed of it, will probably transform our global society into some kind of Netocracy (see A. Bard & J. Söderqvist, "Netocracy: The New Power Elite and Life After Capitalism").

3. The low cost of the architecture, plus software portability, will boost so-called homebrew (independent) software development. And what about independent hardware??? The Pandora and Wiz are just Open(ed) Pandora box(es)!

4. There was speculation in this thread claiming that only Google can make money from Web 2.0 apps (right now). But what about other sources? Apple made billions just by selling MP3s, and that is just the beginning. A whole new market is opening here: videos, books, TV, apps/games on demand... Speaking of cloud computing, we forgot the "Network Computer" (NC) concept, which was obviously born before its time. Considering the evolution of wireless broadband Internet, smartbooks are becoming just "terminals", "portals" to the "cyberspace". I'll repeat: a whole new market is here. And read on:

5. What about VoIP? Did Google make the Nexus One "just for fun"? Hybrid FXS/Skype phones have already existed for years. Just as the ARM/smartbook concept will reduce the cost (read: extra profit!) of old hard/soft solutions, new VoIP standards will cut off the extra profit of the "traditional phone companies". Again, the spread of information will exponentially explode. This is one more proof of the new, post-Wintel paradigm to come.

I have literally no idea why web-downloadable software doesn't use it more.

I think that's because ClickOnce is basically a 1.0 software release that hasn't seen support from Microsoft in almost 4 years, if not more.

Try using ClickOnce on a project sometime, and you'll find that every client has to redownload the whole install, even if there's only been a small update to a single file (say, fixing a divide-by-zero bug).

If your client software package is 150MB in all and you have between 1K and 5K users, ClickOnce makes a simple update very painful for your network.
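For what it's worth, the fix being asked for here is just a per-file manifest of content hashes, so a client downloads only what changed. A hypothetical sketch (none of this is ClickOnce's actual API; the function names are invented for illustration):

```python
import hashlib

def manifest(files):
    """Map each file name to a hash of its contents (files: name -> bytes)."""
    return {name: hashlib.sha256(data).hexdigest() for name, data in files.items()}

def files_to_fetch(server_manifest, client_manifest):
    """Only new files, or files whose hash changed, need downloading."""
    return sorted(name for name, digest in server_manifest.items()
                  if client_manifest.get(name) != digest)
```

With this scheme, a one-file bugfix in a 150MB package costs each client one file's worth of bandwidth plus a small manifest, instead of the full redownload.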

I do agree, though, that with some additional updates the technology could be great. At this point, though, I'd expect more from the installer behind the Microsoft Web Platform Installer.

Originally posted by eas: obarthelmy, one thing you are missing is that whatever additional costs Microsoft might incur hosting cloud versions of Office are going to be offset by savings for Microsoft customers, which Microsoft can then try to capture.

Think of it this way, what is likely to be cheaper overall: A business operating its own exchange server for 10,000 users, or Microsoft adding 10,000 more users to the 10M (just making up a number here) in its hosted exchange offering? Who is going to have more buying power for hardware? Who is going to get better equipment utilization? Who is going to have the economies of scale for system administration?

Large companies will not outsource their email for the simple reason that if a subpoena comes in for an email, the ISP managing it has to turn it over immediately. If you manage your own mail server, you can get your lawyers to fight the request off until your retention policy kicks in and "oops, no more incriminating emails." When you nuke the email on your own servers, you know you've gotten rid of it.