ARM is the king of processors in the smartphone market with its line of power-efficient CPUs. Today ARM announced a new processor, the Cortex-M0, which it claims is its smallest and most energy-efficient processor yet.

The processor has a small gate count and code footprint, allowing for power requirements as little as 0.085 milliwatts when combined with the ARM 180ULL cell library. ARM says the new processor extends the company's MCU roadmap into the ultra-low-power and SoC applications market.
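To put that power figure in perspective, here is a quick back-of-the-envelope sketch. The assumptions are mine, not the article's: the 0.085 mW figure is treated as a per-MHz number (as ARM's spec sheets typically quote it), the clock is 50 MHz, and the battery is a generic 220 mAh, 3 V coin cell.

```python
# Back-of-the-envelope power/battery estimate for a Cortex-M0-class core.
# Assumptions (not from the article): 0.085 mW/MHz, 50 MHz clock,
# and a coin cell holding ~220 mAh at 3 V.

power_per_mhz_mw = 0.085
clock_mhz = 50
core_power_mw = power_per_mhz_mw * clock_mhz  # ~4.25 mW for the core alone

battery_mah = 220
battery_v = 3.0
battery_mwh = battery_mah * battery_v  # 660 mWh of stored energy

runtime_h = battery_mwh / core_power_mw
print(f"Core power: {core_power_mw:.2f} mW")
print(f"Continuous runtime on a coin cell: {runtime_h:.0f} h (~{runtime_h / 24:.1f} days)")
```

Real designs would sleep most of the time, so battery life in practice would be far longer; this only bounds the always-on case.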

The processor will find use in medical devices, e-metering, lighting, smart control, gaming accessories, compact power supplies, and other markets. ARM says that early licensees of the processor include NXP and Triad Semiconductor.

One of the key benefits of the new Cortex-M0 processor is that it is suitable for mixed-signal markets, where devices like intelligent sensors typically need separate analog and digital parts.

ARM's Mike Inglis said in a statement, "The Cortex-M0 processor is yet another demonstration of ARM’s low power leadership and its commitment to drive the industry forward towards higher performance with ever lower power consumption. With its expertise in low-power technology, ARM has worked closely with its Partners and their customers to ensure that our processor architectures enable the cost and energy-efficient creation of tomorrow’s electronic devices and systems."

I ask this question because CPUs are getting to the point where they offer more power than the average home user needs. An i7 is already overkill for most people. Sure, a lot of people on this site are power users and can take advantage of it, but even then it's a lot. (8-core/16-thread Xeon Nehalems coming soon =D)

Then you have cloud computing gaining steam. I'm thinking most of the high-end CPUs will go into mega data centers and servers, while the home user has a CPU that uses almost no power and can handle graphics as well.

Or what might happen is that you pay a monthly fee to 'rent' computing power from someone. Then all you'd need is an internet connection and a screen, with all of the graphics and CPU calculations being done in the cloud and streamed to you. (Bandwidth is pretty far from being able to do this at the moment, for the home user at least, but it's growing fast.)

We are already starting to see this in the mobile marketplace. The high-end mobile chips just aren't worth the huge premium over their counterparts. Gaming still falls flat on laptops as well unless you spend a lot.

I wouldn't be surprised if Intel comes out with an Atom that rivals a C2D in 2-3 years. But then again Intel might not do that on purpose.

Unless we develop some way to make light and electricity travel faster, you'll never be able to play a game processed in the cloud in real time. Bandwidth can go up all you want, but the packets will still take some amount of time to get there; they can only travel so fast (even over fiber optics). It's all about latency. There will still be a market for powerful CPUs. It may become smaller, maybe even a niche, but it will still exist.
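The speed-of-light point can be made concrete. Light in fiber travels at roughly two-thirds of its vacuum speed, which puts a hard floor on round-trip time no matter how much bandwidth you have. The distances below are illustrative, not from the thread:

```python
# Minimum round-trip time imposed by the speed of light in optical fiber.
# Fiber's refractive index (~1.5) slows light to roughly 2/3 of c.
C_VACUUM_KM_S = 299_792  # speed of light in vacuum, km/s
FIBER_FACTOR = 0.67      # approximate speed in fiber relative to vacuum

def min_rtt_ms(distance_km: float) -> float:
    """One-way distance to the server -> best-case round trip in milliseconds."""
    one_way_s = distance_km / (C_VACUUM_KM_S * FIBER_FACTOR)
    return 2 * one_way_s * 1000

# Nearby datacenter, regional hub, transcontinental link:
for km in (50, 500, 5000):
    print(f"{km:>5} km: >= {min_rtt_ms(km):.1f} ms RTT")
```

Real networks add routing, queuing, and serialization delay on top of this physical floor, so actual pings are always worse.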

Power consumption may be going down in the future, but performance isn't.

quote: I have money in the bank. Like many, I also have tons of "important/sensitive" information in my e-mail that I relatively trust with that company.

Relatively trust? So you do hold back the really important stuff. I see where you're coming from, but the bank analogy is a poor one: if a bank loses my money, I get it back, but if my hard drive died, I'd be livid (which is why I back it up).

Cloud computing is an interesting concept, but something about accessing my data through the internet... just doesn't feel right.

How exactly will you compensate for the player turning to the left, or jumping, or taking any potential game action? You'd have to pre-generate all the possible outcomes and ship them to the display in advance, which would have to have enough smarts to be able to both cache those results and pick the appropriate one in response to input. Which ignores the enormous computing costs of pre-calculating enough outcomes for this to work. Better and cheaper to calculate at the endpoint.
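The counting argument above is easy to sketch: if the player can produce b distinct input states each frame and you want to pre-render k frames ahead, you need b^k pre-rendered outcomes. The numbers below are illustrative, not taken from any real engine:

```python
# Combinatorial cost of pre-rendering every possible player outcome.
# Illustrative assumption: b distinct input states per frame, k frames ahead.

def outcomes(inputs_per_frame: int, frames_ahead: int) -> int:
    """Number of distinct futures to pre-render."""
    return inputs_per_frame ** frames_ahead

for k in (1, 5, 10):
    n = outcomes(16, k)
    print(f"{k:>2} frames ahead, 16 inputs/frame: {n:,} frames to pre-render")
```

Even with a modest 16 input states per frame, ten frames of lookahead already exceeds a trillion renders, which is why computing at the endpoint wins.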

And that does work for players' actions, which generally take place over tens or hundreds of milliseconds. That's within reach of most high-speed Internet these days, but there's more to it than that.

The latency would be unacceptable if all the processing of moving a character or shooting a gun was done entirely server-side, with everything being sent back to the client computers and rendered by their GPUs. That stuff needs to be displayed in near real-time.

That will require constant latency of under 1ms for most shooters, action games, RPGs, et cetera to work properly. Never mind the massive processing power and bandwidth that would be necessary.

Latency and bandwidth concerns are too substantial in too many different computing fields for clouds to truly replace high-performance desktop computing.

Yes, latency will not be an issue one day when we begin to make use of weird physical phenomena like Einstein–Podolsky–Rosen entanglement. Information CAN travel faster than light. We simply don't have a clue how to do it yet.

I think this will likely be overcome by localised clusters. On a fairly good connection you can see sub-10ms pings to servers that are near enough, which would be fine even for input in real-time graphics, given that rendering at 60FPS gives you a 16.66ms frame-time window. Obviously we'd be looking at something that 'feels' like 30FPS (say we send the input to the server in 5ms, it renders in 10ms, and sends two images back in 15ms) but looks like 60FPS by interpolating extra in-between frames.
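The bookkeeping in that scenario works out as follows; the 5/10/15 ms stage times are the comment's own illustrative numbers, not measurements:

```python
# Frame-time budget for cloud-rendered gaming at a 60 FPS target.
FRAME_BUDGET_MS = 1000 / 60  # ~16.67 ms available per displayed frame

# Pipeline stages (illustrative figures from the comment above):
input_to_server_ms = 5   # input travels to the server
server_render_ms = 10    # server renders the frame(s)
frames_back_ms = 15      # rendered images travel back to the client

total_ms = input_to_server_ms + server_render_ms + frames_back_ms
effective_fps = 1000 / total_ms
print(f"Frame budget at 60 FPS: {FRAME_BUDGET_MS:.2f} ms")
print(f"End-to-end latency: {total_ms} ms -> input responds like ~{effective_fps:.0f} FPS")
```

So the display can still show 60 frames per second, but input responsiveness is governed by the 30 ms round trip, which is why it 'feels' like roughly half the frame rate.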

That said, I'm still against outsourcing my processing power, just because I like it where I can see it. Unless it's significantly cheaper than buying your own computer and upgrading it over the long run, I just don't see this catching on.

Well my point was that the cheaper CPU wins. People have already shown that they will trade performance for decreased cost.

It doesn't have to be an ARM. There are other x86 chips coming out that are $50 and under and can run Win7 plus anything that isn't graphics-heavy. A good example is the Ion platform, an Atom paired with an Nvidia integrated GPU. For ~$300 you'll have a solution that can run an OS and play back HD video.

For an embedded system, which most likely uses ARM, the cost of the CPU is more like 20 cents rather than $50 when integrated within an ASIC. The power draw of the entire system is more like 2-4 watts, leaving the CPU a power budget of around 0.5 to 1 watt at most.

Well, it depends on the application your embedded system is aimed at. ARM is more practical for applications such as cell phones and the Nintendo DS, but there is x86 embedded hardware for more demanding requirements where a full typical computer isn't an option.

For example, data acquisition systems still need the horsepower of the x86 architecture, but the whole design has to be compressed into a small form factor, so PC cards are generally used. Years ago I used cards running light versions of Linux from flash to handle 100+ channels. That would probably be too much for an ARM architecture to handle.

So while ARM is great for small embedded systems, x86 still has its uses for larger systems that require the extra processing power. In fact, our point-of-sale terminal uses a 400MHz+ ARM processor running Linux on 32MB of NAND :) Since our code only took up 2MB or less, we have space to keep backup copies of the partitions, including kernel, OS, apps, boot, etc., and still have room left. Our terminal runs Ethernet, 2 USB, 2 serial, and 2 modem ports, drives a touch-sensitive LCD, prints, captures back/front check images and processes them for archival in seconds, and can still cook eggs :)

That last point is why I keep commenting on how poorly coded today's software is, taking up gigabytes of space, Vista especially.

Same people, different site: look up kkrieger. It's a complete 3D game that runs for about 10 minutes or so in under 97KB. That's right, UNDER 97KB! They haven't updated their web page since '07, but they're still active in the Assembly demoscene.

Yes, demos built upon DirectX and the Windows APIs (the "gigabytes" previously mentioned). The reason they're small is procedural generation of maps and textures. It's very clever and what they achieve is brilliant, but rest assured that all of the other stuff is built upon a base of those "poorly coded" APIs.

Agreed with what you said. ARM is very low performance and isn't great for the high-power computing needs that other architectures serve. However, even for those high-performance needs, PowerPC and MIPS still perform much better at lower power.

Of course, if development time is short and/or you don't have the volume to justify custom-building a chip on the latest energy-efficient process, then you have to do what you can with processors like x86.

High-volume embedded systems like gaming consoles are on the PowerPC architecture, and from what I know, most high-performance copiers with EFI controllers (based in Foster City; OEM to various copier companies like Toshiba and Sharp) switched from x86 to MIPS due to heat and power needs.

I wish people would stop subscribing to the notion that they can tell others, or most, or even some people when they have 'too much' processing power.

Do us all a favor and shut up. Who the hell are you to say that?

Until the day we have a CPU that does EVERY process we could ever hope to want computed INSTANTLY (no, not 2-3ms, INSTANT), we will continue to push the technology of speed forward, and people with your ridiculous mentality will always be left behind or learn to wise up.

Perhaps you should realize that nothing in this world is instant, never has been, never will be.

And if you don't believe me, hook up a 'scope and watch digital signals needing time to rise to a one and time to drop to a zero. A signal never has been and never will be a perfect square; it's more like a trapezoid.

However, we are talking about ARM and cellphones here; there is plenty of room for development. On the desktop? Fewer and fewer people need that speed, since the hardware they already have gives their apps all the processing power they require.

In a nutshell: you don't see it when you use it, and on average you use an ARM-powered device every 10 seconds. Unless you're an embedded systems developer, you'll most likely never see or notice it.

ARM is "the standard" in the processor world not only because of its low power, but also because of the completeness of its development tools, the ability to integrate it with almost everything out there, and the option of buying a reference design off the shelf instead of spending time developing one.

Well, this is the lowest-end ARM you can get. It's designed to run at 50MHz and be very cheap to manufacture: that means 12,000 gates on a cheap 180nm process and under 1mm^2 in area. It performs a lot more slowly per clock than the other Cortex-series chips because it lacks a cache (as far as I can tell).

At 50MHz it uses around 1/20,000th of the power of famous 180nm processors like the Athlon (around 89W IIRC) or the P4. It's also about 200 times slower.
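That 1/20,000th figure is easy to sanity-check. Assuming the M0's quoted 0.085 mW is per MHz (as ARM's spec sheets list it) and taking the comment's ~89 W Athlon figure:

```python
# Sanity check of the power ratio between a 180nm Athlon and a Cortex-M0.
# Assumptions: 89 W Athlon (figure from the comment), 0.085 mW/MHz at 50 MHz.
athlon_w = 89.0
m0_w = 0.085e-3 * 50  # ~4.25 mW expressed in watts

ratio = athlon_w / m0_w
print(f"Power ratio: ~{ratio:,.0f}x")  # on the order of 20,000x
```

The result lands right around 21,000x, so the comment's round number holds up.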

It pretty much signals the end of embedded 8-bit and 16-bit CPUs over the next few years. It's probably less powerful at 50MHz than the CPU in the GBA (the ~16.8MHz ARM7) due to being so cut down. This isn't a problem, because ARM has the M3, A8, A9, ARM11, ARM9, etc. to offer for more demanding purposes.

There's the OMAP chipset used in the Palm Pre that I'm interested in. Version 4, I believe, has HD and 20-megapixel capabilities and should prove very useful in cell phones and small devices... if it takes off. I don't know much about the chipset, nor have I played with it, so I can't comment much. I just read the tidbit of news while researching the Palm Pre :)