
They will have a mini computer that will go 12 hours on a charge and do everything you need for web surfing, except for badly designed Flash websites.

I want one: a laptop that will go all day long and recharge from a solar panel.

Hell, I would buy a laptop that had an e-paper display and could not play video. A backpacking computer / tablet that is durable as hell, goes a week on a charge, and can let me record my thoughts or inspect maps at camp at the end of the day would be a godsend.

That's totally silly; a non-x86 CPU has no benefit for running Windows. You can't run x86 software on it, and that's why anyone uses Windows in the first place. Windows without backwards compatibility will not sell.

Ugh... It's not MY argument, it's the reality of history. A software translation layer (a VM) will be less efficient than microcode, which is what Intel processors use anyway to decode x86 into a RISC-like internal ISA. You really don't know what you're talking about here.

Performance is, however, strongly affected by clock speed. Cortex-A8 isn't too far off microarchitecturally from the Pentium (two-issue, in-order), so they're probably not too far off in performance-per-cycle. Current-gen A9 cores are (I would say) around a fast Pentium II or a low-clocked Pentium III.

No, they're stacking this up against Pentium Ms and Cores. The Core architecture was a short-lived 32-bit mobile architecture from 2006, derived from the older P6 family; it was the final generation of actual Pentium designs (discounting the rebranding of new low-end chips). The Core 2 architecture was a completely new design, based on the same paradigms as the P6 family. Both mainstream systems detailed in this review were over five years old.

The latest ARM processors' performance is comparable to a 1996 Pentium-class processor.

Troll much?

These SoCs can devote only a dozen or so mm^2 to the CPU cores, yet they still achieve performance comparable to a 1 GHz Atom at far, far lower power consumption. In addition, the rest of the SoC contains application-specific accelerators (graphics, video, and security are the most common) for the difficult tasks. The Phoronix benchmarks sadly didn't test the SoC as a whole, just the CPU cores.

The benchmarks provided by Phoronix focus on computational power, which is a relevant criterion. Yet ARM-based systems aren't targeted at the high-performance computing field. In their domain of application, criteria such as power usage and price tend to be much more relevant than how fast they compress files, encode MP3s, or run synthetic benchmarks. In fact, if a system is fast enough to play media, then it's fast enough to do anything at all.

So, how about comparing them where they need to be compared: power consumption and price?

There's even more to it...
In these benchmarks the accelerators of the OMAP4 were totally ignored. These would have improved things like x264 encoding to being a lot faster even than a Core i7 chip.
The OMAP4, like other ARM Cortex-A9 chipsets, has a lot of accelerators that deal specifically with highly computational tasks, and when you develop for it you actually use them...

A DSP is generally meant to be a programmable array. Most video compression and decompression accelerators in such systems are in large part fixed-function ASICs, not programmable down to the macroblock-processing level. Even some audio codecs are fixed-function.

A fixed-function ASIC will always beat DSPs and CPUs in the MIPS/watt domain, and often in $/MIPS as well.

These would have improved things like x264 encoding to being a lot faster even than a Core i7 chip.

And yet you provide no actual evidence of this beyond an assertion. Please show your x264 settings, which Core i7 you used (there is more than one model), a link to your source material, etc., so that your results can be duplicated and verified.

In these benchmarks the accelerators of the OMAP4 were totally ignored. These would have improved things like x264 encoding to being a lot faster even than a Core i7 chip.

And with just one misplaced letter, you show that you have no idea what you're talking about. The embedded platforms have ASICs dedicated to H.264 encoding, which can run much faster than comparable quality settings in x264 running on a high-end i7. If you actually tried running the software x264 encoder on an ARM, you would be looking at days for a single TV show, and that's assuming x86-optimized x264 would even compile and run on the ARM architecture.

That was what I meant.
Trying to compare the Intel Atom to the TI OMAP4 by checking the performance of a software video codec is just plain wrong.
Both chips are aimed at mobile and consumer-electronics devices, where things like power consumption (which was already mentioned) matter a lot more than number crunching.
The TI OMAP4 was designed with multimedia in mind, and as such it includes hardware accelerators for things like H.264 encoding/decoding and a lot of other things (3D graphics comes to mind).

"The chart below contrasts power consumption between the Intel Atom N450 and the ARM Cortex-A8 while running miniBench. The power curves were generated from system power usage, adjusted downwards so that idle system power was discarded. For the Atom, idle power was 13.7W with the Gateway netbook's integrated panel disabled [...]"

Looking at the numbers very roughly, one could say that the ARM is about half as fast as the Atom but uses only one third of the power. So if we normalize either the processing speed or the power consumption to the same level, we can conclude that the ARM is 50% more energy efficient than the Atom for the same amount of work.
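The normalization in the parent comment can be checked with a little arithmetic (the half-speed and one-third-power figures are the rough numbers quoted above, not measured values):

```python
# Back-of-the-envelope energy-efficiency normalization.
# Energy per unit of work = power / speed, with the Atom as baseline.
atom_speed, atom_power = 1.0, 1.0          # Atom normalized to 1
arm_speed, arm_power = 0.5, 1.0 / 3.0      # "half as fast, one third the power"

atom_energy_per_work = atom_power / atom_speed   # 1.0
arm_energy_per_work = arm_power / arm_speed      # 2/3

# The Atom needs 50% more energy than the ARM for the same work:
overhead = atom_energy_per_work / arm_energy_per_work - 1.0
print(f"Atom uses {overhead:.0%} more energy per unit of work")
```

Which is exactly the "50% more energy efficient" figure, once you divide power by speed instead of eyeballing the two ratios separately.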

A lot of the tests are irrelevant when comparing the Atom and the OMAP4. The OMAP4 has internal accelerators for voice and video coding; stuff like VP8 and H.264 can be done a lot faster on the OMAP4 if you ditch the software codec and use the OMAP4's accelerators instead. We've been able to use the OMAP4460 to encode 720p at 30fps into H.264 while using only ~20% of the CPU. Try doing that in software on a Core i7 and see where it gets you.

When benchmarking ARM Cortex chipsets, more care needs to be taken.

To be fair, Sandy Bridge does include a video encoding accelerator/transcoder as well. The Atom doesn't at the moment, although Medfield will presumably include one.

The thing about the OMAP 44xx is that the accelerator is a programmable DSP, not fixed-function hardware. The specs say it can do 1080p at 30fps. That means it could do VP8, or presumably even your own custom DSP code.

As someone with experience doing embedded development on ARM, I can tell you I found the OMAP architecture to be awful. I'll admit the only time I ever used it was on a demo board (the Beagle) versus boards with essentially identical specs from Freescale, Renesas, and a few others. TI was awful with support, their documents were awful, the hardware was flaky (overheating!?), and the sample sources and module sources they provided were absolute crap. On top of that, when we did get the boards running and started comparing them, the OMAP board was slow as tar on anything that involved a lot of memory operations in a small timeframe. Apparently the GLES subsystem was fantastic or something, but after a few attempts we couldn't get the modules built correctly against the kernel we were using and just gave up. In the end we went with the Freescale board (not my choice), which was easily superior to the TI OMAP garbage.

I am doing a lot of development on Gumstix Overos and am finding that platform to be pretty sweet. I think many of the problems you encountered might be an artifact of the Beagle (whose makers, incidentally, flat-out state not to base an actual product on their hardware).

Could you give me some advice on building a standalone mixing console/equalizer? I was considering a cheap dev board with plenty o' DSP and Linux, but I don't know how to take it from there. I'm kinda on a shoestring budget (<$800).

Well, even on the project where the OMAP was evaluated I wasn't the one making the final decisions, and I haven't been doing much device development lately either, so at the moment I don't think I could give you much good advice. You're lucky to be looking for something like that now, though, because with all the tablets and smartphones out now everyone seems to be offering really complete and capable ARM dev boards with well-done reference designs. You may also want to check out the communities around those manufacturers.

Amen. If you need the latest, greatest, fastest, it's not a fit, but it solves very many problems. My first RISC machine, a Sun 4/260, had, I believe, a 16 MHz CPU, a ~500MB fast-narrow SCSI disk on a very poky VME controller, and an 8-bit, fairly stupid frame buffer... and I ran a browser on it...

I will ENJOY seeing this absolutely DESTROYED, BEATEN INTO THE ASPHALT in terms of price-to-performance by the Raspberry Pi very soon. The days of $100-200 ARM boards are coming to an end; now, dear Pandawhatever, please set a sane price of $50 for your board, or die out of existence.

Oh, that's right. Hasn't Broadcom licensed any of the Cortex cores yet? No wonder they're able to make them so cheap; they're several generations behind and ARM Holdings mustn't be charging much in royalties.

How far behind? Well, each of the Cortex-A9 cores in this OMAP 4-based SoC performs about 2.5 times better than an ARM11 at the same clock speed. So each one could get about the same amount of work done as the 700 MHz ARM11 while puttering along at only 280 MHz.
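The clock-for-clock arithmetic above works out as a one-liner (the 2.5x per-cycle figure is the comment's own estimate, not a measured number):

```python
# Equivalent Cortex-A9 clock needed to match a 700 MHz ARM11,
# assuming the A9 does ~2.5x the work per cycle.
arm11_mhz = 700
per_cycle_ratio = 2.5

equivalent_a9_mhz = arm11_mhz / per_cycle_ratio
print(equivalent_a9_mhz)  # -> 280.0
```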

Apples to oranges. The ARM architecture provides a lot of integer performance, with little to no floating point performance. GPUs provide a lot of floating point performance, with little to no integer performance. The question then becomes which is more important for your specific application.

The ARM architecture provides a lot of integer performance, with little to no floating point performance.

That hasn't been true for a while now. Floating-point support (in various versions of "VFP") has been standard since ARMv6 (e.g. ARM11) and was optional in ARMv5 (e.g. ARM9, ARM10, XScale). ARMv7 (e.g. Cortex-A8/A9/A15...) has NEON ("Advanced SIMD") as an option that most licensees also include. So ARM cores now have pretty good floating-point performance too.
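As a rough illustration: on ARM Linux, the "Features" line in /proc/cpuinfo lists which of these FP/SIMD extensions the core reports (vfp, vfpv3, neon, ...). The sketch below just parses a sample string; the sample flags are illustrative, not taken from any specific chip:

```python
# Illustrative /proc/cpuinfo contents; real hardware will differ.
SAMPLE_CPUINFO = """\
Processor       : ARMv7 Processor rev 3 (v7l)
Features        : swp half thumb fastmult vfp edsp neon vfpv3
"""

def fp_features(cpuinfo_text):
    """Return the VFP/NEON flags found in a /proc/cpuinfo dump."""
    for line in cpuinfo_text.splitlines():
        if line.lower().startswith("features"):
            flags = line.split(":", 1)[1].split()
            return sorted(f for f in flags if f.startswith(("vfp", "neon")))
    return []

print(fp_features(SAMPLE_CPUINFO))  # -> ['neon', 'vfp', 'vfpv3']
```

On a real board you would feed it `open("/proc/cpuinfo").read()` instead of the sample string.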

Why the hate? This has 4 times the memory, twice the clock speed, and twice the cores of the Pi; of course it isn't going to be less than twice the price. Everything else being equal you might expect nearly 4 times the price (i.e. ~$130), but not only is this already actually available (and we don't know what a Pi will cost once we can actually buy one), the Raspberry Pi is also hoping to operate without profit and to short-cut the economies of scale with large government orders for education.

This has 4 times the memory, twice the clock speed and twice the cores of the Pi; of course it isn't going to be less than twice the price. Everything else being equal you might expect nearly 4 times the price (i.e. ~$130)

So, $130 for a bare board with CPU and RAM? Yeah, that would sound great, except anyone can build a whole PC for $191, http://www.pcmag.com/article2/0,2817,2392163,00.asp [pcmag.com] -- with 2x the RAM and a CPU that rips the ARM on the PandaBoard into a thousand tiny teddy bears in terms of performance.

Economies of scale. The price is relatively high due to the low volume of production. These are hobby boards. The only reason you can build a $200 PC right now is because the hardware gets production runs in the millions or more.

Economies of scale. The price is relatively high due to the low volume of production. These are hobby boards. The only reason you can build a $200 PC right now is because the hardware gets production runs in the millions or more.

Sure, and also maybe the chicken-and-egg problem? There won't be any scale until there's significant demand, and there won't be any demand while those boards cost so much that even most ARM enthusiasts would find it difficult to justify the purchase.

I think he is more referring to the current selection of ARM dev boards, like TI's offering, which starts at $130 for a 96K 75MHz M3, then wants another hundred bucks for a book, and God knows how much for a compiler that, if you install it, gets you some douche salesman calling once a month wanting to know if you need more software.

A lot of the tests that were done would have benefited from hard float. The Ubuntu ARM port does not use hard float. They should have used the Debian hard-float port to get more accurate performance metrics of what the hardware can do.

I would like to see a 12-hour benchmark that reported normalized results in terms of kg of battery. All these processors are in the ballpark for operations per second, but many can NOT do it all day long.

Twelve- and 24-hour results are needed to be sure. But a smartphone with a three-hour battery life is not a smart design. Simply from the safety point of view, this is important.
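A sketch of the kind of normalization the parent is asking for, assuming a ballpark Li-ion energy density of ~150 Wh/kg (my own assumed figure, not from the article):

```python
# Runtime per kilogram of battery, normalized by average power draw.
WH_PER_KG = 150.0  # assumed Li-ion energy density, ballpark only

def hours_per_kg(avg_power_watts):
    """Hours of runtime one kg of battery buys at a given average draw."""
    return WH_PER_KG / avg_power_watts

print(hours_per_kg(5.0))  # a 5 W draw -> 30.0 hours per kg
print(hours_per_kg(1.0))  # a 1 W draw -> 150.0 hours per kg
```

A benchmark reported this way would make the "can it do it all day" question explicit instead of leaving it buried in separate performance and power numbers.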

Yes, but then the issue is that the device is only for people who need more than three hours at full operating capacity without access to any kind of power source... and in reality that is not necessarily very many people... especially if you consider that a $100 device can compete with a $600 iPad for most people's needs. My laptop only gets about 4 hours on battery with a dimmed screen and WiFi and Bluetooth off, and I have never really found that to be a major limitation of its portability or usefulness.

Yes, but then the issue is that the device is only for people who need more than three hours at full operating capacity without access to any kind of power source... and in reality that is not necessarily very many people... especially if you consider that a $100 device can compete with a $600 iPad for most people's needs. My laptop only gets about 4 hours on battery with a dimmed screen and WiFi and Bluetooth off, and I have never really found that to be a major limitation of its portability or usefulness.

BUT a phone is a critical safety device. Dialing 911 or 999 for emergency services when stuck in a snow drift, or calling to tell your safe-and-sound kids to stay put after a tornado has passed... but wait, the kid's phone battery is exhausted because they were playing Angry Birds, and now you do not know....

In that case, I don't think it really makes sense to let children play Angry Birds on the battery of your critical safety device. The problem seems to be that people expect a critical safety device to also be useful as a toy, phone, computer, car battery, and who knows what else... which is of course rather a lot to ask of a retail device that you are literally risking your life on, hoping it functions correctly in critical situations.

In that case, I don't think it really makes sense to let children play Angry Birds on the battery of your critical safety device. The problem seems to be that people expect a critical safety device to also be useful as a toy, phone, computer, car battery, and who knows what else... which is of course rather a lot to ask of a retail device that you are literally risking your life on, hoping it functions correctly in critical situations.

I am with you -- yet the parents that give their kids "smart" phones are not thinking about a quake or a regional power outage. My guess is they are thinking: is my kid home yet, has he stopped at his GF's house to neck or blow smoke.

It starts with the very young kids, too young to exercise personal restraint, especially where Angry Birds is considered safe and blowing smoke and having sex at age 11 is not.