
angry tapir writes "Calxeda revealed initial details about its first ARM-based server chip, designed to let companies build low-power servers with up to 480 cores. The Calxeda chip is built on a quad-core ARM processor, and low-power servers could have 120 ARM processing nodes in a 2U box. The chips will be based on ARM's Cortex-A9 processor architecture."

Mmm, reminds me of the prototype card for Acorn computers that had 32 600MHz ARM processors. They never released an estimated price, though. This was back in the early 2000s, so it would have been incredibly expensive. Cortex-A9s are now in mass production, not made in the hundreds/low thousands that Acorn used to manage, so it might be cheaper than you actually think.

I suspect that cost will largely boil down to the "fabric", type unspecified, and whatever the "because we can" premium for this device happens to be.

Since the A9s are in mass production, and have some vendor competition, they should be reasonably cheap, and of basically knowable price; but, depending on what sort of interconnect this thing has, you could end up paying handsomely for that. "Basically Ethernet, but cut down to handle short signal paths over known PCBs" shouldn't be too bad; but if it is s

The second they try something smart, someone is gonna pull an HT-interconnected chip and screw them over. Though the cache coherency protocol may be an issue. OTOH, 4x1Gbps backplane Ethernet is standard, and wouldn't be too expensive to slap between chips, and let Xen in cluster mode handle the cache coherency. Beowulf SSI in a box. Sounds nice.

I am dying.. you have killed me. Way too funny for a Monday morning. Now I am at work literally laughing out loud and I can't explain what is funny to anyone who will get it... I am dead inside, killed by your humorous post...

When you start piling all you can onto a chip, the power consumption is naturally going to creep up. Once you reach a certain threshold of x chips, you lose the benefit of ARM being "low-power." Am I wrong?

The analogy with a switchmode power supply is completely b0rked. It doesn't contain any cars.
(Furthermore, switching off cores in a multicore server is completely unlike the 'switching' in a switch-mode power supply.)

5W average, so let's assume up to 10W per CPU according to the article.

Not bad. In fact, good enough to completely replace a commercial non-metered hosted VM offering of the kind Memset (http://www.memset.co.uk/) offers at present.

The interesting question here is what is the interconnect between them. After all, who cares that you have 480 cores in 2U if 90% of the time they are twiddling their thumbs waiting for data to be delivered to them.

TFA said 5W per node, meaning per 4 cores + RAM. That's 600W for the entire system, which is fine for a 2U enclosure.

Aside from the interconnect, the other important question is how much RAM are they going to have? They're using the Cortex A9, not the A15, so they just have a 32-bit physical address space. In theory, this lets them have 4GB of RAM per node (1GB per core), but some of that needs to be used for memory-mapped I/O, so I'd be surprised if they got more than 3GB, maybe only 2GB. That would m

The comment wasn't intended to be derogatory against the ARM. The ARM was just designed from the ground up with low power consumption in mind, not performance. The Cortex A9 has an 8-stage pipeline, 2.5 instructions per clock, around 13M transistors per core, runs at 800MHz to 1.5GHz, and has up to 512KB of L2 cache. The Pentium 3 has a 10-stage pipeline, 2.5 instructions per clock, around 10M transistors, runs at 500MHz-1.4GHz, and has up to 512KB of L2 cache. They're fairly comparable processors, with the ARM probably having a better instruction dispatcher and branch predictor, and the P3 having better floating point performance.

They're fairly comparable processors, with the ARM probably having a better instruction dispatcher and branch predictor, and the P3 having better floating point performance.

The ARM chip probably doesn't have a better branch predictor. The Pentium 4 had a very good one, which was back-ported to the Pentium-M. The Pentium 3 one was pretty good. ARM chips didn't have one at all until very recently, because branch prediction is much less important with the ARM ISA.

A lot of ARM instructions are predicated, meaning that they are evaluated, but their results are only retired if a specific condition register is set. Branch prediction on x86 is very important, because short if se
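A toy sketch of the idea, in Python rather than assembly: predication turns a short if/else into straight-line data flow, where both sides are computed and the condition merely selects the result, so there is no branch to mispredict. (The compilation behaviour described in the comments is ARM's actual mechanism; the Python here is only an illustrative analogue.)

```python
# Toy illustration of what predication buys you: a short if/else
# becomes straight-line data flow instead of a branch.
# On ARM, a compiler can emit both sides as predicated instructions;
# on x86 the same source usually compiles to a conditional jump,
# making the branch predictor's accuracy matter far more.

def abs_branchy(x):
    # Branching version: control flow depends on the data.
    if x < 0:
        return -x
    return x

def abs_branchless(x):
    # Branch-free version: both "sides" are computed and the condition
    # merely selects between them -- the software analogue of
    # ARM's conditional (predicated) execution.
    neg = x < 0                        # condition flag, True/False -> 1/0
    return neg * (-x) + (1 - neg) * x

print(abs_branchy(-5), abs_branchless(-5))   # 5 5
```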

600W per 2U server is possible but very impractical -- a full rack will require 12kW to power it, plus roughly another 12kW of cooling.
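To put rough numbers on that claim (assuming a standard 42U rack filled with these 2U boxes, and cooling overhead about equal to the IT load; neither assumption is from the article):

```python
# Rough rack-power arithmetic behind the "12kW plus cooling" figure.
WATTS_PER_2U = 600                    # 120 nodes x 5 W, per the article
RACK_UNITS = 42                       # common full-height rack (assumed)

enclosures = RACK_UNITS // 2          # 2U each
it_load_kw = enclosures * WATTS_PER_2U / 1000
total_kw = it_load_kw * 2             # IT load + roughly equal cooling

print(enclosures, it_load_kw, total_kw)   # 21 12.6 25.2
```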

I also don't believe they thought through how to stuff 120 processors and at least 120 DIMMs into a 2U case and cool them efficiently -- one ARM CPU requires no forced-air cooling, and one DIMM can be cooled by whatever blows around for other reasons, but 120 of them need airflow, and plenty of it. If they don't use separate DIMMs and have fixed RAM (I hope it's ECC and enough to run a d

They almost certainly aren't using DIMMs. To get the power consumption that they talk about, they'll be using MobileDDR in a package-on-package (PoP) configuration. This means that the ARM SoC and the memory are cooled as a single unit.

Not with the density of power they are trying to achieve -- it will mean that 10-20% of all power dissipation happens on chips with nearly perfect thermal insulation around them (board, a layer of air, and another chip). It will probably be the first device ever to overheat an ARM with the heat it produces. Even if the air gap is eliminated, RAM chips are not good at conducting heat from bottom to top.

It will make sense to place RAM on the opposite side of the board, and have airflow on both sides, but again, it

Not really, the server could stay powered up the whole time (unless you really get 0% usage at non-peak times, and those times are predictable, in which case it makes sense to just power down completely at those times). By scaling up I mean enabling more cores, thus improving the processing capacity of the server. Then you'd get the best of both worlds, with the server being fine for anything from small to massive workloads, while still using less power than the equivalent x86 setup. Like modern engines which can enable or disable cylinders at will to conserve fuel when not much power is needed.

If(!) ARM is more energy efficient, then it delivers more processing power per unit of power. The principle works the same at 250mW and at 600W. It would also generate less heat. The ability to turn cores on and off is an additional benefit that would further improve efficiency.

It really depends on how much (and what kind of) support hardware ends up being involved in having lots and lots of them together in some useful way. That, and what inefficiencies, if any, are present because your workload was really expecting a smaller number of higher-performance cores.

The power/performance of the core itself remains the same whether you have 1 or 1 million. The power demands of the memory may or may not change: phones and the like usually use a fairly small amount of low-power RAM in a package-on-package stack with the CPU. For server applications, something that takes DIMMs or SO-DIMMs might be more attractive, because PoP usually limits you in terms of quantity.

The big server-specific question is going to be the nature of the "fabric" across which 120 nodes in a 2U are communicating. Because 120 ports worth of 10/100 or GigE would occupy 3U and draw nonzero power themselves, I'm assuming that this fabric is either not Ethernet at all, or some sort of cut-down "we don't need to care about the standards because the signal only has to travel 6 inches over boards we designed, with our hardware at both ends" pseudo-Ethernet that looks like an Ethernet connection for compatibility purposes, but is electrically more frugal. Whatever that costs, in terms of energy, will have to be added on to the effective energy cost of the CPUs themselves.

Then you get perhaps the most annoying variable: many tasks are (either fundamentally, or because nobody bothered to program them to support it) basically dependent on access to a single very fast core, or to a modest number of cores with very fast access to one another's memory. For such applications, the performance of 400+ slow cores is going to be way worse than a naive addition of their individual powers would suggest. Sharing time on a fast core is both fundamentally easier, and enjoys a much longer history of development, than dividing a task among small ones. With some workloads, that will make this box nearly useless (especially if the interconnect is slow and/or doesn't do memory access). For others, performance might be nearly as good as a naive prediction would suggest.

Most servers do not do heavy computing work: they serve up (dynamic) web pages, handle SQL queries, process e-mail, serve files. That sounds to me like lots and lots of threads that each have relatively little work to do.

For example, /.: the serving of a single page to a single visitor will take a few dozen SQL queries and the running of a Perl script to stitch it all together. This takes, say, 0.001 seconds of time on an x86 core - a wild guess, may be an order of magnitude off, good enough for the sake of

But web pages won't even need you to do any floating point arithmetic.

Provided your application is written in a language that supports non-floating-point arithmetic. In PHP, for example, any division returns a floating-point result, as does any computation with numbers over 2 billion (such as the UNIX timestamps of dates past 2038).
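Python 3 happens to show the same division pitfall, which makes for an easy demonstration of the point, though unlike PHP its integers are arbitrary-precision:

```python
# Like PHP, Python 3's / operator always yields a float,
# even when the division is exact.
a = 10 / 2
print(type(a).__name__, a)    # float 5.0

# Staying in integer arithmetic requires explicit floor division.
b = 10 // 2
print(type(b).__name__, b)    # int 5

# Unlike PHP, Python ints are arbitrary-precision, so values past
# 2**31 (e.g. post-2038 UNIX timestamps) remain exact integers.
ts = 2_200_000_000            # a timestamp in the year 2039
print(type(ts).__name__)      # int
```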


I've been saying for years that people should make their chunks of code smaller (e.g., smaller functions) so it's easier to understand and maintain. The old argument has always been that the compiler will inline it even

Right now I'm running an Intel D510 rack server with dual 2.5" drives. It's great, and does a lovely job even running Ubuntu 10.04 server + VirtualBox (Ubuntu 8.04 LTS); however, I'd dearly love to shift over to something even more low-power/compact/SoC. So long as it has SATA, Ethernet, USB and runs a Debian-based distro, I'd be happy.

Something like a dual-core ARM machine would be ample for the server loads I'm seeing.

So, anyone seen anything like that yet? Or even just a motherboard in Mini-ITX?

Take a look at the PandaBoard [pandaboard.org], if you want a low-power, dual-core ARM server, although you'd have to use CF + USB for storage, not SATA. Note, however, that VirtualBox is x86-only. If you want virtualisation, you're currently pretty limited on ARM. There is a Xen port, but it's not really packaged for end users yet.

While not in 1U format, a lot of off-the-shelf NAS boxes use ARM. My LG N2R1 NAS has an 800MHz Marvell 88F6192 and runs Lenny. I wouldn't be surprised to see some Nano-ITX boards running similar hardware. Plus, I've been very impressed with how many Debian packages are available for armel. While not perfect, it's the most useful Linux server I've ever had.

I bought an RND-2000 and 2 fairly slow 2TB drives (5900 rpm, for less noise) since it was to be installed in my bedroom. I got the whole thing shipped with 2 drives for around $430.

Software-wise it's fairly nice, with support for Time Machine, AFP, CIFS etc and works great for any single task. But ask it to do more than 1 task and it just doesn't have the horsepower -- for instance copying a large file and trying to play a song causes the song playback to be delayed. If you're using an iPad to stream music o

It's pretty damned poor, yup. I figured the onboard software was probably crap, so I hacked mine to hell:

Managed to find the onboard serial pins and solder on a line-levelling serial adaptor, downloaded the WD GPL source, translated the needed Orion/Marvell code tree settings to modern/mainline kernel initialisation code, built a whole bunch of custom kernels, figured out the internal flash layout and how to create u-boot kernel images and initramfs images, and eventually got it to boot Debian squeeze.

The MythTV guys have completely different needs than an underutilized server operator. We have to deal with a very complex scheduler, which if it takes too long to run can cause problems, and with HD video that typically can only be decoded single threaded. Single threaded performance, and a lot of it, is a must, meaning our minimum recommendation is 2.5GHz Core 2 or Athlon II, or better.

That's not to say you can't be low power while you're at it. Tom's Hardware did an article last year where with not co

Yes, they do. First, if you're hosting a single web site on a single server then you'll probably want to install more than 4GB just because RAM is so cheap now. And you'll inevitably use it (for databases, file cache, etc.). If you're hosting multiple sites on a single server, then you DEFINITELY need more than 4GB of RAM per server (as it's going to be the limiting component).

Maybe ARM is justified for large Google-style server farms doing specialized work which does not require great amounts of RAM.

How about a link to this rant, if you want us to read it? And, if you've got a problem with PAE-like extensions, then I presume you're aware that both Intel's and AMD's virtualisation extensions use PAE-like addressing?

All that PAE and LPAE do is decouple the size of the physical and virtual address spaces. This is a fairly trivial extension to existing virtual memory schemes. On any modern system, there is some mechanism for mapping from virtual to physical pages, so each application sees a 4GB private address space (on a 32-bit system) and the pages that it uses are mapped to pages of physical memory. With PAE / LPAE, the only difference is that this mapping now lets you map to a larger physical address space - for example, 32-bit virtual to 36-bit physical. You see exactly the opposite of this on almost all 64-bit platforms, where you have a 64-bit virtual address space but only a 40- or 48-bit physical address space.
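A toy model of that decoupling, with hypothetical page-table contents (real hardware walks multi-level tables): 32-bit virtual addresses, 36-bit physical addresses, 4KB pages.

```python
# Toy model of PAE/LPAE-style translation: a 32-bit virtual address
# is split into a page number and offset, and the page number maps
# to a physical frame that can lie beyond the 4GB boundary.
PAGE_SHIFT = 12                      # 4 KB pages
PAGE_MASK = (1 << PAGE_SHIFT) - 1

# Hypothetical mapping: virtual page 0x12345 -> a frame above 4 GB,
# which would be unreachable without the extra physical address bits.
page_table = {0x12345: 0x800000}

def translate(vaddr):
    vpage, offset = vaddr >> PAGE_SHIFT, vaddr & PAGE_MASK
    return (page_table[vpage] << PAGE_SHIFT) | offset

vaddr = 0x12345ABC                   # fits comfortably in 32 bits
paddr = translate(vaddr)
print(hex(paddr))                    # 0x800000abc
```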

The big problem with PAE was that most machines that supported it came with 32-bit peripherals and no IOMMU. This meant that the peripherals could do DMA transfers to and from the low 4GB, but not anywhere else in memory. This dramatically complicated the work that the kernel had to do, because it needed to either remap memory pages from the low 4GB and copy their contents or use bounce buffers, neither of which was good for performance (which, generally, is something that people who need more than 4GB of RAM care about).

The advantage is that you can add more physical memory without changing the ABI. Pointers remain 32 bits, and applications are each limited to 4GB of virtual address space, but you can have multiple applications all using 4GB without needing to swap. Oh, and you also get better cache usage than with a pure 64-bit ABI, because you're not using 8 bytes to store a pointer into an address space that's much smaller than 4GB.
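The cache argument in concrete terms, assuming a 64-byte cache line (common on x86; some ARM cores of that era used 32 bytes):

```python
# A cache line holds twice as many 4-byte pointers as 8-byte ones,
# so pointer-heavy structures pack denser under a 32-bit ABI.
CACHE_LINE = 64                     # bytes (assumed)
ptrs_32 = CACHE_LINE // 4           # pointers per line, 32-bit ABI
ptrs_64 = CACHE_LINE // 8           # pointers per line, 64-bit ABI
print(ptrs_32, ptrs_64)             # 16 8

# For a pointer-heavy structure -- say a 1M-entry table of pointers --
# the footprint difference is substantial:
entries = 1_000_000
print(entries * 4 // 1024, "KB vs", entries * 8 // 1024, "KB")
```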

By the way, I just did a quick check on a few 64-bit machines that I have accounts on. Out of about 700 processes running on these systems (one laptop, two servers, one compute node), none were using more than 4GB of virtual address space.

His complaint basically boils down to the fact that the kernel needs to be able to map all of physical memory, and have some address space left over for memory-mapped I/O. This is a valid complaint for a kernel developer (although Linus' 'everyone who disagrees with me is an idiot' style is quite irritating), but it is largely irrelevant to the issue at hand. There is nothing stopping a kernel on ARM with LPAE from using 64-bit pointers internally. You still need to translate userspace pointers, but you nee

No, the problem is:
1) The kernel is starved for _address_ _space_ for its internal structures.
2) Userspace is starved for address space, because it has to view all the RAM through a small aperture (think EMS on the 80286).
3) Constant address space remapping is costly.

And it doesn't matter that you use 64-bit pointers internally, because you can't address data directly.

AFAIK, most OSes shut down the MMU in kernel mode - Linux, for instance. Address space remaps are costly because of a lot of explicit, non-cached memory accesses. Though I don't see why some more PAE bits can't replace 64-bit mode - you just need an IOMMU. And possibly hardware virtualization with a simple hypervisor. Though that might actually be faster, considering all the savings you make from pointers, not to mention that if the MMU and wide load/store instructions trap to the hypervisor directly - the c

AFAIK, most OSes shut down the MMU in kernel mode - linux for instance

Linux certainly doesn't do this on x86. It uses the segmentation mechanism. The kernel's memory is in a segment, marked as only visible to ring 0 code. When you make a system call, the current process's segment(s) remain visible to the OS, as does the kernel's segment. This means that you typically have 1GB of address space reserved for the kernel, and 3GB for each userspace process. RedHat used to ship a kernel that used an entirely separate address space, so you got 4GB for the kernel and

There are so many things wrong with that, that I don't even know where to start. The MMU on x86 handles both paging and segmentation. Segments map from virtual addresses to linear addresses. Paging maps from linear addresses to physical addresses. Both are part of the virtual memory mapping handled by the MMU, which first walks the LDT / GDT, then the page tables, to translate from a virtual address to the physical.

It sounds like you're repeating something that you heard and didn't understand. What yo

A database server, if it's heavily used, is largely stuck on the slowest part (disk I/O) when it has to do full table scans. You solve this by building proper indexes.

Until you have to use a DBMS that ignores your indexes. For example, MySQL appears unable to make efficient use of an index on a subquery that uses GROUP BY. From the manual [mysql.com]: "A subquery in the FROM clause is evaluated by materializing the result into a temporary table, and this table does not use indexes. This does not allow the use of indexes in comparison with other tables in the query, although that might be useful." The only reason I haven't already rewritten it as a join is that the subquery uses GROUP BY.

In favor of what? PostgreSQL, or something one has to pay for? Either way, dropping MySQL support in the next version would require a lot of clients to drop their current hosting provider and switch from (cheap) shared hosting to a (more expensive) VPS.

A proper webserver only needs 1 thread per core. Each socket/connection should only consume a few KB of RAM at most. A webserver shouldn't use more than a couple dozen MB of RAM at most, not including the OS file system cache. Look into nginx or lighttpd.

I do scientific computing, where we regularly use virtual address spaces larger than 4GB. Not all of that is in the working set, of course, but it's often necessary to have that much mapped. One recent example is my leakage power and delay models for near-threshold circuits. I implemented the Markovic formulas and found them to be too slow. My simulations would take days. So, I figured out the granularities I needed for voltage, power, and temperature, and I implemented those models as giant look-up tables.
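The technique, sketched with a stand-in formula (the real leakage/delay models are far more expensive, and the voltage range here is illustrative): precompute over the granularity actually needed, trading memory, and hence address space, for speed.

```python
import math

def slow_model(v):
    # Stand-in for an expensive analytical formula
    # (e.g. leakage as a function of supply voltage).
    return math.exp(-1.0 / v)

# Precompute the model over a grid at the granularity required.
V_MIN, V_MAX, STEPS = 0.2, 1.2, 10_000
STEP = (V_MAX - V_MIN) / STEPS
table = [slow_model(V_MIN + i * STEP) for i in range(STEPS + 1)]

def fast_model(v):
    # Nearest-entry lookup; linear interpolation between neighbouring
    # entries would trade a little speed for better accuracy.
    i = round((v - V_MIN) / STEP)
    return table[i]

print(abs(fast_model(0.7) - slow_model(0.7)) < 1e-3)   # True
```

Scaled up to a fine grid over three variables (voltage, power, temperature), tables like this easily run to gigabytes, which is how a workload ends up needing more than a 32-bit address space.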

If you are doing scientific computing, then you are not in the target market for a system like this. The virtual address space size is the least of your problems - the relatively anaemic floating point performance is going to cripple your performance.

Linus' rant is about using PAE in a desktop environment, which I agree with (that's why I said that I doubt any applications will use PAE). It says nothing about virtualisation. LPAE will work just fine for VMs.

Utter bollocks. I work for a data centre, and there is no way 4GB is *required* for multiple sites or anything like that. How about one server, running 20-odd Linux jails, each with 20-32 sites, all in 2GB.

Instead of virtualising ten servers on a single physical box, you could of course consider running a single server on a single piece of hardware again. And still win power/flexibility wise if you can get your "low-power" ARM board to cost much less than your souped up x86 board. If only because if a single board fails, just one server goes down. Not all ten.

Even programs that you wouldn't expect to need much memory often benefit heavily, as any modern desktop or server OS uses free RAM for disk caching. Adding more memory means fewer slow, slow disk reads are needed.

64-bit memory range? Each node is going to have its own memory slot(s). 120 cores, 4 cores per node = 30 nodes. If you plan to have less than 4GB of memory in this system, how small does each stick have to be when you plug 30 in? ~128MB. Good luck finding a bunch of DDR2/3 128MB sticks to plug into your 4GB 120-core web server.
Anyway, each node needs its own local copy of the data it needs to serve up. If your web page needs ~256MB, each node is going to need the same 256MB of data duplicated, plus any ext
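The stick-size arithmetic above, spelled out (the 4GB total and 256MB working set are the figures from that comment):

```python
# Splitting a 4 GB system total across one stick per node gives
# implausibly small sticks; duplicating per-node data does the reverse.
cores = 120
cores_per_node = 4
nodes = cores // cores_per_node          # 30 nodes
total_mb = 4 * 1024                      # 4 GB system total
per_node_mb = total_mb / nodes           # ~136 MB; nearest stick is 128 MB
print(nodes, round(per_node_mb))         # 30 137

# Conversely, a 256 MB working set duplicated on every node:
print(nodes * 256 / 1024, "GB")          # 7.5 GB
```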

Nice to know :-)
If it works as unified memory, then 2GB per node and 30 nodes is going to be way more than 32-bit addressing allows, but it would be great for distributed work. If each node runs as its own machine, then they will have to have a separate boot drive for each node, and each node will have to have some sort of network connection to every other node. Should be interesting once more info comes out.

My bet would be that each of the 120 nodes actually is a complete computer with 4 cores and its own memory - linked to the other 119 only via Ethernet. In this arrangement the 32-bit memory limit is not such a big issue. Each individual machine will not be particularly powerful anyway.

And you're posting on Slashdot, instead of flying your private jet to Japan to personally pick up debris and rescue people.

Oh right, only rich people have private jets, a lot of planes won't fly to Japan now, and even if you get a flight, unless you are currently in Japan with a car (most public transportation is down where help would be needed, and most Japanese people don't own cars), you'd have to walk to the disaster areas. You can't do anything except donate money and hope.

So basically you want Slashdot to turn into every news outlet on earth right now? If I want to hear more about any of the current natural disasters, the state of Libya, or even what lipgloss Jooolia is wearing this week, I'll turn on the television or read a news-corporation-owned website.

This is Slashdot, News for Nerds - just because a disaster happened doesn't mean we stop wanting to know about anything else.

The worst natural disaster in recorded history occurred less than a week ago, and you people are discussing Calxeda's first ARM-based server chip, designed to let companies build low-power servers with up to 480 cores; as the chip is built on a quad-core ARM processor, and low-power servers could have 120 ARM processing nodes in a 2U box; chips will be based on ARM's Cortex-A9 processor architecture???? My *god*, people, GET SOME PRIORITIES!

The bodies of nearly 10,000 dead people could give a good god damn about the advent of LAN parties, your childish Lego models, your nerf toys and lack of a "fun" workplace, your Everquest/Diablo/D&D addiction, or any of the other ways you are "getting on with your life".

I have in-laws and friends in Japan, and thank God they are all fine. But even if something had happened to them, what would you expect me, a /. reader, or anyone, to do? To cut my veins and pour ash on my head? What about the rest of the readers? You are just an attention whore looking for a cause célèbre to be upset about, nothing more, as your little rant does nothing constructive.

You don't know if people reading this donated for the cause. You do not know anything about anyone here, about what they