Sunday, September 30, 2007

Many-Node has many more implications than Many-Core

These shared-memory mechanism discussions continue to miss the point about the Many Core Era...

The many-core era will also be a many-node era. You will not have a C: drive except for those legacy systems running in their little jars of formaldehyde.

You will have a lot of computing power "out there" and several kinds of displays that are not directly attached to "your computer". You probably will not be able to locate an "application" as being installed on your C: drive and running on the CPU on the motherboard that has a ribbon cable running out to the disk that has C: formatted on it.

Hardware will be too cheap to be organized this way. We need to begin reorganizing our software now, or five years from now we'll be badly off, without any excuses.

If your current programming model distinguishes between "these threads here" and "those other processes over there", then it's almost certainly the wrong model for today, not to mention tomorrow.
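One way to make that concrete: give local and remote communication the same interface, so callers never branch on "thread or node?". Here is a minimal Python sketch of that idea; the `Mailbox` class and its methods are invented for illustration (this in-process version sits on a `queue.Queue`, but the caller-facing contract is written as if delivery could cross a network and fail):

```python
import queue

class Mailbox:
    """Hypothetical uniform endpoint: callers never learn whether
    delivery stays in-process or crosses a network."""

    def __init__(self):
        self._q = queue.Queue()

    def send(self, message):
        # A networked implementation would serialize here and could fail;
        # callers should treat send as fallible either way.
        self._q.put(message)

    def receive(self, timeout=None):
        # Always bound the wait: the sender may be another thread or
        # another machine, and either can be slow or gone.
        try:
            return self._q.get(timeout=timeout)
        except queue.Empty:
            return None

inbox = Mailbox()
inbox.send("hello")
print(inbox.receive(timeout=1.0))  # prints "hello"
```

The point of the sketch is the shape of the API, not the implementation: code written against `send`/`receive` with timeouts doesn't need rewriting when the other endpoint moves off-machine.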

5 comments:

Yes - but surely the point is 'will also be'... we have to find some way of getting from here to there - and effective concurrency on a single multi-core machine is still not a 'solved problem'...

Oh - and you also have a 'leaky abstraction' problem if you attempt to abstract the network out of concurrency over a network.

Concurrency with multiple processes (or threads or transactions or whatever) on a single machine doesn't need to know about latency - concurrency across a network probably does - and pretending it's not there is surely a recipe for pain...

"you also have a 'leaky abstraction' problem if you attempt to abstract the network out of concurrency over a network."

The nice thing about Erlang is that it doesn't abstract the network out. You simply won't have as many failures when running in the same node, but the code looks the same and should still assume things will fail.

Even running a single threaded program on a single core you need to know about latency; 15 years ago you could assume that a FP division took tens of times longer than a memory fetch, and iterating over a linked list was only a few times slower than iterating over an array. You can't ignore it anywhere now.
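The linked-list-versus-array claim can be poked at with a rough micro-benchmark. This Python sketch is admittedly blunt - interpreter overhead swamps much of the cache effect that dominates in C - and the exact ratio depends entirely on the machine; the helper names are invented here:

```python
import time

def linked_list_from(items):
    """Build a singly linked list as nested (value, next) pairs."""
    head = None
    for v in reversed(items):
        head = (v, head)
    return head

def sum_linked(head):
    """Walk the chain, following a pointer per element."""
    total = 0
    while head is not None:
        total += head[0]
        head = head[1]
    return total

data = list(range(100_000))
linked = linked_list_from(data)

t0 = time.perf_counter(); s1 = sum(data);          t_array = time.perf_counter() - t0
t0 = time.perf_counter(); s2 = sum_linked(linked); t_linked = time.perf_counter() - t0
assert s1 == s2
print(f"array: {t_array:.4f}s  linked: {t_linked:.4f}s")
```

In a language closer to the metal, the gap comes from pointer-chasing defeating the prefetcher - the effect the commenter says has only grown as memory latency falls further behind the CPU.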

About Me

I'm usually writing from my favorite location on the planet, the Pacific Northwest of the U.S. I write for myself only, and unless otherwise specified my posts here should not be taken as representing an official position of my employer.
Contact me at my gee mail account, username patrickdlogan.