Herb Sutter, a software architect from Microsoft, gave a speech yesterday at In-Stat/MDR's Fall Processor Forum. Addressing a crowd mostly consisting of hardware engineers, he talked about how the software world was ill-prepared to make use of the new multicore CPUs coming from Intel and AMD.

Funny, those are the issues Sutter (and others) are bringing to the C++ committee--that the language will have to address threads explicitly, even if it harms the single-threaded case.

Additionally, his arguments finally convinced me that GC in C++ is a good thing. He has also been one of the people working on a GC model for Managed C++, which separates object lifetime (which can remain deterministic) from memory reclamation.
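To make the lifetime/reclamation split concrete, here is a toy sketch (my own illustration, not the actual Managed C++ design): `dispose()` ends an object's lifetime deterministically, while the memory itself is reclaimed later by a separate sweep. The `Node`/`ToyHeap` names are invented for this example.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Toy sketch: lifetime ends deterministically via dispose(),
// but the memory is reclaimed later by a separate sweep pass.
struct Node {
    bool disposed = false;
    void dispose() { disposed = true; }  // deterministic "destructor"
};

class ToyHeap {
    std::vector<Node*> objects_;
public:
    Node* allocate() {
        Node* n = new Node();
        objects_.push_back(n);
        return n;
    }
    // Memory reclamation, decoupled from dispose().
    void sweep() {
        std::vector<Node*> live;
        for (Node* n : objects_) {
            if (n->disposed) delete n;   // reclaim memory only now
            else live.push_back(n);
        }
        objects_.swap(live);
    }
    std::size_t size() const { return objects_.size(); }
};
```

The point is that client code can rely on `dispose()` running at a known moment (closing files, releasing locks) without also dictating when the allocator gets the bytes back.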

And I'm confident he's familiar with the C# efforts. Since C# has some problems in the MT arena (all languages do) I don't see why it has a better chance at addressing MT issues than C++ does.

Funny, those are the issues Sutter (and others) are bringing to the C++ committee--that the language will have to address threads explicitly, even if it harms the single-threaded case.

I'm not talking about threads. Native thread support in C++ won't make parallel code any easier to write than it is in Java. I'm talking about basing the language on an explicitly concurrent model of computing. The precise computing model to use is still an area of intense research, but the join calculus (as used in Polyphonic C#) seems to be a popular one.
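For flavor, a Polyphonic C# "chord" is a body that runs only once messages have arrived on all of its channels. The sketch below fakes one such join in plain C++ (the `Join2` class is invented for this example); the point of a calculus-based language is that such joins are first-class syntax the compiler can check and optimize, not a hand-rolled library.

```cpp
#include <cassert>
#include <functional>
#include <mutex>

// Rough flavor of a two-channel chord: the body fires only when a
// message is pending on BOTH channels. Library sketch only; the body
// runs under the lock here, which a real implementation would avoid.
class Join2 {
    std::mutex m_;
    bool has_a_ = false, has_b_ = false;
    int a_ = 0, b_ = 0;
    std::function<void(int, int)> body_;
public:
    explicit Join2(std::function<void(int, int)> body)
        : body_(std::move(body)) {}
    void sendA(int v) {
        std::lock_guard<std::mutex> lk(m_);
        a_ = v; has_a_ = true; fire();
    }
    void sendB(int v) {
        std::lock_guard<std::mutex> lk(m_);
        b_ = v; has_b_ = true; fire();
    }
private:
    void fire() {  // caller holds m_
        if (has_a_ && has_b_) {
            has_a_ = has_b_ = false;
            body_(a_, b_);
        }
    }
};
```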

And excuse me if I don't trust the same people who came up with auto_ptr (and encouraged shared_ptr) to come up with a proper parallel C++.

Additionally, his arguments finally convinced me that GC in C++ is a good thing.

But you'll never be able to have a good GC in C++! As long as you can convert an integer into a pointer, you'll need a conservative GC, and that makes a lot of the really high-performance GC algorithms unusable.
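Here is why integer/pointer conversion forces conservatism, as a self-contained sketch (the function name is mine): once a pointer is disguised inside an integer, no collector can prove whether that word is a reference, so a precise or compacting GC, which relocates objects and fixes up the pointers to them, would silently break this legal code.

```cpp
#include <cassert>
#include <cstdint>

// Legal C++: hide a pointer inside an integer, then recover it.
// A compacting GC that moved the object in the meantime could not
// have fixed up `hidden`, so the recovered pointer would dangle.
// A conservative GC must therefore pin anything that *might* be
// a pointer, ruling out the high-performance moving algorithms.
int hidden_pointer_demo() {
    int* obj = new int(7);
    std::uintptr_t hidden =
        reinterpret_cast<std::uintptr_t>(obj) ^ 0x5555u;  // disguised
    obj = nullptr;  // no undisguised pointer remains anywhere
    int* recovered = reinterpret_cast<int*>(hidden ^ 0x5555u);
    int value = *recovered;  // valid -- unless the object was moved
    delete recovered;
    return value;
}
```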

And I'm confident he's familiar with the C# efforts.

Polyphonic C# is a separate research project funded by Microsoft, not a part of their main C# effort.

Since C# has some problems in the MT arena (all languages do) I don't see why it has a better chance at addressing MT issues than C++ does.

The standards committee is likely to be the primary reason why any concurrency concepts in C++ will suck (just as the metaprogramming concepts suck). They refuse to break compatibility with "classical" C++ and C, and are thus stuck using inferior solutions. Moreover, pointers and guarantees of byte-level object layout will likely become another issue. For the overhead of a truly parallel language to be tolerable, you'll likely need powerful optimizers, which are exceedingly difficult to build for a language that makes as many guarantees about memory layout as C++ does. Hell, C++ can't even support a compacting GC!
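The layout-guarantee point can be made concrete (with invented names, `Packet` and `flags_from_bytes`): for standard-layout types, C++ guarantees that byte-wise copies and `offsetof` arithmetic work, and real code depends on that. A runtime that wanted to reorder fields, add headers, or relocate objects behind the program's back would break such code.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <cstring>

// Standard-layout type: C++ guarantees its first member sits at
// offset 0 and that memcpy of its bytes reproduces the object.
struct Packet {
    std::uint32_t id;
    std::uint16_t flags;
};

// Byte-wise reconstruction is legal and well-defined, which is
// exactly the kind of guarantee that ties an optimizer's hands.
std::uint16_t flags_from_bytes(const unsigned char* raw) {
    Packet p;
    std::memcpy(&p, raw, sizeof p);
    return p.flags;
}
```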

I'm talking about basing the language on an explicitly concurrent model of computing.

Fair enough. I disagree on this point.

And excuse me if I don't trust the same people who came up with auto_ptr (and encouraged shared_ptr) to come up with a proper parallel C++.

Both of those have their place. If you're just knee-jerk objecting to them, then I'm afraid your credibility drops to near zero.
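For readers who missed the auto_ptr wars: the complaint is that `auto_ptr<T> b = a;` looked like a copy but silently transferred ownership and nulled `a`. Its eventual replacement, `std::unique_ptr`, keeps the single-owner idea but makes the transfer explicit: it cannot be copied at all, only moved. A minimal sketch (the `make_owner` helper is invented for this example):

```cpp
#include <cassert>
#include <memory>
#include <utility>

// auto_ptr's pitfall: "copy" transferred ownership and nulled the
// source. unique_ptr deletes the copy constructor entirely, so the
// transfer must be spelled out with std::move.
std::unique_ptr<int> make_owner(int v) {
    return std::unique_ptr<int>(new int(v));
}
```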

But you'll never be able to have a good GC in C++! As long as you can convert an integer into a pointer, you'll need a conservative GC, and that makes a lot of the really high-performance GC algorithms unusable.

Hell, C++ can't even support a compacting GC!

Now you're just exposing your ignorance. Read up on what has been done in Managed C++. Part of it includes the ability to include or exclude objects from the GC heap. Different rules apply to the GC heap objects. The CLI GC is compacting, and the (just released) VS2005 includes Managed C++ with GC.

I must respectfully disagree about the difficulty of writing concurrent code; although it is more difficult than writing linear code, it is mostly a matter of learning a few additional concepts and adjusting one's perceptions slightly.

In contrast, it is far more difficult to introduce concurrency into existing linear code without dramatically curtailing the concurrency. In many cases this cannot be done without significant rearchitecture.

With a few exceptions, C++ is also not a bad language to write concurrent code in. Neither garbage collection nor smart pointers are needed. If it is approached with the idea that no more than one thread "owns" a piece of memory (generally, the exclusive right to update or delete it), the management is relatively straightforward and far more efficient than either of those alternatives. The greatest problem I have found is managing thread exit: a thread must not exit while it still owns any memory.
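The single-owner discipline described above can be sketched in standard C++ (the `Handoff` class is my own illustration): a buffer is owned by exactly one thread at a time, and handing it off moves a `unique_ptr`, so the sender demonstrably no longer updates or frees the memory; no GC and no reference counting.

```cpp
#include <cassert>
#include <condition_variable>
#include <memory>
#include <mutex>
#include <thread>
#include <vector>

// One-slot handoff point: ownership of the buffer passes from the
// giving thread to the taking thread by moving the unique_ptr.
// At every moment exactly one thread can update or delete it.
struct Handoff {
    std::unique_ptr<std::vector<int>> buf;
    std::mutex m;
    std::condition_variable cv;

    void give(std::unique_ptr<std::vector<int>> b) {
        {
            std::lock_guard<std::mutex> lk(m);
            buf = std::move(b);
        }
        cv.notify_one();
    }
    std::unique_ptr<std::vector<int>> take() {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [this] { return buf != nullptr; });
        return std::move(buf);
    }
};
```

Note how the thread-exit rule falls out naturally: a worker that has given its buffer away owns nothing and may exit safely.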

If a reasonable threading model is started with, most problems yield relatively easily. Tracking down problems is more difficult, but the basic debugging tools are getting far better and the techniques, although nonlinear, are reasonably straightforward.

I have worked extensively with highly concurrent software for over a decade on Unix platforms, using shared-address-space concurrency in interprocess and intraprocess contexts simultaneously; much of that effort was part of a DBMS engine, and I learned to do it without any specialized training beyond reading and experimenting.

With some training, any reasonable developer ought to be able to learn and use the techniques in six months, sufficient for most application work.