<codeplay@gmail.com> wrote in message news:07-02-060@comp.compilers...
>> What's new in compilers? Very little. I think in 10 years, compiling
>> will be largely forgotten.
>
> Parallelism. Processors are going parallel and there is no clear
> solution to that problem. The future of processor architecture is
> unclear.
Amen.

> Compilation is pretty much necessary for software development. I do
> remember the days of programming in assembly and I know people who
> still do. I can see what you're saying, but it's the wrong time to be
> saying it. People are really struggling to develop software for multi-
> core processors (beyond the very simple). We're about to release some
> new compiler technology to deal with parallelism and I know others are
> too.

Compiling for such architectures is actually pretty hard, and conventional
languages clearly aren't the right answer, as many of them (C and C++
notably) have no model of parallelism at all. Small SIMD vector CPUs
are everywhere, but it is very hard to get current compilers to use the
SIMD instructions effectively.

Other application areas require other techniques. For symbolic computing,
we've made a bet that medium-to-small granules of computation are
the right concept to support the type of SMP parallelism we are seeing
(4 CPUs today, 32 CPUs ... soon) in desktops. We started making
that bet back in 1996, and developed PARLANSE, a parallel programming
language. PARLANSE addresses the problem of irregular parallelism,
where you may have lots of small tasks of uneven size with complex
dependencies. Our application is automated program analysis
and transformation, where the scale of our artifacts is large
(systems of source code), and irregularity comes from the shape of the
data structures that represent them.

Today we are running generated PARLANSE programs of 500K SLOC
on Wintel boxes to support our DMS Software Reengineering
Toolkit. They seem to scale nicely.

Compiling code for irregular parallelism is an interesting topic.
How do you identify the parallelism? How do you estimate the cost
of grains? How do you schedule them on a small but indefinite number
of CPUs? How do you keep context switching time low enough to make it
worth the trouble? How do grains interact efficiently? How do you
handle exceptions that cross parallelism boundaries, efficiently?
PARLANSE is a point answer in this space. I'm sure there are others,
but our particular experiment is just beginning to be interesting.