
I'm just trying to have a conversation about developing new (revolutionary rather than evolutionary) methodologies versus modifying old ones to exploit advances in hardware as effectively as possible. Auto-parallelization at the machine level is pretty much science fiction without explicit support at the expressive level, way up the abstraction stack... Or is it?
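To make that concrete, here's a small sketch of how things stand today on a mainstream managed platform. It's in Java rather than C#/F# purely so it runs standalone (the class and method names are mine); C#'s `AsParallel()` makes the same point. The parallelism is not discovered by the machine; it is requested explicitly at the expressive level, with a single `.parallel()` on an otherwise identical pipeline.

```java
import java.util.stream.LongStream;

// Sketch: parallelism as an explicit, expressive-level opt-in,
// not something the machine infers on its own.
public class ParallelSketch {

    static long sumOfSquares(boolean parallel) {
        LongStream s = LongStream.rangeClosed(1, 1_000_000);
        if (parallel) {
            s = s.parallel(); // the single expressive-level hint
        }
        return s.map(x -> x * x).sum();
    }

    public static void main(String[] args) {
        // Same computation, same result; only the scheduling differs.
        System.out.println(sumOfSquares(false) == sumOfSquares(true)); // prints true
    }
}
```

The interesting question is whether a compiler or runtime could ever insert that one call safely on its own, without the abstraction (a side-effect-free stream pipeline) being declared up the stack first.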

Of course, throwing out everything that's been invested in for so long is unrealistic, but this is why theory is fun.

C

It is indeed an interesting topic.

Let me express what annoys me more than anything, which is this:

There is not a single platform for software development. .NET did not kill off native code despite its all-inclusive mantra; Microsoft itself still creates a lot of native software for maximum performance. Maybe this picture will change a little as the per-machine processor count increases.

This in turn means that

Everything is created twice: native and .NET. The latest example is Rx (which is even triply created, with a JavaScript version as well; cool as that is).

C# and F# are awesome, but

Is the "object-functional integration" good enough, and does IL have the right abstractions?

Also, given a superbly expressive language like Scala

Wouldn't it be nice to be able to express a whole operating system using extremely modular, fine-grained, and clean semantics (assuming something better than C/C++)? And is the Singularity/Sing# experiment the future for Microsoft?

This also ties into the parallelism and concurrency problem, in that

It is preferable to express as much code as possible at a high level of abstraction, thereby pushing those abstractions closer to the metal, so that as much code as possible yields their benefits and integration is maximized (IL + P/Invoke vs. C# + F#?).