
This is the second year I've been lucky enough to take part in the cross-platform software engineering conference JAOO. Like last year, I was very fortunate to sit down with a few key players in the programming language design field and watch several technical presentations spanning the industry and the problems we face as software developers. One of the truly great things about JAOO is that it is not a product-focused conference: it's about programming first and foremost, and it enables the sharing of perspectives and ideas among the world's best and brightest programming minds. As you can imagine, I, like many technical types here at Microsoft, am a huge fan of JAOO. Thank you, Trifork!

In this conversation, Microsoft Technical Fellow and Chief Architect of C# Anders Hejlsberg sits down with programming language design legend and computer scientist Guy Steele (co-creator of Scheme and an expert in several languages ranging from Lisp to Java). I think Guy is one of the smartest people I've ever met.

The topic of conversation is the elephant in the modern general-purpose programmer's living room: concurrency. With today's widely used general-purpose languages like C++, Java, C#, VB, and Ruby, it's hard to express parallelism in productive ways. Anders et al. are working on both language enhancements to C# and VB.NET and BCL support (Parallel Extensions to .NET, for example). Today, Guy is working on a mathematical language (domain-specific as opposed to general-purpose) and runtime, Fortress, that is so concurrent it makes it hard for programmers to even write sequential code!

Listen in to two of the programming industry's most successful thinkers and get a sense of their perspectives on the future of general purpose programming languages now that Concurrency and Parallelism are entering the development status quo.

Enjoy. More JAOO coverage to come. You can watch Anders' keynote on language futures
here.

Without a doubt it's very exciting to focus on the end goal of shiny new languages and functionally 'pure' (or at least side-effect-annotated) frameworks, but over the next 5 years I'd really like to see Microsoft et al. spend some of their community education budget on giving direction as to how we should transition our existing, deeply object-oriented architectures to prepare for all this. How do we find that sweet spot of being more explicit about mutation without totally giving up on encapsulation? Should we start annotating mutations? Surely Microsoft should provide those annotation definitions so we can use a common standard. Will we get short-term tool/framework/runtime changes to support the transition, and not just the rewrite scenario?
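To make the "annotating mutations" idea concrete, here is a minimal sketch of what a mutation annotation might look like in C#. Note that the `Mutates` attribute is entirely hypothetical — nothing like it ships in the BCL today; the point is only that attributes would let tools flag state-changing methods without breaking encapsulation.

```csharp
using System;

// Hypothetical: a marker attribute that analyzers could use to surface
// which methods write shared state. Callers still go through the method,
// so encapsulation is preserved; the mutation is just made explicit.
[AttributeUsage(AttributeTargets.Method)]
sealed class MutatesAttribute : Attribute { }

class Account
{
    private decimal balance;

    [Mutates] // explicit: this method changes internal state
    public void Deposit(decimal amount)
    {
        balance += amount;
    }

    // Read-only access; no annotation needed.
    public decimal Balance { get { return balance; } }
}

static class Program
{
    static void Main()
    {
        var account = new Account();
        account.Deposit(10m);
        Console.WriteLine(account.Balance); // prints 10
    }
}
```

A common, Microsoft-provided standard for such annotations (rather than everyone inventing their own) is exactly what the comment above is asking for.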

Don't get me wrong, I'm not being anti-change here, but any pitch to management about adopting this stuff needs to include a technically strong discussion of how architectures can be changed over time in such a way that they don't immediately and severely impinge
on developer productivity. There is clearly more to the problem than just 'pepper your code with LINQ query statements' (I'm being deliberately provocative, not as an attempt to troll but because I think this part of the story is currently missing.)

Great Discussion. I tend to think the more functional stuff added to C#(Anders seems to suggest that!), the more polluted it's going to be. C# shouldn't be answers to all the problems, it's about time they don't try to make it that way!

No matter how much functional programming is useful, current languages will still be alive and kicking!

Great points. In fact, this is exactly what we are doing with Parallel Extensions for .NET and language/runtime changes in our stack. As Anders states in the video, we don't expect developers to throw away their current toolset. Instead, we'll add new constructs to the existing tools to make writing concurrent code productive and effective. Of course, a fair amount of magic needs to happen to pull this off, and Anders et al. are working very hard at it!
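A small sketch of the style Parallel Extensions enables: an ordinary `for` loop becomes a `Parallel.For` call and the library schedules iterations across cores. (This uses `System.Threading.Tasks.Parallel` as it ships in released versions of .NET; the CTP discussed at the time had slightly different namespaces.)

```csharp
using System;
using System.Threading.Tasks;

static class Program
{
    static void Main()
    {
        int n = 1000;
        long[] squares = new long[n];

        // Sequential form: for (int i = 0; i < n; i++) squares[i] = (long)i * i;
        // Parallel form: the library partitions the index range across threads.
        Parallel.For(0, n, i =>
        {
            // Each index is written by exactly one iteration, so there is
            // no shared mutable state and no locking needed.
            squares[i] = (long)i * i;
        });

        Console.WriteLine(squares[999]); // 998001
    }
}
```

The existing toolchain, language, and array types are untouched — only the loop construct changes, which is the "add new constructs to the existing tools" point above.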

I thought both seemed to extol F# and pretty much say "if you want functional, then
that (F#) is what you want to use".

The appeal of functional programming to Anders, it seems, comes when you parallelize your code. The Task Parallel Library is already working on a parallel for-loop. The next logical step is any method where a time-consuming task is present (work needs to be done), like your lambdas. I think that is why Anders extols functional programming: as far as "the elephant in the room" goes, functional constructs are a tool for tackling concurrency issues.
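The "time-consuming work handed off as lambdas" idea can be sketched with `Parallel.Invoke`: each lambda is a self-contained unit of work that the runtime may run on a separate thread. The work split here (summing two halves of a range) is just an illustrative stand-in for any independent, CPU-bound tasks.

```csharp
using System;
using System.Threading.Tasks;

static class Program
{
    static long SumRange(long from, long to)
    {
        long total = 0;
        for (long i = from; i < to; i++) total += i;
        return total;
    }

    static void Main()
    {
        long a = 0, b = 0;

        // Two independent chunks of work expressed as lambdas; because
        // they touch disjoint state, they can safely run concurrently.
        Parallel.Invoke(
            () => a = SumRange(0, 500000),
            () => b = SumRange(500000, 1000000));

        // Same result as the sequential sum of 0..999999.
        Console.WriteLine(a + b); // 499999500000
    }
}
```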

I therefore see C# developing as a hybrid language, or creating a hybrid developer, whereby if any parallel tasks need to be done, the best practice in your application is to use functional constructs, as they can be made to run in parallel. Learning F# is going to be the best way for .NET developers to leverage their existing .NET knowledge without resorting to the extremes and complexity of Haskell.
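The hybrid style this comment describes is visible in PLINQ (the query half of Parallel Extensions): a functional, side-effect-free LINQ query can be parallelized simply by inserting `AsParallel()`. Because the lambda shares no mutable state, the execution order doesn't affect the result — which is precisely why functional constructs parallelize so well.

```csharp
using System;
using System.Linq;

static class Program
{
    static void Main()
    {
        // A declarative, side-effect-free pipeline: the only change from the
        // sequential version is the AsParallel() call.
        long sumOfSquares = Enumerable.Range(1, 1000)
            .AsParallel()
            .Select(x => (long)x * x)
            .Sum();

        Console.WriteLine(sumOfSquares); // 333833500
    }
}
```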

That point about a base language that can support different syntaxes through library-like extensions: surely that has to be the way to go in the long term?

We already have an ever-growing range of APIs in the CLR to let us dynamically compile code snippets into executables. The C# and VB compilers are "libraries" in that sense. They need to be reusable in different contexts, e.g. partial compilation for IDE IntelliSense as well as the "real" compilation process. So why not implement those two languages as AST processors on the same general compilation engine? And then introduce a way to let you switch syntax libraries in the middle of a file, or in an expression.
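As a small taste of the "compile code at runtime" capability the comment refers to, here is a sketch using `Reflection.Emit`, one of the CLR APIs for generating executable code on the fly. (It works at the IL level rather than the syntax level — a truly pluggable syntax front end, as proposed above, would need the compilers themselves exposed as libraries.)

```csharp
using System;
using System.Reflection.Emit;

static class Program
{
    static void Main()
    {
        // Build a method int Double(int x) => x * 2 at runtime, in memory.
        var dm = new DynamicMethod("Double", typeof(int), new[] { typeof(int) });
        ILGenerator il = dm.GetILGenerator();
        il.Emit(OpCodes.Ldarg_0);  // push the argument
        il.Emit(OpCodes.Ldc_I4_2); // push the constant 2
        il.Emit(OpCodes.Mul);      // multiply
        il.Emit(OpCodes.Ret);      // return the result

        // Bake it into a callable delegate and invoke it.
        var doubler = (Func<int, int>)dm.CreateDelegate(typeof(Func<int, int>));
        Console.WriteLine(doubler(21)); // 42
    }
}
```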

Assuming that someone writes software that is super-concurrent, is there a way to verify that this is the case without actually trying it on a multi-processor computer?

For instance, let's say that I have a quad core computer, and I write software that scales amazingly to 4 threads. However, my intent was to actually scale to 8 or 16 or 32 threads, but I don't have the machine to test it.

Is there some kind of execution-path analysis tool (perhaps Intel can help with this) which analyzes the binary code (or maybe the source) and says, "yes, your code scales to 256 threads with the following data set, but no more than that"?

Is there any research done in this area that anyone knows about?
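I don't know of a tool that certifies "scales to 256 threads," but one back-of-the-envelope way to bound scalability without a big machine is Amdahl's law: if profiling tells you what fraction of the work is inherently serial, the maximum speedup at any thread count follows directly. The 5% serial fraction below is an assumed example figure, not a measurement.

```csharp
using System;
using System.Globalization;

static class Program
{
    // Amdahl's law: S(n) = 1 / (s + (1 - s) / n), where s is the serial fraction.
    static double Speedup(double serialFraction, int threads)
    {
        return 1.0 / (serialFraction + (1.0 - serialFraction) / threads);
    }

    static void Main()
    {
        double s = 0.05; // assume profiling shows 5% of the work is serial
        foreach (int n in new[] { 4, 8, 32, 256 })
        {
            Console.WriteLine(string.Format(CultureInfo.InvariantCulture,
                "{0,3} threads: {1:F2}x", n, Speedup(s, n)));
        }
        // Even with 256 threads the speedup is capped near 1/0.05 = 20x,
        // so perfect scaling on 4 cores says little about 32 or 256.
    }
}
```

This is only an upper bound — it ignores synchronization and memory-bandwidth costs — but it shows why extrapolating from a quad-core result is dangerous.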

