Description

In this talk, Technical Fellow Anders Hejlsberg will share project plans for the future directions of C# and Visual Basic, including a discussion of what trends are influencing and shaping the direction of programming languages. Anders will talk about asynchronous programming and Windows 8 programming, coming in the next version of Visual Studio. He will also discuss the long-lead project “Roslyn”, including object models for code generation, analysis, and refactoring, and upcoming support for scripting and interactive use of C# and Visual Basic.

The Discussion

Just wanted to send a big "Thank You" to all the folks who are creating the downloads. I could not make it to the conference in person but because of their hard work in making the sessions available so quickly I almost felt like I was there. Your work is definitely appreciated here!!!


I'd like to second that. The quality and speed is great. Someone is doing their job.

So, essentially what the .NET team has been working on for the last four years is async (yes, absolutely amazing stuff) and then, to follow that up... wait for it... a Microsoft version of ReSharper?(!!!)

Seriously... C# vNext is going to be nothing more than access to the compiler so that we can build our own refactorings for Visual Studio? Who really needs that?

In the meantime, Herb Sutter and the rest of the Windows core group have realized that performance actually matters and fixed C++. Do we have access to the full power of the CPU or GPU in .NET? No. Do we have auto-vectorization in the .NET runtime? Do we have access to vector instructions in any form? Do we have proper numeric generic support for games or scientific computing? Has anyone spent any time whatsoever on narrowing the gap between native and managed? How about something on the order of Mono's LLVM optimizing native AOT compiler???

F# at least has type providers.. which are modestly useful to somebody.

@bcooley: wow, that is a really nasty post. What you fail to realize is that Roslyn will make AOT very easy, since C# code can be *dynamically converted into C++* and then *compiled as native*. Try to use your imagination for positivity instead of being negative and rude! This takes C# light years ahead.

Furthermore, C# 4.0 was just recently released. Even though they call this C# 5 (marketing), I think it is more like the 2.0/3.0/3.5 pattern we saw in the past (hence no *actually* new runtime version).

That is my absolute favorite session of any PDC so far. I am impressed by how the async enhancements make doing async calls so much easier. Finally the average dev will use async as the standard way of method invocation.

I don't feel that bcooley's post was nasty or negative. It really depends on your perception of .NET 4.5. The dev team staff had probably been reduced, so they had to focus on a few important enhancements that fit the new Windows 8 strategy and had to leave out many other features on their list.


The async stuff was nothing new to people who follow C# 5, but the Roslyn project pretty much blew my mind. The interactive prompt is nice. My favourite part of the demo is the custom refactoring methods.
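For readers who haven't followed the C# 5 previews, the await pattern being discussed looks roughly like this minimal sketch against .NET 4.5's WebClient (the URL is just a placeholder):

```csharp
using System;
using System.Net;
using System.Threading.Tasks;

class AsyncDemo
{
    // With async/await the method reads like synchronous code;
    // the compiler rewrites it into a state machine behind the scenes.
    static async Task<int> GetPageLengthAsync(string url)
    {
        using (var client = new WebClient())
        {
            string html = await client.DownloadStringTaskAsync(url);
            return html.Length;
        }
    }

    static void Main()
    {
        // Blocking here only because a console Main can't itself be
        // async in C# 5; UI code would await instead of calling .Result.
        int length = GetPageLengthAsync("http://example.com").Result;
        Console.WriteLine(length);
    }
}
```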

Seriously... C# vNext is going to be nothing more than access to the compiler so that we can build our own refactorings for Visual Studio?

It's not only about refactoring. As I understand it, Roslyn is a first step towards finally enabling (useable) meta-programming in C# by giving full access to the AST.
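As a rough illustration of what "full access to the AST" means, here is a sketch using the current Microsoft.CodeAnalysis.CSharp package; the 2011 CTP used different Roslyn.Compilers.* namespaces, so treat the exact type names as illustrative:

```csharp
using System;
using System.Linq;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;

class AstDemo
{
    static void Main()
    {
        var tree = CSharpSyntaxTree.ParseText(
            "class C { void M() { Console.WriteLine(\"hi\"); } }");

        // The syntax tree is an ordinary object model you can walk,
        // query with LINQ, and rewrite: the basis for custom
        // refactorings and metaprogramming-style code generation.
        var methods = tree.GetRoot()
                          .DescendantNodes()
                          .OfType<MethodDeclarationSyntax>();

        foreach (var m in methods)
            Console.WriteLine(m.Identifier.Text);
    }
}
```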

Do we have access to the full power of the CPU or GPU in .NET? No. Do we have auto-vectorization in the .NET runtime? Do we have access to vector instructions in any form? Do we have proper numeric generic support for games or scientific computing? Has anyone spent any time whatsoever on narrowing the gap between native and managed? How about something on the order of Mono's LLVM optimizing native AOT compiler???

These are all good points, and there surely remains a lot to be done by both the CLR/CLI and C# people. I too am still dreaming of a tracing JIT for the CLR, for instance...

As a scientist I would like to support "Roslyn". I see a lot of scientific applications for that project. From my point of view it would be nice to have the following features in C#:

- Maybe it is possible to add an "Assembly.Unload" method to .NET. "Roslyn" allows you to write very dynamic code (which is very important to me), but it is not possible to unload an assembly. The lack of that feature makes component programming very hard. I understand the reasons (http://blogs.msdn.com/b/jasonz/archive/2004/05/31/145105.aspx) and I know that there is "System.AddIn", but this limitation looks very strange. If that is not possible, then maybe you can add more cross-AppDomain communication mechanisms. At this moment, writing dynamic code with different components in .NET is a painful experience. It is possible to unload unmanaged components (using LoadLibrary and FreeLibrary from kernel32.dll) but I cannot do that with managed components. That looks very strange.

- The biggest source of performance problems in C# is the garbage collector. The GC is usually very good, but in some situations a developer would like more control over how memory is managed. Maybe it would be possible to disable the GC for some parts of the code and let the developer manage memory in those cases.

- It would be good to have some way to write low-level code (assembler/machine code) for the parts of a program that require a lot of performance (for SSE or the GPU). Maybe you could disable the GC for that part of the code and treat it as a "black box". If something bad happens there, that is the developer's problem.

- Maybe you could add support for BigDecimal (I know that I can use J#, but it would be nice to have it in C#). Since it is already in J#, it shouldn't be too difficult.
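On the first point above: the AppDomain-based workaround for unloading looks roughly like this sketch (the plugin assembly and type names are hypothetical):

```csharp
using System;

class PluginHost
{
    static void Main()
    {
        // An assembly cannot be unloaded individually; the only option
        // in the classic CLR is to load it into a separate AppDomain
        // and unload that whole domain when you are done.
        AppDomain domain = AppDomain.CreateDomain("PluginDomain");
        try
        {
            // "MyPlugin" / "MyPlugin.Entry" are made-up names for
            // illustration; the type must derive from MarshalByRefObject
            // so calls cross the domain boundary by proxy, not by copy.
            var plugin = (MarshalByRefObject)domain.CreateInstanceAndUnwrap(
                "MyPlugin", "MyPlugin.Entry");
            // ... call into the plugin through the proxy ...
        }
        finally
        {
            AppDomain.Unload(domain);  // releases the plugin's assemblies
        }
    }
}
```

Every call through the proxy pays remoting/marshalling overhead, which is why the poster calls this impractical compared to FreeLibrary for native DLLs.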

I spent a bit of time hovering over the keyboard before clicking "Comment", even returned a while later to re-read the post, and decided to leave the post as is. It just needed to be said.

If you check Microsoft Connect, you'll see that there are quite a few outstanding issues related to performance and access to the CPU and GPU. These issues date from as long ago as 2004 (lack of support for numerical operations in generics). There is still really no way to fully or easily access the power of either the CPU or GPU through .NET without dropping into native code, and even there you're dealing with thunking issues. This matters because .NET is the primary development platform for mobile devices, and these devices are underpowered relative to the desktop.

I fully appreciate that the old C/C++ infrastructure for the VB/C# compiler needs to be replaced, and that there is a nice upside to this in that it simplifies support for refactoring and other IDE features. However, it's simply far more important to address the performance issues with the platform as a whole, and to recognize that its primary use is no longer necessarily on desktops with desktop-level CPU power, and not necessarily for writing desktop, web, or LOB apps.

That, and I suspect that the core desktop and application (Office) teams also put their collective feet down and insisted that the future platform not sacrifice native-level performance.

There's really so much that could be done to improve managed performance: eliminating some of the thunking overhead by re-engineering the native compilers to be more managed-friendly, or moving operations that currently require native/managed interop into C#/VB. The biggest optimization that was never done was to AOT/NGEN managed code and add it to the cache using an optimizing compiler. Mono has done this for mobile platforms with LLVM; Microsoft has not. I really don't understand the benefit of running the JIT on all code each time it's run. It slows performance and disallows serious optimization.

There's also the issue of a very poor interop experience with C and C++ code from managed code. P/Invoke works for C code but requires transliteration of everything in the header files; it never worked for C++, and that puts most C++ library access out of reach for managed code. No effort has really been put into solving these issues, or even admitting that they exist.
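The transliteration burden looks like this in practice: every C prototype has to be restated by hand as a [DllImport] signature. A small sketch against a real Win32 API:

```csharp
using System;
using System.Runtime.InteropServices;
using System.Text;

class NativeInterop
{
    // The C declaration in the Windows headers is:
    //   DWORD GetCurrentDirectory(DWORD nBufferLength, LPTSTR lpBuffer);
    // It must be transliterated by hand into a managed signature.
    // Workable for C, but there is no equivalent mechanism for C++
    // classes, templates, or overloads.
    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern uint GetCurrentDirectory(uint nBufferLength,
                                           StringBuilder lpBuffer);

    static void Main()
    {
        var buffer = new StringBuilder(260);
        uint len = GetCurrentDirectory((uint)buffer.Capacity, buffer);
        Console.WriteLine(buffer.ToString(0, (int)len));
    }
}
```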

In any case, it's really a lost opportunity to push what is really a better platform towards covering more of the development stack, at least down into large complex applications, scientific work, or games. Instead the core and app teams decided to fix the module and platform issues with C++ and move forward.

@bcooley: I think that interoperability between managed and native code just made a big leap forward with the introduction of WinRT components. I just watched a Build session where they demoed writing a WinRT component in C# and consuming it in JavaScript. Herb Sutter showed how to write a WinRT component in C++. It seems to be clean, easy, and performant in all languages, managed and native. For me that is the most important aspect of WinRT: write the 10% of your app that is really performance-critical in C++, maybe using the GPU and automatic vectorization, wrap it in a WinRT component, and consume it seamlessly in the rest of your application, written for best productivity in C# and .NET. No COM, no interop, no P/Invoke. It would be a dream if this worked on Win7 and XP too (not WinRT itself, only WinRT components). Bernd

WinRT components look really interesting. I hope that I will be able to unload WinRT components and update them at runtime. In .NET that is not possible at the moment (only by using AppDomains, which is not a very practical solution).

@rab36: I agree. Microsoft has pulled off unifying its previously fractured platform again with WinRT, and looks to have positioned itself well for the future. Unfortunately for .NET, that unification revolves around native code and C++, not .NET. Sure, there is nothing preventing anyone from writing apps in .NET, but it is not at the core of the platform, and it will not offer either the performance or the easy interoperability with traditional C++ libraries. The platform is built on C++. The reason for this is simple: performance and interoperability with existing C and C++ code.

As a comparison, take a look at the iOS/OS X platform, which revolves around Objective-C. Objective-C is somewhat hobbled by its reliance on selectors, which use a form of optimized dynamic dispatch. However, Objective-C actually "is" C and C++: C and C++ code can be intermixed freely in Objective-C, you do not pay a penalty for JIT compilation, code that does not use Objective-C selectors runs with the same performance as any C/C++ function, and there is no GC penalty.

Contrast the strategies. In the mid-2000s Microsoft introduced WinFX with WPF, WCF, and other major .NET-based replacements for the native-level Windows platform, while Apple introduced Cocoa and the other NS-based "kit" libraries for OS X. The .NET Windows platform is now deprecated and on its way out in favor of the native Win 8 platform, while Cocoa became the standard platform across all Apple devices.

Native performance and tight interop with C and C++ won on both platforms. Java has been deprecated on iOS, and .NET has been put in second place (though it is certainly NOT deprecated) on Windows 8.

My post and original point was that it really didn't have to be this way.

So I'm sure Anders considered having the default for async calls be await, without having to type the await keyword, allowing for a smaller, more concise format. I know this would not work for backwards compatibility, but it could be turned on with a project setting or something of the sort. Then there would be a new keyword to use if you don't want to await. I have spent only two seconds thinking about this, so I'm sure there are all sorts of reasons why this is a bad idea that I have not thought of yet. Could you speak on some of the decisions around the await keyword? Thanks.
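To make the idea concrete, here is a sketch contrasting today's explicit await with the proposed implicit default; the "nowait" keyword is purely hypothetical, invented here for illustration:

```csharp
using System.Threading.Tasks;

class AwaitDefaultSketch
{
    // Stub async operation, just so the sketch is self-contained.
    static Task<string> FetchAsync()
    {
        return Task.FromResult("data");
    }

    // Today (C# 5): awaiting is explicit.
    static async Task ShowAsync()
    {
        string data = await FetchAsync();     // suspends here
        Task<string> pending = FetchAsync();  // explicitly not awaited
    }

    // The proposal above (hypothetical syntax, NOT valid C#): calls to
    // async methods would await by default, with a new keyword, say
    // "nowait", to opt out and get the raw Task:
    //
    //   string data = FetchAsync();              // implicit await
    //   Task<string> pending = nowait FetchAsync();
}
```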

I am a great fan of MS and their products, but I hate the disgusting manner in which they present "new" features that have been around for a while. I am speaking about this Compiler as a Service thing, which has been in Mono for a few years! And they have the same REPL console as a sample in Mono! Now Anders shows it like they invented it. No, guys, you just created one more implementation, maybe an outstanding one, but you are definitely not the inventors here. I guess MS should have employed Miguel de Icaza, and could have gotten this Compiler as a Service (and possibly other cool things) years ago and saved money and effort.

Andrey: The first presentation of Compiler as a Service was at PDC 2008 (http://channel9.msdn.com/Blogs/pdc2008/TL16); after that presentation Miguel de Icaza introduced the concept in Mono. There is also the cs-script project (http://www.csscript.net/). However, both Mono and cs-script use only traditional dynamic compilation, and I do not think you can compare these products with "Roslyn", in which the whole compiler is written in C#.

I agree with bcooley. This is not metaprogramming; this is IDE programming. @Andrey: a REPL is not something difficult to implement.

@Jedrek they are very similar indeed. The whole Mono compiler has been written in C# for many, many years.

For those looking for a better language, check out Nemerle (http://nemerle.org). It is open source, functional, and object-oriented, supports true metaprogramming, and has decent support on VS 2010 and 2008.

@Jedrek as @Onur pointed out, Mono's compiler is written in C#, and that makes it cool. What is cooler about Roslyn is that it gives you intermediate results, such as the AST, so that you can manipulate them. Mono's compiler is a pretty black box. It is so tightly coupled that even if you look into the sources (I did), you can't get it out.

@Onur, I agree that a REPL is not hard to implement. I am just pointing out that MS even used the same sample that Mono has for Compiler as a Service, and to me this looks like slight hypocrisy.

If Anders talks, smart people listen. He's always over the top. He's been a hero since Delphi, a strong advocate for developers and business productivity. The only concern is that the lead soldier is diving into meta-compiler initiatives, and not taking on the elephant in the room: HTML5, JavaScript, and the effective closure of platform-specific code in favor of some TBD stateless wonderland. You can see from the video where his (albeit genius) enthusiasm lies: Roslyn. Maybe I'm misreading it and he's sticking to his guns, but those of us who advocate Microsoft need to see that Microsoft, on the web, will be more than C#/VB.NET wrappers and syntactic sugar for HTTP calls, slightly shadowing the broader Rails / Python / Google / Apple community. Anders has always stepped up when a meta-step was necessary in the industry; I'd venture that time is now. Can we get him on the case?

I see that at Microsoft there are a lot of people from universities (I am also working at a university). It looks like these people have a big influence on Microsoft's policies at this moment.

I would like to say that as a scientist I am VERY happy that "Roslyn" is moving forward.

However, I understand that many professional developers are not so happy about that.

Being able to write very dynamic code is very important from a scientific point of view (for example, it is possible to create DSLs much more easily, refactor code dynamically, etc.). It is possible to write 1,000,000 scientific papers about that ... which is very good for me and the future of the universe ...

However I am not exactly sure if this is a tool for professional developers.

Well ... we will see what happens to a Microsoft that has started to care about people at universities more than people in the enterprise.

As a person from the university, I would like to say that I like this new policy very much ...

@bcooley: “There's really so much that could be done to improve managed performance.”

Yep, with a lot of work, that few percent performance overhead C# currently has over C++ could have been almost completely removed. They chose to improve other areas that they considered more important.

“In any case, it's really a lost opportunity to push what is really a better platform towards covering more of the development stack, at least down into large complex applications, scientific work, or games. Instead the core and app teams decided to fix the module and platform issues with C++ and move forward.”

Haha, no. The real problem is that most lead people in the Windows/Office teams were left (or stayed) out of the design of .NET. They haven't supported it at all since the day it was introduced. It wouldn't matter if Hejlsberg's team eliminated the little performance difference between C# and C++; they'd just find another aspect to attack it from. They've never written a default Windows application in .NET. Why do you think they didn't? Before you say "performance", please remember that these very same people just pushed JavaScript right through as a core platform into Windows 8. Why did they do that? Because it's so f-in popular (http://tinyurl.com/3xutoh)? Because it's so much more performant than C# (hahaha)? Nope. They brought it in so they can finally attack C# from a perspective other than performance. Watch as suddenly a 10%-100%-500% performance difference stops mattering, and these teams, who never touched .NET, start using JavaScript for a lot of applications. I wonder what exactly the new keywords are going to be; probably either "portability" or "learnability".

For the careers of these high-caliber people within MS who were left (or stayed) out of the .NET world, its success is severely counter-beneficial, and they would rather see C# dead. With Sinofsky's recent advancement within the company, they began winning.

Today, the keyword is performance, and we see Sinofsky clapping over his head, crying that they have reduced Windows' standby memory consumption by 17(!!!) kilobytes from the Developer Preview to the CTP. Tomorrow, the keyword will be easy learnability, and we'll see the UI of several core Windows applications written in JavaScript, because that's such a great way to demonstrate this new (LOL!) language! The day after tomorrow... they'll find their keywords, don't worry.

Of course, I'm not saying they're all doing it consciously, but if you look at what happened in the past 10 years, it's pretty obvious that the pushback against .NET has very little to do with its actual capabilities; it's all politics.

It indeed looks like the development of C#/VB has slowed down since they began working on Roslyn, but it makes sense. They obviously can't finish a much more sophisticated compiler than the one they developed for 10 years in C++ overnight, I'd guess it might take another 2 years or even more before they can finally put the old one to sleep. After it's ready however, it'll be much cheaper for them (and possibly even for the community) to add new features to C#/VB, while providing tons of additional useful services. Don't want to discuss my ideas about them right here, because I don’t have enough time, but –if they make it right- its impact is going to be huge :-)

"Yep, with a lot of work, that few percent performance overhead C# currently has over C++ could have been almost completely removed. They chose to improve other areas that they considered more important."

@niall I understand where you're coming from, and trust me, we're on the same side as far as accepting the 3-10% loss in performance vs. native. Unfortunately there's quite a bit of native code out there, and native performance and native interoperability really do matter.

Imagine if you were an Office programmer with 2-3 million lines of C/C++ code and you were told that over the next 3 years the team would be moving exclusively to C#. Would that make sense? If you understand why it doesn't, you'll understand where .NET went wrong during Vista and the continuing problems with where it is now.

Here is a comparison of iOS/Objective-C and .NET, in terms of what Objective-C supports that .NET does not, that makes it work for Apple in the primary dev language role, the role that .NET has largely been a failure in at Microsoft.

Feature                                               Obj-C           .NET
Interoperability with C                               Yes             P/Invoke
Interoperability with C++                             Yes             No (very difficult)
Interoperability with C/C++ headers                   Yes             No
Cost of a call to native code                         None            Thunk
Optimized code generation                             Yes             No
CPU vector instructions                               Yes             No
CPU auto-vectorization                                Yes             No
Management of large/complex memory/heap objects       Yes             No
GC, ref counting, and manual memory management        Yes             GC only
Direct marshalling of memory buffers to heap objects  Yes             No (*)
Control of memory alignment                           Yes             No
Support for numeric generics                          Yes (C++)       No
Macro/practical metaprogramming                       Yes (C prepr.)  No
Large working set sizes                               No              Yes
Slow startup time                                     No              Yes

(*) Ever wonder why it takes so long for game levels to load in .NET?

-------------

This is a very long list of very important features missing from .NET vs. Objective-C. It doesn't mean that .NET isn't useful as a high-level application platform (like Java), but it does mean that it simply can't fill the role that Objective-C fills for OS X/iOS as the primary dev platform for Windows 8 or Windows Mobile. This is the reason for the resurgence of C++ at Microsoft for WinRT and mobile.

> “There are many cases where how memory is managed for large mutable datasets gives native code a very wide margin of performance advantage.”

I am aware of some of these cases; they can usually be overcome by knowing the nature of the GC well enough (plus avoiding heap allocation whenever possible). I have yet to see an application whose performance can't get close to the C++ one with the right optimizations. Of course, improving the garbage collector is always good, but the resources of the developer team are limited.

> “completely missing the support for numeric”

While the lack of generic support for numeric types isn't a great thing, this is definitely not a performance issue. If you want to make a high-performance numeric library, you'll often need to use the basic characteristics of the data type. You obviously can't make the same optimizations for integer and floating-point numbers, and often even 32- and 64-bit integers require different algorithms to most effectively solve specific operations on them. For performance, you need to use type-specific algorithms anyway.

> “B) they realized that native level performance really matters on mobile, and so does C and C++.”

Java is pretty damn successful on Android.

> “Imagine if you were an Office programmer with 2-3 million lines of C/C++ code and you were told that over the next 3 years the team would be moving exclusively to C#.”

Well, until the language has actually proved itself well enough, moving all existing code to it doesn't make sense. But the Office team wrote its new online content in JavaScript, at the time when Silverlight was having its peak (already around ~70% coverage, and no post-"shifted" madness). The Windows team has introduced several new applications since .NET was published, and none of those were written in .NET (except for Media Center maybe? but seriously...). I guess the outside communication was "we don't use it due to performance issues", but as I said, just watch as they are now opening up to JavaScript and the excuses will change; only the real intention won't.

> “Interoperability with C / C++ / headers”

I actually thought you copied this stuff out of an Objective-C marketing table, 'cause you fragmented features that are closely tied together into multiple lines several times. Anyway (I googled it and couldn't find it, so I guess you made it up this way yourself for some reason), I can't really comment on it, but I doubt this one is gonna happen: it would mean that the future characteristics of C++ (partially) define the future characteristics of C#, and would stop C# from being an independent language. Possibly in the form of some extension, some automatic wrapper generator similar to how C# now communicates with WinRT. But I bet there are tons of gotchas in making something like that work.

> “Interoperability with native heap / Management of large/complex Memory/Heap Objects / GC, ref counting, and manual mm. / Direct marshalling of Memory Bufs to Heap Objects / Control of memory alignment”

I guess allowing lower-level memory management could be beneficial in some cases. Again, the C# team needs to measure how much it would lower the current advantageous characteristics of C#, and how many resources providing this feature would require from their side, which they are obviously limited in.

> “CPU Vector Instructions / CPU Auto Vectorization”

Yep, I guess it's pretty much a matter of resources on the developer team's side again.

> “Macro/practical metaprogramming”

I bet it's not gonna happen. C# doesn't have macros by design. That has several advantages that weren't an issue 30 years ago when C++ was designed, but are much clearer to see today. Ever noticed how much better VS IntelliSense works with C# code compared to Visual C++? A huge part of the reason is macros.

> “Support for numeric generics”

Nope, C++ doesn't have support for numeric generics either; those are templates. Different breed. Generics are type-safe and support robust programming very well. Templates have whole other attributes and are useful for different scenarios.

> “Optimized Code Generation”

Oh, it's there. The compiler optimizes the code a lot. Of course it can always be improved. By the way, there are some 3rd-party tools that further optimize at least the IL as well.

> “Slow startup time”

Ngen is already there; doesn't it eliminate the difference? I never had much issue with any of my applications' startup time, and never used ngen for that purpose, so I don't know. If not, I guess this is again a matter of resources.

> “I'll also point out that the above list looks very different for Mono, proving that ‘it didn't have to be this way’.”

Mono runs applications with severely lower performance than the .NET Framework on Windows. There is really no need to comment more on this.

Anyways, in C#, features that support manageability and extensibility of large code bases will always be favored over ones that improve performance but lower the first two. Performance is the secondary goal. In C++, the order is pretty much the opposite (with a huge added legacy burden that makes it fare even worse on the first two aspects than it otherwise would). It could still get a lot closer to native performance than it is today; still, I wouldn't ever expect macros to appear in it, for example.

If you want to make a high-performance numeric library, you'll often need to use the basic characteristics of the data type. [...] For performance, you need to use type-specific algorithms anyway.

True. For solving large problems you would always want to use low-level, high-performance implementations of BLAS/LAPACK like the MKL, which is tuned to individual CPUs and cache sizes. But what generic programming should enable you to do is build easy-to-use interfaces on top of these low-level libraries, something you can do with template metaprogramming in C++ (e.g. uBLAS, FLENS), though it won't be a particularly pleasant experience.

A more typical use case for "generic numerics" on .NET would be something like "implement a generic class for small vector types". On .NET you can't really express the required type constraints (like "T is supposed to define a zero and to have a binary operator"), nor do .NET generics support the flexible typing behavior of C++ templates (structural typing) or the flexibility of Haskell's type classes. Yes, there are a few ways to still achieve the goal, but all of those workarounds either require interface dispatch or run-time code generation, both of which prevent inlining with the current generation CLR JITer.
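One of the workarounds mentioned above is the "calculator" pattern: a struct implementing an arithmetic interface per numeric type. A sketch (all names are illustrative):

```csharp
using System;

// Per-type arithmetic supplied through an interface, since C#
// generics cannot constrain "T has operator +".
interface ICalculator<T>
{
    T Zero { get; }
    T Add(T a, T b);
}

struct DoubleCalc : ICalculator<double>
{
    public double Zero { get { return 0.0; } }
    public double Add(double a, double b) { return a + b; }
}

static class Vec
{
    // With a struct type argument the JIT specializes this method and
    // avoids boxing; with a class calculator or an interface-typed
    // field you pay interface dispatch on every Add instead.
    public static T Sum<T, TCalc>(T[] xs)
        where TCalc : struct, ICalculator<T>
    {
        var c = default(TCalc);
        T acc = c.Zero;
        foreach (var x in xs)
            acc = c.Add(acc, x);
        return acc;
    }
}

class Demo
{
    static void Main()
    {
        var xs = new[] { 1.0, 2.0, 3.0 };
        Console.WriteLine(Vec.Sum<double, DoubleCalc>(xs));
    }
}
```

The caller must spell out both type arguments, which illustrates why the poster calls these workarounds awkward compared to C++ templates or Haskell type classes.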

I bet it’s not gonna happen. C# doesn’t have macros by design. It has several advantages that weren’t an issue 30 years ago when C++ was designed, but much clearer to see today. Ever noticed how much better VS intellisense is working with C# code compared to Visual C++? Huge part of the reason is macros.

It needn't (and shouldn't) be C macros, i.e. untyped text replacement. There are many languages out there that provide a sane way of metaprogramming. Have a look at Scheme, Nemerle, Boo or D, for instance. Still, I also do not expect C# to gain such abilities as a built-in language feature in the foreseeable future.

Nope, C++ doesn’t have support for numeric generics neither, those are templates. Different breed.

C++ does have support for true generic programming (by accident, admittedly) and that's what bcooley is probably referring to; it's not about what "generics" mean in the context of .NET.

Generics are type-safe and support robust programming very well.

Templates are type-safe as well, they just don't use nominal typing. If concepts had made it into C++11, they would have brought templates much closer to Haskell's type classes (see Bernardy et al., 2008).

The compiler optimizes the code a lot.

I don't know whether you're talking about the C# compiler or the JITer. The latter is the component that is critical for performance, and it lags quite a bit behind Sun's/Oracle's HotSpot JIT compiler when it comes to inlining virtual method calls, not to mention optimizing C++ compilers (of course, C++ compilers don't need to compile on the fly, so that's not really a fair comparison).

Mono runs applications with severely lower performance than the .NET Framework on Windows. Really no need to comment more on this.

Until recently, yes. I don't have up-to-date measurements, but I suspect that their new GC and the LLVM backend might change this in the not too distant future. Well, hopefully...

The async functionality in C# 5.0 and VB 11.0 looks really nice. The only thing that really disappoints me is that it completely ignores IObservable<T>, which is a shame, as it appears it would work amazingly well when paired with the Reactive Extensions. It is my opinion that if await were combined with foreach, a really simple and easy-to-follow syntax could be created to allow "enumeration" over IObservable<T> and IAsyncEnumerable<T>.
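The combination imagined here did eventually ship, years later, as C# 8's await foreach over IAsyncEnumerable<T>. A sketch of that syntax (the ReadingsAsync source is a made-up example):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

class Demo
{
    // An async iterator: yields values as they become available,
    // awaiting in between, instead of returning a completed sequence.
    static async IAsyncEnumerable<int> ReadingsAsync()
    {
        for (int i = 0; i < 3; i++)
        {
            await Task.Delay(100);  // simulate an asynchronous data source
            yield return i;
        }
    }

    static async Task Main()
    {
        // "Enumeration" over an asynchronous sequence, exactly the
        // easy-to-follow shape the comment asks for.
        await foreach (int reading in ReadingsAsync())
            Console.WriteLine(reading);
    }
}
```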

.NET has one fundamental problem: MSIL. I think that the original design of .NET was based on Java. The JRE runs on multiple platforms, so there is a need for some intermediate layer. However, .NET runs only on Windows (Mono is not a Microsoft product). I see absolutely no reason why .NET has to have any intermediate language or similar technology. Microsoft has no plans to move .NET to other platforms (OS X, Linux, etc.), so what is MSIL for? I hope that WinRT becomes the new native .NET framework, and everything should be great. Maybe in the future we will get native C# (without MSIL); that would be even better.

The platform argument isn't as important as the hardware one. MSIL allows the same binary to execute on x86-32, x86-64, IA64, PowerPC (Xbox 360/XNA) and ARM (Windows Phone 7 and Windows 8) without recompilation. A C++ application, even a WinRT Metro C++ application, will need to be recompiled to target each hardware platform*.

* Yes, I know that an x86-32 application will run just fine under WOW64 on 64-bit Windows, but it can't take advantage of the 64-bit platform by doing so.

> While the lack of generic support for numeric types isn't a great thing, this is definitely not a performance issue.

Agreed. It's a nasty papercut. But issues like these really do put off native developers. I can't say that this alone had anything to do with the move back to C++, but well who knows.
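
To illustrate the papercut: here is the kind of generic numeric code the language rejects (the method is my own example; it does not compile in any current version of C#):

```csharp
// Does NOT compile: error CS0019, "Operator '+' cannot be applied
// to operands of type 'T' and 'T'". There is no numeric constraint
// that exposes arithmetic operators on a type parameter, so generic
// numeric code must be duplicated per type or routed through slower
// indirection (e.g. interfaces or dynamic dispatch).
static T Sum<T>(T[] values)
{
    T total = default(T);
    foreach (var v in values)
        total = total + v;   // <-- rejected by the compiler
    return total;
}
```

For games and scientific computing this means maintaining separate float/double/int overloads by hand, which is exactly the friction native developers complain about.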

> "B) they realized that native-level performance really matters on mobile, and so do C and C++."

Java is pretty damn successful on Android.

Yes, and so is C++.

Lots of native apps are written on Android, including Mono itself: the Mono runtime on Android is native code (built in C/C++), and Mono applications can run natively there.

> I guess the outside communication was "we don't use it due to performance issues", but as I said, just watch as they are now opening to Javascript and the excuses will change, just the real intention won't.

I don't know anything about the internal politics of MS, but I actually do think that the internal teams would probably have preferred to use C# for both Windows components and Office.

> "Interoperability with C / C++ / headers"

I can't really comment on it, but I doubt this one is gonna happen. It would mean that the future characteristics of C++ (partially) define the future characteristics of C#, and would stop C# from being an independent language.

Mono will be adding C++ interop in an upcoming release (at least they've blogged about it). Their interop uses a C++ compiler to automatically generate interop code that can be directly consumed from C#.

>"Optimized Code Generation"

No, the code generated by the jitter is not well optimized, simply because there is no time to optimize it. .NET's jitter does not do hotspot optimization, and NGEN does not optimize.

> Anyways, in C#, features that support manageability and extensibility of large code bases will always be favored over ones that improve performance but lower the first two. Performance is the secondary goal.

Well, that's too bad for Microsoft's .NET, I guess. I wonder how having to move the Windows 8 platform to C++ and JavaScript will help manageability and extensibility?

@bcooley:

> "I don't know anything about the internal politics of MS, but I actually do think that the internal teams would probably have preferred to use C# for both Windows components and Office."

What? If the internal teams had preferred C#, they would have used C#. They've never used .NET. Not for Office, which is very, very close to a LOB application (an area in which .NET became an enormous success outside MS); not for the offline client of Windows Mail, not for Live Messenger, not for WordPad... nothing. You can't seriously believe that the reason behind this is the tiny performance overhead.

Yes, there are people in the Windows teams who would prefer .NET, but the key people, who actually decide which technology to use, are severely biased against .NET. They never contributed anything to .NET; rather the opposite, they suppressed it where they could. Generally, a stronger .NET orientation within MS means stronger positions for the leaders who were focused on it, and thus relatively weaker positions for leaders focusing on different areas (you could very well say, sometimes focusing AGAINST .NET). The reason behind this temporary "C++ Renaissance" is that these people with anti-.NET interests succeeded in convincing the upper management of their current goals.

> "Well, that's too bad for Microsoft's .NET I guess. I wonder how having to move the Windows 8 platform to C++ and JavaScript will help manageability and extensibility?"

You're joking, right? Why do you think those people (currently bitching about any performance overhead all the time) pushed JavaScript through as a core platform into Windows? Why do you think .NET was and will always be a taboo for them, but not JavaScript? Sure, it has nothing to do with the fact that it's something THEY introduced into Windows, and that its success doesn't carry the threat of putting different divisions/people at higher positions in the eyes of the management.

Again, performance is just the current boogeyman they can use against .NET. Once it's fixed, they WILL find something else (and to prefer JavaScript over C#, they'll find other reasons well before that).

I am not saying performance shouldn't be focused on more (although C# should definitely keep code quality as its primary aspect). I'm just saying that in reality, performance has little to do with what's happening inside Microsoft today. Still, better performance would mean a smaller vulnerable surface for C#, so they should indeed put some more effort into it now. Just keep in mind that it's senseless to sacrifice too much for this area, because even if it's solved, >95% of the people bashing C# for its performance now will definitely find something else to keep on bashing C# for.

Funny that Windows Metro won't officially support 3D graphics directly from C#, while Sony is making C# the language of the PlayStation Suite SDK, that'll be available for Android tablets (plus smartphones, PS Vita and PS3).