C99 acknowledged at last as Microsoft lays out its path to C++14

Full conformance with C++ standards, and maybe an end to Visual Studio versions.

At its BUILD developer conference in San Francisco, Microsoft developer and C++ Committee chair Herb Sutter talked about what Visual Studio users can look forward to over the coming years.

Microsoft is transitioning to a new policy of rapid releases. The plans aren't as aggressive as the rapid releases of Firefox and Chrome, which see new versions released every six weeks or so, but a set of roughly annual releases is still a major change from Redmond's historic practice of making major releases every three years.

These rapid releases will allow the company's C++ team to deliver substantial functional updates to their compiler and libraries. In the past, there have been minor updates to these tools (shipping in service packs or feature updates), but these have been conservative to ensure that there are no breaking changes. Microsoft doesn't want a program that compiles and works correctly in, say, Visual Studio 2012 to stop working in Visual Studio 2012 Update 1.

Combined with the three-year span between major updates, Microsoft's schedule has caused Visual Studio to be slow in catching up with new versions of the C++ standard.

Now, Microsoft will ship a new version of the tools every year or so. This will mean that, at least on an annual basis, the company can ship breaking changes if that's necessary to ensure greater standard compliance.

In between these annual releases, there will be conservative updates that will fix bugs and add minor, non-breaking functional changes as well as Community Technology Preview (CTP) releases. These CTPs will give a taste of what's to come in the next annual release. They may include breaking changes, and they won't generally include full IDE support (for example, IntelliSense may not handle new features that the compiler understands).

Sutter also outlined Microsoft's ambition to comply with both the current C++11 standard and the forthcoming update to that spec, which should be completed in 2014.

C++14 is designed to "complete" C++11. It rounds out the specification, filling gaps and omissions, and adds extra features based on user experience with C++11. As a result of this, Sutter said that Microsoft would end up implementing and shipping some C++14 features (those that are particularly important to making the most of C++11) before all of C++11 is implemented.

The first of these annual updates will be Visual Studio 2013, which will ship later this year. This release will include some features that are currently available as a CTP for Visual Studio 2012 and also available in the preview of version 2013 that was released on Wednesday, including variadic templates, initializer lists, and raw string literals. The final release will add further C++11 features, such as non-static data member initializers and the defaulting and deleting of special member functions.
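For readers who haven't tracked C++11 closely, here is a minimal sketch of the features named above; the types and values are invented purely for illustration:

```cpp
#include <iostream>
#include <string>
#include <vector>

// Variadic template: a function template taking any number of arguments.
template <typename... Args>
std::size_t count_args(Args... args) {
    return sizeof...(args);
}

struct Widget {
    int id = 42;                     // non-static data member initializer
    std::string name = "unnamed";

    Widget() = default;              // explicitly defaulted constructor
    Widget(const Widget&) = delete;  // copying explicitly deleted
};

int main() {
    Widget w;                                // id == 42, name == "unnamed"
    std::vector<int> v = {1, 2, 3};          // initializer list
    std::string path = R"(C:\temp\a.txt)";   // raw string literal: no escapes
    std::cout << count_args(1, 'a', 2.0)     // prints 3
              << " " << w.id << " " << v.size() << " " << path << "\n";
}
```

The first three features are the ones in the CTP and the 2013 preview; the member initializers and defaulting/deleting are among the additions promised for the final release.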

Both the current preview and the RTM will also include a few C++14 library features; the preview throws in a handful of utility functions for instantiating objects and creating iterators, and the RTM version will add type aliases using C++11's alias syntax.
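The article doesn't name these library features, but the obvious candidates are std::make_unique (instantiating objects), the std::cbegin/std::cend free functions (creating iterators), and the _t alias templates for the type traits; treat that mapping as my assumption. A quick sketch:

```cpp
#include <iterator>
#include <memory>
#include <type_traits>
#include <vector>

int main() {
    // Object-instantiation helper: pairs 'new' with unique ownership.
    auto v = std::make_unique<std::vector<int>>(3, 7); // {7, 7, 7}

    // Iterator-creating free functions returning const iterators.
    auto first = std::cbegin(*v);
    auto last  = std::cend(*v);

    // A type alias declared with C++11's 'using' alias syntax:
    // remove_reference_t<T> abbreviates typename remove_reference<T>::type.
    using Elem = std::remove_reference_t<decltype(*first)>;
    static_assert(std::is_same<Elem, const int>::value, "deduced element type");

    return static_cast<int>(last - first); // 3
}
```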

The final version of 2013 will also include a few C99 features. Microsoft has long avoided supporting C99, the major update to C++'s predecessor that was standardized last millennium, claiming that there was little demand for it among Visual Studio users. This was true, but only up to a point: many Windows developers weren't especially interested in C99 because they had no good tooling to support it. Open source developers, however, embraced the update, as it makes C a lot less awkward to work with.

While full C99 support is still not in the cards, Sutter said that certain specific features were going to be added, including C99's compound literals and designated initializers. These additions will mean that many open source projects will become buildable in Visual Studio without demanding significant code changes. Sutter mentioned the widely used video codec library ffmpeg as a specific example of a project that would work given these compiler improvements.
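For a sense of what those two features look like, here's a short example, written in C rather than C++ since these are C99 constructs; the codec struct is invented for illustration:

```c
#include <stdio.h>

struct codec_params {
    int width;
    int height;
    int bitrate;
};

static void describe(struct codec_params p) {
    printf("%dx%d @ %d bps\n", p.width, p.height, p.bitrate);
}

int main(void) {
    /* Designated initializer: name only the fields you set; the rest
       are zero-initialized. */
    struct codec_params cp = { .width = 1920, .height = 1080 };

    /* Compound literal: an unnamed struct value built in place, handy
       for one-off argument passing. */
    describe((struct codec_params){ .width = 640, .height = 480,
                                    .bitrate = 500000 });
    describe(cp);
    return 0;
}
```

Code in exactly this style is pervasive in ffmpeg, which is why these two features in particular unlock so many open source projects.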

After 2013 is released, a CTP will deliver a bunch more features; C++14's generic lambdas and return type deduction are likely to be included, along with a further selection of C++11 features. The remaining C++11 and C++14 features will be implemented in subsequent releases (as will a couple of C++98 features that Visual Studio doesn't quite get right).
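For a sense of what those two C++14 features buy, a minimal sketch (my own example, not Microsoft's):

```cpp
#include <algorithm>
#include <iostream>
#include <vector>

// Return type deduction (C++14): plain 'auto' with no trailing return type.
auto twice(int x) {
    return x * 2;
}

int main() {
    std::vector<int> v = {3, 1, 2};

    // Generic lambda (C++14): 'auto' parameters make the call operator a
    // template, so one lambda works for any comparable argument types.
    std::sort(v.begin(), v.end(),
              [](const auto& a, const auto& b) { return a < b; });

    std::cout << twice(v.front()) << "\n"; // v is now {1, 2, 3}; prints 2
}
```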

In addition to the main C++ standard, which covers the language and core library, a number of subgroups have been created to produce specs for specific features, such as filesystem handling, concepts, database access, and so on. Microsoft is promising to implement these libraries as and when they become available, with concepts likely to be one of the first.
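The filesystem specification is derived from Boost.Filesystem, so the eventual standard library should look a lot like the example below, which uses the Boost API directly since the standard names weren't final at the time of writing:

```cpp
#include <boost/filesystem.hpp>
#include <iostream>

namespace fs = boost::filesystem;

int main() {
    // Walk the current directory, reporting each regular file and its size.
    for (fs::directory_iterator it("."), end; it != end; ++it) {
        if (fs::is_regular_file(it->status())) {
            std::cout << it->path().filename() << " : "
                      << fs::file_size(it->path()) << " bytes\n";
        }
    }
}
```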

Microsoft's C++ roadmap. I want generic lambdas so bad.

Apart from its standards support efforts, Microsoft said that it was going to work on some non-standard features: specifically, support for asynchronous programming that's comparable to what the company already offers in C#.

Smartly preempting any complaints that Microsoft was trying to bastardize or corrupt the language with its own non-standard extensions, Sutter pointed to similar efforts by g++, clang, and even past work by Microsoft, where compiler developers created experimental extensions that subsequently fed back into the standardization process and helped shape the language's future. clang developers, for example, are working on a module system that will inform the work of the C++ modules subgroup. clang also led with an experimental implementation of generic lambdas.

Microsoft intends to go a similar route with its asynchronous extensions: it will build an implementation and collect developer feedback, and use this as a proof of concept for the standardization process.
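Microsoft's existing building block here is the Parallel Patterns Library, where asynchronous work is expressed as concurrency::task objects and continuations are chained explicitly with .then(); the proposed language extensions would let the compiler generate that chaining from straight-line, C#-style code. A sketch of the current task-based style (the computation is invented):

```cpp
#include <ppltasks.h>   // Microsoft's Parallel Patterns Library tasks
#include <iostream>

int main() {
    // Explicit continuation chaining: run work, then consume its result.
    concurrency::create_task([] {
        return 6 * 7;                       // stand-in for slow async work
    }).then([](int answer) {
        std::cout << "got " << answer << "\n";
    }).wait();                              // block so main doesn't exit early
}
```

Whatever keyword the extension ends up using to flatten such chains wasn't announced, so I won't guess at it here.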

The company will also strive to give regular updates on this roadmap. The first will be delivered in two months, when Microsoft hosts a second "Going Native" conference to discuss C++.

And what of the more distant future? Well, Microsoft probably won't have Visual Studio 2016, 2017, 2018, and so on. S. Somasegar, corporate vice president of the Developer Division at Microsoft, said that the long-term ambition was simply to have Visual Studio, updated on a subscription basis. This approach is consistent with the rapid release mentality, which tends to downplay version numbers and emphasize being up-to-date, and should see Microsoft delivering more functionality to more users in less time.

127 Reader Comments

Argh, it's hard enough just to learn one version of C++ thoroughly... the extra dimension added by the endless standard tweaking is a nightmare. Glad I don't have to do much C++ these days. I've got enough other new things to keep up on without having to constantly relearn the old ones too.

Great. So in a few years' time I'll need to have VS2003, VS2005, VS2008, VS2010, VS2012, VS2013, VS2014, VS2015, and finally VS (not sure what version this project used to release to prod). Fan-bleeping-tastic.

I'd prefer a slower release cadence... The Real World does not go back and update existing applications to use the latest and greatest versions of VS -- it's too expensive to fully regression test a large application just because some developer wants to use the latest version of VS for their CV's benefit.

I'd prefer a slower release cadence... The Real World does not go back and update existing applications to use the latest and greatest versions of VS -- it's too expensive to fully regression test a large application just because some developer wants to use the latest version of VS for their CV's benefit.

Plenty of us do. The newer versions also include major performance enhancements to compiled code and the new language features assist in productivity. After working out the lost savings, it's too expensive to stay on an older version when you have dozens or hundreds of highly-paid engineers whose job is to release Product vNext rather than to patch Product vNoNewCustomers.

I suppose my segment of the industry may be rather different than your own, though.

I'd prefer a slower release cadence... The Real World does not go back and update existing applications to use the latest and greatest versions of VS -- it's too expensive to fully regression test a large application just because some developer wants to use the latest version of VS for their CV's benefit.

Plenty of us do. The newer versions also include major performance enhancements to compiled code and the new language features assist in productivity. After working out the lost savings, it's too expensive to stay on an older version when you have dozens or hundreds of highly-paid engineers whose job is to release Product vNext rather than to patch Product vNoNewCustomers.

I suppose my segment of the industry may be rather different than your own, though.

I generally work with pretty dull LOB apps. There's one particular .NET 1.1 app I just can't justify upgrading -- it doesn't bring in enough revenue to justify moving it up to .NET 4+ (no generics... grrr!). I'd need to involve dozens of people for a major overhaul versus just me and a tester for a minor bugfix or new feature.

I generally subscribe to the "if it ain't broke don't fix it" mentality. There's no answer that suits every situation, though.

Smartly preempting any complaints that Microsoft was trying to bastardize or corrupt the language with its own non-standard extensions, Sutter pointed to similar efforts by g++, clang, and even past work by Microsoft, where compiler developers created experimental extensions that subsequently fed back into the standardization process and helped shape the language's future. clang developers, for example, are working on a module system that will inform the work of the C++ modules subgroup. clang also led with an experimental implementation of generic lambdas.

Many of the non-microsoft extensions tend to make sense, as opposed to microsoft's superficially appealing yet practically troublesome extensions like the "__try/__except" constructs.

Additionally, microsoft has an undeniably troublesome track record for introducing extensions to languages and protocols to not only (vendor) lock customers into their products, but also to claim ownership of the 'extended' body of work.

Not surprisingly, the number one reason why companies are trying to avoid using microsoft products these days (and favouring platforms such as linux) is to avoid vendor lock-in.

It would be interesting if the Visual Studio tools were released as cross-platform solutions targeting the same platforms on which they run, but I guess microsoft already has enough trouble supporting its own variety of platforms.

Great. So in a few years' time I'll need to have VS2003, VS2005, VS2008, VS2010, VS2012, VS2013, VS2014, VS2015, and finally VS (not sure what version this project used to release to prod). Fan-bleeping-tastic.

I'd prefer a slower release cadence... The Real World does not go back and update existing applications to use the latest and greatest versions of VS -- it's too expensive to fully regression test a large application just because some developer wants to use the latest version of VS for their CV's benefit.

Does the license for VS include earlier versions, too?

I don't think anyone expects corporations to hop compiler versions the very afternoon they come out.

But if you have code bases that only compile under VS2003, then you have code bases groaning under ten years of bit rot. Do you really want to try for eleven?

Additionally, microsoft has an undeniably troublesome track record for introducing extensions to languages and protocols to not only (vendor) lock customers into their products, but also to claim ownership of the 'extended' body of work.

Have you ever looked at how many gcc extensions there are? And how often they turn up? And how much of a PITA it is to port code that uses them?

Yet usage of them is quite common, at least any time I deal with embedded. And no two versions of gcc for different arches (or from different vendors) are the same.

Extensions aren't added for vendor lock-in. They are added to solve a specific problem that makes using a vendor's product easier, or even possible. (like using #import).

Great. So in a few years' time I'll need to have VS2003, VS2005, VS2008, VS2010, VS2012, VS2013, VS2014, VS2015, and finally VS (not sure what version this project used to release to prod). Fan-bleeping-tastic.

I'd prefer a slower release cadence... The Real World does not go back and update existing applications to use the latest and greatest versions of VS -- it's too expensive to fully regression test a large application just because some developer wants to use the latest version of VS for their CV's benefit.

Does the license for VS include earlier versions, too?

I don't think anyone expects corporations to hop compiler versions the very afternoon they come out.

But if you have code bases that only compile under VS2003, then you have code bases groaning under ten years of bit rot. Do you really want to try for eleven?

I would expect that corporations would use newer compilers for the most recent versions of their software. But, at some companies, old releases of software have to be supported. If you make a small bug fix to a 7-year-old piece of software, you don't want to update that entire release to the latest compiler. The cost would be brutal.

Of course, I wonder if Microsoft is to some extent trying to be like Apple, moving in quicker release cycles like Apple does with Xcode. I'll grant that Visual Studio has its backers, like Xcode does, but it seems like if they didn't do a yearly release cycle they'd be left in the dust, since Apple would've steadily taken away developer mindshare during those intervening years, as it has in mobile.

Also, I question the author's motive when he says Microsoft doesn't want Visual Studio 2012 Update 1 to break existing programs. It's not like any other compiler with a quicker release cycle has had any widespread problems either. Usually it's been a pretty smooth road, unless you're doing something on the fringes of the cutting edge.

So I don't get the reasoning behind cutting Microsoft slack here when the rest of the industry was updating their compilers and software at a quicker pace and Microsoft was supposedly so careful about quality that it couldn't let its development software out the gate until three years had passed. What a load of BS. It's not like Microsoft couldn't have put more resources into developing Visual Studio at a quicker pace -- they're developing Windows every two years now; they couldn't be bothered to write an update for their development software mid-cycle?

Great. So in a few years' time I'll need to have VS2003, VS2005, VS2008, VS2010, VS2012, VS2013, VS2014, VS2015, and finally VS (not sure what version this project used to release to prod). Fan-bleeping-tastic.

I'd prefer a slower release cadence... The Real World does not go back and update existing applications to use the latest and greatest versions of VS -- it's too expensive to fully regression test a large application just because some developer wants to use the latest version of VS for their CV's benefit.

Does the license for VS include earlier versions, too?

I don't think anyone expects corporations to hop compiler versions the very afternoon they come out.

But if you have code bases that only compile under VS2003, then you have code bases groaning under ten years of bit rot. Do you really want to try for eleven?

I would expect that corporations would use newer compilers for the most recent versions of their software. But, at some companies, old releases of software have to be supported. If you make a small bug fix to a 7-year-old piece of software, you don't want to update that entire release to the latest compiler. The cost would be brutal.

MS themselves do this. Support for VS2003 ends in October 2013, but support for Windows XP and Office 2003 doesn't end until April 2014, for example. Of course, MS only uses the VC2003 compiler, not the IDE or any other parts, but...

Additionally, microsoft has an undeniably troublesome track record for introducing extensions to languages and protocols to not only (vendor) lock customers into their products, but also to claim ownership of the 'extended' body of work.

Have you ever looked at how many gcc extensions there are? And how often they turn up? And how much of a PITA it is to port code that uses them?

Yet usage of them is quite common, at least any time I deal with embedded. And no two versions of gcc for different arches (or from different vendors) are the same.

Extensions aren't added for vendor lock-in. They are added to solve a specific problem that makes using a vendor's product easier, or even possible. (like using #import).

Wake up, dude. Many of the gcc/gnu extensions are provided as conveniences, and I've never really encountered an inability to work around them.

I can't say microsoft's extensions are necessarily required; however, microsoft goes a long way to encourage use of its extensions, many of its users blindly follow that encouragement, and the result clearly becomes something microsoft-platform-specific. The lack of C99 support all this time should suggest how microsoft operates, regardless of its stated reasons (or lies) for not having provided it.

And, as I've previously stated, microsoft has demonstrated that it will do everything in its power to make its extensions proprietary. Even when microsoft releases extensions or products that are non-proprietary, they always seem to depend on something else from microsoft that is proprietary. Any gcc or gnu extension is, by definition, non-proprietary.

I would expect that corporations would use newer compilers for the most recent versions of their software. But, at some companies, old releases of software have to be supported. If you make a small bug fix to a 7-year-old piece of software, you don't want to update that entire release to the latest compiler. The cost would be brutal.

Many corporations tend to follow the "if it ain't broke, don't fix it" philosophy. Switching compilers could involve a full retesting cycle.

I could understand not wanting to support some of the newer language features, but I could never understand why they dragged their feet on implementing C99 library functions (e.g. snprintf) or supporting things that were already present in C++ (e.g. being able to declare variables anywhere).

I could understand not wanting to support some of the newer language features, but I could never understand why they dragged their feet on implementing C99 library functions (e.g. snprintf) or supporting things that were already present in C++ (e.g. being able to declare variables anywhere).

Because you're not supposed to be compiling any new C code, you knuckle-dragging neanderthal.

I've been following C++'s evolution with a mix of bewilderment, awe, and horror. The language has become excruciatingly complex; some of my peers say unnecessarily so. Some of the template stuff is so convoluted that not even the compiler is able to issue meaningful error messages when something is amiss.

On the other hand, I cannot think of any other language that has achieved C++'s expressive power and abstractive depth without sacrificing performance. And for that I'm profoundly grateful to Herb Sutter, Bjarne, and the C++ standards committee, who have remained faithful to C++'s core values.

Argh, it's hard enough just to learn one version of C++ thoroughly... the extra dimension added by the endless standard tweaking is a nightmare. Glad I don't have to do much C++ these days. I've got enough other new things to keep up on without having to constantly relearn the old ones too.

Endless tweaking? Microsoft have finally joined the standards bandwagon, so now you can actually go and learn C++ without having to deal with Microsoft'isms as you had to in the past. If I were a developer I would be friggin' overjoyed knowing my skill set is now transferable to other jobs without having to learn a new array of voodoo just to get things working.

As for others here whining about a subscription model -- most developers are already on a subscription model, so the idea of ever buying a one-off retail version is a non-issue to begin with. At the end of the day, don't view these as 'new releases' but as a rolling release that rolls up bug fixes and features into one package. The benefit is that rather than batching up large changes into a single release, they can deploy improvements when they're ready and mature instead of forcing them out for an artificially designated release date, only to spend the next six months furiously deploying bug fixes to address the numerous features that were inadequately tested due to the time frame imposed upon them.

I would expect that corporations would use newer compilers for the most recent versions of their software. But, at some companies, old releases of software have to be supported. If you make a small bug fix to a 7-year-old piece of software, you don't want to update that entire release to the latest compiler. The cost would be brutal.

Many corporations tend to follow the "if it ain't broke, don't fix it" philosophy. Switching compilers could involve a full retesting cycle.

Yes, but the process is made a lot easier by actually reading the documentation and finding out why things are breaking in the first place; that's really what's failing here: effort. Things generally break for a reason. That's why, when you convert from one version to another, the first thing you get is a big list of warnings. Often it's a huge indicator of things that were broken in the first place, which your original testing never caught because everything seemed to function as you expected.