I only looked into this briefly, and it didn't seem like something only I am dealing with; hopefully it isn't.

I have noticed with C++ projects (even CLR ones) that build times increase as the project gets bigger, which makes sense I guess. This is in comparison to C#, where build times are exceptionally quick regardless of how big a project gets. For example, I am currently working on a project that's approximately 95% C# with multiple class libraries, control libraries and unit tests, and these all compile very quickly.

However, C++ projects take a while even when they contain very little, regardless of whether I have pre-compiled headers on or not. Now, I understand that the more external dependencies I have, the longer it will take to compile (or at least that's what I thought the reason was), but I can't actually work out what makes C++ compilation slower. I had always assumed C++ would compile faster than managed languages.

Assuming this is fairly normal and standard, what exactly causes C++ to compile noticeably slower than C#, regardless of whether I'm compiling a full project or an individual class? I remember having a similar issue with F# in VS 2010, but that isn't so much the case in 2012.

Oh, this is also in VS 2010/2012 with default project settings; not sure if that matters or not.

Are you using lots of templates yourself or boost libraries that are essentially template (i.e. all in header files) libraries?

Template instantiation can be slow. If your own code uses lots of templates, try using explicit instantiation. Otherwise, if you're using something like boost::spirit et al., then you just have to expect long compile times. There are things you can do to mitigate it, but anything that does extensive template meta-programming is just really slow to build.
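As a rough sketch of explicit instantiation (the template and names below are illustrative, not from this thread): the idea is that the template body gets compiled in exactly one translation unit, while every other unit only sees a declaration and skips the instantiation work.

```cpp
#include <cassert>

// A header-style template; normally every .cpp that uses it would
// instantiate it again, repeating the work per translation unit.
template <typename T>
T clamp_to_positive(T v) {
    return v < T(0) ? T(0) : v;
}

// Explicit instantiation *definition*: in a real project this line
// lives in exactly one .cpp file, so the body is compiled once.
template int clamp_to_positive<int>(int);

// Other translation units would then carry an explicit instantiation
// *declaration* to suppress their own copies:
//   extern template int clamp_to_positive<int>(int);
```

The win grows with the number of translation units that use the template and with how heavy the template body is.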

Many books have been written about improving C++ build times. Exactly which of the thousands of slow patterns your code uses is something we cannot tell. Many powerful and useful design patterns have slow compile times in C++ compilers.

C++ compile times are slow. They will always be slow due to the design of the language.

For our C++ code base a full optimized rebuild takes about 3 hours; fortunately the app is segmented into libraries and such, so building a single library only takes about 4 minutes.

Many books have been written about improving C++ build times. Exactly which of the thousands of slow patterns your code uses is something we cannot tell. Many powerful and useful design patterns have slow compile times in C++ compilers.

C++ compile times are slow. They will always be slow due to the design of the language.

I'm not sure why you say that. There's work to add support for modules to C++.

As above, C++ is just designed badly with regard to compilation/linking when compared to newer languages.

Books like "Large-Scale C++ Software Design" will teach you how to mitigate long compile times with sensible use of the language.
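One technique such books cover is the pimpl idiom, sketched below with hypothetical names: the header exposes only a pointer to an incomplete type, so client code does not recompile when the private details change.

```cpp
#include <cassert>
#include <memory>

// --- widget.h (sketch) ---------------------------------------------
// Clients include only this; editing the private members over in
// widget.cpp does not force them to recompile.
class Widget {
public:
    Widget();
    ~Widget();
    int value() const;
private:
    struct Impl;                  // incomplete type: no extra headers
    std::unique_ptr<Impl> impl_;
};

// --- widget.cpp (sketch) -------------------------------------------
struct Widget::Impl {
    int value = 42;               // private details hidden from the header
};

Widget::Widget() : impl_(new Impl) {}
Widget::~Widget() = default;      // defined where Impl is complete
int Widget::value() const { return impl_->value; }

// Small helper so the sketch can be exercised end to end.
inline int widget_value() {
    Widget w;
    return w.value();
}
```

The destructor must be defined in the .cpp file, after Impl is complete, for std::unique_ptr to work with the incomplete type in the header.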

MSVC in particular has a wonderful "incremental linking" option, which can drastically reduce your link times. However, it's extremely hard to convince VS to actually perform this task, even after enabling it (it usually just does nothing!). You've actually got to get rid of all your libraries and add all your source files into the main EXE's project, OR use the little-documented "Use library dependency inputs" option, which links against your library's input OBJ files instead of its resulting LIB file. Check out this, and the previous entries: http://www.altdevblo...the-holy-grail/

For our C++ code base a full optimized rebuild takes about 3 hours

Is that debug/development/shipping builds for 3 different platforms? That sounds ridiculous TBH. A rebuild for a single target shouldn't grow longer than a coffee break ;) if it does, your lead needs a good prodding to fix things.

I'm not sure why you say that. There's work to add support for modules to C++.

Don't hold your breath. Agreement on the C++11 spec was only 10 years late...

Is that debug/development/shipping builds for 3 different platforms? That sounds ridiculous TBH. A rebuild for a single target shouldn't grow longer than a coffee break ;) if it does, your lead needs a good prodding to fix things.

No, our build times are quite reasonable for the games in our studio.

Since almost all development is done in the smaller modules, a rebuild of the module only takes about 4 minutes, which fits your coffee break description. Most small edits take under a minute on the PC build, or when Edit and Continue is working it takes effect instantly (Yay Microsoft for Edit and Continue!) We also employ tools like Incredibuild so build times aren't painful for the programmers.

The full rebuild on build servers is the worst-case scenario, and yes, it really does take about three hours. Each platform is done in parallel on the build servers; it gets run directly and doesn't use Incredibuild when built on the build machines. Optimizations on those final builds are turned up to maximum, including a bunch of incredibly slow program-wide optimizations.

I'm not sure why you say that. There's work to add support for modules to C++.

While that would be nifty, one of two things needs to happen if module support is added:

Legacy compilation is supported. Then you're still in largely the same boat. The compiler can't make shortcuts because it needs to deal with the legacy gotchas that make it horrible today.

Legacy compilation is not supported. Then the expansive pile of existing code that is one of C++'s few strengths is useless. Then you get to wait N years for all of the library writers to modularize their code.

And that's after all of the spec writers settle on a good way to bolt modules onto the language (while still supporting template behaviors? good luck), and after all of the compiler writers actually implement the behavior in something vaguely resembling a standard way.

So yes, I suppose that it's possible in some far flung future for C++ to get module support that makes compilation times shorter. But what do you think its competition will be doing during that time?

In C++, each translation unit (typically one .cpp file) has to go through all nine phases of translation over the .cpp file itself and all of the headers it includes. #including <iostream> results in about 40,000 lines of code that have to be processed.

Due to things like operator and function overloading and templates, all of the syntax and semantics of a construct entirely depends on all of the code that comes before it, so there aren't very many shortcuts a compiler can use.

Each time a file is #included it can result in different code being generated, so every include file has to be processed independently for every translation unit; and if it is included more than once in a translation unit, some of the processing has to be done multiple times for that translation unit, even if you use #ifndef guards.

A 1000 line C++ program can easily result in millions of lines of code actually being processed by the compiler.

And then there are the various compiler-specific optimizations that may have to process the resulting data structures thousands of times.

While there are some things that could be done to the language to make it inherently faster to compile, I don't think it's a bad design. From the very beginning C++ was meant to be a language that emphasizes runtime performance and flexibility over compiler performance. Languages like C# make specific trade-offs of language features to improve build times. Neither is right or wrong; they're different tools for different jobs.

While that would be nifty, one of two things needs to happen if module support is added:

Legacy compilation is supported. Then you're still in largely the same boat. The compiler can't make shortcuts because it needs to deal with the legacy gotchas that make it horrible today.

Legacy compilation is not supported. Then the expansive pile of existing code that is one of C++'s few strengths is useless. Then you get to wait N years for all of the library writers to modularize their code.

And that's after all of the spec writers settle on a good way to bolt modules onto the language (while still supporting template behaviors? good luck), and after all of the compiler writers actually implement the behavior in something vaguely resembling a standard way.

I believe they've decided on the first option in terms of supporting legacy compilation. The legacy issue is difficult, but I'm not sure there's a better way. There could be an option in the compiler, if you're working on a newer project, to provide the faster build times.

Is that debug/development/shipping builds for 3 different platforms? That sounds ridiculous TBH. A rebuild for a single target shouldn't grow longer than a coffee break ;) if it does, your lead needs a good prodding to fix things.

No, our build times are quite reasonable for the games in our studio.

Since almost all development is done in the smaller modules, a rebuild of the module only takes about 4 minutes, which fits your coffee break description. Most small edits take under a minute on the PC build, or when Edit and Continue is working it takes effect instantly (Yay Microsoft for Edit and Continue!) We also employ tools like Incredibuild so build times aren't painful for the programmers.

The full rebuild on build servers is the worst-case scenario, and yes, it really does take about three hours. Each platform is done in parallel on the build servers; it gets run directly and doesn't use Incredibuild when built on the build machines. Optimizations on those final builds are turned up to maximum, including a bunch of incredibly slow program-wide optimizations.

What's the reason for not using Incredibuild on those machines? Building a release, or close-to-disc, version of your game should be part of your build-monitoring process.

Languages like C# make specific trade-offs of language features to improve build times. Neither is right or wrong, they're different tools for different jobs.

Also, most managed languages that need to be compiled actually do this while you are writing the code. This is what makes C# compilation so blindingly fast.

Also, most managed languages that need to be compiled actually do this while you are writing the code. This is what makes C# compilation so blindingly fast.

C# compilation is still very fast without this. I use an embedded C# compiler in my game engine to build cs files on launch and I can build a whole directory full of dozens of cs files in less than 20 seconds.

(OT) XCode 4 builds C++ code while you're writing it, and it's really annoying, especially since they decided this feature was so great that there was no need to be able to initiate a compile of an individual source file in a project any more :/ I had to buy a new top of the line Mac Pro just so I wouldn't have to wait 10 minutes every time I tweaked a template method to make sure it wasn't going to break my build.

The dominating reason for slow compile times is the preprocessor, no question. The preprocessor's #include directive can mean that compiling a 100-line file requires processing several orders of magnitude more code, and #if... directives make it very hard to cache the results of processing a file.
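One common mitigation for the #include cost (the Engine/Logger names below are hypothetical, not from the thread) is to prefer forward declarations in headers: a pointer or reference member only needs the type's name, not its full definition, so the header pulls in far less text.

```cpp
#include <cassert>

// --- engine.h (sketch) ---------------------------------------------
class Logger;                 // forward declaration: no #include "logger.h"

class Engine {
public:
    explicit Engine(Logger* l) : logger_(l) {}
    Logger* logger() const { return logger_; }
private:
    Logger* logger_;          // a pointer is fine with an incomplete type
};

// --- only engine.cpp would need the full definition ----------------
class Logger {};              // given here just so the sketch compiles

// Small helper so the sketch can be exercised end to end.
inline bool engine_wires_logger() {
    Logger lg;
    Engine e(&lg);
    return e.logger() == &lg;
}
```

Every .cpp that includes engine.h then avoids re-parsing the logger header and everything it includes in turn.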

There are also other issues; C++ compilers do more than C# ones. Some parts of C# compilation are deferred until runtime (JIT), and the Microsoft C# compiler doesn't optimize nearly as aggressively as their C++ compiler or g++. For example, it never performs tail-call optimization (there's an IL 'tail' instruction, which the F# compiler emits but the C# one doesn't). This is minor in comparison to the preprocessor issue, though.

A good number of methods for speeding up compile times have already been mentioned above. Pre-compiled headers, unity builds, and virtual functions can also help, along with distributed compilation using distcc or Incredibuild. Although I've seen full builds of several hours at a few game studios, we've usually been able to get this down to 20 minutes or so, with the majority of changes only needing a few minutes.

Take a look at the "Modules in C++" proposal by Daveed Vandevoorde (quoted below); perhaps it will be able to address this issue in the future (it also addresses the backward-compatibility issues and a smooth transition from the #include world): http://www.open-std..../2012/n3347.pdf

Regarding the timing:

"There's a lot of new things to look forward to in the upcoming versions of the C++ standard. Here's a high-level overview of what transpired in Kona and what I personally think would be nice to see in the next version(s) of C++. I mention versions because the committee has decided that we'll be working on getting a short-term update to the standard, tentatively shooting for 2017 (C++1y or C++17), and a long-term update aiming for 2022 (C++22).

. . .

Modules -- not dynamic libraries (although those would be nice to have too) but more on logical grouping of symbols for replacing header files and #include. There are two different proposals on Modules for C++ and there's a Study Group formed to address this particular issue. This looks to be one of the things that might be making it into C++17."

http://www.cplusplus...-libraries.html

4.1 Improved (scalable) build times

Build times on typical evolving C++ projects are not significantly improving as hardware and compiler performance have made strides forward. To a large extent, this can be attributed to the increasing total size of header files and the increased complexity of the code they contain. (An internal project at Intel has been tracking the ratio of C++ code in ".cpp" files to the amount of code in header files: In the early nineties, header files only contained about 10% of all that project's code; a decade later, well over half the code resided in header files.) Since header files are typically included in many other files, the growth in build cycles is generally superlinear with respect to the total amount of source code. If the issue is not addressed, it is likely to become worse as the use of templates increases and more powerful declarative facilities (like concepts, contract programming, etc.) are added to the language.

Modules address this issue by replacing the textual inclusion mechanism (whose processing time is roughly proportional to the amount of code included) by a precompiled module attachment mechanism whose processing time, when properly implemented, is roughly proportional to the number of imported declarations. The property that client translation units need not be recompiled when private module definitions change can be retained.

Experience with similar mechanisms in other languages suggests that modules therefore effectively solve the issue of excessive build times.