I recently had cause to work with some Visual Studio C++ projects with the usual Debug and Release configurations, but also 'Release All' and 'Debug All', which I had never seen before.

It turns out the author of the projects has a single ALL.cpp which #includes all the other .cpp files. The *All configurations just build this one ALL.cpp file; it is excluded from the regular configurations, which instead build the individual .cpp files and never compile ALL.cpp.
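For illustration, such an ALL.cpp is typically nothing more than a list of #include directives naming the project's implementation files (the file names below are made up, not taken from the projects in question):

```cpp
// ALL.cpp -- the only file compiled in the 'Release All' / 'Debug All'
// configurations; excluded from the regular Debug/Release configurations.
// File names are hypothetical, for illustration only.
#include "Widget.cpp"
#include "WidgetFactory.cpp"
#include "Renderer.cpp"
#include "Network.cpp"
// ... and so on, for every other .cpp in the project ...
```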

I just wondered: is this a common practice? What benefits does it bring? (My first reaction was that it smelled bad.)

What kinds of pitfalls are you likely to encounter with this? One I can think of: if you have anonymous namespaces in your .cpp files, they're no longer 'private' to that file but are now visible in the other files as well?

All the projects build DLLs, so having data in anonymous namespaces wouldn't be a good idea, right? But functions would be OK?



Definitely pathological; I can only guess at the reason why anyone might want to do that (if, on the other hand, you can ask them directly, you should). Normally in C++ you want to do the opposite: keep not only implementation files but also headers well separated. (A common trap of C++ projects is "#include spaghetti", with every header file depending on every other.) Perhaps to stress-test the compiler?
– Morendil, Feb 12 '09 at 22:26

An introduction to "Unity Builds", along with benefits, disadvantages and a complete CMake integration, can be found at cheind.wordpress.com. hth, Christoph
– Christoph Heindl, Jan 3 '10 at 8:20

Our official build always needs a full rebuild, so I believe this approach could improve build performance a lot. But since the official builds are mainly consumed by devs, the PDBs generated by a unity build may be invalid for non-unity-build code. (We don't want to develop with a unity build configuration, right?)
– Baiyan Huang, Jan 12 '10 at 10:52

A completely different reason to include some implementation files in another implementation file: those files can be autogenerated. It is much easier to autogenerate an entire file than to deal with injecting changes into existing code.
– Sergey K., Jul 25 '12 at 17:32

4 Answers

It's referred to by some (and google-able) as a "Unity Build". It links insanely fast and compiles reasonably quickly as well. It's great for builds you don't need to iterate on, like a release build from a central server, but it isn't necessarily for incremental building.

It's not just because of I/O. Each single .cpp file often includes many header files. If you compile them separately, then you compile the code in the header files multiple times -- once for each .o file. If you use "Unity Build" then these header files and everything else is compiled only once.
– Frank, Feb 12 '09 at 22:57


Er... that falls under I/O. But yes, that's more precise.
– MSN, Feb 12 '09 at 23:05


Compiles are not usually I/O bound, unless you have insufficient memory. See my answer below. The disk cache works. The linked-to post spends a lot of time on sophistry explaining how bad disk I/O is, but it is irrelevant because of the disk cache. Also, compiling the code in the header files multiple times is completely different from I/O: the redundant compilation is the problem; the redundant I/O never actually happens. Some of the perils of unity builds are covered here: randomascii.wordpress.com/2014/03/22/…
– Bruce Dawson, Mar 26 '14 at 15:55

I wonder if when using CMake you could have it generate the unity build .cpp file from the list of sources fed into a target. After all, CMake already has all this information. I also wonder how all this changes when using flash memory drives instead of spinning drives, but most people have already commented that the raw I/O isn't the problem, but rather the repeated compilation. In a non-unity build, I certainly noticed an improvement with flash drives for the raw I/O.
– legalize, Oct 29 '14 at 20:19

Unity builds improve build speeds for three main reasons. The first is that all of the shared header files only need to be parsed once. Many C++ projects have a lot of header files that are included by most or all .cpp files, and the redundant parsing of these is the main cost of compilation, especially if you have many short source files. Precompiled headers can help with this cost, but usually there are a lot of header files which are not precompiled.

The next main reason that unity builds improve build speeds is that the compiler is invoked fewer times. There is some startup cost to each invocation of the compiler.

Finally, the reduction in redundant header parsing means a reduction in redundant code-gen for inlined functions, so the total size of object files is smaller, which makes linking faster.

Unity builds can also give better code-gen.

Unity builds are NOT faster because of reduced disk I/O. I have profiled many builds with xperf and I know what I'm talking about. If you have sufficient memory then the OS disk cache will avoid the redundant I/O - subsequent reads of a header will come from the OS disk cache. If you don't have enough memory then unity builds could even make build times worse by causing the compiler's memory footprint to exceed available memory and get paged out.

Disk I/O is expensive, which is why all operating systems aggressively cache data in order to avoid redundant disk I/O.

I wonder if that ALL.cpp is attempting to put the entire project within a single compilation unit, to improve the compiler's ability to optimize the program for size?

Normally some optimizations, such as removal of duplicate code and inlining, are only performed within a single compilation unit.

That said, I seem to remember that recent compilers (Microsoft's and Intel's, but I don't think this includes GCC) can do this optimization across multiple compilation units, so I suspect that this 'trick' is unnecessary.

Still, it would be interesting to see if there is indeed any difference.

This increases the speed and reduces hard disk drive head movements a lot!
– Петър Петров, Jul 17 '13 at 0:02


Петър Петров: Do you have any benchmarks that indicate this? It should increase speed because it means there is no need to recursively and repeatedly visit, open, and parse header files.
– Arafangion, Jul 17 '13 at 6:15

I agree with Bruce. From my experience, I tried implementing a unity build for one of my .dll projects, which had a ton of header includes and lots of .cpp files, to bring down the overall compilation time in VS2010 (I had already exhausted the incremental build options). But rather than cutting down the compilation time, I ran out of memory and the build was not even able to finish compiling.

To add, however: I did find that enabling the Multi-Processor Compilation option in Visual Studio helps quite a bit in cutting down compilation time. I am not sure whether such an option is available in compilers on other platforms.

If you agree and aren't adding anything, it doesn't seem like this should be an answer. This sounds more like a story and a comment.
– RacerNerd, Jun 9 '14 at 15:30


Actually, I intended to add a comment about my own experience with the issue (on an industrial-strength, industrial-sized code base) for anyone trying the same thing, because the idea of unity builds gives the impression that they almost always result in less compilation time, which isn't the case for large projects. That impression is the reason we tried this in the first place; but I am nobody to rule out exceptions and certain design considerations.
–
spforayJun 9 '14 at 18:25