While discussing buffer overflows, somebody told me that compiling your own binary for an application (with specific compilation flags) instead of using the "mainstream binary" makes it more difficult for an attacker to leverage buffer overflows.

Apparently because it "changes memory allocation" compared to the mainstream binary.

4 Answers

Taking the question as stated in the headline, "Does compiling from sources .. protect from buffer overflow attacks?", the answer is, in general, no.

However, here is a guess at what your friend might have been thinking of: current versions of the GNU C Compiler (GCC) can optionally use the GCC Stack-Smashing Protector when invoked with the -fstack-protector command-line flag. There are several other techniques that must be enabled at compile time; see the Ubuntu link below. I'm guessing one or more of these is what your friend meant, and he assumed that stock Linux/BSD distributions don't use them already.

But compiling yourself isn't helpful if your Linux/BSD distribution already uses these techniques, and many of them do nowadays. Here is a handy overview of what Ubuntu does in their Server Editions (see the headlines Stack- & Heap Protector, Pointer Obfuscation, 5x ASLR, and 4x "Built...").

Lastly, not all software will work with these techniques. If you want to do this yourself, then you risk spending quite a lot of time tracking down elusive bugs. Your upstream distribution vendor is in a better position to apply these changes, and give the resulting binaries enough testing and quality assurance to deem them stable.

These techniques are helpful, and do add a meaningful layer of defense-in-depth. But don't compile from source yourself for this. Instead pick a Linux/BSD distribution which has a good security engineering mindset, and has already applied relevant protection mechanisms (minimal attack surface, buffer overflow protection, maybe SELinux/AppArmor) as appropriate for your needs.

It is mostly untrue. Using a compiler of a different version than the one used for the "mainstream" binary, or using it with different compilation flags, may result in a few things ordered differently, but chances are that most of the code elements will appear in the same order. Insofar as it changes anything with regard to buffer overflow leveraging, recompiling may ease things for the attacker in exactly the same way that it may make things more difficult.

Most buffer overflow leveraging consists of trampling a code address (e.g. the return address of a function on the stack, or a function pointer somewhere, in particular in vtables for C++ objects) so that the attacker forces execution to go where he wishes. Traditionally, the attacker tried to make execution jump to the data he just filled the buffer with, but non-executable stacks and, more generally, the so-called W^X model prevent that from succeeding: the data will not be interpreted as code. So the attacker now targets the C library (or a similar system library, depending on the OS), which contains nice functions for executing other files, e.g. a shell. This entails guessing where said nice function is in memory. Bottom line: if you want recompiling to have any effect on that, then you must recompile the C library, not the application.

Address space layout randomization is just a generic way for making the address of a function unguessable. It is much more thorough than recompiling (even assuming that recompiling really swaps things around, which it does not) because address space layout randomization is randomized again at each execution. In particular, an attacker will not gain any insight on the target function address by getting a copy of the executable file, because that address has not been decided yet.

Yet address space layout randomization is not a silver bullet since it must operate under severe constraint: it cannot put the library just anywhere because of alignment (usually, the loading address must be a multiple of 4 kB) and fragmentation issues (a 2 GB address space -- on a 32-bit system -- is too small to allow scattering libraries everywhere; this would artificially prevent allocation of big memory blocks).

(Also, a "silver bullet" is a way to kill werewolves. It is an "ideal solution" only if your idea of medicine is shooting patients.)

A buffer overflow is a bug. If the attacker can make your application overwrite parts of its data with bytes chosen by the attacker, in a silent way, then there is no a priori limit on the damage the attacker can do. That's the reason why languages like Java were invented: at least with Java, on a buffer overflow, you get an immediate exception, not silent data corruption.

Your answer is correct in the general case. But it's worth noting that Linux can have 'productized' (off-the-shelf) stack and heap protection, as well as variants of Address Space Layout Randomization (ASLR) in the kernel, the C library, and user programs. Compiling with support for these (optional) techniques does make buffer overflows substantially harder to exploit into remote privilege escalations.
–
Jesper Mortensen Jul 19 '11 at 0:11

+1 to Jesper. If you examine the way a big distribution (let's pick Ubuntu) is configured off the shelf, Firefox is opted out of memory-protection policies. Compiling from source is NOT a solution. It can force the user to examine protections that the OS packagers/maintainers "turned off" or "patched to fix a problem compiling". Like Debian's OpenSSL patch a couple of years ago, which removed a use of uninitialized memory and crippled the randomness of generated keys.
–
hbdgaf Jul 19 '11 at 13:39

[Does] compiling your own binary for an application (with specific compilation flags) instead of using the "mainstream binary" make it more difficult for an attacker to leverage buffer overflows?

It depends on the operating system, the language the application is written in, and the compiler.

First, the programming language must be a compiled language: C, C++, C#, Java, Objective-C, Delphi, etc. Interpreted languages (JavaScript, PHP, Ruby, etc.) are run from source (not compiled), so to modify the memory behavior you must change the interpreter's settings or the source. Obviously, no compile, no protection.

Second, the programming language must allow manual memory management. Java and C# use automatic memory management, preventing the basic buffer overflow vulnerability. C and C++ allow manual management of dynamic memory. If the programming language has managed memory, then compiling will not help.

Third, the compiler or libraries used in the application must support some extended dynamic memory management monitoring or control. The GNU, Clang, and Intel C++ compilers support -fstack-protector; Microsoft's compiler offers the analogous /GS flag, and IBM XL C++ supports -qstackprotect. Some libraries, such as Avaya Labs' LibSafe, also have mechanisms to protect the stack. If the compiler doesn't support stack protection, and there are no drop-in stack-protection libraries available, then compiling will not help.

Fourth, the operating system may provide some protection. Some operating systems already protect the stack with non-executable memory pages, stack base pointers, and by saving return addresses in registers. If the operating system is already protecting the stack, then compiling will not help.

Is it way less effective than address space layout randomization?

Address space randomization is a technique that prevents an attacker from using a well-known address to call a system function. For example, if setuid() is always at address 0xDEADBEEF, then the attacker can attempt to overwrite a return address with 0xDEADBEEF and execute setuid(). The randomization does not prevent buffer overflows; it just prevents the use of static address values.

Is this point rendered moot by new memory management strategies in OS kernels?

Not necessarily. It depends on the OS. Some OSes are more focused on performance than security. Those OSes may actually introduce more vulnerabilities with memory management tricks.

What he is talking about is probably that, in certain cases where the original binary has not been compiled with the available protections, you can recompile it to enable them.

Nowadays compilers and operating systems offer many advanced security protections, but sometimes it's up to the developer to apply them to his application. Some of them are applied at compile time, and may or may not be on by default.

Perhaps the most famous example is protection against stack-based buffer overflows. This protection is purely compiler-based: at compile time, extra instructions are added to the program to insert and check canaries on the stack, which mitigates (the results of) a certain kind of overflow. But this protection is only added if, on Linux for example, you apply the -fstack-protector flag, or on Windows the famous /GS flag. In recent operating systems and compilers these flags are on by default, but that was not always the case.

Other such options are (for GCC) -D_FORTIFY_SOURCE=2, which adds sanity checks to certain vulnerable glibc functions and protects against some other buffer overflows, and -Wl,-z,relro, which marks certain memory areas (such as the relocation sections) as read-only after loading.

Another thing he may have meant is that you can link against other, safer libraries when you compile, for example LibSafe, which overrides certain known-unsafe functions with 'safer' versions that perform more sanity checks.

Also, DEP and ASLR on Windows are enabled with /NXCOMPAT and /DYNAMICBASE. I think Visual Studio now has those enabled by default, but again, that was not always the case. You'd be surprised by the number of applications and DLLs that still do not have these flags enabled, even ones provided by companies like Adobe and Microsoft. Case in point: Adobe Flash only got those protections in version 10; they were not enabled before. It's common practice for exploit writers to hunt for such DLLs while trying to find exploitable targets (or ways to escalate privileges, etc.).

At least as of Visual Studio 2008, NXCOMPAT and DYNAMICBASE are on by default. (They're on unless you specify /NXCOMPAT:NO or /DYNAMICBASE:NO.) (Well, NXCOMPAT is only on by default if you require subsystem version 6.0 or greater, but you need to target that subsystem for NXCOMPAT to make much of a difference anyway.)
–
Billy ONeal Jul 19 '11 at 3:54