No. While it depends on your end users (end users of some products, libraries, etc. are very technical, while other products draw from a much larger, less technical user base), a non-trivial number of bug reports are due to user error, or to something that you don't actually have any control over. Skipping stage 1 probably makes sense in all cases, but the rest of the stages are all valid. Sometimes you never get past stage 2 because the answer is "oh, right, because my machine isn't infected with something" or "because I didn't mis-configure the application".

Reminds me of this overflow bug [seclists.org] which was fixed in sudo 1.6.3p6. It writes a single NUL byte past the end of a buffer, calls syslog(), and then restores the original overwritten byte. Seems unexploitable, right?

Wrong. Here's the detailed writeup [phrack.org] of the exploit. It takes some jiggering with the parameters to get it working on a particular system, but a local root exploit doesn't need to work every time; it only needs to work once and you own the system.
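For anyone who hasn't read the writeup, the pattern looks roughly like this (a hypothetical sketch, not the actual sudo 1.6.3 code; the function and variable names are invented for illustration):

```c
#include <stddef.h>
#include <syslog.h>

/* Sketch of the sudo-style off-by-one: NUL-terminate a buffer one byte
 * past its logical end, log it, then put the clobbered byte back.
 * In the real bug, buf[len] lies outside the allocation, so the
 * transient NUL write corrupts adjacent heap metadata -- and syslog(),
 * which calls malloc() internally, runs while the heap is corrupted. */
void log_message(char *buf, size_t len)
{
    char saved = buf[len];   /* grab the byte we're about to clobber   */
    buf[len] = '\0';         /* the off-by-one write                   */
    syslog(LOG_NOTICE, "%s", buf);
    buf[len] = saved;        /* restore it, hiding the damage... mostly */
}
```

The "restore" is exactly why it looks harmless: by the time anyone inspects memory, the byte is back. The phrack writeup shows that the window between the write and the restore is enough.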

While you have a point, you shouldn't forget the Raspberry Pi. It is probably the most popular internet-facing non-mobile ARM platform today. Literally millions of these run glibc, and at least hundreds of thousands are in some way or form directly connected to the internet. While I don't believe this bug can be exploited without first gaining remote code execution on the Raspberry Pi, once an attacker has access to it, this bug should let them escalate to root privileges.

There are quite a few people who put a full Debian (or other) distribution on their NAS. I own a Zyxel NSA325, and it is possible to install a full Debian release on it and on some other NAS boxes. These might be a limited number of systems overall, but it's significant enough to deserve mentioning, because they, too, are often internet-facing.

64-bit systems should remain safe if they are using address space randomization.

Nah. It just takes more crashes before the exploit achieves penetration.

(Address space randomization is a terrible idea. It's a desperation measure and an excuse for not fixing problems. In exchange for making penetration slightly harder, you give up repeatable crash bug behavior.)

Your idea only works if bounds are defined at compile time, which is hardly going to cover even a majority of cases.

Use your imagination...

I was imagining a special type of pointer, but one compatible with ordinary pointers, kind of like how C99 added the "complex" data type for complex numbers but lets you assign to them from ordinary non-complex numbers. A future version of C could add a type of pointer that includes a limit, and a future version of malloc() could return this new type of pointer. For compatibility, the compiler could simply downgrade it to an ordinary pointer whenever it is assigned to one, so that old code continues to work with the new malloc() return value, and new code can continue to call old code that only accepts ordinary pointers. Of course, we won't call them "new" and "ordinary"; we'll call them "safe" and "dangerous" when, after several years, we grow tired of hearing of yet another buffer overflow exploit discovered in some old code that hasn't yet been updated to use the new type of pointer.

...or I'm sure there are many other possibilities. This isn't an impossible thing to do.