There are days when I miss Slashdot; this is not one of them. I did read the article (always do) and understood it. You can throw resources at a problem to try to "solve" it by masking the problem(s), but sometimes you actually have to sit down and do the real work to fix it. Sorry if you don't understand that concept.

Thank you for showing me what an arrogant post looks like. ;) I'm far from a student, and I don't think I qualify as "young" anymore, but thank you for trying to make assumptions. Engineers, like any other group of people, come in all different flavors, and there is no magic stereotype that helps you figure out which ones are "better" than others; some are loud, some are quiet, some are great at certain things while others are better at other things - blah, blah, blah...

That would be like me asking you to state exactly why Android needs a 16 GB build environment. What I have said is not that they are doing it wrong, but that if it truly requires 16 GB to build reasonably, then they can probably do a better job of organizing the source and/or build environment. Considering most projects, even ones larger than Android, build fine with far fewer resources, I don't think that's too much of a stretch.

We probably just disagree about this, but being a common way does not mean it's a good way - there's also probably a bit of difference between a project the size of SQLite and Android. I do agree compiler optimizations in general should not cause a problem, but in practice I think more judicious reliance on them decreases the effort of maintaining a solution, especially as the size and/or scope of the solution evolves. Even if you were bent on bundling all the source files up and sending them to the compiler in one shot, I would think that if you truly need a machine with 16 GB of RAM, that would be a good indicator it's also a good time to better organize and segregate your code into more discrete and logical pieces.

Forgetting for a second that, especially in an embedded-style device, I want the results to be very predictable - at what point do you draw the line between compiler optimizations and just plain bad organization or code?

If you're advocating packaging everything up and sending it to the compiler in one shot so it can figure things out - no, I would say that is not a good way to optimize a project, especially one with a large code base.

I think the difference in opinion between the two of us is that I think some of those decisions should be predetermined design decisions and not left up to a compiler trying to figure things out after the fact. The larger the code base, the more strongly I feel about this.
I know they were just quick examples, but...
"can this function ever throw?" - it's been a while since I went scratching through an object or class file, but wouldn't the compiler already know this from previous compiles?
"is this code reachable?" - unless we're using gotos, isn't this confirmable module by module?
"is the memory allocated here always eventually freed?" - I'm not sure a compiler could ever truly know the answer; the best case would be management code, either injected by the compiler or part of the framework, to determine this at runtime.

Annoying? Maybe. Obtuse? No. While I agree optimizations are important, in a project with a code base of this size, structured logical components and boundaries are going to be far more important than finding that last piece of code that can be "inlined". It's becoming less of a critical factor with devices supporting 1 GB of RAM, but inlining is not always the most preferable optimization.

My "smart" answer would be not much more than the host operating system itself needs. However, yes, RAM can affect the number of parallel compiles, but I think it's safe to say that in this case we are not optimally limiting the number of symbols and objects the compiler needs to work with at any given time.