The title of the question might be a bit strange, but as far as I know, nothing speaks against tail call optimization in general. However, while browsing open source projects, I have come across a few functions that actively try to stop the compiler from performing tail call optimization, for example the implementation of CFRunLoopRef, which is full of such hacks. For example:
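The snippet from the question isn't reproduced here, so below is a paraphrased sketch of the pattern the comments discuss (the name is shortened; judging by the later comments, the real function in CF-sources is called `__CFRUNLOOP_IS_CALLING_OUT_TO_AN_OBSERVER_CALLBACK_FUNCTION__`): a `noinline` trampoline that invokes a callback and then performs a throwaway system call so that the callback invocation is not in tail position.

```c
#include <unistd.h>   /* getpid() */

typedef void (*ObserverCallback)(void *info);

/* Paraphrased sketch of the anti-TCO pattern, not the verbatim CF code:
   the ignored getpid() after the callback keeps the call out of tail
   position, and noinline keeps this frame visible in backtraces. */
static void __attribute__((noinline))
CALLING_OUT_TO_AN_OBSERVER_CALLBACK(ObserverCallback func, void *info) {
    if (func) {
        func(info);   /* would be a tail call... */
    }
    getpid();         /* ...if this line weren't here */
}

/* Small demo callback so the trampoline can be exercised. */
static int observer_ran = 0;
static void demo_observer(void *info) { (void)info; observer_ran = 1; }
```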

One possible pitfall might be that an application works smoothly on several platforms and then suddenly stops working when compiled with a compiler that doesn't support tail call optimization. Remember that this optimization can not only increase performance but also prevent runtime errors (stack overflows).
–
Niklas B.May 28 '12 at 21:51

5

@NiklasB. But isn't this a reason to not try to disable it?
–
JustSidMay 28 '12 at 21:54

4

A system call might be a sure way of thwarting TCO, but also a pretty expensive one.
–
larsmansMay 28 '12 at 21:54

39

This is a great teachable moment for proper commenting. +1 for partially explaining why that line is there (to prevent tail-call optimization), -100 for not explaining why tail-call optimization needed to be disabled in the first place...
–
Mark SowulMay 28 '12 at 22:09

16

Since the value of getpid() is not being used, couldn't the call be removed by an informed optimizer (getpid being a function that is known to have no side effects), thereby allowing the compiler to do a tail call optimization anyway? This seems like a really fragile mechanism.
–
luiscubalMay 28 '12 at 23:26
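As an aside to luiscubal's concern, the same effect can be had without a system call. This is my own sketch, not what CF does: an empty volatile asm statement (a GCC/Clang extension) costs nothing at runtime, but the optimizer may not remove it, so a call followed by one cannot be compiled as a tail call.

```c
static long calls_made = 0;   /* observable work, so the demo is testable */

/* Hypothetical alternative to the ignored-getpid() trick: leave code
   after the call that the compiler cannot prove removable. */
static void not_a_tail_call(int n) {
    if (n <= 0) return;
    calls_made++;
    not_a_tail_call(n - 1);
    /* Empty volatile asm: zero runtime cost, but the compiler must keep
       it, so the recursive call above is not in tail position. */
    __asm__ __volatile__("");
}
```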

3 Answers
3

This is only a guess, but maybe it's to avoid an infinite loop rather than bombing out with a stack overflow error.

Since the method in question doesn't put anything on the stack, it seems possible for the tail-call optimization to produce code that would enter an infinite loop, as opposed to the non-optimized code, which would push the return address onto the stack and eventually overflow it in the event of misuse.
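A minimal sketch of that distinction (my own illustration, not CF code): the self-call below is in tail position, so an optimizing compiler may turn it into a jump that reuses the current frame, whereas unoptimized code pushes a new frame per call and would overflow the stack if `remaining` never reached zero.

```c
static long steps_taken = 0;

static void run_loop_step(long remaining) {
    if (remaining == 0) return;   /* under misuse, imagine this never fires */
    steps_taken++;                /* one unit of work per iteration */
    run_loop_step(remaining - 1); /* tail call: eligible to become a jump */
}
```

With TCO the misuse spins forever in constant stack space; without it, the growing stack eventually crashes the process, which is at least observable.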

The only other thought I have is related to preserving the calls on the stack for debugging and stacktrace printing.

I think the stacktrace/debugging explanation is much more likely (and I was about to post it). An infinite loop isn't really worse than crashing, since the user can force the application to quit. That would also explain the noinline.
–
ughoavgfhwMay 28 '12 at 21:59

3

@ughoavgfhw: maybe, but when you get into threading and such, infinite loops are really hard to track down. I've always been of the mindset that misuse should trigger an exception. Since I've never had to do this, it's still just a guess.
–
Andrew WhiteMay 28 '12 at 22:02

1

Synchronicity, sort of... I've just run into a bad bug that kept an application opening new windows. It makes me think: if the application had crashed before saturating "the heap" (my memory) and choking X, I would not have needed to switch to the terminal to abruptly kill the crazy app (since X soon became unresponsive). So maybe that is a reason to prefer the "fail fast" approach that could come with a stack overflow and no optimization...? Or maybe it's just a different matter entirely!
–
ShinTakezouMay 28 '12 at 22:03

2

@AndrewWhite Hmm I totally love infinite loops - I can't think of a single thing that's easier to debug, I mean you can just attach your debugger and get the exact position and state of the problem without any guessing. But if you want to get stacktraces from users I agree that an infinite loop is problematic, so that seems logical - an error will appear in your log, an infinite loop won't.
–
VooMay 28 '12 at 23:16

1

This assumes that the function is recursive in the first place – but it isn’t; neither directly nor (by looking at the context where the function comes from) indirectly. I made the same mistaken assumption initially.
–
Konrad RudolphMay 29 '12 at 10:50

My guess here is that it's to ensure that __CFRUNLOOP_IS_CALLING_OUT_TO_AN_OBSERVER_CALLBACK_FUNCTION__ is in the stack trace for debugging purposes. It has __attribute__((noinline)), which backs up this idea.

If you notice, that function just bounces to another function anyway, so it's a form of trampoline; I can only assume it's there, with such a verbose name, to aid debugging. This would be especially helpful given that the function is calling a function pointer that was registered from elsewhere, so that function may not have debugging symbols accessible.

Notice also the other similarly named functions which do similar things - it really looks like it's there to aid in seeing what has happened from a backtrace. Keep in mind that this is core Mac OS X code and will show up in crash reports and process sample reports too.
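The backtrace idea can be demonstrated with `backtrace()` from `<execinfo.h>` (available on glibc and macOS; the names below are made up for illustration): because the trampoline is `noinline` and the call it makes is not in tail position, it contributes its own readable frame to any trace captured inside the callback.

```c
#include <execinfo.h>   /* backtrace(); glibc and macOS */

static int trace_depth = 0;

/* Hypothetical verbosely named trampoline: noinline keeps it from being
   merged into its caller, and the trailing barrier keeps the callback
   call out of tail position, so this frame survives into backtraces. */
static void __attribute__((noinline))
OBSERVER_TRAMPOLINE(void (*cb)(void)) {
    cb();
    __asm__ __volatile__("");   /* not a tail call: frame stays live */
}

/* Demo callback that records how many frames are visible above it. */
static void callback_capturing_trace(void) {
    void *frames[64];
    trace_depth = backtrace(frames, 64);
}
```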

Yes, that makes sense indeed. But if you look at where these functions are called from, you will see that each is only ever called from one function; for example, my example function is only called from __CFRunLoopDoObservers, which definitely shows up in the stack trace...
–
JustSidMay 28 '12 at 22:21

1

Sure, but I guess it's another marker for exactly where the observer callback / block / etc is getting run.
–
mattjgallowayMay 28 '12 at 22:25

@R.. I can only accept one answer though and Andrew White also named other cases where tail call optimization might not be wanted. Remember, I didn't ask why the function did it but why it might not be desired in general and gave the function as real world example.
–
JustSidMay 29 '12 at 1:38

making profiling easier at the cost of slowing down the program is kinda weird though. It makes as much sense as diluting your oil before measuring how far your car can go :x
–
Matthieu M.May 29 '12 at 7:45

@MatthieuM.: Such a thing wouldn't make sense if the added call was performed millions of times in a loop, but if it's executed a few hundred times a second or less, it may be better to leave it in the real system and be able to examine how the real system behaves, than to take it out and risk having such removal make a subtle but important change in system behavior.
–
supercatFeb 24 at 0:26