Posted by EditorDavid on Saturday May 05, 2018 @04:34PM from the what's-GNU? dept.

"Are you tired of your existing compilers? Want fresh new language features and better optimizations?" asks an announcement on the GCC mailing list touting "a major release containing substantial new functionality not available in GCC 7.x or previous GCC releases."

An anonymous reader writes: GNU has released the GCC 8.1 compiler with initial support for the C++2a (C++20) revision of C++ currently under development. This annual update to the GNU Compiler Collection also brings many other new features and improvements, including new ARM CPU support, support for next-generation Intel CPUs, AMD HSA IL, and initial work on Fortran 2018 support.

... that gcc has gone "uncool", largely because llvm is where all the hipsters are but also because it's now trying too hard, and worse, that C++ is trying to prove something, only to end up like some sort of perl or something. This doesn't seem to be a recipe for success to me.

The problem with C is that in order to implement any kind of ADT, like linked lists or height-balanced trees, you have to rewrite every single manipulation function (insert, delete, reorder) for each and every structure required. The only workaround is C macro voodoo.

C++ fixes this problem with templates. The people who write the specifications for C++ are just getting round to adding all the theoretical concepts written about back in the 1960s. LLVM is where the action is happening now. Everyone can just

The difference is not the supported languages, but the supported targets. The embedded systems, toasters, and light bulbs of the world are all developed in a language (usually C, and to a lesser extent C++) compiled using GCC or a derivative. But I've never seen a CPU vendor toolchain based on LLVM. I am sure it exists, but the popular ones are still based on GCC. And I've used various platforms from TI, NXP, Microchip, STMicro, Atheros, Broadcom, etc. All based on GCC.

For me, GCC went from being buggy in version 6 to excellent in version 7.

I don't think GCC has lost its luster. It's more GNU that has, and thus everything related to it. And that's because the developer-freedom of the richest corporations has replaced the user-freedom of the freeloaders in the minds of many open-sourcers.

I use gcc every single day, mostly for embedded work, and I'm quite happy with the results. Maybe inside it's an unwieldy monster, but I don't have to look at that, so it doesn't bother me. I even use some of the compiler extensions, as they are quite useful for my job.

"Are you tired of your existing compilers? Want fresh new language features and better optimizations?"

"Then consider ditching gcc and going with LLVM". Is that how the quote ends?

I can't wait for that festing pile of bloat and compiler bugs to finally die, I really can't. Every single new release brings more code-generation bugs that we have to work around in our product, we're slowly working away at The Mgt. to get them to simply require LLVM or some other compiler that doesn't break things on every release, and whose maintainers will actually respond to bug reports rather than closing them all with WONTFIX, "if you squint at the spec from just the right angle and use your imagination then this showstopper bug is actually permitted".

I'm calling you a liar on all of those. The biggest difference is that LLVM is trendy and GCC is not.

Every single new release brings more code-generation bugs that we have to work around in our product,

Example? (I know you don't have an example of consecutive releases with different codegen bugs, but asking at least makes it clear to other readers that you don't know what you are talking about).

I know you don't have an example of consecutive releases with different codegen bugs, but asking at least makes it clear to other readers that you don't know what you are talking about

Gosh, you know a lot about this, don't you? Which version of gcc would you like the bugs for? There's so many of them I'd have to go for a specific version.

Incidentally, this code is built using between thirty and forty different compilers, depending on how you count them (for example, are VC++ 6.0, .NET, and the current Visual Studio counted as the same compiler or not? There are at least three different code bases there). gcc has more code generation bugs than every other compiler combined. That's th

You claimed [slashdot.org] that every new release of GCC brings more codegen bugs:

Every single new release brings more code-generation bugs

So, please, list the codegen bugs you claimed were added between every single release. In other words, for each of the releases listed below, please fill in the new codegen bugs that you found in that release. Since you also claimed that each release has more bugs than the previous one, your list should either grow or contain only bugs that were never fixed in subsequent releases.

So, here is the list; for each one fill in at least one codegen bug that was introduced in that release:

I can't wait for that festing pile of bloat and compiler bugs to finally die, I really can't. Every single new release brings more code-generation bugs that we have to work around in our product, we're slowly working away at The Mgt. to get them to simply require LLVM or some other compiler that doesn't break things on every release, and whose maintainers will actually respond to bug reports rather than closing them all with WONTFIX, "if you squint at the spec from just the right angle and use your imagination then this showstopper bug is actually permitted".

"if you squint at the spec from just the right angle and use your imagination then this showstopper bug is actually permitted".

IOW you're writing noncompliant code and blaming it on compiler bugs.

All compilers do that whole "squint at the spec from the right angle" because that's how optimizers work. You put in the rules of the spec and the code into a theorem prover and it crunches on it to figure out dead code, aliasing, constants and so on and so forth.

You're a gcc maintainer I assume? That's one of their main comebacks, "your code is noncompliant and it's not our compiler that's broken". Funny thing is, the thirty to forty other compilers that the same code is built with (see my other comment above) all work fine, it's only gcc that generates invalid code. Odd that, isn't it, that gcc is right and every other compiler out there is wrong?

That's one of their main comebacks, "your code is noncompliant and it's not our compiler that's broken".

Yep, and they're correct.

Funny thing is, the thirty to forty other compilers that the same code is built with (see my other comment above) all work fine, it's only gcc that generates invalid code. Odd that, isn't it, that gcc is right and every other compiler out there is wrong?

Sometimes I wish that if the optimizer finds something really juicy (like eliminating a dozen lines of code because it can prove they will never be called by assuming that undefined behaviour won't be triggered) it would just refrain from optimizing the code out and instead emit a diagnostic telling the programmer to apply that optimization in the source code. You could then review the code and either say 'gee, the compiler is right, I will delete that whole branch', or 'ouch, that should not be happening.'

emit a diagnostic telling the programmer to apply that optimization in the source code

It usually isn't that simple. The obvious source-level optimizations you're describing are relatively rare; the ones which lead to major size and performance improvements come about when you combine generic code in inlined functions and macros—often from distinct source modules, perhaps even different projects—with the results of prior optimizations. To apply these optimizations at the source level you would need to specialize the definitions for each use case.

About the only thing it's trying to "prove" is that it can move with the times. And it's proving that by doing so. C++98 was awfully long in the tooth by 2011, and C++11 provided in many cases better, more efficient, shorter, more obvious, and cleaner mechanisms for doing a lot of common things.

Other things have simply proven incredibly hard to get right: concepts have been in the works for 30 years!

... that gcc has gone "uncool", largely because llvm is where all the hipsters are but also because it's now trying too hard, and worse, that C++ is trying to prove something, only to end up like some sort of perl or something. This doesn't seem to be a recipe for success to me.

No, everyone moved to LLVM because for a long time, GCC development basically stalled. It was "good enough" and everyone put up with it. Didn't like it, just put up with it.

C++ had Frankenstein's Monster syndrome back when STL started, and the generated templates were near impossible to debug. K&R complained C had too many operators. Here they are adding more and more. Step back from the keyboard and let the language be.

A buffer overflow is one thing; it can cause bad things to happen. But you really don't want your ABS to throw an exception or start the garbage collector at the wrong time.

And what would you want it to do? Return a nonsensical result and keep the program going on without detecting the fault? In the past many safety-critical applications were coded in Ada because of the language's safety-oriented design, which included exceptions.

The value proposition of 'C' was that you could efficiently program near the metal.

Instruction sets and micro-architectures co-evolved with 'C' along those dimensions of efficiency. All those extra operator idioms like "+=" and "++" were inherited from its predecessor B** (well actually, adapted from "=+" because of the lexical ambiguity) to match accumulator-style instructions common in contemporary instruction sets and to reduce compiler complexity (using these kinds of idioms, even a memory/perf constr

All those extra operator idioms like "+=" and "++" were inherited from its predecessor B** (well actually, adapted from "=+" because of the lexical ambiguity) to match accumulator-style instructions common in contemporary instruction sets and to reduce compiler complexity

The first compiler was made on a PDP-7 that didn't even have increment instructions.

The += operator is useful on any platform, simply because it saves you from writing (and reading) the same expression twice.

But for long after that, in simple C compilers, ++ would use INC if it was available while +=1 and a=a+1 would generate multiple instructions or use ADD immediate (and so take more cycles to execute).

Maybe, but that has nothing to do with the reason they are in the language. Also, I doubt many compilers worked that way (can you name one?). If I had to write a compiler, the first step would be to generate an abstract syntax tree, where a++, a += 1 and a = a + 1 would all be represented as assign(a, add(a, 1)). Choosing between inc/add would be done at the code generation phase.

Cowabunga! This fixes the single most vexing upward compatibility issue between C and C++, and also a glaring maintainability issue in C++. How sweet that it only took, hmm, two decades to work through the initialization order wankery. Note: gcc has had this since forever, but disabled because the standard org didn't bless it.

In the case of Apple and Qualcomm, they apparently prefer a compiler that will let them distribute a proprietary (non-free, user-subjugating) derivative. Brad Kuhn, President of Software Freedom Conservancy [sfconservancy.org], has predicted that as soon as Apple finds the compiler to be good enough they'll stop their upstream contributions.

Your namecalling notwithstanding, Brad Kuhn has already covered this as well and there's nothing particularly special about the examples you list. Apple certainly stands out because of Apple's irrational hatred of being a GPL licensee (which dates back to how NeXT treated NeXT OS users with their Objective-C additions to GCC, referenced in Copyleft: Pragmatic Idealism [gnu.org]). Kuhn pointed out something that might be the case now: there are non-free add-ons for that compiler. As these add-ons gain popularity devel

That "ideological wankery" is why so many kids can afford access to mainstream professional development tools today. The kids in the '70s and early '80s didn't use BASIC because they thought it was the best choice, they did it because a decent professional grade C compiler cost hundreds of dollars (close to a thousand in today's dollars).

What you call wankery, I call simple practicality. Why would I want to tie the future of my software to the "good will" of a proprietary vendor?

The kids in the '70s and early '80s didn't use BASIC because they thought it was the best choice, they did it because a decent professional grade C compiler cost hundreds of dollars (close to a thousand in today's dollars)

I thought it was because I was running a 6502 with 12kB of memory and cassette tapes.

That's why you did it, but the Apple ][, PET, VIC-20, and C64 could have managed it if expanded from the base models. Certainly, the old IBM-PC (or the many clones) could handle a C compiler, but it was a few years before anything like a useful proprietary C compiler became somewhat affordable.

The Vic-20 came out before there was a C64. So some people did expand their existing Vic. I also saw one with the ROMs swapped out for a FORTH system.

I said early '80s exactly because Borland did make having a C compiler more affordable. Of course, at that time it was seen as more of a beginner's compiler than pro grade (even if it was in many ways superior to MSC).

In the case of Apple and Qualcomm, they apparently prefer a compiler that will let them distribute a proprietary (non-free, user-subjugating) derivative. Brad Kuhn, President of Software Freedom Conservancy, has predicted that as soon as Apple finds the compiler to be good enough they'll stop their upstream contributions.

Well, several reasons. First was GPLv3, which most companies are extremely wary of. Apple began investment in LLVM long before GCC went GPLv3 - LLVM was available as a limited functionality

they apparently prefer a compiler that will let them distribute a proprietary (non-free, user-subjugating) derivative.

Worse, clang can be integrated with an IDE (refactoring, syntax highlighting, autocompletion, ...), something gcc can't and never will support. IDEs that offer these features for C++ either have to rely on incomplete hacks or simply use functionality provided by clang. The last time someone tried to add refactoring support for Emacs with official gcc backing, he was stalled by RMS for over a year before the effort was effectively killed - RMS "planned" to look into a possible solution, but none ever surfaced.

It's no different to any previous compiler release by any vendor. Any new version may increase the strictness, and if you read the list of extra checks the compiler is doing, they are all entirely reasonable and in most cases only affect buggy code which would misbehave and already needed fixing. Bring on the extra strictness and improve the quality of your codebases, I say.