Posted by timothy on Sunday May 13, 2012 @01:12PM
from the squash-it-like-a-figurative-bug dept.

An anonymous reader writes "Shared in last quarter's FreeBSD status report are developer plans to make LLVM/Clang the default compiler and to deprecate GCC. Clang can now build most packages and suits their BSD needs well. They also plan to have a full BSD-licensed C++11 stack in FreeBSD 10."
Says the article, too: "Some vendors have also been playing around with the idea of using Clang to build the Linux kernel (it's possible to do with certain kernel configurations, patches, and other headaches)."

Well... GPLv3 specifically. FreeBSD is stuck on GCC 4.2, the last GPLv2 gcc compiler, and it's getting quite dated now. It's a switch from gcc-4.2.2 plus hacks/patches to clang, instead of to a GPLv3'd gcc-4.6 or later.

"Stuck"? FreeBSD gets a foot in the door of companies where GPL (and GPLv3 in particular) is something they'd prefer not to deal with. Being able to use a modern GPL-free OS as a foundation of a product is a convenient option to have. And being GPLv3-free can be even more compelling.

Maybe they want to distribute a USEFUL version of the compiler, for a particular purpose. Stallman et al. have long battled to keep usable APIs and plugin points out of gcc's front end, and to prevent such in all parts of the toolchain, even keeping the documentation for parts of it obfuscated. Screw that; some people want real freedom....

One example would be integrating the compiler with its own custom tools.

The only valid way of integrating compiler with custom tools is calling the compiler from them (everything else is shit design made by shit developers). That was done with gcc for as long as gcc has existed.

The only valid way of integrating compiler with custom tools is calling the compiler from them

So what's the valid way of finding what functions exist and what variables belong to what functions? Such functionality is needed for "go to definition of selected symbol" and "search for uses of selected symbol" actions.

Have you ever actually tried to parse a declaration reliably using regular expressions? I have. They just kept getting more and more horrible the more complex the input became, and at some point, after fixing a bunch of bugs, I concluded that it was an insane way of doing things, threw out all the regular expressions, and started over with a tokenizing parser.

Trust me when I say that parsing declarations with even Perl-compatible regular expressions, much less BRE or ERE, is not something you want to attempt if you value your sanity in the slightest.

One of the key design objectives of Clang is that it is highly modular, implemented in such a way that the various compilation stages are self-contained and have clean APIs and data structures. This allows development tools such as IDEs to link directly against the stages of the compilation pipeline they need in order to implement syntax highlighting, code completion, refactoring tools, and so on.

Apple's Xcode does precisely this, and licensing and the lack of modularity in the GCC source tree were likely major factors in their choice to support Clang and LLVM development.

The traditional way of implementing these functions in IDEs has been to effectively re-implement the front-end of the compiler (often incompletely). This is a big deal when developing in C++ against the STL/Boost/TR1, when you find that code completion can't grok templates properly. This is something that Xcode and Visual Studio (which takes a similar approach) are both capable of doing.

You know, it's been my experience that when someone says something along those lines (any other way of doing something is shit, and anyone who does it is shit), they really don't know what they are talking about, and are full of themselves.

Or, maybe, they have seen it done the shit way before, and want to warn people against doing it again.

It's entirely necessary if you're using the compiler to drive code completion, syntax highlighting, and in-editor display of compiler errors and warnings. All of these things are highly interactive, and users will notice the lag of GCC getting invoked every time they type a character in their editor. GCC's clunky pre-compiled header support really doesn't help matters.

Of course, as I said earlier, lots of tools have provided this kind of functionality without deep integration of the compiler into the editor.

Whether or not you consider this to be a sabotaged version depends on if you believe BSD-licensed source code suddenly disappears from existence the moment a company uses it for something.

The funny thing about this is that it is *EXACTLY* what can happen.

Although unlikely with high-profile projects, a situation could easily arise in a smaller project where somebody uses a license they see as more liberal than the GPL (perceiving the latter as somewhat "viral"), and then a company with a lot more cash than the initial developers comes along, modifies the software, and distributes it essentially as its own product. With no obligation to release source code, consumers of the product are left unaware of its origins. The original developer gains no credibility from the distribution of his own software under different terms than those he originally chose. Although his ability to distribute independently is not impacted, the simple fact that he lacks the distribution capacity of the higher-profile company ends up working against him. Worse, when he does try to claim credit for the software with people or companies he is endeavoring to do business with, they may perceive him as trying to take credit for what they see as the larger company's work.

Finally, of course, if the original author is unable to continue maintaining his software, and nobody else who values its free distribution takes up the gauntlet of sharing it freely with other people, then even though derived works of it may be available under another license, the open source nature of the work will be gone, forever.

"Stuck"? FreeBSD gets a foot in the door of companies where GPL (and GPLv3 in particular) is something they'd prefer not to deal with. Being able to use a modern GPL-free OS as a foundation of a product is a convenient option to have. And being GPLv3-free can be even more compelling.

Not to troll, but what companies are those? What's the closest thing to Red Hat that's selling FreeBSD support, and what volume are we talking about? Or are they all providing their own support? Don't get me wrong, I know a lot of web hosting companies in particular run it - 6/39 [netcraft.com] of the top providers on Netcraft's list are FreeBSD - but I doubt they have a problem with the GPL. If BSD went away, they'd probably just join all the Linux hosting companies. There's of course Apple and then there's.... who?

I'm willing to bet that all three have some proprietary stuff that they're not feeding back. It doesn't mean that they completely ignore the community. Apple owns CUPS now. iXsystems picked up FreeNAS development.

I'm just on my way back from BSDCan and the FreeBSD DevSummit. At the DevSummit, there was a Vendor Summit, for companies that use FreeBSD in their products. Not all were there (Sony, for example, was absent), but companies like Fusion IO, Yahoo, IX Systems, Juniper, Apple, and so on all sent people. There were about 40 companies represented in total, for a developer meeting with about 70 attendees.

The GPL has never really been the issue; LLVM was offered for GPL-Next... but the GCC maintainers have always insisted upon a monolithic compiler without standardized intermediate representations, to prevent other compiler builders from just using GCC as a front end, and thus refused. But that's on the maintainers and not the GPL.

And yet, as so many people here love to point out when it favors them, you still get to have the pig. Apple has taken nothing; the pig still exists as ever, and Apple has provided you with a sausage as well.

Maybe you don't like sausage and that's fine, but don't act like someone took away your (or anyone else's) pig.

How are Apple "stepping on your feet" by using BSD code and developing successful products with it? If OS X had flopped would you still be saying that?

Apple has put a great deal of effort into open source development because they realise that it is mutually beneficial to everyone concerned. Oh, of course their primary goal is their own success and their own bottom line, but they have been able to strike a pretty good balance with open source software and the community at large on their rise into the co

The BSD guys are an interesting crowd. Well, they can continue to watch Microsoft and Apple pilfer their software stack, usually giving nothing in return, while they use GNOME or KDE or whichever other GPL or LGPL'd project exists on their system (it takes only a recompile), because otherwise they don't have a usable desktop and are stuck with 1980s user interfaces.

I have used clang, and it was neither faster to compile nor produced faster code than GCC. The only noticeable thing is the ANSI-colored error messages... blech. I understand it is supposed to be easy to port because of LLVM, but the fact is GCC has already been ported to basically every architecture that matters, so it probably wasn't that hard to port GCC either.

I take no issue with this part of your opinion... as it is close to mine.

Apple just wants to clamp down everything to be BSD so they can batten down all hatches eventually, for the day when they give nothing back. All it takes is a change of heart or leadership. I still remember that in the early days they only released source code long after they did the release, which is counter to the GPL; people had to beg them to get access to the source code or, heaven forbid, actually participate in development (it seems that for Apple, all developers outside of Apple are a bunch of idiots who can't code or something).

Just FYI, Apple is an enormous contributor to OSS. Here's what they admit to [apple.com], but even if there wasn't all that, IMHO, WebKit [wikipedia.org] alone would be sufficient to compensate humanity for all the OSS technologies from which they have freely and legally benefited.

Complaining about the GPL is like complaining that you can't play dirty pool with code licensing (see Tivoization).

I haven't heard Apple complaining about the GPL or trying to circumvent it - they're just switching to alternative projects.

Of course, it's a pity, because even if you Tivoized GPLv2 code you still had to share your source so people could learn from it, or use and modify it on other (or jailbroken) hardware, whereas now people are moving to BSD-style licenses with no such benefits... but if the FSF want to let the perfect be the enemy of the good, declare jihad on Tivoization, and have a tilt at the patent windmill, that is their right.

I think one of the heads of Red Hat nailed it when he was asked about RMS: "Richard treats his friends as his enemies". Whether the community wants to accept it or not, when RMS specifically targeted a single company with GPL V3, he gave a pretty damned good reason for businesses to stay away from the GPL: fear of being the target of GPL V4. BTW, I personally bet that if there is a GPL V4, the new buzzword will be "Androidization", since RMS hates Android even though it has put more Linux devices into users' hands than anyone else in history.

I think one of the heads of Red Hat nailed it when he was asked about RMS: "Richard treats his friends as his enemies". Whether the community wants to accept it or not when RMS specifically targeted a single company with GPL V3 he gave a pretty damned good reason for businesses to stay away from the GPL, fear of being the target of GPL V4.

That's a lot of meat without bones. Fear of being the target of parasite lawyers who sue over GPL is more of the reality, whether it's GPL2, 3, LGPL or other.

BTW I personally bet that if there is a GPL V4 the new buzzword will be "Androidization" since RMS hates Android even though it has put more Linux devices into users hands than anyone else in history.

I'm glad you have insight into Richard's brain and can tell us what he hates. As for bringing more devices into the hands of people, that's not the purpose of the open source movement. At best it's a side effect. The purpose is to do what governments and their constitutions fail to do - support progress, by ensuring that new code becomes available to

I know there are reports that C is even with Java again. But what you say made me wonder about this. Despite what Java advocates say, the idea of the app server and enterprise Java was not new to Java. There were and still are brokers around that do much of what a Java app server does, but using C/C++. Tuxedo is one. The thing I am thinking about, however, is that Java's heyday started when groups like Apache came around and there was a huge resource of Java utilities and helpers and libraries that were free

Of course, its a pity, because even if if you Tivoized GPLv2 code you still had to share your source so people could learn from it, or use and modify it on other (or jailbroken) hardware, whereas now people are moving to BSD-style licenses with no such benefits... but if the FSF want to let the perfect be the enemy of the good, declare jihad on Tivoization and have a tilt at the patent windmill, that is their right.

This is absolutely the case! When TiVo was complying w/ GPLv2, the FSF suddenly discovered a major objection to their practice - namely, that they were putting the code in read-only devices - and declared a jihad on the company. However, even GPLv3 doesn't explicitly say that GPL software cannot be put on read-only memory (which would again violate the GNU's Freedom #3) or copy-protected memory (which could prevent the device that contains the software from getting copied), or anything else about the devices the software can reside on.

As you very well put it, it's one more of those cases of the perfect being the enemy of the good, and in the process, the FSF waging a war on its own licensees, namely TiVo. Given that track record, which company in its right mind, even if they endorsed the liberation of software, would want to get into bed w/ the FSF?

In a couple of years' time, there will be a proliferation of different, incompatible versions of Clang/LLVM that will be increasingly expensive to maintain. Furthermore, I can foresee vendors making incompatible changes to the code produced by Clang, subtle ABI breakage, and the like. The upper levels will suffer too: vendor A's version will not be able to compile source code with vendor B's extensions and vice versa.

Hindsight is invariably more accurate than foresight. And in this case, hindsight tells us that there are plenty of non-GPL free packages that you use every day that haven't succumbed to either of your fears. In fact you use at least a couple of them when you read this.

This sounds like the 1980s/early 1990s all over again.

That wouldn't be bad. Productivity per user has never been higher than it was then, and most of what we use now was invented then. I'd rather see that again than these modern days, where ideas are scarce and productivity per user is at an all-time low.

Which took decades; GML started around 1960, ISIL was in the 1980s. That's not a fair comparison; you have no idea what technologies being invented today are important for the computing world of 2030. How would you know?

I can tell you, as someone who was around when the web started being used in the early 1990s, that I didn't think of it as all that big a deal. I actually thought Gopher with built-in indexing was going to be better than the HTML with

In a couple of years' time, there will be a proliferation of different, incompatible versions of Clang/LLVM that will be increasingly expensive to maintain.

It's already happened. This is why so many companies are now actively involved in the LLVM community: it's cheaper. I'm currently on my way back from BSDCan (where I was talking a bit about the progress in switching to clang) and I was at EuroLLVM a couple of weeks earlier. Both conferences were full of corporate contributors to LLVM and FreeBSD (two projects that I work on). They like the fact that the license means that they don't need to run everything that they possibly want to do past their legal team and, over the past decade, they've all discovered (at different speeds) that it's much cheaper to engage the community and push work upstream than it is to maintain a private fork.

You get much better support from companies that join your community because they regard it as being good for them than if they dump code on you because they are legally obliged to. We don't want drive-by code dumps, we want long-term commitments to maintenance.

One reason for that is that the GCC team won't accept Apple's patches for new versions of Objective-C. Apple want to move Objective-C forward, GCC has become a barrier to that, so they support Clang/LLVM development. The version of GCC included is simply for legacy support and will be removed in due course once Clang support for C++ is good enough.

He wasn't complaining about the GPL, he was stating (correctly) that it is one of the key reasons some groups choose not to use GCC, particularly GPL version 3 (note that FreeBSD has not used any of the recent GCC releases specifically for that reason - they were fine using GPLv2).

It is a case of choosing the right tool for the job. Until recently their choice was to redefine the job (change the parts of their projects and licensing policy that GPLv3 conflicted with), keep the old tool (using older, GPLv2 licensed, releases of GCC only), or use something less stable/proven/compatible. They chose the middle option. Now option three is replaced by "use something else that is now stable/proven/compatible enough to be an alternative" they have taken that choice. Again this isn't complaining about GPLv3, it is simply refusing to use it because it is incompatible with some of their chosen goals.

I'm told there are technical reasons why Clang and the related tool chain are preferable to GCC in some circumstances too, though I'm out of the loop on that one so I don't know what they are or if they are significant to FreeBSD.

While his answer was rather terse and it would have helped to be less so (it made him appear to some - to you at least - like an anti-GPL troll), what you appear to have done in your response is set up a strawman to attack. This is the sort of thing that anti-GPL people (both within the open-source arena and outside it) will jump on as "proof" that GPL advocates are rabid loonies, so by defending the GPL in such a manner you may be harming the cause rather than helping it. You might want to be more careful not to come across that way (it may not have been your intention on this occasion, but it does seem that way by my reading).

Here is what I personally don't get, maybe someone can explain it to me, but WTF was it with RMS and the TiVo? It was ONE device that had NO choice but to be made the way it was. It wasn't like TiVo was being run by Cobra Commander here, they knew that if there was a way to get the content off the device (which is EXACTLY what running custom versions would have allowed) it would have been banhammered in the west quicker than you can say "copyright infringement".

Would he have been happier if it had used WinCE? Because that is what it feels like to me: that RMS wants ONLY true followers of "the way" to use anything GPL. And since there are multiple other OSes out there, including BSD and Windows Embedded, it just seems stupid to attack one specific corp and make other businesses afraid of being next on RMS' shit list, all over a device that frankly could have been made no other way without only being sold at China Mart and other "pro piracy" hardware sites.

Here is what I personally don't get, maybe someone can explain it to me, but WTF was it with RMS and the TiVo? It was ONE device that had NO choice but to be made the way it was.

If it had not been TiVo it would have been something else. His problem was the use of GPLed software in that manner. While it wasn't found to break the letter of the license it broke the clearly stated spirit of the license, so that wording was updated in v3 to patch the hole.

Would he have been more happy if it had used WinCE?

Yes, basically. Or some form of BSD (the licenses used there would allow this sort of use IIRC). Or anything else not GPL licensed. They had those choices available to them.

RMS is an absolutist on this and similar matters (some would say extremist, but I feel that label to be rather too strong here): if you want to use Free, keep it Free with your use, otherwise use something else (paying for it if need be).

it just seems stupid to attack one specific corp

He wasn't going after one specific corp, just the first one that (visibly) did it, and shoring up the hole before others tried. Remember that TiVo could keep using GPLv2 software as they already had; they'd just have to start maintaining it by other means once later versions switched to GPLv3, so the switch to GPLv3 did not explicitly stop them distributing their product.

and make other businesses afraid of being next on RMS' shit list and all over a device that frankly could have been made no other way without only being sold at China Mart and other "pro piracy" hardware sites.

That is where it falls down of course, but so does every other license commercial or otherwise - if you can't enforce the license in a territory people wanting to do something against the (letter of the) licence in that territory are at an advantage to those elsewhere. "Pro piracy" regimes are not a GPL specific problem and not really relevant here - you could just as easily state that VMWare's recent licensing model changes are an attack on compliant companies.

There are many people who think RMS is wrong on the matter, of course. Linus for instance still explicitly uses GPLv2, as evidenced by it being the license git is released under (the kernel is a different matter: that could not be switched even if he wanted to, because of how many contributions there have been where rights were not explicitly handed over to the project).

But I think you mean the opposite of what you said: people want to be free to do whatever they want with the software, including taking away the software's freedom.

That's the thing. Free software is not about your freedom, it's about the software's freedom. It is not for the benefit of anyone in particular, it is for the benefit of the whole humanity. When you think about where rms came from, and when you read his writings, you realize that his ideal is not an indifferent "here's some code, use as you wish". It is an ideologically grounded "here's some code, it's for everyone to use, and if you build upon it, the result is also for everyone to use".

That's the thing. Free software is not about your freedom, it's about the software's freedom. It is not for the benefit of anyone in particular, it is for the benefit of the whole humanity.

The problem with that argument is that what is best for the software and humanity is not clear cut. Most of the better software out there has significant corporate backing. Far too often, open source software falls into the trap of writing code that "works for me", where "me" is defined as the person who wrote it, yet tends not to "work for me", where "me" is defined as anyone else. Corporate backing tends to fix a lot of that because you have lots of "mes" working on the code, each of whom has a significant interest in making it work correctly and reliably (because they're getting paid to spend their time doing so). Any licensing requirements that are sufficiently onerous to scare away that corporate backing, therefore, tend to result in software of lesser quality.

IMO, the ideal situation is a BSD or similar license with the code owned by a non-profit organization. In this way, you have a reasonable assurance that the code won't suddenly get closed by its primary maintainer, and other companies are unlikely to want to close the code themselves because of the maintenance headaches of keeping a proprietary branch in sync with something that is regularly getting updated by others. However, companies are willing to work on the software and improve it because they don't have to worry about crossing some fuzzy line and getting sued.

We don't have to guess which model works best; at this point we have historical data. Your model failed with respect to X. MIT created and maintained an X that they released via the MIT license. All the UNIX vendors then took this MIT code and intermixed it with their custom code, creating value-added X's that were specific to their platform, and closed source. The effect was that the X that existed in the public domain was worthless for end users, and the X's that were worthwhile were closed. X itself couldn't progress because it fragmented, so all the interesting stuff existed in other layers. Years later, when there was a desire for a workable open X, the XFree86 project had to start essentially from scratch, and this took years. We still haven't gotten all the features that existed in those proprietary X's two decades ago.

That is the classic example of why BSD style licensing doesn't work. The primary maintainer is not unchanging.

Conversely the GPL has a long history of successful multi corporate contributions over time. The historical data simply refutes your theory of what should work.

Far too often, open source software falls into the trap of writing code that "works for me", where "me" is defined as the person who wrote it, yet tends not to "work for me", where "me" is defined as anyone else. Corporate backing tends to fix a lot of that because you have lots of "mes" working on the code, each of whom has a significant interest in making it work correctly and reliably (because they're getting paid to spend their time doing so).

I don't see this working in the real world, e.g. with Android. Corporate backing tends to push code of low quality (cf. the plethora of bugs that were fixed when the Android-specific code was put in the upstream Linux kernel), because it was written quickly due to the corporation feeling pressure from its competitors, and because its developers are paid for the time they spend coding; their interest is focused on solving the corporation's own problems (a corporation is a very big and selfish "me") with no regard to the effect that their solution will have on others' problems (cf. what happened with Apple and CUPS). And when a corporation has moved on to the next product, they have no interest whatsoever in either the old code itself or its users (cf. what happens every time a new release of Android is revealed and users would like to upgrade, but they can't because of the binary blobs or forked code).

Any licensing requirements that are sufficiently onerous to scare away that corporate backing, therefore, tend to result in software of lesser quality.

This is not what I'm seeing with GPL projects such as Linux and the GCC. I think that the code quality of an open source project depends more on the community it's able to gather than on its license. But even if we assume it's so, then the problem lies with the FUD about the license rather than in the license itself. FUD that I find in your comment, too:

they don't have to worry about crossing some fuzzy line and getting sued.

No company has ever been sued because of "crossing some fuzzy line". A couple of companies were sued because they absolutely refused to put a tarball on an FTP site despite the fact that the authors of that code had tried to convince them to do so for years. In comparison, Google is getting sued to hell because of BSD-licensed code. The truth is that no license will make you safe from copyright/patent trolls.

I always thought about it this way: the GPL is about user freedom, and BSD is about developer freedom. If you're using GPL'd software, you are explicitly given the right to know what it's doing and the right to change it. If you're developing with BSD software, you're given the right to control how it's integrated into your project and how it's distributed. Unfortunately it's impossible to guarantee both rights at the same time; the correct choice for each project depends a lot on how that project is meant to be used.

There are companies shipping FreeBSD-based products using MIPS, ARM, and PowerPC, as well as x86[-64]. ARM support in LLVM is good (ARM and Apple both work on it). MIPS support is mostly there if you use an external assembler, but the integrated assembler is broken - some MIPS people are working on it. PowerPC just has three guys working on it, but everything except some thread-local storage models and position-independent code on 32-bit works. We're probably going to flip the switch for x86[-64] to default to compiling the base system with clang this week (I meant to do it yesterday, but I was only near my computer at the same time as I was near beer, and thought that this was a commit that should be done sober). For other architectures, it may take a little bit longer.

Well, if somebody wants the code they write to be used by others and popularized that way, it would make sense to use BSD, or one of the less restrictive licenses. There is a plethora/bonanza (depending on which way one looks @ it) of licenses, which gives every software project plenty to choose from - from downright proprietary to less restrictive ones like BSDL to most restrictive ones like GPLv3. Given how many companies wouldn't touch GPL software, if a project wants its work to be used by such a comp

Some of us (BSD people) are old, wizened, and experienced. We have found by experience, often personal, that some companies are not actually run by Gollum, and find that it is beneficial to give back improvements so the public source tree has them, and the public maintains the new, improved code for free.

If software is infrastructure, and not part of your product, this is almost certainly the case - you want the software to work well, but you don't want the cost of maintaining it.

Notwithstanding this, many companies ARE run by Gollum, or arsehats, or corporate lawyers (guaranteed worse than the descendants of an illegitimate union of Gollum and arsehats). That is life - suck it up!

Not only licensing issues, but also performance. While Clang may not have all the bells and whistles of GCC, it does a good job compiling C code - and given that the base system is mostly C, even a small improvement in compile time (and memory usage) can make a big difference, especially for those who - like me - prefer to build and upgrade from source.
Another motive to seek alternatives (but not directly related to FreeBSD) is the lack of support for some architectures. Some "obsolete" architectures were rem

Some people argue that LLVM/Clang offers better code generation, compile time warnings, and code analysis. Some compiler developers think the gcc code has become too bloated and complicated. Even gcc devs have described the gcc code as "cumbersome".

There are various efforts to get Linux building under LLVM/Clang. Especially for embedded environments.

For example, if you have a printf()-like function whose format string has a default value, every use will emit: "warning: format string is not a string literal (potentially insecure) [-Wformat-security]" without a mention of the line in question. The default value here is also a literal, making the whole warning bogus in the first place.

1) It compiles slower than clang at -O0
2) It produces slower code than clang at -O3 and -Os
3) Its error and warning messages are not as good
4) It's not as modular as clang, which can be used in parts to produce useful tools like CSA
5) The GPL.

1) It compiles slower than clang at -O0
2) It produces slower code than clang at -O3 and -Os
3) Its error and warning messages are not as good
4) It's not as modular as clang, which can be used in parts to produce useful tools like CSA
5) The GPL.

Got facts to back this up? Every benchmark I have seen has shown GCC producing faster code than Clang about 90% of the time; Phoronix benchmarks in the last week have shown this to be true.

The derivative works companies build using BSD-licensed software are effectively proprietary software. And if they control the hardware, they can make sure it only runs binaries signed by them, so you can't even run the original unmodified code.

After all these years of BSD code existing and thriving without issue, it's amazing that people still spread this kind of fearmongering despite the fact that this scenario has never come true.

The original code and its contributors don't magically disappear the moment a company makes a closed change. And if a company makes contributions it doesn't show anyone, you're free to make your own open contribution that competes with it. In fact, it's in a company's best interests to rely on open contributions, because they don't want to waste time and manpower on, say, maintaining a compiler. This has proven to be the case with Clang. There hasn't been some evil proprietary fork that somehow ruined the world, and even if there were, people would just contribute free versions of the fork's features to the main tree. Companies are smart enough to know that this would happen and therefore realize that closed contributions of major features would be wasted effort.

Winsock isn't based on BSD sockets. And it wouldn't matter if it was. The original BSD source would still be there. Why does every BSD scaremonger act like the original source goes away? How many decades of BSD software has to exist without issue before people stop making this bogus argument?

I normally don't respond to ACs, but this is one thing I've noticed and found interesting... anybody notice how the anti-BSD GPLers sound a HELL of a lot like the RIAA? Both act as if copying is stealing, both come up with these giant FUD scenarios of doom which never seem to happen, and both act as if THEIR way is the only 'right' choice and frankly you are an idiot or "one of THEM!" if your views don't stay in lock step with their own.

I just find it fascinating that those that constantly scream they are for "freedom" sound a hell of a lot like the most restrictive bunch out there.

What? Have you read a quarterly statement by Apple recently? Apple makes more profit than any other phone maker and little of it came from software. Overall Apple made 47% gross margin with the vast majority coming from hardware. Apple prefers BSD because it has fewer restrictions than GPL. It's more practical to them. There isn't anything more nefarious than that reason.

Someday, when the whole world of technology is balkanized, where premium code contributions are walled off into proprietary software owned by Apple, Google and Facebook, we will wonder how "open source" became a form of virtual date rape.

Quite the opposite from what I've seen. iXsystems, Isilon, Netapp, etc., have found it much better to contribute non-special sauce code back to the project than keeping it in house. Any patches that you don't contribute, you have to maintain, and over time will drift from the mainline development of the public code. Unless the patches are very sensitive and core to your product differentiation, it makes no sense to keep them hidden.

People will contribute back to BSD projects because it is the most practical option.

GPL'd software is open source, so if you don't think the documentation is sufficient go and read the source and write better documentation for it.

Here's the thing: If I don't think the documentation is sufficient that's it. I'm not going to waste my time reading the gcc source. It's not that entertaining and I have a lot better things to do with my time. And if the people who wrote the compiler can't be bothered writing documentation, why should I?

Stallman and others deliberately fought against having APIs, proper documentation, and plugin support for all parts of the GCC toolchain, in order to keep control of the thing.

Mostly a result of a dispute with DEC SRC, when GCC's parent, the FSF, failed to enforce the GPL on Modula-3. The moving target known as GCC internals has been a problem ever since, mostly to "legitimate" GNU compiler developers.

LLVM, on the other hand, made an ingenious move with a standard and open IR. The overall modular design is another boon.

GCC has been in blind alleys before. There is no real reason for it not to survive this one. Another EGCS can happen to pull GCC into the future.

One of the FreeBSD developers gave a talk about this. FreeBSD has commercial users, and the new GCC just wouldn't have been an option for them. The older license-compatible version still in FreeBSD wasn't receiving updates, and it was beginning to affect developers too greatly.

Whether this compiler switch is a good thing or not depends on how much you hate the idea of commercial vendors using open source. GCC's strictness is admirable from an ideological perspective, but certainly not from a practical one. It should be noted that even Linus Torvalds adheres to a more pragmatic worldview [linux-mag.com]:

There are "extremists" in the free software world, but that's one major reason why I don't call what I do "free software" any more. I don't want to be associated with the people for whom it's about exclusion and hatred.

It's pretty damning when Linus himself no longer refers to Linux as free software because he doesn't like the extremism of the free software movement. And why should he? He's an engineer, not a religious fundamentalist.

... Which is hilarious because it is the BSD fundamentalists who are re-implementing huge projects just to avoid a license they don't like for no reason other than political correctness...

Untrue. Gcc is handicapped by political decisions in its technical design. It intentionally does not allow "others" to plug into some "internals" - internals that would facilitate other tool builders, especially those creating a graphical integrated development environment.

LLVM/Clang doesn't come with such technical baggage. It's modular rather than monolithic. It is a newer code base that is far easier to work with; even gcc devs moan about the bloat/complexity of their code base. Nearly all long-lived projects reach a point where it is better to toss the legacy code out and start from scratch, and gcc may very well have passed that point.

Untrue. Gcc is handicapped by political decisions in its technical design. It intentionally does not allow "others" to plug into some "internals" - internals that would facilitate other tool builders, especially those creating a graphical integrated development environment.

Software you can't plug-in to and interoperate with?
Isn't that the opposite of free software?

No, they're doing it to make sure that you can't extend GCC in any way, shape or form without those extensions themselves being under GPL. That's why Stallman always hated the idea of a publicly documented intermediate language - because it would allow third-party authors to write tools to process that IL (e.g. optimizers), which, because they don't link with GCC, wouldn't be 'derived works' in legal sense.

It's all laid out pretty clearly in their license for the runtime library - have a read [gnu.org].

Simple proof: GCC has precompiled headers, and is in some ways more modular than Clang (a separate preprocessor, for instance). What clang has is a better API for code analytics, where GCC's modular APIs are either too high level (preprocessed source) or too low level (intermediate GIMPLE code).

It's not quite as simple as that. The development of Clang is being funded by Apple. They need a BSD license so that they have the freedom to make further modifications down the line (without leaving them open). Yes, I'm a GPL advocate. No, I don't agree with Apple's ideology. But it's the case anyway.

In any case, it doesn't do us any harm to have an underdog in the world of open source C compilers. If you only have one option, then people start treating even the programme's eccentricities as standards. The need for compatibility encourages people to document. Not to mention that the different attitude taken in Clang from the outset means that it may be more suitable for certain applications. This page [llvm.org] makes for some interesting reading.

I think it is more likely that they are worried about the patent grant implications of GPL 3.

There are two things in the GPL that make companies nervous. One is the patent grant, the other is the revocation clause. It is possible to discover that the product that you are selling 100,000 instances of a month violates the GPL, and have your license instantly revoked. Getting the license granted again requires explicit interaction with the copyright holder (just getting into compliance does not make the license valid again), so if you can't get hold of every copyright holder then you have to stop distributing.

Having all this great open source compiler technology competing with each other is great, but one does wonder if the alienation caused by GPLv3 was worth it, as it is the primary reason both Apple and FreeBSD embraced Clang (in fact, Apple started the Clang project). As a result, GCC wasn't updated past GPLv2 on either platform. Apple couldn't integrate GCC with their IDE like they wanted, nor could FreeBSD's commercial clients work with it. Flexibility and pragmatism usually wins out over rigidness and ideology.

I've found that code that will compile properly under a variety of compilers tends to be of better quality.

One of my current projects started out on an old 2.x branch of GCC. When I finally got around to updating to a current GCC, I had to fix quite a few bugs before it would actually work - the different compiler was catching problems I hadn't noticed before.

Same when I tried compiling it under Visual Studio, or Clang - the more compilers I made it work under, the fewer bugs there were in the code.

Now, if a given program actually uses some special feature of GCC, that's fine - if only one compiler will do what you need, that's fine. Or if it's too much work to maintain a "port" - I stopped maintaining the VS project files a while ago, since I no longer used it. But if you have a chance to at least test it against a different compiler, go ahead and give it a shot.

Seriously, what features? I've never tried to write anything as low-level as a kernel, but I can't think of much compiler-specific stuff besides a bunch of compiler detection to set up properly-sized typedefs. Maybe differences in assembly syntax?

Also, clang was designed to be a drop-in replacement for gcc in many (not all) ways. Might make it easier for this case.

I believe it was really the Tivoization rule of GPLv3 that forced FreeBSD to abandon GCC in their base. FreeBSD wanted to ensure that a specific version of GCC would be in their highly integrated base operating system. The FreeBSD base has no real comparable analogue in the Linux world, but it's a system that is tested and designed to work together from the pseudocode to the final compiled product. GPLv3, with its Tivoization clause, however, made this tying together essentially illegal.

Also, the BSDs have long since desired to remove GCC from their base system simply because it has a different license than the rest of the base. They attempted using PCC, but the code it produced was not optimized to a level comparable with GCC. clang/LLVM, however, is both BSD licensed and produces well optimized code. It's also newer and cleaner code; sometimes rebuilding everything from scratch helps (though usually not).

I'm a 10 year+ FreeBSD contributor. You're all missing the point. Linux and BSD target different markets and are optimized in all ways (organization, release process, license, code) to fit these different needs. One isn't better or worse. Obviously Linux is larger in all ways than BSD, but larger doesn't mean better or we'd all just be using Windows. This isn't a question of llvm being better than gcc, bsd being better than linux, or bsd license being better than gpl. They are just different and do different things. Use what's appropriate for your needs and leave it at that.

I can say as a long time contributor to open source software that I am disgusted at reading the comments of blowhard 'enthusiasts' who denigrate the hard work and contributions of hundreds of people when they get in these pissing matches. I am friends with Linux kernel contributors and I can guarantee we don't flame each other in this manner.

At the moment I write this there are 297 comments mostly debating the merits of LLVM/Clang vs. GCC. There is not one mention of EGCS.

Fifteen years ago GCC was forked [google.com]. A group of people were frustrated with GCC and its leadership because they had contributions to make and talent to offer that were not welcome. They called their fork EGCS.

Why are we doing this? It's become increasingly clear in the course
of hacking events that the FSF's needs for gcc2 are at odds with the
objectives of many in the community who have done lots of hacking and
improvment [sic] over the years.

The GCC you use today is EGCS. A few years later EGCS was adopted as GCC 2.95 after the merits of EGCS became undeniable.

Looks like we've come full circle. The cool kids are off in the weeds making cool stuff. Better stuff, and the 'Powers That Be' are not interested. The 'needs' of the FSF today are no longer in sync with the 'needs' of the developers of today.

The bottom line is that GCC as it is, with its leadership, code base and license agenda, doesn't cut it for those who have the talent, motivation and capital to create a tool chain that does cut it. You don't get to impede that, however righteous you think you are.

I've heard positive and negative claims regarding this. Certainly, Apple thinks it's production-ready (I think it was Xcode 4.2 that they stopped shipping GCC). Do you have a link showing that generated code is significantly worse? Which versions were compared?

At -O2 and -O3, clang and gcc are within 10% for the vast majority of code, with no overall winner. There are a few corner cases, however:

The autovectorisation support in LLVM is a very elegant design, but is very new code and so still performs worse than GCC in a lot of cases (about 70% of the autovectorisation test suite is faster with GCC).

Clang has no in-tree support for OpenMP, so anything using OpenMP (vaguely competently) will be faster with gcc because clang will fall back to the single-threaded version.

GCC's Objective-C support is just embarrassing, and (on non-Apple platforms) performance can be an order of magnitude better with clang, with a 20-50% speedup being pretty common.

If you're interested, take a look at the talk by Hal Finkel at EuroLLVM a couple of weeks ago. I believe there are actually three vectorisers in progress for LLVM, but the one Hal works on is particularly interesting because it approaches the problem with a very general solution, while the GCC version just transforms hard-coded patterns. I'm not sure if this code made it into trunk just before or just after 3.1 was branched, but I believe the plan for 3.2 is to have it along with a pattern-matching approach.

Static for functions or variables at file scope makes them "private" to that file. E.g. in this case, several source files can define global variables named world_type without collisions, provided all declare them static. One of the files might omit the static qualifier, but if two or more source files declare non-static global variables named world_type then the linker will (correctly) complain when linking.

According to Kernighan and Ritchie, the static modifier restricts the scope of externally declared variables to the rest of the source file. The AC might not want to extend the GPL and BSD definitions of free/unencumbered to non-software contexts.