Posted
by
Unknown Lamer
on Wednesday September 25, 2013 @11:25AM
from the john-titor-joins-the-team dept.

An anonymous reader writes "LLVM's libc++ standard library (an alternative to GNU libstdc++) now has full support for C++1y, which is expected to become C++14 next year. Code merged this week implements the full C++1y standard library, with support for new language features in the Clang compiler frontend nearly complete."
GCC has some support for the soon-to-be standard too. The C++ standards committee is expected to produce a more or less final draft in just a few weeks. The LLVM and GCC C++14 status pages both have links to the proposals for the new features.

Now that Clang/LLVM has got this finished, I'm wondering what a system would look like with:

* Linux as the kernel
* Clang/LLVM as the system C/C++ compiler
* Heirloom Toolchest as the basic userland toolchain
* Wayland as the underlying display system
* musl as the system C library

That would be Linux, but would contain almost no GNU code. Not that I have anything against GNU, but the Heirloom Toolchest, Clang, and musl are all more standards compliant, smaller, and often faster than their GNU counterparts. I wonder what a Linux distribution like that would look like. I'd use it.

(I hate how "GNU's Not Unix!" is really becoming more and more true. Unix was about minimalism, and sometimes GNU seems like it's about stuffing everything possible into every tool.)

Faster at compiling, at least. And there's lots of code floating around that was compiled with GCC version 4.3 or earlier - Clang/LLVM-compiled code runs circles around that. GCC had a huge speed boost at version 4.4 which took time for LLVM to catch up to. The latest version of LLVM is actually very competitive against the latest GCC except where the GCC code makes use of OpenMP (which LLVM is still busy implementing).

The latest version of LLVM is actually very competitive against the latest GCC except where the GCC code makes use of OpenMP (which LLVM is still busy implementing).

For values of "competitive" meaning "around one optimization level slower" on average (although particular cases vary widely). Granted, gcc tends to compile around that much slower in return, but for production code, you compile once and run many times.

And on the high end of optimizations... gcc compiles one piece of code with LTO in 40 minutes on a 256MB raspberry pi, while clang OOMed after several hours on my main box (8MB ram + 8MB ssd swap).

And on the high end of optimizations... gcc compiles one piece of code with LTO in 40 minutes on a 256MB raspberry pi, while clang OOMed after several hours on my main box (8MB ram + 8MB ssd swap).

You're compiling on a system with only 8 MB of ram? Are you still stuck in 1993 or something?
Memory restrictions in embedded systems are one thing, but on a development machine it seems bizarre. Like cutting off your legs just to see how slow you can run a marathon on stumps.

From my experience with both the latest GCC and Clang, Clang is about two times faster and uses 25% less memory (40% less than GCC 4.8 which somehow uses much more memory than its predecessors).

Optimization with clang is also slightly better. It depends what sort of optimization you're looking for, but Clang is quite good at getting rid of the abstraction penalty, which is the only thing I really care about.

Now that Clang/LLVM has got this finished, I'm wondering what a system would look like with:

... * Clang/LLVM as the system C/C++ compiler

Slower

I'm developing a tiny game engine in C(11) and I've built profiling into the core, and I profile many of the math-heavy parts separately as well. Clang 3.3 actually almost always does better than gcc 4.8 here. Not by much, but there you have it. You should take a look at the SLP vectorizer, which will come enabled at -O3 as of Clang 3.4 but can already be enabled separately with -fslp-vectorize.

So for single-threaded code I'm already leaning towards Clang. OpenMP is going to get integrated as well, and once that lands, all bets are off. Exciting times to be a C/C++ dev... (or any other kind, for that matter; LuaJIT never ceases to amaze me).

Aren't vectorizers a serious language smell? Once the compiler is basically forced to guesstimate what a mutable-variable-manipulating C-level program is actually trying to do, it's anyone's guess what the vectorizer will be able to deduce.

So you can either use the vectorizer or start writing inline assembly.

This is a false dichotomy, if I've ever seen one. There are languages such as APL, J, K, etc. that don't need vectorizers and can still generate native code with vector instructions, simply because their high-level operators already contain the algebraic information that the vectorizers for C/C++ are trying to recover from loopy code.

X isn't GNU licensed... replacing it with Wayland would not decrease the amount of GNU code in the system.

And GNU/Linux was a term coined by Stallman, probably because he may have been getting antsy about there not being an official GNU kernel yet, and so he figured he'd just appropriate another one without actually asking anybody... one that already existed with a GNU license, and was already in respectable condition. But Linux was never actually part of the GNU project, so calling it GNU/Linux strikes

And GNU/Linux was a term coined by Stallman, probably because he may have been getting antsy about there not being an official GNU kernel yet, and so he figured he'd just appropriate another one without actually asking anybody

It is very appropriate, but at the time, a whole lot of people couldn't understand why. Now you have Android: the Linux kernel without the GNU userland. GNU/Linux and Android are not compatible.

No... it's not appropriate. It completely misrepresents the origins of Linux as having anything to do with the GNU project.

Linux was GNU licensed, but was not *ever* part of the GNU project. If GNU made a distro of Linux it would be reasonable to call that distro GNU/Linux, but afaik they do not. The term would still be inapplicable to other distros, however.

Listing all Linux distributions in an application's "system requirements" would require excessive space. This means there's a need for a precise name for the set of Linux distributions that support Gtk+ and/or Qt userland as opposed to only the Android userland. Would Qt/Linux or Gtk+/Linux be more accurate?

No... it's not appropriate. It completely misrepresents the origins of Linux as having anything to do with the GNU project.

I thought it was petty on Stallman's part to try and call it GNU/Linux, but without the GNU tools there is no Linux. I can see why it bothered Stallman that all the glory and name recognition of Linux went to Linus and his kernel.

That took a long time to happen. Linux gained fame as a kernel combined with the GNU tools over 20 years ago. Stallman's complaint originated early on. Wikipedia helpfully has a history [wikipedia.org] section on it.

Nobody will argue that the utilities that came with Linux were all GNU, but that does not make it GNU/Linux any more than the system that I used while I was at university should have been called GNU/HPUX because the system administrator had replaced all the standard HPUX tools with GNU ones.

Linux was originally developed with and for the tools that came with Minix at the time... almost all of which were also GNU, but nobody ever called Minix GNU/Minix. It's more the case that the GNU tools were ported

"Nobody will argue that the utilities that came with Linux were all GNU, but that does not make it GNU/Linux "

Neither does it make an entire OS Linux just because the kernel is Linux. That being said, I'll argue it. The user space tools (which you are calling utilities) weren't all GNU. Some distributions had a BSD-based init system while others had an AT&T System V based init [wikipedia.org]. Neither of those is from GNU, and yet almost all but Gentoo had one of them until systemd [wikipedia.org] came along. While it is indeed t

The GNU code was compiled for and then run on that particular platform. The platform does not become GNU just because one is running GNU tools on it any more than a somebody running a JVM on windows turns their windows box into a Java machine. It's still just a windows machine that is running Java.

Similarly, I used an HPUX that had GNU tools when I was at university. That doesn't make it GNU/HPUX, because it had no affiliation with the GNU project. It might still be GNU code, but the platform it was

I don't understand that argument. The HPUX kernel has nothing to do with the Gnu project. Neither does the Linux kernel, except that Linus likes GPLv2. If Linux were a Gnu project, it would make sense to just refer to a Gnu OS. (Personally, I generally use phrases like "Linux distro", "Linux kernel", "Gnu userland", "X windows", and avoid the whole Gnu/Linux controversy that way.)

Why are you defining "Linux" as just the kernel? The original meaning was the entire OS, with the phrase "Linux kernel" referring to just the kernel. If you want to err on the side of brevity, "Linux" is accurate, "GNU/Linux" is what RMS wants everyone to call it, and "GNU/MIT/BSD/Apache/Canonical/RHEL/SUSE/Linux" would be more accurate. I'd advise ignoring the last two options and calling the OS by its original name-- Linux.

"All of that being said, calling it GNU/Linux rather than Linux is better than calling a whole OS Linux when only the kernel and util-linux are actually Linux, or calling it Linux/BSD/GNU or Linux/System5/GNU. None of these are accurate."

The "it" you are referring to doesn't need to be referred to at all. If you want to talk components, there are names for those. Otherwise use the name of the distribution. No need for contrived names for subsets when those names are chosen to satisfy an agenda. Respect

It does when "free Linux-based OS with a multiwindow GUI" takes up the majority of a 50-character "Comment Subject" box. You need the "multiwindow GUI" part to distinguish the multitude of Linux distributions for desktop and laptop PCs from Android and from Linux distributions designed for servers.

I need brevity because there are plenty of Slashdot users who repeatedly point to Google Play Store as evidence that there are plenty of games and other commercial apps "on Linux". So is there a shorter term for Linux-based systems that aren't Android or embedded?

Just call it a frigging Linux based distribution, but only say Linux when you mean the kernel and only the kernel. If you must err on the side of brevity, GNU/Linux is best, since even the source that isn't from GNU has been compiled by GCC up until now, at least.

GNU/Linux might be best, but it's not correct. There are many components of the average distro that are not GNU e.g. the GUI whether it be Gnome or KDE or XFCE or whatever.

"GNU/Linux" has also lost the battle with respect to common usage. When people say "Linux" they are usually referring to the whole distro and if you go round insisting on GNU/Linux, you'll end up having to explain what you mean almost every time.

It's also rather bizarre to name a package after the compiler with which the development team

Nothing in the GPL stipulates that the FSF gets to rename your project if you use some of their code. It has only occurred in this case due to Stallman's jealousy and no one should entertain his ego. GNU/Linux is a term that should be universally rejected, software freedom demands it.

That is the justification that is frequently used, but if you completely replace the toolchain of any unix-like operating system with GNU ones, that does not make that particular system suddenly GNU. The GNU tools were all ported to Linux, but Linux is not and has never been part of the GNU project, so calling it GNU/Linux is a misnomer.

This has nothing to do with the kernel itself or the suite of tools it is distributed with. It has to do with calling something GNU that isn't GNU. It completely misrepresents what Linux originally was... which is a free replacement for Minix. It was Stallman's decision to call it GNU/Linux, which, I'd like to point out, he did without asking anybody, effectively trying to appropriate Linux as if it had been part of the GNU project all along... even though, again, it never was.

(I hate how "GNU's Not Unix!" is really becoming more and more true. Unix was about minimalism, and sometimes GNU seems like it's about stuffing everything possible into every tool.)

If they didn't, it'd be easier for people from the proprietary and BSD-style-license land (like Apple and Google) to circumvent the spirit of the GPL. As long as you have to link into their code, or copy and paste it, they control what you can do with it. If you could just invoke whatever little piece of GPL code you wanted with arguments, you could progressively replace their tools without adding anything back to the original GCC tools -- if you can code around a bug in a GNU project without submitting a patch to that project, that's adverse to the opening of the code in the first place. GCC's middle end is intentionally blurry, and not modularized into a separate tool, to keep people from taking their own parser and bolting GCC's optimizations onto it, just as an example.

It follows that changes to the tools, and the process of creating the tools, stay under the political thumb of various FSF-aligned BDFLs.

I think it was always contingent on who uses the software and who develops it. For a long time GPL-land was superior because it could collect a lot of small developers' free time, and these people were developing tools for their own use, thus they all had incentives to stay within the ecosystem and they all benefited.

In a world where large corporations are pouring hundreds of paid developers into OSS projects, using the OSS licensing not for ideological reasons but for business model reasons, the GPL paradigm

Except that the Gnu project was intended for real-world benefit. The idea was to create a body of GPLed code that would be so useful people would write free software using Gnu code rather than reject that body of code to write proprietary software. Free software is better for the world than equivalent proprietary software, other things being equal. This has had some success.

The Gnu project has made many compromises and concessions from ideological purity. The LGPL was one such. So are the licensing

Sure, but if you diff the GPL and BSD philosophies, the BSD approach is more "we want anyone to be able to use our code for anything" while GPL is "you must give back". The more practical approach will win out in the end.

For whatever anecdotes are worth, every place I've ever worked, the hurdles to using BSD-style licensed code were small, just some process by which legal checks that it really is a BSD-style license. GPL code, on the other hand, was always more effort to grapple with legal to distribute t

On the one hand, I think you're being a bit short-sighted on practicality. On the other, the desire not to maintain a fork seems to be getting lots of people to kick BSD-licensed changes back upstream.

It's a perfect example, it's non-modular and monolithic, requiring recompiles of the entire kernel to include support for new hardware and filesystems. This is their way of making it as hard as possible for hardware vendors to interact with a Linux kernel without publishing driver source, and preferably getting that source into Linus's tree. In Linux's case the intentional blurriness is in the Linux kernel ABI, which is overtly kept informal and unstable.

No, you misunderstand. When someone copies your code, you lose access to any patches they may make. GNU effectively requires patches to be published, as a condition of redistribution. BSD and proprietary licenses do not, so GNU encourages a more free time/community-based approach that consolidates projects, because forking is almost always a waste of energy and it's almost always better to just submit your patches to a single repository.

I hate how "GNU's Not Unix!" is really becoming more and more true. Unix was about minimalism, and sometimes GNU seems like it's about stuffing everything possible into every tool.

GNU is written with emacs, which is written in Lisp, by Richard Stallman. As is the case with most infernal Lispers, they don't subscribe to the Unix way of life. Stallman settled on Linux simply because emacs didn't have a bootloader yet; thank god eMach never came to fruition. :-P

I wish someone would port 4.4BSD-Lite to Linux... I want to run FreeBSD with a Linux kernel.

It really doesn't matter whether clang/llvm catches up to gcc or not in terms of speed or any other feature.

the crucial issue is what's strategically best for the long term interests of free software, and there is no way in hell that a compiler developed by the Lords of the Walled Garden at Apple is ever going to be a good thing for free software.

Apple's agenda is to sabotage copyleft and the GPL, because they want the benefits they can get from free code from tens of thousands of developers but without having to pay the entirely reasonable price of distributing and freely licensing the source along with any modified binaries.

The fact that Apple has been - and still is - smarter than Microsoft in their anti-free-software campaign just highlights how dangerous they are. Microsoft took the stupid head-on approach to attacking free software. Apple's method has been stealthy subversion and erosion of principles. Smart, competent evil is far worse than stupid, incompetent evil.

Surely those sort of decisions first need a case where - despite all efforts to the contrary - there's something that can't be done as easily any other way as it could if you moved to a new C++ standard.

That's my biggest problem with most of these standards - quite what they add is hard to define, especially when the problems you point out are taken into account. Is there really anything in such a standard that couldn't be replicated by the programmers quite easily enough (or close enough), and would actua

I think the use case for its advantage would be development time by taking advantage of certain features that didn't exist in the language previously... but saving development time is meaningless if you can't actually develop for your intended target architecture.

Well, I'd say he's OK. IMO it's still too early to write C++11 code for any real products. It's not only a matter of the tools (that's the easy part) but the fact that most C++ developers aren't going to be familiar with those changes and new ways of doing things yet.

What you seem to mean is it's too early to write C++11 code for any real products if your development team aren't up to speed with it. That's a totally fair statement, but it's clearly not true that it's too early to write C++11 for real products -- we do. We're liable to swap to C++14 as soon as commercial compliant compilers become fully stable and settled (and, of course, that we can convince our IT support to license them, which may take a year).

I don't get the argument "It's not universally supported yet". It's a compiler, not an operating system. Unless perhaps you are trying to share a code base across extremely diverse systems. And even then, the backward compatibility is such that it just means you have to be careful about restricting certain features in the shared part of the code base. Most developers can cope with working on multiple dialects of a programming language, even, astonishingly, multiple languages! It doesn't make sense to not use

.. I want to start hacking around with it. But then I remember how much I love and I just don't care anymore.

I gave up on C++ after I realized how much it confuses the problem of writing efficient software with the problem of giving you individually very efficient components that you're free to combine in ways that are either elegant, or efficient, but not both at once. I feel like it's 1970's all over again when I look at C++ code. Virtually *any* piece of C++ code, for that matter. There are better things one can do with one's life than endless psychological self-harm.

I'd never touch Python and C# unless my life depended on it. And as far as "non-trivial projects" are concerned, I think you've misspelled "Common Lisp", at least in this and the following decade. What the more distant future holds I don't profess to know.

Lisp has its points... but it's not that great. And many of the libraries that I want to use are a real pain to use from it. And the only decent development environment I've found is actually for Scheme (Racket), and while it provides, e.g., futures, it doesn't actually execute the code in parallel. (Yes, I know that's not actually a criticism of Common Lisp. For that, stick with the libraries and the lack of a development environment.) P.S.: I once, long ago, bought a copy of Franz Lisp (my first common

Since I tend to find ALL environments, except things like Java, Python, and Ruby, a bit buggy, I can't really say. When did you hear that it was particularly buggy? (As it's under considerable development, the precise version is significant.)

I tend to find more problems with my C++ code than with my D code. Of course, that's probably me, but... And the C++ code is harder to debug. OTOH, if I need to use a library that crosses language boundaries, that does add another problematic element. It may dep

Lisp has its points... but it's not that great. And many of the libraries that I want to use are a real pain to use from it. And the only decent development environment I've found is actually for Scheme (Racket), and while it provides, e.g., futures, it doesn't actually execute the code in parallel. (Yes, I know that's not actually a criticism of Common Lisp. For that, stick with the libraries and the lack of a development environment.)
P.S.: I once, long ago, bought a copy of Franz Lisp (my first Common Lisp). It was OK, but not great. And updating it would have cost me the other arm and leg. A usable version isn't cheap. So I formed my opinions recently based on SBCL and CMUCL.

Emacs with SLIME [common-lisp.net] nowadays is pretty nice. I think it's the best language environment I've ever used really, and what keeps me going back to CL despite knowing better (shouldn't have picked up SML...). The debugger is particularly excellent, and the inspector, xref support, etc. aren't too shabby either. As for libraries, there are plenty of excellent libraries for most every problem domain, but finding them over the half-finished ones and then managing your personal library... was unpleasant. Until Quicklisp [quicklisp.org]

I haven't used EMACS for decades. When I did, my fingers hurt from the contortions. They're LESS willing to accept multiple simultaneous keypresses now than they were. (It's not Lisp, but I tried to use SLIME with Bigloo Scheme, and I wasn't impressed. Of course, there were lots of problems, and things didn't load properly, but still... geany is much easier to use. It *does* have a Lisp mode, but I'm not sure what it does, and nobody has ever recommended it. As I don't use Lisp currently, I haven't investigated

My memories of Lisp are from projects I was doing without understanding what I was doing at the time. It seemed so easy to change things as I went along. I've never seen as good a programming language for when you don't know what you're doing.

C and C++ are very different languages nowadays. It's also much easier to learn C++ than it used to be, since the STL stuff is much easier to use than the C-like stuff. Granted, it's harder to learn all the details, but you can look those up when you encounter them.

You are right, one can do much more with templates. In D, templates are a Turing-complete language. I still don't like them. I tend to make decisions at run time, not at compile time, because they depend on the data that is read. This, of course, depends entirely on what you are doing. For some things, templates are great. But from my point of view the only useful part of them is captured by generics. (Don't think of Java generics, think of Ada. Java got it wrong because they kludged them into the language

I'm not familiar with Ada generics. I've been looking at Java and C#. C++ templates seem to do what those generics do about as easily as the generics, while being capable of much more. They do have the disadvantage that you have to be careful of bloat.

I don't see why smart pointers can't release nets of data. They can certainly cascade, since each delete triggers another destructor, which can delete more stuff. C and C++ can be garbage-collected. You have to use a conservative collector, which can

Personally, I tend to use the <quote> and <em> tags in over half the posts I make.

If you don't like Slashdot's default behavior, switch your default posting method to "Extrans" instead of "Plain Old Text," and all your accidental HTML tags will get escaped out appropriately. (Personally, I'd have used "[ ]" for a substitution indication rather than "< >", but YMMV.)