I'm having a debate with a friend and we're wondering why so many open source projects have decided to go with C instead of C++. Projects such as Apache, GTK, Gnome and more opted for C, but why not C++ since it's almost the same?

We're precisely looking for the reasons that would have led those projects (not only those I've listed but all C projects) to go with C instead of C++. Topics can be performance, ease of programming, debugging, testing, conception, etc.


55

A nit: C++ is far, far, far from "almost the same" as C.
–
Jed SmithOct 11 '09 at 21:44

3

They went with C because it was around at the time. C++ is not even close to C, either.
–
GManNickGOct 11 '09 at 21:49

14

Those aiming to close it as 'subjective and argumentative' are being overly protective - it is being dealt with professionally and non-argumentatively.
–
Jonathan LefflerOct 11 '09 at 21:50

2

C is still used probably because it out-classes (no pun intended) C++ which was a mistake of a language and C# which tried to clean up the mess made by C++. When you read source code from any programming language - check out how much overhead text is required to support the actual functioning code. C is cleaner - but open source code is far from being a good example of how to use C.
–
logoutJan 19 '10 at 1:26

Eric Raymond's wonderful book "The Art of Unix Programming" has some reflections on this issue (the whole book is well worth reading in either the paper or free online editions, I'm just pointing to the relevant section -- Eric was involved with the coining and introduction of the term "open source", and is always well worth reading;-0).

Summarizing that section, Raymond claims that "OO languages show some tendency to suck programmers into the trap of excessive layering" and Unix programmers (and by extension open-source programmers) resist that trap of "thick glue".

Later in the book, you find some considerations specifically about C++, such as "It may be that C++'s realization of OO is particularly problem-prone". Whether you agree or not, the whole text is well worth reading (I can hardly do it justice here!-), and rich with bibliography pointing you to many other relevant studies and publications.

I think it's a fair point on OO. Of course, today, C++ is much less about OO, and this objection pretty much falls away, but when most of these projects (and the Unix philosophy) were being founded, C++ was nonstandard, and it was all about OO.
–
jalfOct 11 '09 at 22:30

4

I've seen excessive data hiding at times. A minor requirement change can result in serious refactoring, or else everything gets made public out of desperation. Also, I can't help noticing that a key OOP principle (often expressed as "you don't draw the shape, the shape draws itself") is violated even by GOF design patterns. That's why I'm not embarrassed to write "tool" classes that do what they do to some other object. IMO, it's just a matter of doing what works rather than building ivory towers - OOP should be treated as a toolkit, not a religion.
–
Steve314Oct 12 '09 at 3:03

@jalf: The unix philosophy was formed some years before Apache was started...
–
gnudOct 14 '09 at 20:35

You can easily find more of these, and while it's in his nature to get a bit flamey about these things, there are some valid points.

One of the more interesting (from where I'm sitting, anyway) is the observation that C++ compilers and libraries were (and to some degree are) a lot more buggy than the corresponding C compilers. This stands to reason given the relative complexities of the two languages.

It smells a little of "not invented here" (NIH) syndrome, but when you have the entire Linux kernel developer base, you can sometimes afford to reinvent things "The Right Way".

I'd like to point out that there's vast differences between 1998 C++ and 2009 C++, in practice anyway. The standard is generally implemented well in modern compilers, aside from corner cases and mistakes like "export". We now understand exception safety. The Boost project filled in some major holes, and some libraries are going into C++0X (X being in hex, of course). The reasons to use C instead of C++ were far stronger back then.
–
David ThornleyOct 12 '09 at 14:04

If C++0X is really hex, it's still about 39 years away.
–
jbcreixOct 13 '09 at 6:14

Since when is X used in hex? Except as a prefix to the real number?
–
gnudOct 14 '09 at 20:38

It's an X as in replace it with the correct number. You'd prefer a "." or maybe a "?". Anyways, the closest 0X year in hex is 2048, however they could keep delaying it until 2063.
–
jbcreixOct 15 '09 at 2:52

A lot of the projects started before C++ was standardized, so C was the obvious choice and a change later would be hard. C was standardized about a decade before C++, and has been more nearly portable for even longer. So, it was largely a pragmatic decision at the time, inspired in part by the Unix heritage of using C for most code.

C++ is a mess. It is an overly complicated language, so complicated that only a few people can claim to know all of it. And there are even fewer compilers that really comply with the C++ standard.

So I think the reason is simplicity and portability.

If you want higher-level and object-oriented programming, then I think C++ just competes with others like Python. (Note that I programmed in C++ for a few years; it's fast and has some features from higher-level languages that speed up development, no offence.)

+1 because, though this is edging towards flamebait, I agree. C++ tries to do everything well, and ends up doing nothing spectacularly. Thus, for any task, C++ may do a decent job, but some other language always does a better job.
–
Chris LutzOct 11 '09 at 23:10

1

"ends up doing nothing spectacularly" - although there are some things it does which no other (commonly used) language does at all. So whether it does them spectacularly well becomes a difficult argument to have, since there's nothing to compare with. And I'm pretty sure it's true of all languages that there are fewer compliant implementations than there are people who fully understand the language. For it to be otherwise, those who understand the language would have to write more than one implementation each ;-)
–
Steve JessopOct 12 '09 at 11:17

2

I wonder how many people know all the bits of C? Probably not many either.
–
Johannes Schaub - litbOct 12 '09 at 13:47

2

WG14/N1124 (C99 with corrigenda) is 538 pages. The "C++ Standard" is 782 pages, excluding forewords etc. So sure, there's a difference, but it's a big job to fully absorb either language. I think it's a mistake to say the difference between C99 and K&R is "small" - those extra hundreds of pages aren't just because the committee has been reading too much JK Rowling and thinks books need to be fat to sell.
–
Steve JessopOct 12 '09 at 14:34

1

@ShreevatsaR: You don't need to be familiar with all of either standard to use either language well. You do need to learn more to use C++ effectively, but it's also easier to write a lot of things in C++ than C. It's easier to learn C well enough to write a simple program, but if you're going to work on a few large systems the learning time simply isn't that important compared to development time.
–
David ThornleyOct 12 '09 at 15:12

I have worked on a few C++ projects in my time, all of which have ended in tears one way or the other. At the most fundamental level, the truth is that people can't be trusted. They can't be trusted to write good code, they can't be trusted to debug it, and they certainly can't be trusted to understand it when they have to come back and modify it again weeks/months later.

C code doesn't have a lot of the weird stuff in C++ that makes it hard to debug (constructors/destructors, anything that happens with static global objects during cpp_initialize() time, etc.). That just makes it easier to deal with when developing and maintaining a big project.

Maybe I'm a luddite, but every time someone says "C++" around me I get shivers.

Crappy developers will write crappy code in every language; it's hardly C++'s fault when they write crappy C++. I'll counter your experience by saying that every C++ project I've worked on has been successful, the current one being about 750,000 lines of code.
–
Brian EnsinkOct 11 '09 at 21:45

7

@Brian: I agree with you, but on the other hand, I'd far rather work with a mediocre C programmer than a mediocre C++ programmer. A mediocre C programmer can write usable code. A mediocre C++ programmer will make your program explode. The steeper learning curve might be a valid reason for preferring C
–
jalfOct 11 '09 at 22:27

2

I agree with @Brian in principle, but in practice I have just seen too many disasters. I shouldn't say that the projects I worked on weren't successful; one in particular is an enormous success. Unfortunately a HUGE number of bugs can be tracked down to plain old awful programming. The fact that said awful programming was done in C++ just made it harder to debug.
–
Carl NorumOct 11 '09 at 22:39

2

My experience on very large C++ projects is that it works just fine, FWIW. I'd strongly recommend mandatory code reviews to catch the bad stuff early.
–
David ThornleyOct 12 '09 at 13:51

What happens if the huge project started off as a small project, and people wrote a bunch of crap that the rest of us are forced to live with for all eternity? Time for refactoring and existing-code improvements is the first thing to be crushed out of the schedule when the deadline rears its ugly head.
–
Carl NorumOct 12 '09 at 16:25

Some people have mentioned portability, but in this day, the portability of C++ isn't much of an issue (it runs on anything GCC runs on, which is essentially anything). However, portability is more than just architecture-to-architecture or OS-to-OS. In the case of C++, it includes compiler-to-compiler.

Suppose you overload a function, say int dostuff(const char *src, char *dest) and int dostuff(const char *src, char *dest, size_t len). All bets are instantly off. You now have two distinct functions, and the compiler has to emit each one and give each a unique name. So C++ performs (where C doesn't) name mangling, which means those two functions might get translated to _dostuff_cp_cp and _dostuff_cp_cp_s (so that each version of the function that takes different arguments gets a different name).

The problem with this is (and I consider this a huge mistake, even though it's not the only problem with cross-compiler portability in C++) that the C++ standard left the details of how to mangle these names up to the compiler. So while one C++ compiler may do that, another may do _cp_cp_s_dostuff, and yet another may do _dostuff_my_compiler_is_teh_coolest_char_ptr_char_ptr_size_t. The problem is exacerbated (always find a way to sneak this word into anything you say or write) by the fact that you have to mangle names for more than just overloaded functions - what about methods and namespaces and method overloading and operator overloading and... (the list goes on). There is only one standard way to ensure that your function's name is actually what you expect it to be in C++:

extern "C" int dostuff(const char *src, char *dest);

Many applications need (or at least find it very useful) to have a standard ABI, which C provides. Apache, for example, couldn't be nearly as cross-platform and easily extensible if it were written in C++: you'd have to account for the name mangling of a particular compiler (and a particular compiler version; GCC has changed its scheme a few times in its history) or require that everyone use the same compiler universally, which means that every time you upgrade your C++ compiler to one with an incompatible name-mangling scheme, you have to recompile all your C++ programs.

This post turned into something of a monster, but I think it illustrates a good point, and I'm too tired to try to trim it down.

I don't see anything there that doesn't back up my core argument that C++ isn't binary compatible from compiler to compiler.
–
Chris LutzOct 12 '09 at 0:32

"The problem is (and I consider this a huge mistake, though I'm not versed in the potential reasons for this decision) that the C++ standard left the details of how to mangle these names up to the compiler. " - this part is wrong, and it's perpetuating a myth. Somewhere there is an interview with Bjarne Stroustrup where he demolishes this criticism but I couldn't find it on the web so I posted the FAQ link. Name mangling is not the problem.
–
user181548Oct 12 '09 at 0:35

I still think that a standardized name mangling scheme, while not the only problem to be solved, would at least be a decent push towards cross-compiler compatibility, and minimize the damage (even if it can't be completely accounted for due to vtables). But I will take out the offending line.
–
Chris LutzOct 12 '09 at 0:42

1

@Kinopiko: Your link actually agrees with Chris, but goes on to say that the problem he's describing is just the tip of the iceberg when it comes to C++'s portability problems.
–
ChuckOct 12 '09 at 0:55

As someone who dislikes C++ and would pick C over it any day, I can at least give you my impressions on the topic. C++ has several attributes that make it unappealing:

Complicated objects. C++ has tons of ability to speed up OO, which makes the language very complex.

Nonstandard syntax. Even today most C++ compilers support quirks that make ensuring successful and correct compilation between compilers difficult.

Nonstandard libraries. Compared to C libraries, C++ libraries are not nearly as standardized across systems. Having had to deal with Make issues associated with this before I can tell you that going with C is a big time saver.

That said, C++ does have the benefits of supporting objects. But when it comes down to it, even for large projects, modularity can be accomplished without objects. When you add in the fact that essentially every programmer who might contribute code to any project can program C, it seems hard to make the choice to go with anything else if you need to write your code that close to the metal.

All that said, many projects jump over C++ and go to languages like Python, Java, or Ruby because they provide more abstraction and faster development. When you add in their ability to support compiling out to/loading in from C code for parts that need the performance kick, C++ loses what edge it could have had.

I don't understand your "complicated objects," "tons of ability to speed up OO" point. Like, not that I disagree — I just don't get what you're saying there.
–
ChuckOct 12 '09 at 0:52

The last point on libraries was true, but if you look at the way things have been recently, you'll observe otherwise. There are compilers available for every imaginable platform that support the full range of ISO C++ libraries, like GCC on anything remotely Unix, and Visual Studio on Windows. It is effectively standardised as far as the STL etc. goes.
–
blwy10Oct 12 '09 at 2:04

1

@Chuck - Take for instance the keyword "virtual" - this exists solely because making every function virtual would incur a noticeable performance cost. It is an optimisation to make OO fast.
–
Tom LeysOct 12 '09 at 3:21

@Tom: it’s a perfectly fine language convention. C# does it as well, and Java supports the opposite. Notice that in neither case is it used (primarily) for performance gain, since the JIT doesn’t require this kind of information. It is there to inform the user.
–
Konrad RudolphOct 12 '09 at 16:20

In C++, the virtual keyword is there explicitly so that non-virtual functions don't incur the extra overhead (and complexity). The C++ "mantra" is "don't pay for what you don't use". It also forces you to be explicit. Which is probably the best thing to come from it.
–
gnudOct 14 '09 at 20:43

If you look at recent open source projects, you'll see many of them use C++. KDE, for instance, has all of its subprojects in C++. But for projects that started a decade ago, it was a risky decision. C was way more standardized at the time, both formally and in practice (compiler implementations). Also C++ depends on a bigger runtime and lacked good libraries at that time. You know that personal preference plays a big role in such decision, and at that time the C workforce in UNIX/Linux projects was far bigger than C++, so the probability that the initial developer(s) for a new project were more comfortable with C was greater. Also, any project that needs to expose an API would do that in C (to avoid ABI problems), so that would be another argument to favor C.
And finally, before smart pointers became popular, it was much more dangerous to program in C++. You'd need more skilled programmers, and they would need to be overly cautious. Although C has the same problems, its simpler data structures are easier to debug using bounds-checking tools/libraries.

Also consider that C++ is an option only for high-level code (desktop apps and the like). The kernel, drivers, etc. are not viable candidates for C++ development. C++ has too much "under the hood" behavior (constructor/destructor chains, virtual method tables, etc.), and in such projects you need to be sure the resulting machine/assembly code won't have any surprises and doesn't depend on runtime library support to work.

"The kernel, drivers, etc. are not viable candidates for C++ development." Not strictly true. The BeOS operating system was written in C++, and had its devotees, but it did die, and since it was closed source it stayed fairly dead. It's possible to write such low-level tools in C++, but not common, because few C++ programmers understand what the code they write will mean in terms of performance (due to all the abstractions). So while it's possible to write C++ as efficiently as C, it's rare, which is why C is used for low-level development.
–
Chris LutzOct 11 '09 at 23:09

1

Speaking of BeOS, there's actually a BeOS clone these days called Haiku, also written in C++. So yes, it's possible.
–
ChuckOct 12 '09 at 1:02

From what I recall, parts of the BeOS kernel were written in C... Not sure about Haiku, though.
–
Benjamin OakesOct 12 '09 at 14:02

I agree, Chris; when I said "not viable" I didn't mean it's not possible, just that it probably wouldn't be the best choice. I think in such cases the C code would be simpler and cleaner than C++. I don't see the advantage of using object orientation in situations like interfacing with hardware I/O ports and the like, at least not at that level.
–
Fabio CeconelloOct 22 '09 at 22:54

One important aspect in addition to others that will doubtless be mentioned is that C is easier to interface with other languages, so in the case of a library intended to be widely useful, C may be chosen even nowadays for this purpose.

To take examples I am familiar with, the toolkit GTK+ (in C) has robust OCaml bindings, while Qt and Cocoa (in C++ and Objective-C respectively) only have proof-of-concepts for such bindings. I believe that the difficulty of interfacing languages other than C with OCaml is part of the reason.

One reason might be that the GNU coding standards specifically ask you to use C. Another reason I can think of is that the free software tools work better with C than C++. For example, GNU indent doesn't do C++ as well as it does C, or etags doesn't parse C++ as well as it parses C.

That advice flies totally in the face of everything that GNU, and free software, stands for. Namely, giving maximum control and freedom of choice to the user (in this case: the programmer).
–
Konrad RudolphOct 12 '09 at 16:23

They're talking about people contributing to GNU itself, though. Their project, their rules. For the same reason, there are open source projects using curly-brace-on-a-new-line, not because the authors think it's better or worse than curly-brace-at-end-of-line, but because GNU says so.
–
Steve JessopOct 12 '09 at 17:38

Another example: OpenOffice had dependencies on Java, which caused no end of complaining, so last I heard the project made the Java dependency optional. So if GNU says, "if you want your project to be useful to the greatest number of people, the only installed language you should depend on is C", then they probably mean it.
–
Steve JessopOct 12 '09 at 17:40

C code produces more compact object code. Try compiling 'Hello World' as a C and as a C++ program and compare the sizes of the executables. This may not be too relevant today, but it was definitely a factor 10+ years ago.

It is much easier to use dynamic linking with C programs. Most C++ libraries still expose entry points through a C interface. So instead of writing a bridge between C++ and C, why not program the whole thing in C?

First of all, some of the biggest open source projects are written in C++: Open Office, Firefox, Chrome, MySQL,...

Having said that, there are also many big projects written in C. Reasons vary: they may have been started when C++ was not standardized yet, or the authors are/were more comfortable with C, or they hoped that the easier learning curve for C would attract more contributors.

You can read Dov Bulka to find what not to do in cpp, you can read tesseract ocr at Google code, you can read lots of things - most of which depend on where you are to determine which code linguistic is superior. Where did you read that c has more source code up in open source than cpp? Well of course you read that in a c forum. That's where. Go to some other programming linguistic. Do the same search, you will find that that code has more open source.

This is why people want to close questions like this as "subjective and argumentative." This is vaguely coherent, and makes unsourced (and unverifiable) claims as facts. -1
–
Chris LutzOct 11 '09 at 23:42

@Chris, noted. It's just that I have to deal with this on massive projects where one's entire existence can be crushed by people with no mechanical skills for reasons any mechanic would abhor. I get cornered on it so many times that I go braveheart at the sight of it. What makes the register the ultimate do-all be-all end-all any more than say roseinda or something? My source is my own experience, my verifier is fifty thousand hours in brutal arena where there are no verifiers. No mercy from the cruel, no rest for the wicked, no room for cream.
–
Nicholas JordanOct 12 '09 at 0:25

@Kinopiko - given, theregister is at the front of may of these discussions, so also Dr. Dobb's and so on - as noted in my original post: Dov Bulka, wherein a canonical discussion of my basis is given. Ditto Jeff Duntemann's intro to assembler - both those works express my basis for going frontline with my experience.
–
Nicholas JordanOct 12 '09 at 0:32

Just a side note about Tesseract OCR. It started off as a proprietary product by Hewlett-Packard, was abandoned, and then released as open source.
–
user181548Oct 13 '09 at 4:00