Because "Back In The Day" when C became the Language of Choice we were expected to be able to handle stuff like that, because we had to. Interpreted or byte-coded languages were too slow because the processors of the day were so much slower. (Today I can buy a low-end desktop PC with a 2+ GHz multi-core CPU and 4 GB of memory from Dell for $279. You have NO IDEA how absolutely incredible this appears to a guy like me for whom a 4 MHz PC with 640 kilobytes of memory was bliss...). Face it - Moore's Law won. Game.Over!
– Bob Jarvis, Jun 10 '16 at 14:11

@Bob Jarvis: Game not over. If you think your 2+GHz, 4GB PC - or for that matter, your cluster of several hundred 4 GHz PCs with the latest CUDA GPUs, or whatever - is fast enough, you simply aren't working on hard enough problems :-)
– jamesqf, Jun 11 '16 at 23:03

16 Answers

C predates many of the other languages you're thinking of. A lot of what we now know about how to make programming "safer" comes from experience with languages like C.

Many of the safer languages that have come out since C rely on a larger runtime, a more complicated feature set and/or a virtual machine to achieve their goals. As a result, C has remained something of a "lowest common denominator" among all the popular/mainstream languages.

Because C is relatively small, it is a much easier language to implement, and it is likely to perform adequately in even the weakest environment. Embedded platforms that need to develop their own compilers and other tools are therefore more likely to be able to provide a functional compiler for C.

Because C is so small and so simple, other programming languages tend to communicate with each other using a C-like API. This is likely the main reason why C will never truly die, even if most of us only ever interact with it through wrappers.

Many of the "safer" languages that try to improve on C and C++ are not trying to be "systems languages" that give you almost total control over the memory usage and runtime behavior of your program. While it's true that more and more applications these days simply do not need that level of control, there will always be a small handful of cases where it is necessary (particularly inside the virtual machines and browsers that implement all these nice, safe languages for the rest of us).

Today, there are a few systems programming languages (Rust, Nim, D, ...) which are safer than C or C++. They have the benefit of hindsight, and recognize that most of the time such fine control is not needed, so they offer a generally safe interface with a few unsafe hooks/modes one can switch to when really necessary.

Even within C, we've learned a lot of rules and guidelines that tend to drastically reduce the number of insidious bugs that show up in practice. It's generally impossible to get the standard to enforce these rules retroactively because that would break too much existing code, but it is common to use compiler warnings, linters and other static analysis tools to detect these sorts of easily preventable issues. The subset of C programs that pass these tools with flying colors is already far safer than "just C", and any competent C programmer these days will be using some of them.
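For a concrete (deliberately buggy) sketch of what those tools catch, consider code like the following, which a C compiler accepts silently by default but which gcc or clang will flag under `-Wall -Wextra`:

```c
#include <stdio.h>

/* Two classic, easily preventable C bugs. With warnings enabled,
 * gcc/clang flag both marked lines; "just C" compiles them silently. */
int check(int x)
{
    if (x = 0)                  /* assignment where == was meant: -Wparentheses */
        printf("%d\n", "oops"); /* char* passed for a %d conversion: -Wformat   */
    return x;
}
```

A codebase that builds cleanly under `-Wall -Wextra -Werror` plus a static analyzer is the "far safer subset" the paragraph describes.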

"lowest common denominator" sounds disparaging. I'd say that, unlike the many far heavier languages, C doesn't enforce a tonne of extra baggage you don't need and gives you a lightning-fast base on which to implement stuff you do need. Is that what you meant?
– underscore_d, Jun 9 '16 at 11:57

"You can't really have one without the other.": actually many new languages (Rust, Nim, D, ...) attempt to. This basically boils down to matching a "safe" subset of the language with a couple of "unsafe" primitives for when this level of control is absolutely necessary. However, all of those build on the knowledge accumulated from C and C++, so maybe it should be said that at the time C and C++ were developed, you could not have one without the other, and nowadays there are languages that attempt to partition off their "unsafe" bits, but they have not caught up yet.
– Matthieu M., Jun 9 '16 at 12:00

1) is not a valid point! C took features from Algol 68, which is a "safe" language in this sense; so the authors knew about such languages. The other points are great.
– reinierpost, Jun 9 '16 at 15:44

@MatthieuM. You might want to look at Microsoft Research as well. They've produced a few managed OSes over the past decade or two, including some that are faster (completely accidentally - the primary goal of most of these research OSes is safety, not speed) than an equivalent unmanaged OS for certain real-life workloads. The speedup is mostly due to the safety constraints - static and dynamic executable checking allows optimizations that aren't available in unmanaged code. There's quite a bit to look at, all in all, and sources are included ;)
– Luaan, Jun 10 '16 at 13:13

First, C is a systems programming language. So, for example, if you write a Java virtual machine or a Python interpreter, you will need a systems programming language to write them in.

Second, C provides performance that languages like Java and Python do not. Typically, high performance computing in Java and Python will use libraries written in a high-performance language such as C to do the heavy lifting.
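To sketch what "heavy lifting" means here (the function name is invented), the inner loops that Python/Java numeric libraries hand off to C look like this: no per-element interpreter dispatch, just a pointer walk the optimizer can vectorize.

```c
#include <stddef.h>

/* The kind of tight numeric kernel that high-level languages delegate
 * to C for speed: a plain array walk with no bounds checks and no
 * per-element runtime dispatch. */
double sum_doubles(const double *a, size_t n)
{
    double total = 0.0;
    for (size_t i = 0; i < n; i++)
        total += a[i];
    return total;
}
```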

Third, C has a much smaller footprint than languages like Java and Python. This makes it usable for embedded systems, which may not have the resources necessary to support the large run-time environments and memory demands of languages like Java and Python.

A "systems programming language" is a language suitable to build industrial-strength systems with; as they stand, Java and Python are not systems programming languages. "Exactly what makes a systems programming language" is outside the scope of this question, but a systems programming language does need to provide support for working with the underlying platform.

On the other hand (in response to comments), a systems programming language does not need to be self-hosting. This issue came up because the original question asked "why do people use C", the first comment asked "why would you need a language like C" when you have PyPy, and I noted that PyPy does in fact use C. So, it was originally relevant to the question, but unfortunately (and confusingly) "self-hosting" is not actually relevant to this answer. I'm sorry I brought it up.

So, to sum up: Java and Python are unsuited to systems programming not because their primary implementations are interpreted, or because natively compiled implementations are not self-hosted, but because they don't provide the necessary support for working with the underlying platform.

"if you write a Java virtual machine or a Python interpreter, you will need a systems programming language to write them in." Uh? How do you explain PyPy? Why would you need a language like C to write a compiler or an interpreter or a virtual machine?
– Vincent Savard, Jun 7 '16 at 19:03

I do believe you claimed you need a systems programming language to write a Java virtual machine or a Python interpreter when you said "if you write a Java virtual machine or a Python interpreter, you will need a systems programming language to write them in". If PyPy doesn't satisfy you, you can also look at any interpreter or compiler written in Haskell. Or really, just add a reference supporting your claim.
– Vincent Savard, Jun 7 '16 at 19:18

Even if it's turtles all the way down, something like PyPy could not exist without a Python interpreter written in some other language.
– Blrfl, Jun 7 '16 at 20:04

@Blrfl But that's not really saying much. C couldn't have been written without an assembler, and you can't write assembly without hardware that implements it.
– gardenhead, Jun 8 '16 at 0:15

If PyPy is not a "systems programming language" because a C compiler is involved someplace, then C is not a systems programming language either because an assembler is used someplace. Actually, isn't it popular to translate C into some other language these days? e.g. LLVM
– Hurkyl, Jun 8 '16 at 6:30

Sorry to add yet another answer, but I don't think any of the existing answers directly address your first sentence stating:

'I am considering learning C'

Why? Do you want to do the kinds of things C is usually used for today (e.g. device drivers, VMs, game engines, media libraries, embedded systems, OS kernels)?

If yes, then yeah, sure learn C or C++ depending on which of those you're interested in. Do you want to learn it so you'll have a deeper understanding of what your high-level language is doing?

You then go on to mention the safety concerns. You don't necessarily need a deep understanding of safe C to do the latter, in the same way that a code example in a higher-level language might give you the gist without being production ready.

Write some C code to get the gist. Then put it back on the shelf. Don't worry too much about safety unless you want to write production C code.

Great job answering what seems to be the real question! A great way to appreciate C/C++ and "safer" languages for all they really are is to try writing something like a simple database engine. You'll get a good feel for each of the approaches you try, and see where that leads you. You'll see what feels natural in either, and you'll find where the natural approach fails (e.g. it's very easy to "serialize" raw data in C - just write the data!; but the result isn't portable, so it may be of limited use). Understanding safety is tricky, because most of the issues may be hard to encounter.
– Luaan, Jun 8 '16 at 12:44

@Luaan exactly, learning how to copy a string using pointers gives you the idea, learning how to do so safely is another level, and depending on one's goals a perhaps unnecessary one.
– Jared Smith, Jun 8 '16 at 13:09

This was not really the question, but it was helpful for making a decision. I decided to do it. I want to learn more about the inner workings of what everything is built on. And I just like programming. This way I am hoping to understand the inner workings of computers better. It is just for fun.
– Tristan, Jun 8 '16 at 18:57

I disagree with the last sentence. Learn how to do it right before developing bad habits.
– glglgl, Jun 9 '16 at 7:20

@glglgl IDK, if I read (or write) a JavaScript snippet on the web I do so with the understanding that it's not production-ready: it won't have exception handling, it might be O(n^2), etc. None of that is necessary to get the point across. All of it is necessary for production code. Why is this different? I can write naive C for my own edification while understanding intellectually that if I wanted to put it out there I'd need to do a lot more work.
– Jared Smith, Jun 9 '16 at 14:15

This is a HUGE question with tons of answers, but the short version is that each programming language is specialized for different situations. For example, JavaScript for web, C for low level stuff, C# for anything Windows, etc. It helps to know what you want to do once you know programming to decide what programming language to pick.

To address your last point, why C/C++ over Java/Python, it often comes down to speed. I make games, and Java/C# are just recently reaching speeds that are good enough for games to run. After all, if you want your game to run at 60 frames per second, and you want your game to do a lot (rendering is particularly expensive), then you need the code to run as fast as possible. Python/Java/C#/Many others run on "interpreters", an extra layer of software that handles all the tedious stuff that C/C++ doesn't, such as managing memory and garbage collection. That extra overhead slows things down, so nearly every large game you see was done (in the last 10 years, anyway) in C or C++. There are exceptions: the Unity game engine uses C#*, and Minecraft uses Java, but they're the exception, not the rule. In general, big games running on interpreted languages are pushing the limits of how fast that language can go.

*Even Unity is not all C#, huge chunks of it are C++ and you just use C# for your game code.

EDIT
To respond to some of the comments that showed up after I posted this:
Perhaps I was oversimplifying too much, I was just giving the general picture. With programming, the answer is never simple. There are interpreters for C, Javascript can run outside the browser, and C# can run on just about anything thanks to Mono. Different programming languages are specialized for different domains, but some programmer somewhere probably figured out how to get any language to run in any context. Since the OP appeared to not know much programming (assumption on my part, sorry if I'm wrong), I was trying to keep my answer simple.

As for the comments about C# being nearly as fast as C++, the key word there is nearly. When I was in college, we toured many game companies, and my teacher (who had been encouraging us to move away from C# and into C++ the whole year) asked programmers at every company we went to why C++ over C#, and every single one said C# is too slow. In general it runs fast, but the garbage collector can hurt performance because you can't control when it runs, and it has the right to ignore you if it doesn't want to run when you recommend it does. If you need something to be high performance, you don't want something as unpredictable as that.

To respond to my "just reaching speeds" comment, yeah, much of C#'s speed increases come from better hardware, but as the .NET framework and C# compiler have improved, there have been some speedups there.

About the "games are written in the same language as the engine" comment, it depends. Some are, but many are written in a hybrid of languages. Unreal can do UnrealScript and C++, Unity does C#, JavaScript and Boo, and many other engines written in C or C++ use Python or Lua as scripting languages. There isn't a simple answer there.

And just because it bugged me to read "who cares if your game runs at 200fps or 120fps": if your game is running faster than 60fps, you're probably wasting CPU time, since the average monitor doesn't even refresh that fast. Some higher-end and newer ones do, but it's not standard (yet...).

And about the "ignoring decades of tech" remark, I'm still in my early 20's, so when I'm extrapolating backwards, I'm mostly echoing what older and more experienced programmers have told me. Obviously that'll be contested on a site like this, but it's worth considering.

"C# for anything Windows" - Oh, that's such a fallacy. And you even provide an example: Unity. AFAIK it's not written in C#; it provides a C# API because the language is nice and adaptable. It's really well designed. And I like C++ more, but credit should be given where it's due. Maybe you mixed up C# with .NET? They hang out together quite often.
– luk32, Jun 8 '16 at 0:10

"Even Unity is not all C#, huge chunks of it are C++" And? Games in Unity often use C# extensively, and have been around for quite some time now. Suggesting that C# is 'just recently reaching speeds' either needs more context, or runs the risk of being blind to this decade's tech.
– NPSF3000, Jun 8 '16 at 3:12

Nearly every large game was written in the language of the engine it used: the amount of work that would have needed duplicating was so large that no other technical consideration was even worth taking into account. Rendering is indeed expensive, but nowadays that's all written in shaders and the language of the logic loop is irrelevant.
– Peter Taylor, Jun 8 '16 at 11:44

C# has always been JIT compiled (unlike Java, where your comment is correct), and it was quite capable of very similar execution speeds to C++ from the get go if you knew what you were doing. That's 2003 - not something I'd consider recent. Raw speed isn't the main issue for games (especially with programmable shaders on the GPU), there are other things that made languages like C# more or less popular at times. Two main issues are APIs (which are heavily C-oriented, and the interfacing may be expensive) and GC (mostly for latency issues, not raw throughput).
– Luaan, Jun 8 '16 at 12:22

@gbjbaanb It's not just CPUs being faster - a big deal is that C++ and C had decades to perfect their compilers and runtime, while Java started basically from zero (being designed as a multi-platform platform primarily). As the VM improved (e.g. the switch from interpreter to a JIT compiler, improved GC...), so did performance of Java applications. A lot of the edge C/C++ still has is in the "let's hope nothing breaks" approach - avoiding a lot of checks that are deemed "unnecessary". But it's still a huge memory hog - in fact, improvements to CPU usage often meant worse memory performance :)
– Luaan, Jun 8 '16 at 12:28

It is funny that you claim C is less safe because "it has pointers". The opposite is true: Java and C# have practically only pointers (for non-native types). The most common error in Java is probably the NullPointerException (cf. https://www.infoq.com/presentations/Null-References-The-Billion-Dollar-Mistake-Tony-Hoare). The second most common error is probably holding hidden references to unused objects (e.g. closed dialogues are not disposed of) which therefore cannot be released, leading to long-running programs with an ever-growing memory footprint.

There are two basic mechanisms which make C# and Java safer, and safer in two different ways:

Garbage collection makes it less likely that the program attempts to access discarded objects. This makes the program less likely to terminate unexpectedly. As opposed to C, Java and C# by default allocate non-native data dynamically. This makes the program logic actually more complex, but the built-in garbage collection -- at a cost -- takes over the hard part.

C++'s recent smart pointers make that job easier for programmers.

Java and C# compile to an intermediate code which is interpreted/executed by an elaborate run time. This adds a level of security because the run time can detect illicit activities of a program. Even if the program is coded insecurely (which is possible in both languages), the respective run time in theory prevents "breaking out" into the system. The run time does not protect against e.g. attempted buffer overruns, but in theory does not allow exploits of such programs. With C and C++, by contrast, the programmer has to code securely in order to prevent exploits. This is usually not achieved right away but needs reviews and iterations.
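A minimal sketch of that difference: the following C compiles without complaint, and if `src` is longer than 15 characters it silently scribbles past the end of the buffer, where a Java/C# runtime would instead raise an exception at the first out-of-bounds write.

```c
#include <string.h>

/* C trusts the caller completely: strcpy performs no bounds check,
 * so a source longer than the destination buffer is a silent buffer
 * overrun (undefined behavior), not a runtime exception. */
void copy_unchecked(char buf[16], const char *src)
{
    strcpy(buf, src);   /* fine for short strings, disastrous for long ones */
}
```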

It is worth noting though that the elaborate run time is also a security risk. It appears to me that Oracle is updating the JVM every couple of weeks because of newly discovered safety issues. It is, of course, much harder to verify the JVM than a single program.

The safety of an elaborate run time is therefore ambiguous and to a degree deceiving: Your average C program can, with reviews and iterations, be made reasonably secure. Your average Java program is only as secure as the JVM; that is, not really. Never.

The article about gets() that you link to reflects historical library decisions which would be made differently today, not the core language.

I think the point of the original author is that in C you're able to take unchecked action on those pointers. In java, you get a nice exception right away - in C, you might not realize you're reading an invalid location until your application state is corrupted.
– Sam Dufel, Jun 8 '16 at 15:28

I've seen all kinds of interesting bugs result from attempts to avoid explicit pointers. Usually the fundamental issue is confusion between copy and reference. In C, if I pass you a pointer to something, you know that modifying it will affect the original. Some of the attempts to avoid pointers obfuscate this distinction, leading to a maze of deep copies, shallow copies, and general confusion.
– Arlie Stephens, Jun 8 '16 at 21:28

Asserting that Oracle's JVM sucks (from the standpoint of hemorrhaging exploitable security vulnerabilities) and that therefore the runtimes of managed languages in general introduce more security concerns than using a managed language avoids is like saying that Adobe Flash is horrific as a source of insecurity and that any program doing video & animation playback from a web site must inherently be ridiculously insecure. Not all Java runtimes are nearly as bad as Oracle/Sun's 1990's-vintage JVM abomination, and not all managed languages are Java. (Well, obviously.)
– mostlyinformed, Jun 10 '16 at 5:50

@halfinformed Well; I was saying that a program is only as secure as its runtime, and that "your average" (read: small-ish) program can with an effort be made safer than any large runtime like a byte code interpreter. That statement seems undeniable. Whether a particular stand-alone program or a particular runtime is more or less secure than the other depends on their respective complexity and design, coding and maintenance quality. For example, I wouldn't say that sendmail is more secure than Oracle's Java VM; but qmail may be.
– Peter A. Schneider, Jun 10 '16 at 8:50

Because "safety" costs speed, the "safer" languages simply run more slowly.

You ask why people use a "dangerous" language like C or C++? Have somebody write you a video driver or the like in Python or Java and see how you feel about "safety" :)

Seriously though, you have to be as close as possible to the core memory of the machine to be able to manipulate pixels, registers, etc. Java or Python cannot do this with any performance-worthy speed; C and C++ both allow you to do it through pointers and the like.
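As a rough sketch (names invented for illustration) of the kind of direct memory access meant here, filling a rectangle in a 32-bit framebuffer is just pointer arithmetic in C:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical framebuffer fill: walk raw pointers over pixel memory.
 * `stride` is the width of one row in pixels. Nothing checks that the
 * rectangle stays inside the buffer -- that is the trade-off. */
void fill_rect(uint32_t *fb, size_t stride, size_t x, size_t y,
               size_t w, size_t h, uint32_t color)
{
    for (size_t row = y; row < y + h; row++) {
        uint32_t *p = fb + row * stride + x;  /* first pixel of this row */
        for (size_t col = 0; col < w; col++)
            *p++ = color;                     /* write each pixel directly */
    }
}
```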

It's not true in general that safety costs speed. The safest available languages do most of the checks at compile time. O'Caml, Ada, Haskell, Rust all don't trail far behind C in terms of average runtime speed. What they do usually incur is significant overhead in program size, memory efficiency, latency, and obviously compile time. And, yes, they have difficulties with close-to-the-metal stuff. But that's not really a speed issue.
– leftaroundabout, Jun 8 '16 at 8:34

Also, C doesn't do what you think it does. C is an abstract machine. It doesn't give you direct access to anything - that's a good thing. You can't even look at modern assembly to see how much C is hiding from you - modern assembly (e.g. TASM) would have been considered a high level language back when C was developed. I'd be very happy if someone wrote drivers in a "safe" language, thank you - that would quite help avoiding plenty of those BSODs and freezes, not to mention security holes :) And most importantly, there's systems languages that are much safer than C.
– Luaan, Jun 8 '16 at 12:35

@Wintermute You really want to look up Rust before making comments about how safety features necessarily cost speed. Hell, C's low-level type system actually inhibits many very useful optimisations that compilers could otherwise do (particularly considering that pretty much no large C project manages to avoid violating strict aliasing somewhere).
– Voo, Jun 8 '16 at 16:25

@Wintermute Yeah, the myth that you can't make C/C++ any safer without introducing a performance overhead is very persistent, which is why I take this rather seriously (there are some areas where this is certainly true [bounds checking]). Now why is Rust not more widespread? History and complexity. Rust is still relatively new, and many of the largest systems written in C existed before Rust was ever invented - you're not going to rewrite a million LOC in a new language even if it were much safer. Also, every programmer and their dog knows C; Rust? Good luck finding enough people.
– Voo, Jun 8 '16 at 16:52

@Voo Dogs doing C?... no wonder I have seen so much bad code out there... j/k. Point taken about Rust (I just downloaded and installed it, so you may have another convert to add). BTW in regards to Rust doing it right... I could make the same argument for "D" :)
– Wintermut3, Jun 8 '16 at 17:21

Besides all the above, there is also one pretty common use case: using C as a common library for other languages.

Basically, nearly all the languages have an API interface to C.

Simple example: try to create a common application for Linux/iOS/Android/Windows. Besides all the tools that are out there, what we ended up doing was a core library in C, and then a different GUI for each environment, that is:

iOS: Objective-C can use C libraries natively

Android: Java + JNI

Linux/Windows/macOS: With GTK/.Net you can use native libraries. If you use Python, Perl or Ruby, each of them has a native API interface. (Java again with JNI.)

One of the reasons I love to use, say, PHP, is because almost all of its libraries are, indeed, written in C — thankfully so, or PHP would be unbearably slow :) PHP is great to write sloppy code without the fear of doing anything 'dangerous' (and that's why I tend to write much more PHP code than anything else — I like sloppy code! :D ) , but it's nice to know that beneath a lot of those function calls there are the good ol' trusty C libraries to give it some performance boost ;-) By contrast, writing sloppy code in C is a big no-no...
– Gwyneth Llewelyn, Jun 12 '16 at 22:04

A fundamental difficulty with C is that the name is used to describe a number of dialects with identical syntax but very different semantics. Some dialects are much safer than others.

In C as originally designed by Dennis Ritchie, C statements would generally be mapped to machine instructions in predictable fashion. Because C could run on processors which behaved differently when things like signed arithmetic overflow occurred, a programmer who didn't know how a machine would behave in case of arithmetic overflow wouldn't know how C code running on that machine would behave either, but if a machine was known to behave a certain way (e.g. silent two's-complement wraparound) then implementations on that machine would typically do likewise. One of the reasons that C got a reputation for being fast was that in cases where programmers knew that a platform's natural behavior in edge-case scenarios would fit their needs, there was no need for the programmer or compiler to write extra code to handle such scenarios. It was vital that any code which used pointers to access memory make certain that pointers were never used to access things they shouldn't, which would typically require ensuring that computations involving pointers didn't overflow, but it did not require paranoia about things like arithmetic overflow in other contexts.

Unfortunately, compiler writers have taken the view that since the Standard imposes no requirements on what implementations must do in such cases (laxity which was intended to allow for hardware implementations that might not behave predictably), compilers should feel free to generate code which negates laws of time and causality.

Hyper-modern (but fashionable) compiler theory would suggest that the compiler should output "QUACK!" unconditionally, since in any case where the condition was false the program would end up invoking undefined behavior performing a multiply whose result was going to be ignored anyway. Since the Standard would allow a compiler to do anything it likes in such a case, it allows the compiler to output "QUACK!".
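The code example this paragraph refers to is not reproduced above; a hypothetical reconstruction of the kind of function it describes (names invented) might look like this:

```c
#include <stdio.h>

/* Hypothetical reconstruction of the example the answer alludes to.
 * If the multiply below is reached unconditionally, a compiler may
 * reason: signed overflow is undefined behavior, so in any defined
 * execution x is at most INT_MAX/1000000 (about 2147); every such x
 * is below 1000000, so the test is "always" true -- and "QUACK!" may
 * be printed unconditionally, even though the multiply's result is
 * never actually examined. */
int quack(int x)
{
    if (x < 1000000)
        printf("QUACK!\n");
    return x * 1000000;     /* result assumed to be ignored by the caller */
}
```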

While C used to be safer than assembly language, when using hyper-modern compilers the reverse is true. In assembly language, integer overflow may cause a calculation to yield a meaningless result, but on most platforms that will be the extent of its effects. If the results end up being ignored anyway, the overflow won't matter. In hyper-modern C, however, even what would normally be "benign" forms of Undefined Behavior (such as an integer overflow in a calculation which ends up being ignored) can cause arbitrary program execution.

Even in a hyper-modern compiler, C does not bounds-check arrays. If it did, that would not be compatible with the definition of the language. I use that fact sometimes to make arrays with an additional pointer into the middle of the array, to have negative indices.
– robert bristow-johnson, Jun 9 '16 at 3:07

I'd like to see evidence of your example producing "QUACK!" unconditionally. x certainly can be greater than 1000000 at the point of comparison, and later evaluation which would result in overflow does not prevent that. More so, if you have inlining enabled which allows the overflowing multiply to be removed, your argument about implicit range restrictions does not hold.
– Graham, Jun 9 '16 at 11:35

@robertbristow-johnson: Actually, the Standard quite explicitly says that given e.g. int arr[5][5], an attempt to access arr[0][5] will yield Undefined Behavior. Such a rule makes it possible for a compiler which is given something like arr[1][0]=3; arr[0][i]=6; arr[1][0]++; to infer that arr[1][0] will equal 4, without regard for the value of i.
– supercat, Jun 9 '16 at 17:12

@robertbristow-johnson: Even if the compiler allocates arrays within a struct sequentially without gaps, that does not guarantee that indexing one of the arrays is guaranteed to affect another. See godbolt.org/g/Avt3KW for an example of how gcc will treat such code.
– supercat, Jun 9 '16 at 22:06

@robertbristow-johnson: I commented the assembly to explain what it's doing. The compiler sees that the code stores 1 into s->arr2[0] and then increments s->arr2[0], so gcc combines those two operations by having the code simply store the value 2, without considering the possibility that the intervening write to s->arr1[i] might affect the value of s->arr2[0] (since, according to the Standard, it can't).
– supercat, Jun 10 '16 at 5:15

Historical reasons. I don't often get to write brand new code, mostly I get to maintain and extend the old stuff which has been running for decades. I'm just happy it's C and not Fortran.

I can get irritated when some student says, "but why on earth do you do this awful X when you could be doing Y?". Well, X is the job I've got and it pays the bills very nicely. I have done Y on occasion, and it was fun, but X is what most of us do.

The claim that C is "dangerous" is a frequent talking point in language flame wars (most often in comparison to Java). However, the evidence for this claim is unclear.

C is a language with a particular set of features. Some of these features may allow certain types of errors that are not allowed by other types of languages (the risks of C's memory management are typically highlighted). However, this is not the same as an argument that C is more dangerous than other languages overall. I'm not aware of anyone providing convincing evidence on this point.

Also, "dangerous" depends on context: what are you trying to do, and what kinds of risks are you worried about?

In many contexts I would consider C more "dangerous" than a high-level language, because it requires you to do more manual implementation of basic functionality, increasing the risk of bugs. For example, doing some basic text processing or developing a website in C would usually be dumb, because other languages have features that make this a lot easier.

However, C and C++ are widely used for mission-critical systems, because a smaller language with more direct control of hardware is considered "safer" in that context. From a very good Stack Overflow answer:

Although C and C++ were not specifically designed for this type of application, they are widely used for embedded and safety-critical software for several reasons. The main properties of note are control over memory management (which allows you to avoid having to garbage collect, for example), simple, well debugged core run-time libraries and mature tool support. A lot of the embedded development tool chains in use today were first developed in the 1980s and 1990s when this was current technology and come from the Unix culture that was prevalent at that time, so these tools remain popular for this sort of work.

While manual memory management code must be carefully checked to avoid errors, it allows a degree of control over application response times that is not available with languages that depend on garbage collection. The core run time libraries of C and C++ languages are relatively simple, mature and well understood, so they are amongst the most stable platforms available.

I'd say hyper-modern C is also more dangerous than assembly language, or than genuine low-level dialects of C that behave as though they translate C operations straight into machine code operations, without regard for edge cases where the natural machine code would have defined behavior but the C Standard imposes no requirements. The hyper-modern approach, where an integer overflow can negate the rules of time and causality, seems far less amenable to the generation of safe code.
– supercatJun 10 '16 at 18:09

To add to the existing answers, it's all well and good saying that you're going to choose Python or PHP for your project, because of their relative safety. But somebody's got to implement those languages and, when they do, they are probably going to do it in C. (Or, well, something like it.)

So that's why people use C — to create the less dangerous tools that you want to use.

But why do people use [tool] (or [related tool]) if [they] can be used 'dangerously'?

Any interesting tool can be used dangerously, including programming languages. You learn more so you can do more (and so that less danger is created when you use the tool). In particular, you learn the tool so that you can do the thing that tool is good for (and perhaps recognize when that tool is the best tool of the tools you know).

For instance, if you need to put a 6 mm diameter, 5 cm deep, cylindrical hole in a block of wood, a drill is a much better tool than an LALR parser. If you know what these two tools are, you know which is the right tool. If you already know how to use a drill, voila!, hole.

C is just another tool. It's better for some tasks than for others. The other answers here address this. If you learn some C, you will come to recognize when it is the right tool and when it is not.

There is no specific reason not to learn C, but I would suggest C++. It offers much of what C does (since C++ is a superset of C), with a large number of "extras". Learning C prior to C++ is unnecessary -- they are effectively separate languages.

Put another way, if C were a set of woodworking tools, it would likely be:

hammer

nails

hand saw

hand drill

block sander

chisel (maybe)

You can build anything with these tools -- but anything nice potentially requires a lot of time and skill.

C++ is the collection of power tools at your local hardware store.

If you stick with basic language features to start, C++ has relatively little additional learning curve.

But why do people use C (or C++) if it can be used 'dangerously'?

Because some people don't want furniture from IKEA. =)

Seriously though, while many languages that are "higher" than C or C++ may have things that make them (potentially) "easier" to use in certain respects, this isn't always a good thing. If you don't like the way something is done or a feature isn't provided, there likely isn't much you can do about it. On the other hand, C and C++ provide enough "low-level" language features (including pointers) that you can access many things fairly directly (esp. hardware- or OS-wise) or build them yourself, which may not be possible in other languages as implemented.

More specifically, C has the following set of features that make it desirable for many programmers:

Speed - Because of its relative simplicity and compiler optimizations over the years, it is natively very fast. Also, people have figured out many shortcuts to specific goals when using the language, which can make it faster still.

Size - For similar reasons as the ones listed for speed, C programs can be made very small (both in terms of executable size and memory usage), which is desirable for environments with limited memory (e.g., embedded or mobile).

Compatibility - C has been around for a long time and everyone has tools and libraries for it. The language itself is not picky either - it expects a processor to execute instructions and memory to hold stuff and that is about it.

Furthermore, there is something known as an Application Binary Interface (ABI). In short, it is a way for programs to communicate at the machine-code level, which can have advantages over an Application Programming Interface (API). While other languages such as C++ can have an ABI, these are typically less uniform (less widely agreed upon) than C's, so C makes a good foundation language when you want to use an ABI to communicate with another program for some reason.

Why do programmers not just use Java or Python or another compiled language like Visual Basic?

Directly accessing memory with pointers enables a lot of neat (usually quick) tricks: you can put your grubby paws on the little ones and zeros in your memory cubbyholes directly, and not have to wait for that mean ol' teacher to hand out the toys just at playtime and then scoop them up again.

Regarding scripted languages and that ilk, you have to work hard to get languages requiring secondary programs to run as efficiently as C (or any compiled language) natively does. Adding an on-the-fly interpreter inherently introduces the possibility of decreased execution speed and increased memory usage, because you are adding another program to the mix. Your program's efficiency relies as much on the efficiency of this secondary program as on how well (or poorly =) ) you wrote your original program code. Not to mention your program is often completely reliant on the second program to even execute. That second program doesn't exist for some reason on a particular system? Code no go.

In fact, introducing anything "extra" potentially slows or complicates your code. In languages "without scary pointers", you are always waiting for other bits of code to clean up behind you or otherwise figure out "safe" ways to do things - because your program is still doing the same memory access operations as might be done with pointers. You just aren't the one handling them (so you can't f*ck it up, genius =P ).

"It remained an official part of the language up to the 1999 ISO C standard, but it was officially removed by the 2011 standard. Most C implementations still support it, but at least gcc issues a warning for any code that uses it."

The notion that because something can be done in a language, it must be done is silly. Languages have flaws that get fixed. For compatibility with older code, this construct can still be used. But there is nothing (likely) forcing a programmer to use gets() and, in fact, this function has essentially been replaced with safer alternatives such as fgets().

More to the point, the issue with gets() isn't a pointer issue per se. It's a problem with a function that has no way of knowing how much memory it can safely write to. In an abstract sense, this is what all pointer issues are - reading and writing stuff you're not supposed to. That isn't a problem with pointers; it's a problem with a particular pointer implementation.

To clarify, pointers aren't dangerous until you accidentally access a memory location that you weren't intending to. And even then that doesn't guarantee your computer will melt or explode. In most cases, your program will just cease to function (correctly).

That said, because pointers provide access to memory locations and because data and executable code exist in memory together, there is enough of a real danger of accidental corruption that you want to manage memory correctly.

To that point, because truly direct memory access operations often provide less benefit in general than they might have years ago, even non-garbage collected languages like C++ have introduced things such as smart pointers to help bridge the gap between memory efficiency and safety.

I don't agree with the suggestion to learn C++ instead of C. Writing good C++ is harder than writing good C and reading C++ is much harder than reading C. So the learning curve of C++ is much steeper. "C++ is a superset of C" - This is more or less like saying that boots are a superset of slippers. They have different advantages and usage, and each one has features that the other doesn't.
– martinkunevJun 10 '16 at 17:50

"Writing good C++ is harder than writing good C" - Absolutely. =) "[R]eading C++ is much harder than reading C" - Any advanced programming is likely indistinguishable from magic ;-) My two cents is that this is much more programmer dependent than language dependent, though C++ does nothing much to help itself in this category. "So the learning curve of C++ is much steeper." - In the long run, yes. In the short term, less so (my opinion). Anecdotally, most basic language courses in C and C++ are likely to cover roughly the same general types of material, excepting classes for C++.
– AnaksunamanJun 11 '16 at 14:33

"They have different advantages and usage and each one has features that the other doesn't." - As mentioned "There is no specific reason not to learn C[.]" C is a fine language and I stick by that. If it suits OP or anyone else, I fully support learning it. =)
– AnaksunamanJun 11 '16 at 14:39

Learning C teaches you how the machine works (not all the way down, but still a step closer to the metal). That's a very good reason for learning it.
– Agent_LJun 13 '16 at 9:01

As always, a programming language is only a consequence of problem solving. You should in fact learn not just C but many different languages (and other ways of programming a computer, be it GUI tools or command interpreters) to have a decent toolbox to use when solving problems.

Sometimes you will find that a problem lends itself well to something included in the Java default libraries; in such a case you may choose Java to leverage that. In other cases it may be that you need to do something on Windows that is a lot simpler in the .NET runtime, so you may use C# or VB. There could be a graphical tool or command script that solves your problem, in which case you may use those. Maybe you need to write a GUI application for multiple platforms; Java could be an option, given the libraries included in the JDK, but then again, one target platform may lack a JRE, so maybe you instead choose C and SDL (or similar).

C has an important position in this toolset, as it is general, small and fast, and compiles to machine code. It is also supported on every platform under the sun (though not without recompiling).

Bottom line is, you should learn as many tools, languages and paradigms as you possibly can.

A programmer solves problems and designs algorithms by instructing machines to perform the workload. End of story. This is largely independent of the language. Your most important skill is problem solving and the logical breakdown of structured problems; language skill/choice is ALWAYS secondary and/or a consequence of the nature of the problem.

An interesting path if you are interested in C is to extend your skillset with Go. Go is really an improved C, with garbage collection and interfaces, as well as a nice built-in threading model with channels, that also brings many of the benefits of C (such as pointers and compiling to machine code - though note that Go does not allow pointer arithmetic outside its unsafe package).

It depends on what you intend to do with it. C was designed as a replacement for assembly language and is the high-level language closest to machine language. Thus it has low overhead in size and performance and is suitable for systems programming and other tasks that require a small footprint and close access to the underlying hardware.

When you're working at the level of bits and bytes, of memory as a raw, homogeneous collection of data - as is often required to effectively implement the most efficient allocators and data structures - there is no safety to be had. Safety is predominantly a strong, data-type-related concept, and a memory allocator doesn't work with data types. It works with bits and bytes to pool out, with those same bits and bytes potentially representing one data type one moment and another later on.

It doesn't matter if you use C++ in that case. You'd still be sprinkling static_casts all over the code to cast from void* pointers, still working with bits and bytes, and just dealing with more hassles related to respecting the type system in this context than in C, which has a much simpler type system where you're free to memcpy bits and bytes around without worrying about bulldozing over it.

In fact, it's often harder to work in C++, an overall safer language, in such low-level contexts of bits and bytes without writing even more dangerous code than you would in C, since you could be bulldozing over C++'s type system, doing things like overwriting vptrs and failing to invoke copy constructors and destructors at appropriate times. If you take the proper time to respect these types, use placement new, manually invoke destructors, and so forth, you then get exposed to the world of exception handling in a context too low-level for RAII to be practical. Achieving exception safety in such a low-level context is very difficult: you have to pretend that just about any function can throw, catch all possibilities, and roll back any side effects as an indivisible transaction, as though nothing happened. The C code can often "safely" assume that you can treat any data type instantiated in C as just bits and bytes without violating the type system, invoking undefined behavior, or running into exceptions.

And it would be impossible to implement such allocators in languages that don't allow you to get "dangerous" here; you'd have to lean on whatever allocators they provide (implemented most likely in C or C++) and hope they are good enough for your purposes. There are almost always more efficient but less general allocators and data structures suited to your specific purposes, but they are much more narrowly applicable precisely because they're tailored to those purposes.

Most people don't need the likes of C or C++, since they can just call code already implemented for them in C or C++ (or possibly even assembly). Many might benefit from innovating at the high level - say, stringing together an image program that just uses libraries of existing image-processing functions already implemented in C, innovating not at the lowest level of looping through individual pixels but by offering a very friendly user interface and workflow never seen before. In that case, if the point of the software is just to make high-level calls into low-level libraries ("process this entire image for me", not "for each pixel, do something"), then it might arguably be a premature optimization to even attempt to start writing such an application in C.

But if you're doing something new at the low level where it helps to access data in a low-level way like a brand new image filter never seen before that's fast enough to work on HD video in realtime, then you generally have to get a little bit dangerous.

It's easy to take this stuff for granted. I remember a Facebook post where someone pointed out that it's feasible to create a 3D video game in Python, with the implication that low-level languages are becoming obsolete, and it was certainly a decent-looking game. But Python was making high-level calls into libraries implemented in C to do all the heavy lifting. You can't make Unreal Engine 4 by just making high-level calls into existing libraries. Unreal Engine 4 is the library. It did all kinds of things that never existed in other libraries and engines, from lighting to its nodal blueprint system and its ability to compile and run code on the fly. If you want to innovate at that kind of low engine/core/kernel level, then you have to get low-level. If all game devs switched to high-level safe languages, there would be no Unreal Engine 5, or 6, or 7. People would likely still be using Unreal Engine 4 decades later, because you can't innovate at the level required to produce a next-gen engine by just making high-level calls into the old one.
