Why did the C master Dennis Ritchie introduce pointers in C? And why did other programming languages like VB.NET, Java, and C# eliminate them? I have found some points on Google, and I would like to hear your comments too. Why are pointer concepts being eliminated from modern languages?

People say C is a foundational language, and that pointers are the concept that makes C powerful and outstanding and still able to compete with more modern languages. Then why were pointers eliminated from those more modern languages?

Do you think knowledge of pointers is still important for new programmers? People are using VB.NET or Java these days, which offer more advanced features than C (and no pointer concepts), and many people I know (my friends) choose these languages over C because of those features. I tell them to start with C. They say it's a waste to learn pointer concepts when you can do the advanced things in VB.NET or Java that are not possible in C.

What do you think?

Updated:

The comments I read on Google are:

The earlier computers were too slow and not optimized.

Using pointers makes it possible to access an address directly, which saves time compared to making a copy in function calls.

Security is significantly worse using pointers, and that's why Java and C# did not include them.

These are some of the points I found. I would still appreciate some valuable answers.

While Java doesn't allow for explicit use of pointers, C# does, in unsafe code blocks.
– Joe Internet, Sep 5 '11 at 12:44


@quaint_dev: Well, Java really does not have pointers. References cannot do everything pointers can do, so attempting to understand pointers in terms of references is not the way to go (and a mistake a lot of programmers learning C or C++ make). Pointers can do arithmetic. References cannot. (A limitation that really stinks every time I'm forced to use Java)
– Billy ONeal, Sep 5 '11 at 16:52

16 Answers

Back in those days, developers were working much closer to the metal. C was essentially a higher level replacement for assembly, which is almost as close to the hardware as you can get, so it was natural that you needed pointers to be efficient at solving coding problems. However, pointers are sharp tools, which can cause great damage if used carelessly. Also, direct use of pointers opens up the possibility of many security problems, which weren't an issue back then (in 1970, the internet consisted of a few dozen machines across a couple of universities, and it wasn't even called that yet...) but have become more and more important since. So nowadays higher level languages are consciously designed to avoid raw memory pointers.

Saying that "advanced things done in VB.Net or Java are not possible in C" shows a very limited point of view, to say the least :-)

First of all, all of these languages (even assembly) are Turing complete, so in theory whatever is possible in one language is possible in all. Just think about what happens when a piece of VB.Net or Java code is compiled and executed: eventually, it is translated into (or mapped to) machine code, because that is the only thing the machine understands. In compiled languages like C and C++, you can actually get the full body of machine code equivalent to the original higher level source code, as one or more executable files/libraries. In VM based languages, it is more tricky (and may not even be possible) to get the entire equivalent machine code representation of your program, but eventually it is still there somewhere, within the deep recesses of the runtime system and the JIT.

Now, of course, it is an entirely different question whether some solution is feasible in a specific language. No sensible developer would start writing a web app in assembly :-) But it is useful to bear in mind that most or all of those higher level languages are built on top of a huge amount of runtime and class library code, a large chunk of which is implemented in a lower level language, typically in C.

So to get to the question,

Do you think knowledge of pointers [...] is important for young people?

The concept behind pointers is indirection. This is a very important concept and IMHO every good programmer should grasp it on a certain level. Even if someone is working solely with higher level languages, indirection and references are still important. Failing to understand this means being unable to use a whole class of very potent tools, seriously limiting one's problem solving ability in the long run.
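To make indirection concrete, here is a minimal sketch in C (the function name is purely illustrative): the function never receives the caller's variable itself, only its address, yet it can change the caller's data through it.

#include <stdio.h>

/* The function receives an address, not a copy of the variable. */
static void set_to_forty_two(int *target) {
    *target = 42;             /* dereference: follow the address and write there */
}

int main(void) {
    int value = 0;
    int *indirect = &value;   /* reference: take the address of value */

    set_to_forty_two(indirect);
    printf("%d\n", value);    /* prints 42, although main never assigned it directly */
    return 0;
}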

So my answer is yes, if you want to become a truly good programmer, you must understand pointers too (as well as recursion - this is the other typical stumbling block for budding developers). You may not need to start with it - I don't think C is optimal as a first language nowadays. But at some point one should get familiar with indirection. Without it, we can never understand how the tools, libraries and frameworks we are using actually work. And a craftsman who doesn't understand how his/her tools work is a very limited one. Fair enough, one may get a grasp of it in higher level programming languages too. One good litmus test is correctly implementing a doubly linked list - if you can do it in your favourite language, you can claim you understand indirection well enough.
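For what it's worth, a bare-bones version of that litmus test might look something like this in C (just a sketch with made-up names, handling only insertion at the front):

#include <stdio.h>
#include <stdlib.h>

/* Each node holds its payload plus two pointers: one to the previous node
   and one to the next.  Getting this bookkeeping right on insertion is
   exactly the kind of indirection the litmus test is about. */
struct node {
    int value;
    struct node *prev;
    struct node *next;
};

/* Insert a new node at the front and return the new head. */
static struct node *push_front(struct node *head, int value) {
    struct node *n = malloc(sizeof *n);
    if (n == NULL)
        return head;              /* allocation failed; keep the old list */
    n->value = value;
    n->prev = NULL;
    n->next = head;
    if (head != NULL)
        head->prev = n;           /* old head must point back at the new node */
    return n;
}

int main(void) {
    struct node *head = NULL;
    for (int i = 1; i <= 3; i++)
        head = push_front(head, i);

    for (struct node *p = head; p != NULL; p = p->next)
        printf("%d ", p->value);  /* prints: 3 2 1 */
    printf("\n");

    while (head != NULL) {        /* free the list */
        struct node *next = head->next;
        free(head);
        head = next;
    }
    return 0;
}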

But if not for anything else, we should do it to learn respect for the programmers of old who managed to build unbelievable things using the ridiculously simple tools they had (compared to what we have now). We are all standing on the shoulders of giants, and it does good to us to acknowledge this, rather than pretending we are the giants ourselves.

This is a good answer but it doesn't really answer the question: "Do the young minds need to learn the pointer concepts?"
– Falcon, Sep 5 '11 at 9:37


+1 Good answer. I'd drop the Turing completeness argument though - for practical programming it's a red herring, as you note later on yourself. It's computability theory, i.e. Turing completeness only means there's a program in the (for many languages, infinite) space of potential programs that implements the same algorithm, not that it's actually feasible or even humanly possible to write. Just pointing out that it's all machine code in the end proves the point just as well, without planting the stupid "I can do everything in one language as they're all the same, harhar!" seed.
– delnan, Sep 5 '11 at 10:19


+1 for "And a craftsman who doesn't understand how his/her tools work is a very limited one. "
– quickly_now, Sep 5 '11 at 10:55


Also, not understanding the mechanics of pointers (and by extension references) means you don't understand the concept of shallow vs. deep copies of data structures, which can cause serious, hard-to-track bugs. Even in "modern" high-level languages.
– Mavrik, Sep 5 '11 at 13:11


C was designed to be a portable assembler for Unix, i.e. close to the metal.
– user1249, Sep 5 '11 at 17:04

Java and other higher level languages did not remove pointers. What they did was to remove plain pointer arithmetic.

In fact, Java still allows a protected and restricted pointer arithmetic: array access. In plain old C, array access is nothing but dereferencing. It is a different notation, syntactic sugar if you will, to clearly communicate what you're doing.
Still, array[index] is equivalent to *(array+index). Because of that it is also equivalent to index[array], although I suppose some C compilers might give you a warning if you do that.
As a corollary, pointer[0] is equivalent to *pointer. That's simply because the "pointer to an array" is the address of the first entry of the array, and the addresses of the subsequent elements are computed by adding the index.
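A tiny sketch, in case anyone wants to see those equivalences in action (nothing here is special library code, just plain C):

#include <stdio.h>

int main(void) {
    int array[3] = { 10, 20, 30 };
    int index = 2;

    /* All of these name the same element or show the corollary above. */
    printf("%d\n", array[index]);        /* the usual notation: prints 30     */
    printf("%d\n", *(array + index));    /* what the compiler does: prints 30 */
    printf("%d\n", index[array]);        /* legal, if unidiomatic: prints 30  */
    printf("%d\n", *array);              /* same as array[0]: prints 10       */
    return 0;
}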

In Java, plain pointer arithmetic (referencing and dereferencing) doesn't exist anymore. However, pointers still exist. They are called references, but that doesn't change what they are. And array access is still exactly the same thing: look at the address, add the index, and use that memory location. However, in Java, the runtime will check whether or not that index is within the bounds of the array you originally allocated. If not, it will throw an exception.

Now the advantage of the Java approach is that you don't have code that just blindly writes arbitrary bytes into arbitrary memory locations. This improves safety and also security, because if you fail to check for buffer overflows and such, the runtime will do it for you.

The disadvantage is that it's simply less powerful. It is possible to do memory-safe programming in C; it is impossible to benefit from the speed and the possibilities of unsafe programming in Java.
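As a rough illustration of what "memory-safe programming in C" means in practice, here is a hand-rolled bounds check of the kind the Java runtime inserts for you automatically (the function name is made up for the example):

#include <stdio.h>
#include <stdlib.h>

/* The kind of test a Java VM performs before every array access.
   In C you get the speed of skipping it, and the crashes or exploits
   when you forget it. */
static int checked_get(const int *array, size_t length, size_t index) {
    if (index >= length) {
        fprintf(stderr, "index %zu out of bounds (length %zu)\n", index, length);
        exit(EXIT_FAILURE);   /* roughly what an ArrayIndexOutOfBoundsException does */
    }
    return array[index];
}

int main(void) {
    int data[4] = { 1, 2, 3, 4 };
    printf("%d\n", checked_get(data, 4, 2));   /* fine: prints 3                    */
    printf("%d\n", checked_get(data, 4, 9));   /* refused instead of reading garbage */
    return 0;
}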

Actually, there is nothing hard about pointers or pointer arithmetic. They are just normally explained in convoluted ways, whereas all a pointer is is an index into one giant array (your memory space), all referencing a value does is give you the index where to find it, and all dereferencing does is look up the value at a given index. (This is a bit simplified, because it doesn't take into account that values have different sizes in memory depending on their type. But that's a circumstantial detail rather than part of the actual concept.)
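Taking that analogy literally, a toy model might look like this in C (the names memory, toy_pointer, reference, and dereference are purely illustrative, not a real API):

#include <stdio.h>

/* "Memory" is one big array, a "pointer" is nothing but an index into it,
   referencing hands you an index, dereferencing looks up the value there. */
static int memory[8];                  /* the whole "address space"   */
typedef int toy_pointer;               /* a pointer is just an index  */

static toy_pointer reference(int slot) { return slot; }        /* plays the role of "&" */
static int dereference(toy_pointer p)  { return memory[p]; }   /* plays the role of "*" */

int main(void) {
    memory[3] = 99;                    /* store a value somewhere     */
    toy_pointer p = reference(3);      /* remember where it lives     */
    printf("%d\n", dereference(p));    /* follow the index: prints 99 */
    return 0;
}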

IMHO, everybody in our job should be able to understand that, or they are simply in the wrong field.

+1 Java and C# still have pointers, and of course NullPointerExceptions
– jk., Sep 5 '11 at 16:41


Also note that references may point to different areas over time, as the garbage collector moves stuff around. Pointers are usually static.
– user1249, Sep 5 '11 at 20:54


+1: this! And I think there are two hard things to grasp about pointers (in general): indirection (which happens in C, C#, Java, ...) and pointer arithmetic (which doesn't happen in Java in the same way). In my opinion both are important concepts to learn and both are major stumbling blocks for beginners. But they should not be confused: indirection can happen without pointer arithmetic.
– Joachim Sauer, Sep 6 '11 at 6:52


Actually, back2dos was right the first time, since (array + index) already takes into account the size of the objects (in C).
– Matthew Flaschen, Sep 6 '11 at 15:10


@CyberSkull, the answer was giving the syntactic equivalent of array[index], and that's *(array+index). If you want to show how the compiler does things internally, you can explicitly talk about bytes or give the assembly.
– Matthew Flaschen, Sep 7 '11 at 15:22

The concept of pointers is important in the general computer programming body of knowledge.
Understanding the concept is good for would-be programmers and for programmers of any language, even if the language does not directly support it.

You are correct in a way, but when you pass data to methods, chances are that in some cases you are passing a pointer to the variable. However, the concept of a pointer is useful (in my opinion) regardless of its implementation in a software language.
– Emmad Kareem, Sep 5 '11 at 13:15


@SirTapTap: That's because if you learned C++ in some kind of course, they teach you C++, not the best way to use C++. Pointer arithmetic is usually glossed over because it's something you can have a passing knowledge of C++ without knowing. But even things like iterating over a general collection are done with pointers in real/idiomatic C++. (It's the foundation of how the Standard Template Library works.)
– Billy ONeal, Sep 6 '11 at 0:28


@SirTapTap: Practical use: every collection and algorithm in the STL. std::sort, std::partition, std::find, etc. They work with pointers, and they work with objects that act like pointers (iterators). And they work over any general collection: linked lists, dynamic arrays, deques, trees, or any other kind of user-defined collection. You can't get that kind of abstraction without pointers.
– Billy ONeal, Sep 6 '11 at 14:42

If you don't know the basics you will NEVER be able to solve the really hard, strange, difficult and complicated problems that come your way.

And if you do understand the basics really well, you are MUCH more marketable in the job market.

I worked once with a chap who had been programming for 10 years, and had no idea how pointers worked. I (much more junior) spent hours at a whiteboard educating him. That opened my eyes. He had NO IDEA about so many basic things.

Whilst your general point of "know as much as you possibly can" is a sound one, I would question the idea that you will "NEVER be able to solve the really hard, strange, difficult and complicated problems that come your way" if you don't understand pointers. This somehow implies that all difficult problems can be solved using these "magic" pointers, which is not the case. The concepts behind pointers are useful to know, but they are not directly essential in many fields of programming.
– Dan Diplo, Sep 5 '11 at 14:10


@Idsa: no, even more basic, many programmers nowadays don't even know how transistors and logic gates work in the goo' ole' chips, and they surely should have known how electrons move about and the effect of quantum uncertainty on miniaturization; I haven't even started on quacks, liptons, and bison! and the Hiccups bison particles!
– Lie Ryan, Sep 5 '11 at 15:34


Basics.... things like how stuff is stored. The difference between a byte, a word, how signed and unsigned work. How pointers work. What a character is. How things are coded in ASCII (and these days, Unicode). How a linked list can be created in memory using simple structures only. How strings REALLY work. From these little things, bigger things grow.
– quickly_now, Sep 6 '11 at 1:29


"Know as much as you possibly can" is a good principle, but I think you have the cart before the horse. Good developers strive to learn whatever they can because they're good developers. The yearning for knowledge is a trait of a good developer. It is not the cause of a good developer. Going out and learning as much as you can will not make you a good developer. It will make you a walking encyclopedia, nothing more. If you're a good developer, THEN you can APPLY the knowledge you have attained to solve problems. But if you weren't already a good developer, the knowledge won't get you much.
– corsiKa, Sep 6 '11 at 18:01

A few months back I was programming in C#, and I wanted to make a copy of a list. Of course what I did was NewList = OldList; and then started to modify NewList. When I tried to print out both lists, they were both the same, since NewList was just a pointer to OldList and not a copy, so I was actually changing OldList all along. It didn't take me too long to figure that one out, but some of my classmates weren't that quick and had to have it explained to them why this was happening.
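The same aliasing trap can be spelled out with raw C pointers, which is arguably where it is easiest to see what is going on (a sketch for illustration, not the original C# code):

#include <stdio.h>
#include <string.h>

int main(void) {
    int old_list[3] = { 1, 2, 3 };
    int *new_list = old_list;          /* NOT a copy: both names alias the same memory */

    new_list[0] = 99;                  /* "modifying the copy" ...                     */
    printf("%d\n", old_list[0]);       /* ... prints 99: the original changed too      */

    int real_copy[3];
    memcpy(real_copy, old_list, sizeof old_list);   /* an actual copy                  */
    real_copy[0] = 1;
    printf("%d %d\n", old_list[0], real_copy[0]);   /* prints: 99 1                    */
    return 0;
}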

Nowadays you need to know just the basic concept of references, not pointer syntax/math. I learned pointers (with arithmetic and syntax) in C back in the day. The languages I now program in don't deal with C-style pointers that let you do unsafe things. One can understand why, in Python, after a=[1,2]; b=a; a.append(3), both a and b reference the same object [1,2,3], without knowing things like the fact that in C the ith element of an array can be referenced by arr[i] or i[arr], as both are *(arr+i). I prefer when the language doesn't let i[arr] be used.
– dr jimbob, Sep 6 '11 at 3:31

Because pointers are a very powerful mechanism that can be used in many ways.

And why did other programming languages like VB.NET, Java, and C# eliminate them?

Because pointers are a very dangerous mechanism that can be misused in many ways.

I think programmers should learn about pointers, but from an educational perspective, it is unwise to introduce them early. The reason is that they are used for so many different purposes, it's hard to tell as a beginner why you are using a pointer in a particular circumstance.

Why? You can write a huge system with a forms designer and a code generator. Isn't that sufficient? (irony)

And now seriously: pointers are not a crucial part of programming in many areas, but they allow people to understand how the internals work. And if we have no one who understands how the internals work, we will end up in a situation where SQL2020, Windows 15, and Linux 20.04 are written in a garbage-collected virtual machine running over 30 layers of abstraction, with code generated via an IDE, in JavaScript.

Neither Java nor C# eliminated pointers; they have references, which are almost the same. What was eliminated is pointer arithmetic, which can be omitted in an introductory course.
No non-trivial application could be written without the concept of pointers or references, so it is worth teaching (no dynamic memory allocation could be done without them).

Consider the following in C++ and Java, and I guess it's not very different in C#:

aClass *x = new aClass();
aClass x = new aClass();

There's not really too much difference between pointers and references, right?
Pointer arithmetic should be avoided unless necessary, and when programming with high-level models it rarely is, so there's not much of a problem there.

Variable address pointers are a specific case of the more generalized concept of indirection. Indirection is used in most (all?) modern languages in many constructs such as delegates and callbacks. Understanding the concept of indirection enables you to know when and how to best use these tools.
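For example, a callback in C is just indirection applied to code instead of data: you pass the address of a function and the callee calls whatever it points at. A small sketch (all names are made up for illustration):

#include <stdio.h>

/* Delegates and callbacks in higher level languages do the same thing
   with more safety wrapped around it. */
static void print_square(int x) { printf("%d ", x * x); }
static void print_double(int x) { printf("%d ", x * 2); }

static void apply_to_all(const int *values, int count, void (*callback)(int)) {
    for (int i = 0; i < count; i++)
        callback(values[i]);      /* call through the pointer */
    printf("\n");
}

int main(void) {
    int data[] = { 1, 2, 3 };
    apply_to_all(data, 3, print_square);   /* prints: 1 4 9 */
    apply_to_all(data, 3, print_double);   /* prints: 2 4 6 */
    return 0;
}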

Pointers are how a large amount of data access gets done in all languages. Pointers are a hardware feature of all microprocessors. High level languages like Java, VB, and C# essentially wall off direct access to pointers from the users of the language with references. References refer to objects via the language's memory management scheme (which could be a pointer with metadata, or just an index into a memory table, for example).

Understanding how pointers work is fundamental to understanding how computers actually work. Pointers are also more flexible and powerful than references.

For example, the reason arrays start at index zero is that array subscripting is actually shorthand for pointer arithmetic. Without learning how pointers work, many beginning programmers don't quite get arrays.

int a, foo[10];
foo[2] = a;

The second line, rewritten as pointer arithmetic, would be:

*(foo + 2) = a;

(There is no need to multiply by sizeof(int) by hand; pointer arithmetic on an int pointer already scales the offset by the size of the pointed-to type.)

Without understanding pointers, one cannot understand memory management, the stack, the heap or even arrays! Additionally, one needs to understand pointers and dereferencing to understand how functions and objects are passed.
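The classic illustration of that last point is a swap function: passed by value it changes nothing the caller can see, passed by pointer it works. A quick sketch in C:

#include <stdio.h>

/* Receives copies: swapping them changes nothing the caller can see. */
static void swap_by_value(int a, int b)     { int t = a; a = b; b = t; }

/* Receives addresses: dereferencing them swaps the caller's variables. */
static void swap_by_pointer(int *a, int *b) { int t = *a; *a = *b; *b = t; }

int main(void) {
    int x = 1, y = 2;

    swap_by_value(x, y);
    printf("%d %d\n", x, y);    /* prints: 1 2 (unchanged) */

    swap_by_pointer(&x, &y);
    printf("%d %d\n", x, y);    /* prints: 2 1 */
    return 0;
}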

I think it boils down to the fact that the need to deal with pointers fell away as programmers dealt less directly with the hardware they were running on. For example, allocating a linked list data structure in a way that fit perfectly onto the sequence of 640-byte memory modules that the specialised hardware had.

Dealing with pointers manually can be error-prone (leading to memory leaks and exploitable code) and is time consuming to get right. So Java, C#, etc. now all manage your memory and your pointers for you via their virtual machines (VMs). This is arguably less efficient than using raw C/C++, although the VMs are constantly improving.

C (and C++) are still widely used languages, especially in the High Performance Computing, Gaming and embedded hardware spaces. I'm personally thankful I learned about pointers as the transition to Java's references (a similar concept to pointers) was very easy and I wasn't lost when I saw my first NullPointerException (which should really be called a NullReferenceException, but I digress).

I would advise learning about the concept of pointers, as they still underpin a lot of data structures etc. Then go on to choose a language that you love to work in, knowing that if something like an NPE comes up, you know what's really going on.

Some languages support direct memory access (pointers), some don't. There are good reasons for each case.

As someone said here, back in the days of C, automatic memory management was not as elaborate as it is today. And people were used to it, anyhow. The good programmers back then had a much deeper understanding of computer programs than those of our generation (I'm 21). They were using punch cards and waiting days for compile time on the mainframe. They probably knew why every bit in their code existed.

The obvious advantage of languages like C is that they allow you to have finer control over your program. When do you actually need it, these days? Only when you're creating infrastructural applications, such as OS-related programs and runtime environments.
If you want to just develop good, fast, robust and reliable software, then automatic memory management is most often your better choice.

The fact is that direct memory access has mostly been abused over the course of software development history. People created programs that leaked memory and were actually slower because of redundant memory allocation (in C it's easy and common to extend the process's virtual memory space for every single allocation).

These days, virtual machines/runtimes do a much better job than 99% of the programmers at allocating and releasing memory. On top of that, they allow you extra flexibility in the flow you want your program to have, because you're (mostly) not occupied with freeing allocated memory at the right time and place.

As for knowledge. I think it's admirable that programmers know how the environments they program in are implemented. Not necessarily to the smallest details, but the big picture.

I think that knowing how pointers work (at the very least) is interesting. Same as knowing how polymorphism is implemented. Where your process gets its memory from, and how.
These are things that have always been of interest to me personally. I can honestly say that they have made me a better programmer, but I cannot say that they're an educational necessity for anyone who wants to become a good programmer. In either case, knowing more is often going to make you better at your work.
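As one small example of "how polymorphism is implemented": dynamic dispatch boils down to calling through a pointer stored alongside the data. Real runtimes use tables of such pointers (vtables), but a rough sketch in C conveys the idea (types and names invented for the example):

#include <stdio.h>

/* Each "object" carries a pointer to the function that implements its
   behaviour; the caller invokes whatever that pointer refers to. */
struct shape {
    const char *name;
    double (*area)(const struct shape *self);
    double width, height;
};

static double rectangle_area(const struct shape *s) { return s->width * s->height; }
static double triangle_area(const struct shape *s)  { return s->width * s->height / 2.0; }

int main(void) {
    struct shape shapes[] = {
        { "rectangle", rectangle_area, 3.0, 4.0 },
        { "triangle",  triangle_area,  3.0, 4.0 },
    };

    for (int i = 0; i < 2; i++)   /* same call site, different behaviour */
        printf("%s: %.1f\n", shapes[i].name, shapes[i].area(&shapes[i]));
    return 0;
}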

The way I see it, if all you're asked to do is create an application in Java or C# or something like that, then you need to focus on proper design and implementation techniques. Testable code, clean code, flexible code. In that order.

Because even if you don't know all the small details, someone who does will be able to change what you'll have created into something that simply performs better. And that's often not a difficult job, once you have a proper, clean and testable design in place (and that's usually most of the work).

If I were an interviewer looking to hire someone for a high-level-language application, those would be the things I'd be most interested in.

Low-level knowledge is a bonus. It's good for debugging and occasionally creating slightly better solutions. It makes you an interesting person, professionally. It grants you some respect in your workplace.

For most practical purposes in high-level OO languages, understanding references is enough; you don't really need to understand how these languages implement references in terms of pointers.

There are a lot of functional and modern multi-paradigm approaches I'd value much more highly than being able to do fancy pointer arithmetic to, say, write the 1000th optimized string copy function, which probably performs worse than the String.copy of your standard library anyway.

I'd advise learning a lot of different, higher-level concepts first, and branching out to learn languages of different designs to broaden your horizon, before attempting to specialize in close-to-hardware stuff.

I often see totally failed attempts to micro-optimize web servlet or similar code for a 5% gain, when caching (memoization), SQL optimization, or just web server config tuning can yield 100% or more with little effort. Fiddling with pointers is premature optimization in most cases.

Definitely, you need a thorough understanding of pointers if you really want to be a good programmer. The reason for the pointer concept was direct access to a value, which is more efficient and effective when time is constrained...

Also, nowadays, consider mobile applications: they have very limited memory, and we need to use it very carefully so that the application responds quickly to the user... For this purpose we need a direct reference to the value...

Consider Apple's devices and the Objective-C language, which works heavily with pointers. Every object variable declared in Objective-C is a pointer. Have a look at the Objective-C wiki.