In the UNIX world, it's well understood that you must prefix binaries in the current directory with ./ in order to run them: . is not in $PATH by default. This avoids malicious binaries shadowing system utilities like ls and thereby getting code execution.

I would like to know the history of this decision. Was it this way in UNIX from the beginning? Did it originate in certain shells?

I want to be clear: I'm not looking for reasons in general why it was implemented; I understand those. I'm looking for the event that caused it or at least when the change was made.

Citations would be much appreciated.

Interestingly, it looks like bash actually intends to include ., at least that's my reading of this source (from config-top.h):
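The block I'm referring to looks roughly like this. (I'm quoting from memory of bash's config-top.h, so treat this as an approximation; the exact directory list varies between versions, and distributors often override it at build time. Check your own source tree.)

```c
/* The default value of the PATH variable: note the trailing "."
   (current directory) at the end of the list. */
#ifndef DEFAULT_PATH_VALUE
#define DEFAULT_PATH_VALUE \
  "/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/bin:/sbin:."
#endif
```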

(Some have questioned the relevance of this question. There's been some talk about a number of executable installers on Windows that are vulnerable to a similar attack: executables by default will search for DLLs in ., so by planting an evil DLL in say Downloads, you get code exec, often as admin. It's a "Those who don't learn history are doomed to repeat it" moment.)

This question came from our site for students, researchers and practitioners of computer science.


en.wikipedia.org/wiki/PATH_(variable) provides some details. What I find interesting is that the original design had only a single /bin directory, and the PATH concept came from Multics, only because the development team's /bin became too large. The article doesn't mention anything about `.`. I don't think there's much significance to it: most likely it was experimented with, and then it occurred to someone that it was a security hole.
– PSkocik Jan 11 '16 at 19:50


@PSkocik: That's exactly the event I'm looking for. The history of UNIX is pretty well documented in specs and there are plenty of people alive today who may have been involved in the decision. I don't think it's a huge ask.
– nfirvine Jan 13 '16 at 19:06

1 Answer

Finding "historic documents" about this decision is no easy task, but another answer from the Stack Exchange network gives what I also think is the most appropriate answer to your question: it's there to avoid a "sh## happens" scenario:

On UNIX-like systems over the years (most relevantly to me, Linux),
I've noticed that . (current dir) is never in the $PATH by default.
Why is this? I recall reading years ago that it was a security
problem, but the article I read didn't explain what exactly the
problem was. Is it because someone could leave a malicious version of
'ls' or 'cp' in a directory, and I'd end up running it without
realizing it was there?

Answer:

You answered your own question correctly: that's exactly why dot isn't
in the path, to protect against childish viruses or honest mistakes.

Of course, this is a very lame and useless anti-virus measure, and
nothing stops you from adding dot to the path yourself.

And at this link, you can find an additional example of why it is dangerous to put the dot in your $PATH.

Footnote:

An extreme example would be the situation in which an ordinary user
created a shell script such as rm -r /, which would delete all files
and directories in the system for which the user had writing
permission, and named this script ls. Were the system administrator to
navigate to the directory in which this script was located and attempt
to run the standard ls command in order to view the contents of that
directory, the shell would instead run the script with the same name
and thereby remove the contents of all currently mounted partitions on
the computer!

Also, the purpose of PATH is to complete the path of a command so that you don't need to remember or type the full path of your binaries. There is no need to index the commands of the directory you are in: if you already know where those programs are stored, PATH adds nothing.
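The footnote's scenario is easy to reproduce harmlessly. The sketch below (a throwaway demo I wrote for illustration, not anything from the original sources) plants a fake ls in a temporary directory, puts . at the front of PATH, and shows that the planted script is what actually runs:

```shell
#!/bin/sh
# Harmless demo of the attack above: a planted script named 'ls'
# shadows the real /bin/ls once '.' is put at the front of PATH.
dir=$(mktemp -d)
cd "$dir" || exit 1

# The planted "ls" just announces itself; an attacker's version
# could have contained 'rm -r /' as in the footnote.
printf '#!/bin/sh\necho planted-ls-ran\n' > ls
chmod +x ls

PATH=".:$PATH"                # the unsafe configuration
hash -r 2>/dev/null || true   # drop any cached lookup of 'ls'

result=$(ls)                  # runs ./ls, not /bin/ls
echo "$result"

cd / && rm -rf "$dir"         # clean up the throwaway directory
```

Running this prints the planted script's message rather than a directory listing, which is exactly the trap the footnote describes.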

EDIT:

However, after some research, I found that the idea behind the search path came from Multics, but with the limitation that it only searched inside /bin. Unix v3 was the first to implement it without this one-directory limitation. And looking at the security pages of the following manual, they had already shown the issues with using . in the PATH variable. This applies not only to su, and it reinforces the security concern of injecting malicious code into the environment.

Trojan Horse tricks and countermeasures were discovered in an ongoing
game that has been recounted by Morris and Fred Grampp. Notice, for
example, the removal of login(1) to chapter 8 and the intrusion of
/etc/ into the synopsis for su (Grampp, v8). These fillips defeated
the old chestnut of leaving programs named login or su lying around in
hopes of capturing a password typed by an unwary system administrator.
Other subtle features of the modern su: dot is excluded from the shell
search path and the burglars’ favorite shell variable IFS is reset.

So since early in its history, using dot as part of $PATH has been known to be dangerous.

That's not what I'm looking for. I understand why in general it's a bad idea, but I want to know the specific event that triggered this thinking or when it happened.
– nfirvine Jan 11 '16 at 19:50


Maybe this event never existed. This is as broad as asking "When did humanity learn that it was bad to stab people with knives"...
– user34720 Jan 11 '16 at 19:56

Maybe it doesn't exist. But it's not that broad: it probably happened in the last 50 years, so there may be a first-hand account of the decision. And maybe it's part of a spec somewhere. CS is pretty well documented.
– nfirvine Jan 11 '16 at 20:03


Take a look at my answer, please. It seems that I found a suitable answer to your historical needs :)
– user34720 Jan 11 '16 at 20:12

@nwilder: I was reading that too. I was following the citation, but it's locked away in an academic journal. Anyway, that's for su, not shells in general, but good enough; it shows that's roughly the timeframe in which they were aware of the issue. Accepted.
– nfirvine Jan 11 '16 at 20:18