I just ran rm -rf /* accidentally, but I meant rm -rf ./* (notice the star after the slash).

alias rm='rm -i' and --preserve-root by default didn't save me, so are there any automatic safeguards for this?

Yeah, I wasn't root and cancelled the command immediately, but there were some relaxed permissions somewhere or something because I noticed that my Bash prompt broke already. I don't want to rely on permissions and not being root (I could make the same mistake with sudo), and I don't want to hunt for mysterious bugs because of one missing file somewhere in the system, so, backups and sudo are good, but I would like something better for this specific case.

About thinking twice and using the brain: I am using it, actually! But I'm using it to solve some complex programming task involving 10 different things. I'm immersed deeply enough in this task that there isn't any brain power left for checking flags and paths. I don't even think in terms of commands and arguments; I think in terms of actions like 'empty current dir', and a different part of my brain translates them into commands, and sometimes it makes mistakes. I want the computer to correct them, at least the dangerous ones.

Thanks for the answers. I'll try them and accept one when my box finishes restoring from backup tomorrow!

FYI, you can also do rm -rf . /mydir instead of rm -rf ./mydir and kill whatever directory you were in. I find this happens more often.
– user606723 Dec 2 '11 at 18:01

To use a gun analogy, this question says please make the gun recognize that I am aiming at my foot and not fire, but I don't want to have any responsibility for not aiming the gun at my foot in the first place. Guns, and computers, are stupid and if you do a stupid thing then you will get these results. Following along the gun analogy, nothing will keep you from hurting yourself except vigilance and practice.
– slillibri Dec 2 '11 at 19:01

@slillibri Except that rm is not a gun, it is a computer program, it could be smart enough to determine that the user is going to delete some important files and issue a warning (like it actually does if you try to do rm -rf / without star).
– Valentin Nemcev Dec 2 '11 at 19:40

@slillibri Guns have safeties. Asking how to put better safeties on the rm command is a perfectly legitimate sysadmin question.
– Gilles Dec 2 '11 at 20:13

26 Answers

One of the tricks I follow is to put # in the beginning while using the rm command.

root@localhost:~# #rm -rf /

This prevents accidental execution of rm on the wrong file/directory. Once verified, remove # from the beginning. This trick works, because in Bash a word beginning with # causes that word and all remaining characters on that line to be ignored. So the command is simply ignored.

OR

If you want to protect an important directory, there is one more trick.

Create a file named -i in that directory. How can such an odd file be created? Using touch -- -i or touch ./-i.
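
The reason this works: when the glob in rm -rf * is expanded in that directory, the file name -i is passed to rm as an argument, and rm treats it as the interactive flag and prompts for every deletion (assuming the expansion sorts -i ahead of the other names, which the usual collation order does):

cd /precious/dir   # hypothetical directory you want to protect
touch ./-i         # create a file literally named "-i"
rm -rf *           # "-i" expands first; rm takes it as a flag and prompts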

Safe-rm is a safety tool intended to prevent the accidental deletion
of important files by replacing /bin/rm with a wrapper, which checks
the given arguments against a configurable blacklist of files and
directories that should never be removed.

Users who attempt to delete one of these protected files or
directories will not be able to do so and will be shown a warning
message instead:
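
(A sketch; the exact message and the blacklist location, commonly /etc/safe-rm.conf, vary by version and distribution.)

$ rm -rf /usr
safe-rm: skipping /usr

# /etc/safe-rm.conf: one protected path per line
/
/etc
/usr
/var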

Creating a file named -i is absolutely pure genius. I could've used that about a year ago when I accidentally ran rm -rf /etc/* on a VPS... (fortunately, I take nightly snapshots, so I was able to restore in under 45 minutes).
– David W Dec 3 '11 at 4:42

It is genius. Sorcery would be touch -- -rf
– Mircea Vutcovici Dec 5 '11 at 19:03

I just ran rm -rf /* accidentally, but I meant rm -rf ./* (notice the star after the slash).

The solution: Don't do that! As a matter of practice, don't use ./ at the beginning of a path. The slashes add no value to the command and will only cause confusion.

./* means the same thing as *, so the above command is better written as:

rm -rf *

Here's a related problem. I see the following expression often, where someone assumed that FOO is set to something like /home/puppies. I saw this just today actually, in the documentation from a major software vendor.

rm -rf $FOO/

But if FOO is not set, this will evaluate to rm -rf /, which will attempt to remove all files on your system. The trailing slash is unnecessary, so as a matter of practice don't use it.

The following will do the same thing, and is less likely to corrupt your system:

rm -rf $FOO
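
As a further safeguard (also suggested in a comment below), the shell's ${VAR:?} expansion makes the command fail loudly if the variable is unset or empty, instead of silently expanding to nothing:

rm -rf "${FOO:?FOO is unset, aborting}"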

I've learned these tips the hard way. When I had my first superuser account 14 years ago, I accidentally ran rm -rf $FOO/ from within a shell script and destroyed a system. The 4 other sysadmins looked at this and said, 'Yup. Everyone does that once. Now here's your install media (36 floppy disks). Go fix it.'

Other people here recommend solutions like --preserve-root and safe-rm. However, these solutions are not present on all Un*x variants and may not work on Solaris, FreeBSD, or Mac OS X. In addition, safe-rm requires that you install additional packages on every single Linux system that you use. If you rely on safe-rm, what happens when you start a new job and they don't have safe-rm installed? These tools are a crutch, and it's much better to rely on known defaults and improve your work habits.

My friend told me he never uses rm -rf *. He always changes the directory first, and uses a specific target. The reason is that he uses the shell's history a lot, and he is worried that having such a command in his history might pop up at the wrong time.
– haggai_e Dec 4 '11 at 15:56

@haggai_e: Good tip. When I was new to Unix, I once ran into a bug where rm -rf * also removed . and ... I was root, so this traversed into parent directories like ../../.. and was quite destructive. I've tried to be very careful with rm -rf * ever since.
– Stefan Lasiewski Dec 6 '11 at 19:38

@VictorSergienko, with bash, how about specifying ${FOO:?}, as in rm -rf ${FOO:?}/ and rm -rf ${FOO:?}/${BAR:?}. It will prevent it from ever translating into rm -rf /. I have some more info about this in my answer here.
– A-B-B Feb 10 at 20:48

@haggai_e: I find this one of the best pieces of advice on this topic. I burned my fingers by using rm -rf * in a for loop that changed to the wrong directory by mistake and ended up deleting something else. Had I used a specific target, there would have been a much smaller chance of deleting the wrong thing.
– richk Mar 30 at 14:29

And you should use a VM or spare box to practice recoveries - find out what didn't work and refine said plan. We are moving to a fortnightly reboot, because there have been power outages in our building, and every time it has been painful. By doing a few planned shutdowns of all the racks, we've cut it from a few days of running around to about 3 hours now; each time we learn which bits to automate, which init.d scripts to fix, and so on.
– Danny Staple Dec 3 '11 at 10:19

And try this command on a VM. It's interesting! But take a snapshot first.
– Stefan Lasiewski Dec 4 '11 at 1:57

The best solutions involve changing your habits not to use rm directly.

One approach is to run echo rm -rf /stuff/with/wildcards* first. Check that the output from the wildcards looks reasonable, then use the shell's history to execute the previous command without the echo.

Another approach is to limit the rm command to cases where it's blindingly obvious what you'll be deleting. Rather than removing all the files in a directory, remove the directory and create a new one. A good method is to rename the existing directory to DELETE-foo, then create a new directory foo with appropriate permissions, and finally remove DELETE-foo, as sketched below. A side benefit of this method is that the command entered into your history is rm -rf DELETE-foo.
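
A minimal sketch of that rename-then-remove sequence (foo is a placeholder):

mv foo DELETE-foo    # step 1: move the directory out of harm's way
mkdir foo            # step 2: recreate it with appropriate permissions
rm -rf DELETE-foo    # step 3: the destructive command now names an unambiguous target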

If you really insist on deleting a bunch of files because you need the directory to remain (because it must always exist, or because you wouldn't have the permission to recreate it), move the files to a different directory, and delete that directory.

Deleting a directory from inside it would be attractive, because rm -rf . is short and hence has a low risk of typos. Typical systems don't let you do that, unfortunately. You can do rm -rf -- "$PWD" instead, with a higher risk of typos, but most of them lead to removing nothing. Beware that this leaves a dangerous command in your shell history.

Whenever you can, use version control. You don't rm, you cvs rm or whatever, and that can be undone.

Zsh has options to prompt you before running rm with an argument that lists all files in a directory: rm_star_silent (on by default) prompts before executing rm whatever/*, and rm_star_wait (off by default) adds a 10-second delay during which you cannot confirm. This is of limited use if you intended to remove all the files in some directory, because you'll be expecting the prompt already. It can help prevent typos like rm foo * for rm foo*.
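
To turn on the non-default option, a one-line sketch for ~/.zshrc (zsh only; see zshoptions(1) for the exact semantics):

setopt RM_STAR_WAIT   # adds the 10-second delay before rm whatever/* can be confirmed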

There are many more solutions floating around that involve changing the rm command. A limitation of this approach is that one day you'll be on a machine with the real rm and you'll automatically call rm, safe in your expectation of a confirmation… and next thing you'll be restoring backups.

And if you need the directory to remain, you can do that quite simply by using find somedir -type f -delete, which will delete all files in somedir but will leave the directory and all subdirectories in place.
– aculich Feb 26 '12 at 15:45

This is a standard practice of mine specifically for regexps in the context of rm, but it would have saved you in this case.

I always do echo foo*/[0-9]*{bar,baz}* first, to see what the regexp is going to match. Once I have the output, I then go back with command-line editing and change echo to rm -rf. I never, ever use rm -rf on an untested regexp.

OK, what am I looking for? Are you making the point that the regexp syntax for file-matching is different (and sometimes called by a different name) from that used in e.g. Perl? Or some other point that I've missed? I apologise for my slowness of thought; it's first thing Saturday morning here!
– MadHatter Dec 3 '11 at 7:52

These things that you're calling "regexp" are in fact globs. It's not a different regex syntax; it's not a regex.
– bukzor Dec 3 '11 at 16:59

That argument could certainly be made; however, from the wikipedia article on regular expressions, I find that "Many modern computing systems provide wildcard characters in matching filenames from a file system. This is a core capability of many command-line shells and is also known as globbing" - note the use of "also known as", which seems to me to indicate that calling tokens containing metacharacters to match one or more file names regexps isn't wrong. I agree that globbing is a better term because it doesn't mean anything other than the use of regular expressions in filename matching.
– MadHatter Dec 4 '11 at 15:11

There's some really bad advice in this thread, luckily most of it has been voted down.

First of all, when you need to be root, become root - sudo and the various alias tricks will make you weak. And worse, they'll make you careless. Learn to do things the right way, stop depending on aliases to protect you. One day you'll get root on a box which doesn't have your training wheels and blow something up.

Second - when you have root, think of yourself as driving a bus full of school children. Sometimes you can rock out to the song on the radio, but other times you need to look both ways, slow things down, and double check all your mirrors.

Third - You hardly ever really have to rm -rf - more likely you want to mv something something.bak or mkdir _trash && mv something _trash/

The simplest way to prevent accidental rm -rf /* is to avoid all use of the rm command! In fact, I have always been tempted to run rm /bin/rm to get rid of the command completely! No, I'm not being facetious.
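
Instead, list what a destructive command would touch and review it first (path/to/files is a placeholder):

find path/to/files | less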

Note that in modern versions of find, if you leave out the name of a directory it will implicitly use the current directory, so the above is equivalent to:

find . | less

Once you're sure these are the files you want to delete you can then add the -delete option:

find path/to/files -delete

So not only is find safer to use, it is also more expressive: if you want to delete only certain files in a directory hierarchy that match a particular pattern, you could use an expression like this to preview, then delete the files:
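
(A sketch; the '*~' editor-backup pattern is only an illustration.)

find path/to/files -name '*~' | less      # preview exactly what matches
find path/to/files -name '*~' -delete    # then delete those files and nothing else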

The solution to this problem is to take regular backups. Anytime you produce something you don't want to risk losing, back it up. If you find backing up regularly is too painful, then simplify the process so that it's not painful.

For example, if you work on source code, use a tool like git to mirror the code and keep history on another machine. If you work on documents, have a script that rsyncs your documents to another machine.
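
For instance, a minimal sketch of the rsync approach (backuphost and the paths are placeholders):

rsync -a ~/Documents/ backuphost:backups/documents/   # mirror documents to another machine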

One important factor in avoiding this type of mistake is not to log in using the root account. When you log in as a normal non-privileged user, you need to use sudo for each command, so you are forced to be more careful.

If you are still not convinced about using sudo and backups, have a look at this page: forum.synology.com/wiki/index.php/…. It talks about creating a recycle bin. Hope this helps!
– Khaled Dec 2 '11 at 17:44

@Khaled I'm using sudo and backups, I just want something better for this specific problem
– Valentin Nemcev Dec 2 '11 at 17:54

It may be complicated, but you can set up roles within SELinux so that even if the user becomes root via sudo su - (or plain su), the ability to delete files can be limited (you have to log in directly as root in order to remove files). If you are using AppArmor, you may be able to do something similar.

Of course, the other solution would be to make sure that you have backups. :)

When I delete a directory recursively, I put the -r, and -f if applicable, at the end of the command, e.g. rm /foo/bar -rf. That way, if I accidentally press Enter too early, without having typed the whole path yet, the command isn't recursive so it's likely harmless. If I bump Enter while trying to type the slash after /foo, I've written rm /foo rather than rm -rf /foo.

That works nicely on systems using the GNU coreutils, but the utilities on some other Unixes don't allow options to be placed at the end like that. Fortunately, I don't use such systems very often.

It seems like the best way to reduce this risk is to have a two-stage delete like most GUIs. That is, replace rm with something that moves things to a trash directory (on the same volume). Then clean that trash after enough time has gone by to notice any mistake.

One such utility, trash-cli, is discussed on the Unix StackExchange, here.
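
A sketch of that two-stage workflow with trash-cli (command names may differ between versions; check its documentation):

trash-put big-directory   # moves it to the trash instead of unlinking it
trash-list                # review what is in the trash
trash-empty 30            # permanently remove items older than 30 days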

Avoid using globbing. In Bash, you can set noglob. But again, when you move to a system where noglob is not set, you may forget that and proceed as if it were.

Set noclobber so that output redirection (>) refuses to overwrite existing files; note that it does not protect against mv and cp, which have their own -i and -n options for that.

Use a file browser for deletion. Some file browsers offer a trashcan (for example, Konqueror).

Another way of avoiding globbing is as follows. At the command line, I run echo filenamepattern >> xxx. Then I edit the file with Vim or vi to check which files are to be deleted (watch for filename pattern characters in the file names), and then use :%s/^/rm -f / to turn each line into a delete command. Finally I source xxx. This way you see every file that is going to be deleted before doing it (a sketch follows after these tips).

Move files to an 'attic' directory or tarball. Or use version control (as said before me).
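
A sketch of the review-in-an-editor tip above ('*.tmp' is only a stand-in pattern):

echo *.tmp >> xxx     # capture the expanded file names into xxx
vi xxx                # inspect the list, then :%s/^/rm -f / builds the command
source xxx            # run the reviewed delete command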

It is good that you suggest using find, but I recommend a safer way of using it in my answer. There is no need to use xargs rm, since all modern versions of find have the -delete option. Also, to safely use xargs rm you need find -print0 and xargs -0 rm; otherwise you'll have problems when you encounter things like filenames with spaces.
– aculich Feb 26 '12 at 5:54

My point wasn't about the nuances of xargs, but rather about using find first, without deleting files, and then continuing.
– thinice Feb 26 '12 at 7:33

Yes, I think that scoping out files using find is a good suggestion, however the nuances of xargs are important if you suggest using it, otherwise it leads to confusion and frustration when encountering files with spaces (which is avoided by using the -delete option).
– aculich Feb 26 '12 at 8:05

I usually use the -v flag to see what is being deleted and have a chance to ^C quickly if I have the slightest doubt. Not really a way to prevent bad rm's, but this can be useful to limit the damage in case something goes wrong.
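
For example (both GNU and BSD rm support -v):

rm -rfv old-builds/   # prints each path as it is removed; ^C at the first surprise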

If you're not in the mood to acquire new habits right now, .bashrc/.profile is a good place to add some tests to check whether you are about to do something stupid.
I figured that in a Bash function I could grep for a pattern that might ruin my day, and came up with this:

alias rm='set -f; myrm'  # set -f turns off wildcard expansion; it has to happen
                         # outside the function so that we get the "raw" string
myrm() {
    ARGV="$*"
    set +f  # opposite of set -f: globbing is re-enabled for the real rm below
    if echo "$ARGV" | grep -q -e '-rf /\*' \
                              -e 'another scary pattern'
    then
        echo "Do Not Operate Heavy Machinery while under the influence of this medication"
        return 1
    else
        /bin/rm $@  # deliberately unquoted so the shell expands the wildcards here
    fi
}

The good thing about it is that it's only Bash.

It's clearly not generic enough in that form, but I think it has potential, so please post some ideas or comments.

It's good that you're trying to preview your files before deleting them; however, this solution is overly complicated. You can instead accomplish this very simply and in a more generic way using the find command. Also, I don't understand why you say "the good thing about it is that it's only Bash"? It is recommended to avoid bashisms in scripts.
– aculich Feb 26 '12 at 7:17

To prevent us from running rm -rf /* or rm -rf dir/ * when we mean rm -rf ./* or rm -rf dir/*, we have to detect the patterns " /*" and " *" (simplistically). But we can't just pass all the command-line arguments through grep looking for some harmful pattern, because bash expands the wildcard arguments before passing them on (the star would be expanded to all the contents of a folder). We need the "raw" argument string. That's done with set -f before we invoke the myrm function, which is then passed the raw argument string, and grep looks for predefined patterns.
– kln Feb 26 '12 at 17:57

I understand what you are trying to do with set -f, which is equivalent to set -o noglob in Bash, but that still doesn't explain your statement that "The good thing about it is that it's only Bash". Instead you can eliminate the problem entirely, and in a generic way for any shell, by not using rm at all, but rather using the find command. Have you actually tried that suggestion to see how it compares with what you suggest here?
– aculich Feb 26 '12 at 18:27

@aculich By "only Bash" I mean no Python or Perl dependencies; everything can be done in Bash. Once I amend my .bashrc I can continue working without having to break old habits. Every time I invoke rm, Bash will make sure I don't do something stupid. I just have to define some patterns that I want to be alerted about, like " *", which would remove everything in the current folder. Every now and again that will be exactly what I want, but with a bit more work, interactivity can be added to myrm.
– kln Feb 26 '12 at 18:29

@aculich OK, gotcha. No, I haven't tried it; I think it requires a significant change in workflow. I just checked here on Mac OS X: my .bash_history is 500 lines, and 27 of those commands are rm. And these days I don't use a terminal very often.
– kln Feb 26 '12 at 18:40

Realistically, you should have a mental pause before executing that sort of operation with a glob, whether you're running as root, prepending sudo to it, etc. You can run an ls on the same glob, but mentally you should stop for a second, make sure you've typed what you wanted, and make sure what you want is actually what you want. I suppose this is something that's mainly learned by destroying something in your first year as a Unix SA, in the same way that the hot burner is a good teacher in telling you that something on the stove may be hot.

Sadly, I cannot leave a comment above due to insufficient karma, but wanted to warn others that safe-rm is not a panacea for accidental mass-deletion nightmares.

The following was tested in a Linux Mint 17.1 virtual machine (warning to those unfamiliar with these commands: DON'T DO THIS! Actually, even those familiar with these commands should/would probably never do this to avoid catastrophic data loss):