Preventing Bash Script Disasters

So you just wrote a bash script and you’re scared of running it on your system because of a lurking rm -rf statement. Here are some things you can use to prevent disasters caused by unhandled errors in your scripts:

1) Use set -u (Just put set -u at the top of the script):

In this mode, the script will exit if we try to use an uninitialized variable. Useful for preventing rm -rf $uninitialized_variable/ type disasters.

If this mode is not on, that command expands to rm -rf / and can destroy your whole system (GNU rm refuses a bare rm -rf / by default, but don’t count on that protection).
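A minimal sketch of the idea, using a hypothetical build_dir variable that was never assigned; the dangerous expansion runs in a subshell and nothing is actually deleted:

```shell
#!/usr/bin/env bash
# $build_dir is never assigned. With set -u, expanding it is a fatal
# error, so the subshell aborts before it could ever run rm.
if (set -u; echo "would run: rm -rf \"$build_dir/\"") 2>/dev/null; then
    result="expanded to rm -rf /"        # what you get WITHOUT set -u
else
    result="aborted: build_dir is unset" # what set -u gives you
fi
echo "$result"
```

Quote your variables too: "$build_dir/" fails more loudly than an unquoted expansion that silently becomes empty.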

2) Use set -o pipefail:

This mode causes a pipeline to return a failure exit code if any command in it fails. Normally, a pipeline’s exit status is just the exit status of its last command.

This is useful for catching errors that occur in a command other than the last one in the pipeline.
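You can see the difference directly (each pipeline runs in a fresh bash so the option doesn’t leak out):

```shell
#!/usr/bin/env bash
# false fails, true succeeds: the pipeline's status depends on pipefail.
without_pipefail=$(bash -c 'false | true; echo $?')
with_pipefail=$(bash -c 'set -o pipefail; false | true; echo $?')

echo "without pipefail: $without_pipefail"  # 0 — the failure is masked
echo "with pipefail:    $with_pipefail"     # 1 — the failure is reported
```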

3) Use set -e:

In this mode, any command that returns a non-zero exit code causes the script itself to terminate immediately with an error. Basically, the script won’t continue past the point where an error occurs.
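A small demonstration, running the set -e script in its own bash process so we can watch it stop early:

```shell
#!/usr/bin/env bash
# Under bash -e, the first failing command terminates the script,
# so "step 2" is never printed.
output=$(bash -ec '
    echo "step 1"
    false          # non-zero exit: the script stops here
    echo "step 2"  # never reached
'; echo "exit=$?")

echo "$output"
```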

4) The above mode is fine, but when an error is detected the script simply halts, so you won’t be notified that an error occurred. So I comment out set -e and instead do some manual error checking after every command.

$? contains the exit code of the last executed command; if it is not 0, some error occurred. So I pass this exit code, along with a tag describing the command that just ran, to a function which checks whether the code is non-zero and, if so, e-mails us the command that failed.

As soon as I detect that an error occurred, I terminate the script with an ‘exit’ statement then and there, because errors can cascade into further errors and might do irreparable harm.
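A sketch of that pattern. The function name, tags, and mail recipient are placeholders; this version just echoes instead of sending e-mail, and the demo runs in a subshell so its exit doesn’t end your session:

```shell
#!/usr/bin/env bash
check_exit() {
    local status=$1 tag=$2
    if [ "$status" -ne 0 ]; then
        # Real script: echo "FAILED: $tag" | mail -s "script error" admin@example.com
        echo "FAILED: $tag (exit code $status)"
        exit "$status"   # stop immediately so errors don't cascade
    fi
}

demo=$(
    set +e                                  # we check errors manually
    true;  check_exit $? "step that succeeds"
    false; check_exit $? "step that fails"  # exits here
    echo "never reached"
) || true
echo "$demo"
```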

Feel free to share more tips on how bash script disasters can be prevented.