Run the above in bash or zsh and you'll get the same results; only one of retval_bash and retval_zsh will be set. The other will be blank. This would allow a function to end with return $retval_bash $retval_zsh (note the lack of quotes!).
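
For reference, a sketch of the kind of snippet being discussed (a reconstruction, not necessarily camh's exact code; the braced ${pipestatus[1]} form expands to empty in bash, where that variable is unset, and likewise ${PIPESTATUS[0]} is empty in zsh):

foo | bar
retval_bash=${PIPESTATUS[0]} retval_zsh=${pipestatus[1]}
return $retval_bash $retval_zsh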

@JanHudec: Perhaps you should read the first five words of my answer. Also kindly point out where the question requested a POSIX-only answer.
– camh Dec 16 '14 at 8:27


@JanHudec: Nor was it tagged POSIX. Why do you assume the answer must be POSIX? It was not specified so I provided a qualified answer. There is nothing incorrect about my answer, plus there are multiple answers to address other cases.
– camh Dec 17 '14 at 9:15

Pipefail

The first way is to set the pipefail option (ksh, zsh or bash). This is the simplest; it makes the exit status $? report the exit code of the last program in the pipeline to exit non-zero (or zero if all exited successfully).

$ false | true; echo $?
0
$ set -o pipefail
$ false | true; echo $?
1

$PIPESTATUS

Bash also has an array variable called $PIPESTATUS ($pipestatus in zsh) which contains the exit status of all the programs in the last pipeline.
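
For example (in bash; in zsh you would echo $pipestatus instead):

$ false | true | false; echo "${PIPESTATUS[@]}"
1 0 1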

@Patrick the pipestatus solution works on bash. One more question: in case I use a ksh script, do you think we can find something similar to pipestatus? (Meanwhile I see that pipestatus is not supported by ksh.)
– yael Apr 21 '13 at 14:32


@yael I don't use ksh, but from a brief glance at its manpage, it doesn't support $PIPESTATUS or anything similar. It does support the pipefail option though.
– Patrick Apr 21 '13 at 15:30

I decided to go with pipefail as it allows me to get the status of the failed command here: LOG=$(failed_command | successful_command)
– vmrob Feb 8 '14 at 9:20

Note: the child process inherits the open file descriptors from the parent. That means someprog will inherit open file descriptor 3 and 4. If someprog writes to file descriptor 3 then that will become the exit status. The real exit status will be ignored because read only reads once.

If you are worried that someprog might write to file descriptor 3 or 4, then it is best to close the file descriptors before calling someprog.
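
For example (a sketch; the descriptors are closed only for someprog, so the rest of the construct can keep using them):

someprog 3>&- 4>&-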

A subshell is created with file descriptor 4 redirected to stdout. This means that whatever is printed to file descriptor 4 in the subshell will end up as the stdout of the entire construct.

A pipe is created and the commands on the left (#part3) and right (#part2) are executed. exit $xs is also the last command of the pipe, which means the string read from stdin will become the exit status of the entire construct.

A subshell is created with file descriptor 3 redirected to stdout. This means that whatever is printed to file descriptor 3 in this subshell will end up in #part2 and in turn will be the exit status of the entire construct.

A pipe is created and the commands on the left (#part5 and #part6) and right (filter >&4) are executed. The output of filter is redirected to file descriptor 4. In #part1 the file descriptor 4 was redirected to stdout. This means that the output of filter is the stdout of the entire construct.

Exit status from #part6 is printed to file descriptor 3. In #part3 file descriptor 3 was redirected to #part2. This means that the exit status from #part6 will be the final exit status for the entire construct.

someprog is executed. Its exit status is taken in #part5. Its stdout is taken by the pipe in #part4 and forwarded to filter. The output from filter will in turn reach stdout as explained in #part4.
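
For reference, here is the construct the walkthrough above describes, reconstructed with the part labels used in the text (someprog and filter stand for your own commands):

(
  (
    (
      (
        someprog;           #part6
        echo $? >&3         #part5
      ) | filter >&4        #part4
    ) 3>&1                  #part3
  ) | (read xs; exit $xs)   #part2
) 4>&1                      #part1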

This does not work in my BASH 3.2.25(1)-release. At the top of /tmp/ff I have #!/bin/bash -o pipefail. Error is: /bin/bash: line 0: /bin/bash: /tmp/ff: invalid option name
– Felipe Alvarez Mar 24 '14 at 6:01


@FelipeAlvarez: Some environments (including Linux) don't split #! lines on spaces beyond the first one, so this becomes /bin/bash with the single argument '-o pipefail /tmp/ff', instead of the necessary /bin/bash -o pipefail /tmp/ff -- getopt (or similar) parsing uses the optarg, which is the next item in ARGV, as the argument to -o, so it fails. If you were to make a wrapper (say, bash-pf) that just did exec /bin/bash -o pipefail "$@", and put that on the #! line, that would work. See also: en.wikipedia.org/wiki/Shebang_%28Unix%29
– lindes Dec 5 '15 at 0:09
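
A sketch of that wrapper idea (bash-pf is the comment's own example name; save it somewhere executable and put its absolute path on the #! line of your script):

#!/bin/sh
# hypothetical wrapper: hand everything over to bash with pipefail enabled
exec /bin/bash -o pipefail "$@"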

After this $exit_codes is usually foo:X bar:Y, but it could be bar:Y foo:X if bar quits before reading all of its input or if you're unlucky. I think writes to pipes of up to 512 bytes are atomic on all unices, so the foo:$? and bar:$? parts won't be intermixed as long as the tag strings are under 507 bytes.
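
One way to produce such an $exit_codes string (a sketch with hypothetical commands foo and bar; fd 3 carries the tags out to the command substitution, fd 4 carries bar's normal output out to the script's stdout):

exec 4>&1
exit_codes=$( { { foo; echo "foo:$?" >&3; } | { bar >&4; echo "bar:$?" >&3; }; } 3>&1 )
exec 4>&-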

If you need to capture the output from bar, it gets difficult. You can combine the techniques above by arranging for the output of bar never to contain a line that looks like an exit code indication, but it does get fiddly.

And, of course, there's the simple option of using a temporary file to store the status. Simple, but not that simple in production:

If there are multiple scripts running concurrently, or if the same script uses this method in several places, you need to make sure they use different temporary file names.

Creating a temporary file securely in a shared directory is hard. Often, /tmp is the only place where a script is sure to be able to write files. Use mktemp, which is not POSIX but available on all serious unices nowadays.
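
A minimal sketch of the temporary-file approach, with hypothetical commands foo and bar (mktemp addresses the two concerns above, and the file is read only after the whole pipeline has finished):

status_file=$(mktemp) || exit
{ foo; echo "$?" >"$status_file"; } | bar
foo_status=$(cat "$status_file")
rm -f "$status_file"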

I think this is best explained from the inside out – command1 will execute and print its regular output on stdout (file descriptor 1), then once it's done, printf will execute and print command1's exit code on its stdout, but that stdout is redirected to file descriptor 3.

While command1 is running, its stdout is being piped to command2 (printf's output never makes it to command2 because we send it to file descriptor 3 instead of 1, which is what the pipe reads). Then we redirect command2's output to file descriptor 4, so that it also stays out of file descriptor 1 – because we want file descriptor 1 free a little later, when we bring the printf output on file descriptor 3 back down into file descriptor 1 – since that is what the command substitution (the backticks) will capture, and that is what will get placed into the variable.

The final bit of magic is that first exec 4>&1 we did as a separate command – it opens file descriptor 4 as a copy of the external shell's stdout. Command substitution will capture whatever is written on standard out from the perspective of the commands inside it – but, since command2's output is going to file descriptor 4 as far as the command substitution is concerned, the command substitution doesn't capture it – however, once it gets "out" of the command substitution, it is effectively still going to the script's overall file descriptor 1.

(The exec 4>&1 has to be a separate command because many common shells don't like it when you try to write, inside a command substitution, to a file descriptor that was opened in the "external" command using the substitution. So this is the simplest portable way to do it.)
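
Putting those pieces together, a sketch of the construct being explained (command1 and command2 as in the text; the original used backticks, but $( ) behaves the same, and the final exec 4>&- is just cleanup):

exec 4>&1
foo=$( { { command1; printf '%s' "$?" >&3; } | command2 >&4; } 3>&1 )
exec 4>&-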

You can look at it in a less technical and more playful way, as if the outputs of the commands are leapfrogging each other: command1 pipes to command2, then the printf's output jumps over command2 so that command2 doesn't catch it, and then command2's output jumps over and out of the command substitution just as the printf output lands in time to get captured by the substitution, so that it ends up in the variable, while command2's output goes on its merry way to the standard output, just as in a normal pipe.

Also, as I understand it, $? will still contain the return code of the second command in the pipe, because variable assignments, command substitutions, and compound commands are all effectively transparent to the return code of the command inside them, so the return status of command2 should get propagated out – this, and not having to define an additional function, is why I think this might be a somewhat better solution than the one proposed by lesmana.

Per the caveats lesmana mentions, it's possible that command1 will at some point end up using file descriptors 3 or 4, so to be more robust, you would do:
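
Something along these lines (a reconstruction of the more robust variant being described; the placement of the 3>&- and 4>&- closures matches the explanation below):

exec 4>&1
foo=$( { { command1 3>&-; printf '%s' "$?" >&3; } 4>&- | command2 >&4; } 3>&1 )
exec 4>&-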

Note that I use compound commands in my example, but subshells (using ( ) instead of { }) will also work, though they may be less efficient.

Commands inherit file descriptors from the process that launches them, so the entire second line will inherit file descriptor four, and the compound command followed by 3>&1 will inherit file descriptor three. So the 4>&- makes sure that the inner compound command will not inherit file descriptor four, and the 3>&- makes sure that command1 will not inherit file descriptor three, so command1 gets a 'cleaner', more standard environment. You could also move the inner 4>&- next to the 3>&-, but I figure why not just limit its scope as much as possible.

I'm not sure how often things use file descriptors three and four directly – I think most of the time programs use syscalls that return a currently unused file descriptor, but sometimes code writes to file descriptor 3 directly, I guess (I could imagine a program checking a file descriptor to see if it's open, and using it if it is, or behaving differently if it's not). So the latter is probably best to keep in mind and use for general-purpose cases.

Looks interesting, but I can't quite figure out what you expect this command to do, and my computer can't, either; I get -bash: 3: Bad file descriptor.
– G-Man Jun 5 '15 at 7:03

@G-Man Right, I keep forgetting bash has no idea what it's doing when it comes to file descriptors, unlike the shells I typically use (the ash that comes with busybox). I'll let you know when I think of a workaround that makes bash happy. In the meantime if you've got a debian box handy you can try it in dash, or if you've got busybox handy you can try it with the busybox ash/sh.
– mtraceur Jun 5 '15 at 12:09

@G-Man As to what I expect the command to do, and what it does do in other shells: it redirects stdout from command1 so it doesn't get caught by the command substitution, but once outside the command substitution, it drops fd3 back to stdout so it's piped as expected to command2. When command1 exits, the printf fires and prints its exit status, which is captured into the variable by the command substitution. Very detailed breakdown here: stackoverflow.com/questions/985876/tee-and-exit-status/… Also, that comment of yours read as if it was meant to be kinda insulting?
– mtraceur Jun 5 '15 at 12:17

Where shall I begin? (1) I’m sorry if you felt insulted. “Looks interesting” was meant earnestly; it would be great if something as compact as that worked as well as you expected it to. Beyond that, I was saying, simply, that I didn’t understand what your solution was supposed to be doing. I’ve been working/playing with Unix for a long time (since before Linux existed), and, if I don’t understand something, that’s a red flag that, maybe, other people won’t understand it either, and that it needs more explanation (IMNSHO). … (Cont’d)
– G-Man Jun 6 '15 at 3:14

(Cont’d) … Since you “… like to think … that [you] understand just about everything more than the average person”, maybe you should remember that the objective of Stack Exchange is not to be a command-writing service, churning out thousands of one-off solutions to trivially distinct questions; but rather to teach people how to solve their own problems. And, to that end, maybe you need to explain stuff well enough that an “average person” can understand it. Look at lesmana’s answer for an example of an excellent explanation. … (Cont’d)
– G-Man Jun 6 '15 at 3:15

lesmana's solution above can also be done without the overhead of starting nested subprocesses by using { .. } instead (remembering that this form of grouped commands always has to finish with a semicolon before the closing brace). Something like this:
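
A reconstruction of that grouped form (same someprog and filter names as lesmana's construct):

{ { { { someprog; echo $? >&3; } | filter >&4; } 3>&1; } | { read xs; exit $xs; }; } 4>&1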

This works really well with most shells I've tried it with, including NetBSD sh, pdksh, mksh, dash, and bash. However, I can't get it to work with AT&T Ksh (93s+, 93u+) or zsh (4.3.9, 5.2), even with set -o pipefail in ksh or any number of sprinkled wait commands in either. I think it may, at least in part, be a parsing issue for ksh: if I stick to using subshells it works fine, but even with an if to choose the subshell variant for ksh while leaving the compound commands for the others, it fails.
– Greg A. Woods Aug 3 '17 at 0:42

This doesn't work for several reasons. 1. The temporary file may be read before it's written. 2. Creating a temporary file in a shared directory with a predictable name is insecure (trivial DoS, symlink race). 3. If the same script uses this trick several times, it'll always use the same file name. To solve 1, read the file after the pipeline has completed. To solve 2 and 3, use a temporary file with a randomly-generated name or in a private directory.
– Gilles Jun 8 '11 at 23:00

+1 Well, ${PIPESTATUS[0]} is easier, but the basic idea here does work if one knows about the problems that Gilles mentions.
– Johan Jun 9 '11 at 6:36

You can save a few subshells: (s=/tmp/.$$_$RANDOM; { foo; echo $? >$s; } | bar; exit $(cat $s; rm $s)). @Johan: I agree it's easier with Bash, but in some contexts, knowing how to avoid Bash is worth it.
– dubiousjim Aug 29 '12 at 22:25

A tempfile is created with mktemp. This usually immediately creates a file in /tmp.

This tempfile is then opened on FD 9 for writing and on FD 8 for reading.

Then the tempfile is immediately deleted. It stays open, though, until both FDs go out of existence.

Now the pipe is started. Each step writes to FD 9 only if there was an error.

The wait is needed for ksh, because ksh otherwise does not wait for all pipe commands to finish. However, please note that there are unwanted side effects if some background tasks are present, so I commented it out by default. If the wait does not hurt, you can comment it back in.

Afterwards the file's contents are read. If it is empty (because everything worked), read returns false, so true indicates an error.
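
A sketch of the recipe, with hypothetical commands foo and bar standing in for the pipe:

tmpfile=$(mktemp) || exit            # create the tempfile
exec 9>"$tmpfile" 8<"$tmpfile"       # FD 9 for write, FD 8 for read
rm -f "$tmpfile"                     # delete it; the open FDs keep it alive

{ foo || echo foo >&9; } | { bar || echo bar >&9; }   # write to FD 9 only on error
#wait                                # comment in for ksh if it does not hurt

if read -r first_error <&8; then     # a successful read means some step failed
    echo "pipe failed: $first_error" >&2
fi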

This can be used as a drop-in replacement for a single command and only needs the following:

Unused FDs 9 and 8

A single environment variable to hold the name of the tempfile

And this recipe can be adapted to virtually any shell out there which allows IO redirection

Also, it is fairly platform-agnostic and does not need things like /proc/fd/N

BUGs:

This script has a bug in case /tmp runs out of space. If you need protection against this artificial case too, you can do it as follows. However, this has the disadvantage that the number of 0s in 000 depends on the number of commands in the pipe, so it is slightly more complicated:
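
One possible reading of that variant (a sketch, not the original code: every stage now writes its status unconditionally, so a full /tmp corrupts the expected success string and is therefore detected; with three hypothetical commands foo, bar and baz in the pipe the expected string is 000):

tmpfile=$(mktemp) || exit
exec 9>"$tmpfile" 8<"$tmpfile"
rm -f "$tmpfile"

{ foo; printf %d "$?" >&9; } | { bar; printf %d "$?" >&9; } | { baz; printf %d "$?" >&9; }

read -r statuses <&8
[ "$statuses" = 000 ] || echo "pipe failed (or /tmp is full): $statuses" >&2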

How about cleaning up like jlliagre? Don't you leave behind a file called foo-status?
– Johan Jun 8 '11 at 18:39

@Johan: If you prefer my suggestion, don't hesitate to vote it up ;) In addition to not leaving a file behind, it has the advantage of allowing multiple processes to run this simultaneously, and the current directory need not be writable.
– jlliagre Jun 8 '11 at 20:40

This will run haconf -makerw and store its stdout and stderr in "$haconf_out". If the value returned by haconf is true, then the if block is executed and grep reads "$haconf_out", trying to match it against "Cluster already writable".

Notice that pipes automatically clean themselves up; with the redirection you'll have to be careful to remove "$haconf_out" when done.
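
The shape of that approach, as a hedged sketch (the exact branching of the original answer is not reproduced here):

haconf_out=$(mktemp)
haconf -makerw > "$haconf_out" 2>&1        # exit status is directly available, unlike mid-pipeline
haconf_status=$?
grep -q "Cluster already writable" "$haconf_out" && already_writable=yes
rm -f "$haconf_out"                        # pipes clean up after themselves; this file does not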

Not as elegant as pipefail, but a legitimate alternative if this functionality is not within reach.

(With bash at least) combined with set -e, one can use a subshell to explicitly emulate pipefail and exit on a pipe error:

set -e
foo | bar
( exit ${PIPESTATUS[0]} )
rest of program

So if foo fails for some reason, the rest of the program will not be executed, and the script exits with the corresponding error code. (This assumes that foo prints its own error message, which is sufficient to understand the reason for the failure.)