The coproc keyword

Synopsis

coproc [NAME] command [redirections]

Description

Bash 4.0 introduced coprocesses, a feature certainly familiar to ksh users. The coproc keyword starts a command as a background job, setting up pipes connected to both its stdin and stdout so that you can interact with it bidirectionally. Optionally, the coprocess can be given a name NAME. If NAME is given, the command that follows must be a compound command. If no NAME is given, then the command can be either simple or compound.
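A minimal session with an unnamed coprocess might look like this (cat is used here simply because it echoes its input back unbuffered):

```shell
#!/bin/bash
# start cat as a coprocess; with no NAME given, Bash uses the array name COPROC
coproc cat

# write a line to the coprocess's stdin (write end: COPROC[1])
printf 'hello coproc\n' >&"${COPROC[1]}"

# read it back from the coprocess's stdout (read end: COPROC[0])
read -r reply <&"${COPROC[0]}"
echo "$reply"   # prints: hello coproc

# close our write end so cat sees EOF and terminates
# (eval works around {array[subscript]} redirection quirks in some Bash versions)
eval "exec ${COPROC[1]}>&-"
```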

The process ID of the shell spawned to execute the coprocess is available through the value of the variable named by NAME followed by a _PID suffix. For example, the variable name used to store the PID of a coproc started with no NAME given would be COPROC_PID (because COPROC is the default NAME). The wait builtin command may be used to wait for the coprocess to terminate. Additionally, coprocesses may be manipulated through their jobspec.
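As a small illustration, using a named coprocess (so the compound-command form is required):

```shell
#!/bin/bash
# NAME given, so the command must be a compound command
coproc SLEEPER { sleep 1; }

# the PID of the shell spawned for the coprocess is in NAME_PID
echo "coprocess PID: $SLEEPER_PID"

# wait for the coprocess and collect its exit status
wait "$SLEEPER_PID"
status=$?
echo "exit status: $status"   # prints: exit status: 0
```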

Return status

The return status of a coprocess is the exit status of its command.

Redirections

The optional redirections are applied after the pipes have been set up. Some examples:

Redirecting the output of a script to a file and to the screen

#!/bin/bash
# we start tee in the background,
# redirecting its output to the stdout of the script
{ coproc tee { tee logfile ;} >&3 ;} 3>&1
# we redirect stdout and stderr of the script to our coprocess
exec >&${tee[1]} 2>&1

Portability considerations

The coproc keyword is not specified by POSIX®.

The coproc keyword appeared in Bash version 4.0-alpha.

The -p option to Bash's print loadable builtin is a NOOP: it is recognized only for ksh compatibility and is not connected to Bash coprocesses in any way.

The -p option to Bash's read builtin conflicts with that of all kshes and zsh. The equivalent in those shells is to add a \?prompt suffix to the first variable name argument to read: if the first variable name given contains a ? character, the remainder of the argument is used as the prompt string. Since this feature is pointless and redundant, I suggest not using it in either shell. Simply precede the read command with printf %s prompt >&2.
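The suggested portable pattern looks like this (a minimal sketch; the prompt text is arbitrary):

```shell
#!/bin/bash
# portable prompting: write the prompt to stderr, then read the reply --
# this works identically in bash, ksh, and zsh
printf '%s' 'Enter a value: ' >&2
read -r value
echo "got: $value"
```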

Other shells

ksh93, mksh, zsh, and Bash all support something called "coprocesses", which all do approximately the same thing. ksh93 and mksh have virtually identical syntax and semantics for coprocs. A list operator, |&, is added to the language, which runs the preceding pipeline as a coprocess (this is another reason not to use the special |& pipe operator in Bash – the syntaxes conflict). The -p option to the read and print builtins can then be used to read from and write to the pipe of the coprocess (whose FD isn't yet known). Special redirections are added to move the last spawned coprocess to a different FD: <&p and >&p, at which point it can be accessed at the new FD using ordinary redirection, and another coprocess may then be started, again using |&.

zsh coprocesses are very similar to ksh's except in the way they are started. zsh adds the shell reserved word coproc to the pipeline syntax (similar to the way Bash's time keyword works), so that the pipeline that follows is started as a coprocess. The coprocess's input and output FDs can then be accessed and moved using the same read -p/print -p commands and redirections used by the ksh shells.

It is unfortunate that Bash chose to go against existing practice in their coproc implementation, especially considering it was the last of the major shells to incorporate this feature. However, Bash's method accomplishes the same without requiring nearly as much additional syntax. The coproc keyword is easy enough to wrap in a function such that it takes Bash code as an ordinary argument and/or stdin like eval. Coprocess functionality in other shells can be similarly wrapped to create a COPROC array automatically.
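For instance, a minimal sketch of such a wrapper (the name coproc_run and its use of eval are this page's invention, not part of Bash):

```shell
#!/bin/bash
# hypothetical wrapper: run Bash code passed as a string as a coprocess
coproc_run() {
    coproc { eval "$1"; }
}

# an upcasing filter in pure Bash (external filters such as tr may
# block-buffer their output when writing to a pipe, stalling the reader)
coproc_run 'while read -r line; do echo "${line^^}"; done'

printf 'hello\n' >&"${COPROC[1]}"
read -r reply <&"${COPROC[0]}"
echo "$reply"   # prints: HELLO

# close the write end so the coprocess's read loop terminates
eval "exec ${COPROC[1]}>&-"
```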

Only one coprocess at a time

The ability to use multiple coprocesses in Bash is considered "experimental". Bash will throw an error if you attempt to start more than one. This may be overridden at compile-time with the MULTIPLE_COPROCS option. However, at this time there are still issues – see the above mailing list discussion.