Description

proc_open() is similar to popen()
but provides a much greater degree of control over the program execution.

Parameters

cmd

The command to execute

descriptorspec

An indexed array where the key represents the descriptor number and the
value represents how PHP will pass that descriptor to the child
process. 0 is stdin, 1 is stdout, while 2 is stderr.

Each element can be:

An array describing the pipe to pass to the process. The first
element is the descriptor type and the second element is an option for
the given type. Valid types are pipe (the second
element is either r to pass the read end of the pipe
to the process, or w to pass the write end) and
file (the second element is a filename).

The file descriptor numbers are not limited to 0, 1 and 2 - you may
specify any valid file descriptor number and it will be passed to the
child process. This allows your script to interoperate with other
scripts that run as "co-processes". In particular, this is useful for
passing passphrases to programs like PGP, GPG and openssl in a more
secure manner. It is also useful for reading status information
provided by those programs on auxiliary file descriptors.
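As a hedged sketch of the higher-descriptor technique described above (Unix assumed; the child here is a plain shell command that copies descriptor 3 to stdout, standing in for something like gpg's --passphrase-fd option):

```php
<?php
// Hypothetical sketch: pass a secret on descriptor 3 instead of stdin.
// A real consumer would be e.g. "gpg --batch --passphrase-fd 3 --decrypt";
// here "cat <&3" simply echoes whatever arrives on fd 3.
$descriptorspec = array(
    0 => array("pipe", "r"),               // child's stdin
    1 => array("pipe", "w"),               // child's stdout
    2 => array("file", "/dev/null", "a"),  // discard stderr
    3 => array("pipe", "r"),               // extra descriptor the child reads from
);
$process = proc_open('cat <&3', $descriptorspec, $pipes);
if (is_resource($process)) {
    fwrite($pipes[3], "secret\n");         // the secret travels on fd 3, not stdin
    fclose($pipes[3]);
    fclose($pipes[0]);
    $plaintext = stream_get_contents($pipes[1]);
    fclose($pipes[1]);
    proc_close($process);
}
```

Because the passphrase never touches the command line or stdin, it is invisible to "ps" and does not interfere with the data stream on descriptors 0 and 1.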

pipes

Will be set to an indexed array of file pointers that correspond to
PHP's end of any pipes that are created.

cwd

The initial working directory for the command. This must be an
absolute directory path, or NULL
if you want to use the default value (the working directory of the
current PHP process)

env

An array with the environment variables for the command that will be
run, or NULL to use the same environment as the current PHP process
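A minimal sketch of the cwd and env parameters together (the command and values are illustrative; a Unix shell is assumed):

```php
<?php
// Run a child in /tmp with a replaced environment. When env is given,
// the child does not inherit PHP's environment; it sees only FOO
// (plus whatever the shell itself defines).
$descriptorspec = array(1 => array("pipe", "w"));
$process = proc_open(
    'echo "$FOO"; pwd',
    $descriptorspec,
    $pipes,
    '/tmp',                    // cwd
    array('FOO' => 'bar')      // env
);
$output = stream_get_contents($pipes[1]);   // first line is "bar"
fclose($pipes[1]);
proc_close($process);
```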

other_options

Allows you to specify additional options. Currently supported options
include:

suppress_errors (windows only): suppresses errors
generated by this function when it's set to TRUE

bypass_shell (windows only): bypass
cmd.exe shell when set to TRUE

Return Values

Returns a resource representing the process, which should be freed using
proc_close() when you are finished with it. On failure
returns FALSE.

Changelog

Version

Description

5.2.1

Added the bypass_shell option to the
other_options parameter.

Examples

Example #1 A proc_open() example

<?php
$descriptorspec = array(
    0 => array("pipe", "r"),  // stdin is a pipe that the child will read from
    1 => array("pipe", "w"),  // stdout is a pipe that the child will write to
    2 => array("file", "/tmp/error-output.txt", "a")  // stderr is a file to write to
);
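The example stops at the descriptorspec. As a hedged sketch of the rest of the lifecycle (using 'cat' purely as a demonstration command on a Unix system):

```php
<?php
$descriptorspec = array(
    0 => array("pipe", "r"),
    1 => array("pipe", "w"),
    2 => array("file", "/tmp/error-output.txt", "a"),
);
$process = proc_open('cat', $descriptorspec, $pipes);
if (is_resource($process)) {
    fwrite($pipes[0], "hello, world\n");
    fclose($pipes[0]);                        // EOF on stdin lets cat finish
    $output = stream_get_contents($pipes[1]); // "hello, world\n"
    fclose($pipes[1]);
    $exit = proc_close($process);             // the command's exit status
}
```

Closing $pipes[0] before reading is important here: without EOF on its stdin, cat would never terminate and the read would block forever.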

Notes

Note:

Windows compatibility: Descriptors beyond 2 (stderr) are made available to
the child process as inheritable handles, but since the Windows
architecture does not associate file descriptor numbers with low-level
handles, the child process does not (yet) have a means of accessing those
handles. Stdin, stdout and stderr work as expected.

Note:

If you only need a uni-directional (one-way) process pipe, use
popen() instead, as it is much easier to use.

User Contributed Notes

The call works as it should; no bugs. But in most cases you won't be able to work with the pipes in blocking mode.

When your output pipe (the process' input, $pipes[0]) is blocking, there is a case where you and the process are both blocked on output. When your input pipe (the process' output, $pipes[1]) is blocking, there is a case where you and the process are both blocked on your own input. So you should switch the pipes into NON-BLOCKING mode (stream_set_blocking).

Then there is a case where you're not able to read anything (fread($pipes[1], ...) == "") or to write anything (fwrite($pipes[0], ...) == 0). In that case, you had better check that the process is alive (proc_get_status) and, if it still is, wait for some time (stream_select). The situation is truly asynchronous; the process may be busy working, processing your data.

Using a shell effectively makes it impossible to know whether the command exists - proc_open() always returns a valid resource. You may even write some data into it (into the shell, actually). But eventually it will terminate, so check the process status regularly.

I would advise against using mkfifo pipes, because a filesystem FIFO (mkfifo) blocks the open/fopen call (!!!) until somebody opens the other side (Unix-related behaviour). If the pipe is opened not by the shell, and the command crashed or does not exist, you will be blocked forever.

If you are going to allow data coming from user input to be passed to this function, then you should be using escapeshellarg() or escapeshellcmd() to make sure that users cannot trick the system into executing arbitrary commands.

If you have a CLI script that prompts you for a password via STDIN, and you need to run it from PHP, proc_open() can get you there. It's better than doing "echo $password | command.sh", because then your password will be visible in the process list to any user who runs "ps". Alternately you could print the password to a file and use cat: "cat passwordfile.txt | command.sh", but then you've got to manage that file in a secure manner.

If your command will always prompt you for responses in a specific order, then proc_open() is quite simple to use and you don't really have to worry about blocking & non-blocking streams. For instance, to run the "passwd" command:

<?php
// It will prompt for the existing password, then the new password twice.
// You don't need to escapeshellarg() these, but you should whitelist
// them to guard against control characters, perhaps by using ctype_print()
fwrite($pipes[0], "$oldpassword\n$newpassword\n$newpassword\n");
?>

Pipe communication can break your brain, so I want to share some tips to avoid that. For proper control of the communication through the "in" and "out" pipes of the opened sub-process, remember to set both of them into non-blocking mode, and note especially that fwrite() may return (int)0 without it being an error - the process might simply not accept input at that moment.

So, let us consider an example of decoding a gz-encoded file by using funzip as a sub-process (this is not the final version, just to show the important things).

It took me a long time (and three consecutive projects) to figure this out. Because popen() and proc_open() return valid process resources even when the command fails, it's awkward to determine when it really has failed if you're opening a non-interactive process like "sendmail -t".

I had previously guessed that reading from STDERR immediately after starting the process would work, and it does... but when the command is successful, PHP just hangs, because STDERR is empty and PHP is waiting for data to be written to it.

The solution is a simple stream_set_blocking($pipes[2], 0) immediately after calling proc_open().
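A sketch of that tip (the command is only an illustration that reliably writes to stderr; the usleep() is a crude stand-in for whatever work your script does next):

```php
<?php
$descriptorspec = array(
    0 => array("pipe", "r"),
    1 => array("pipe", "w"),
    2 => array("pipe", "w"),
);
$process = proc_open('ls /nonexistent-dir-for-demo', $descriptorspec, $pipes);
stream_set_blocking($pipes[2], false);   // the fix: never hang on an empty stderr
usleep(200000);                          // give the command a moment to complain
$err = fread($pipes[2], 4096);           // "" if nothing was written to stderr
fclose($pipes[0]);
fclose($pipes[1]);
fclose($pipes[2]);
proc_close($process);
```

With blocking left on, that same fread() would hang forever whenever the command succeeded and wrote nothing to stderr.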

Note that when you call an external script and retrieve large amounts of data from both STDOUT and STDERR, you may need to read from both alternately in non-blocking mode (with appropriate pauses when no data is retrieved) so that your PHP script doesn't lock up. This can happen when you are waiting for activity on one pipe while the external script is waiting for you to empty the other.
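One way to sketch that alternating, non-blocking read (the command is illustrative; a Unix shell is assumed):

```php
<?php
$descriptorspec = array(
    0 => array("pipe", "r"),
    1 => array("pipe", "w"),
    2 => array("pipe", "w"),
);
$process = proc_open('echo out; echo err 1>&2', $descriptorspec, $pipes);
fclose($pipes[0]);
stream_set_blocking($pipes[1], false);
stream_set_blocking($pipes[2], false);
$stdout = $stderr = '';
// Drain stdout and stderr together so neither pipe buffer can fill up
// while we block on the other.
while (!feof($pipes[1]) || !feof($pipes[2])) {
    $read = array();
    if (!feof($pipes[1])) { $read[] = $pipes[1]; }
    if (!feof($pipes[2])) { $read[] = $pipes[2]; }
    $write = $except = null;
    if (stream_select($read, $write, $except, 1) > 0) {
        foreach ($read as $r) {
            $data = (string) fread($r, 8192);
            if ($r === $pipes[1]) { $stdout .= $data; } else { $stderr .= $data; }
        }
    }
}
fclose($pipes[1]);
fclose($pipes[2]);
proc_close($process);
```

stream_select() replaces the fixed pauses: it sleeps only until one of the pipes actually has data (or hits EOF), then reports which ones are readable.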

Please note that if you plan to spawn multiple processes, you have to save all the resulting resources in different variables (in an array, for example). If you were to assign $proc = proc_open(...) multiple times, the script would block after the second call until the child process exits, because proc_close() is called implicitly when the resource is overwritten.

I found that with stream blocking disabled I was sometimes attempting to read a response line before the external application had responded. So instead I left blocking alone and used a simple stream_select()-based function to add a timeout to fgets().
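The note's helper function was not preserved; what follows is one possible sketch of such a wrapper (the name fgets_timeout and its signature are ours, not from the original note):

```php
<?php
// Wait up to $seconds for data before calling fgets(). Note that
// stream_select() only guarantees *some* data is readable; a very slow
// writer could still make fgets() wait for the rest of the line.
function fgets_timeout($fp, $seconds)
{
    $read = array($fp);
    $write = $except = null;
    if (stream_select($read, $write, $except, $seconds) > 0) {
        return fgets($fp);
    }
    return false;  // timed out
}

// Usage sketch:
$descriptorspec = array(
    0 => array("pipe", "r"),
    1 => array("pipe", "w"),
    2 => array("pipe", "w"),
);
$process = proc_open('echo ready', $descriptorspec, $pipes);
$line = fgets_timeout($pipes[1], 5);  // "ready\n", or false after 5 s of silence
fclose($pipes[0]);
fclose($pipes[1]);
fclose($pipes[2]);
proc_close($process);
```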

$cmd can actually be multiple commands by separating each command with a newline. However, due to this it is not possible to split up one very long command over multiple lines, even when using "\\\n" syntax.

For example, if you want to execute "C:\Program Files\nodejs\node.exe", you will get an error saying the command could not be found. Try this:

<?php
$cmd = 'C:\\Program Files\\nodejs\\node.exe';
if (strtolower(substr(PHP_OS, 0, 3)) === 'win') {
    $cmd = sprintf('cd %s && %s', escapeshellarg(dirname($cmd)), basename($cmd));
}
?>

The pty option is actually disabled in the source for some reason via a #if 0 && condition. I'm not sure why it's disabled. I removed the 0 && and recompiled, after which the pty option works perfectly. Just a note.

The behaviour described in the following may depend on the system php runs on. Our platform was "Intel with Debian 3.0 linux".

If you pass huge amounts of data (much more than ~10 kB) to the application you run, and the application for example echoes it directly to stdout (without buffering the input), you will get a deadlock. This is because there are size-limited buffers (the so-called pipes) between PHP and the application. The application will put data into the stdout buffer until it is filled, then it blocks, waiting for PHP to read from the stdout buffer. In the meantime, PHP has filled the stdin buffer and is waiting for the application to read from it. That is the deadlock.

A solution to this problem may be to set the stdout stream to non blocking (stream_set_blocking) and alternately write to stdin and read from stdout.

<?php
fwrite($pipes[0], $in);
/* fwrite writes to stdin; 'cat' will immediately write the data from stdin
 * to stdout and block when the stdout buffer is full. Then it will not
 * continue reading from stdin, and PHP will block here. */
?>
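Putting the workaround together as a sketch (non-blocking stdout plus alternating writes and reads; 'cat' is the unbuffered echo program from the note above):

```php
<?php
$descriptorspec = array(0 => array("pipe", "r"), 1 => array("pipe", "w"));
$process = proc_open('cat', $descriptorspec, $pipes);
stream_set_blocking($pipes[1], false);

$in  = str_repeat("x", 100000) . "\n";  // large enough to deadlock a naive write
$out = '';
$written = 0;
while ($written < strlen($in)) {
    $n = fwrite($pipes[0], substr($in, $written, 8192));
    if ($n === false) { break; }
    $written += $n;
    $out .= (string) fread($pipes[1], 8192);  // drain stdout as we go
}
fclose($pipes[0]);
while (!feof($pipes[1])) {                    // collect whatever is left
    $out .= (string) fread($pipes[1], 8192);
}
fclose($pipes[1]);
proc_close($process);
```

Because we read between writes, cat's stdout buffer is emptied before it can fill, so cat keeps consuming stdin and neither side ever blocks.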

If script A is spawning script B and script B pushes a lot of data to stdout without script A consuming that data, script B is likely to hang but the result of proc_get_status on that process seems to continue to indicate it's running.

So either don't write to stdout in the spawned process (I write to log files instead now), or try to read the stdout in a non-blocking way if your script A spawns many instances of script B; I couldn't get this second option to work, sadly.

I managed to make a set of functions to work with GPG, since my hosting provider refused to use GPGME. Included below is an example of decryption using a higher descriptor to push a passphrase. Comments and emails welcome. :)

@joachimb: The descriptorspec describes the I/O from the perspective of the process you are opening. That is why stdin is "r": you are writing, the process is reading. So you want to open descriptor 2 (stderr) in "w" mode so that the process can write to it and you can read it. In your case, where you want all descriptors to be pipes, you should always use:

I'm confused by the direction of the pipes. Most of the examples in this documentation open pipe #2 as "r", because they want to read from stderr. That sounded logical to me, and that's what I tried to do. It didn't work, though. When I changed it to "w", as in

<?php
$descriptorspec = array(
    0 => array("pipe", "r"),  // stdin
    1 => array("pipe", "w"),  // stdout
    2 => array("pipe", "w")   // stderr
);
?>

Since I don't have access to PAM via Apache, nor suexec, nor access to /etc/shadow, I coughed up this way of authenticating users based on the system user's details. It's really hairy and ugly, but it works.

<?php
function authenticate($user, $password) {
    $descriptorspec = array(
        0 => array("pipe", "r"),             // stdin is a pipe that the child will read from
        1 => array("pipe", "w"),             // stdout is a pipe that the child will write to
        2 => array("file", "/dev/null", "w") // stderr is a file to write to
    );

Windows will dutifully pass on additional handles above 2 onto the child process, starting with Windows 95 and Windows NT 3.5. It even supports this capability (starting with Windows 2000) from the command line using a special syntax (prefacing the redirection operator with the handle number).

These handles will be, when passed to the child, preopened for low-level IO (e.g. _read) by number. The child can reopen them for high-level (e.g. fgets) using the _fdopen or _wfdopen methods. The child can then read from or write to them the same way they would stdin or stdout.

However, child processes must be specially coded to use these handles, and if the end user is not intelligent enough to use them (e.g. "openssl < commands.txt 3< cacert.der") and the program not smart enough to check, it could cause errors or hangs.

Apart from that, one caveat is that the child process inherits anything that is preserved over fork from the parent (apart from the file descriptors which are explicitly closed).

Importantly, it inherits the signal handling setup, which at least with apache means that SIGPIPE is ignored. Child processes that expect SIGPIPE to kill them in order to get sensible pipe handling and not go into a tight write loop will have problems unless they reset SIGPIPE themselves.

Similar caveats probably apply to other signals like SIGHUP, SIGINT, etc.

Other things preserved over fork include shared memory segments, umask and rlimits.

Note that if you need to be "interactive" with the user *and* the opened application, you can use stream_select to see if something is waiting on the other side of the pipe.

Stream functions can be used on pipes such as:
- pipes from popen() and proc_open()
- pipes from fopen('php://stdin') (or stdout)
- sockets (Unix or TCP/UDP)
- many other things, probably, but the most important are here

If you're writing a function that processes a resource from another function, it's a good idea not only to check whether a resource has been passed to your function, but also whether it's of the right type, like so:

<?php
function workingonit($resource) {
    if (is_resource($resource)) {
        if (get_resource_type($resource) == "resource_type") {
            // resource is a resource and of the right type, continue
        } else {
            print("resource is of the wrong type.");
            return false;
        }
    } else {
        print("resource passed is not a resource at all.");
        return false;
    }

    // do your stuff with the resource here and return
}
?>

This is doubly true when working with files and process pipes, so always check what's being passed to your functions.

Here's a small list of a few resource types: files are of type 'file' in PHP 4 and 'stream' in PHP 5; 'process' resources are opened by proc_open(); 'pipe' resources are opened by popen().

By the way, the 'process' resource type was not mentioned in the documentation; I made a bug report for this.

[This version includes Frederick Leitner's fix from below, and also fixes another bug: if an empty file was piped into the process, the loop would hang indefinitely.]

The following code works for piping large amounts of data through a filtering program. I find it very weird that so much code is needed for this task... On entry, $stdin contains the standard input for the program. Tested on Debian Linux with PHP 5.1.2.

One can learn from the source code in ext/standard/exec.c that the right-hand side of a descriptor assignment does not have to be an array ('file', 'pipe', or 'pty') - it can also be an existing open stream.

<?php
$p = proc_open('myfilter', array(0 => $infile, ...), $pipes);
?>

I was glad to learn that because it solves the race condition in a scenario like this: you get a file name, open the file, read a little to make sure it's OK to serve to this client, then rewind the file and pass it as input to the filter. Without this feature, you would be limited to <?php array('file', $fname) ?> or passing the name to the filter command. Those choices both involve a race (because the file will be reopened after you have checked it's OK), and the last one invites surprises if not carefully quoted, too.