User Contributed Notes

The call works as it should; no bugs. But in most cases you won't be able to work with the pipes in blocking mode.

When your output pipe (the process' input, $pipes[0]) is blocking, there is a case where both you and the process are blocked on output. When your input pipe (the process' output, $pipes[1]) is blocking, there is a case where both you and the process are blocked on your own input. So you should switch the pipes into NON-BLOCKING mode (stream_set_blocking).

Then there is a case where you can neither read anything (fread($pipes[1], ...) == "") nor write anything (fwrite($pipes[0], ...) == 0). In that case you'd better check whether the process is still alive (proc_get_status), and if it is, wait for a while (stream_select). The situation is truly asynchronous: the process may simply be busy processing your data.

Because the command is run through a shell, it is effectively impossible to know whether the command even exists: proc_open always returns a valid resource. You may even write some data into it (into the shell, actually). But eventually it will terminate, so check the process status regularly.

I would advise against using filesystem FIFOs (mkfifo), because a FIFO blocks the open/fopen call (!!!) until somebody opens the other side (Unix-related behaviour). If the pipe is not opened by the shell and the command crashed or does not exist, you will be blocked forever.
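A sketch of that pattern, using 'cat' as a stand-in command (everything here, including the payload, is illustrative and not code from the original note):

<?php
$descriptorspec = array(
    0 => array("pipe", "r"),   // child's stdin: we write to it
    1 => array("pipe", "w"),   // child's stdout: we read from it
);

$proc = proc_open('cat', $descriptorspec, $pipes);
if (!is_resource($proc)) {
    die("proc_open failed\n");
}

stream_set_blocking($pipes[0], false);
stream_set_blocking($pipes[1], false);

$input  = str_repeat("some data\n", 50000);   // illustrative payload
$output = '';

while (true) {
    $status = proc_get_status($proc);

    // fwrite() returning 0 is not an error; the child may just be busy.
    if ($input !== '') {
        $written = fwrite($pipes[0], $input);
        if ($written > 0) {
            $input = substr($input, $written);
        }
        if ($input === '') {
            fclose($pipes[0]);            // all data sent: let the child see EOF
        }
    }

    // fread() returning "" is not an error either.
    $chunk = fread($pipes[1], 8192);
    if ($chunk !== false && $chunk !== '') {
        $output .= $chunk;
    }

    if (!$status['running'] && feof($pipes[1])) {
        break;                            // child exited and its output is drained
    }

    // Nothing may have happened this round: wait on the pipes instead of spinning.
    $r = array($pipes[1]);
    $w = ($input !== '') ? array($pipes[0]) : null;
    $e = null;
    stream_select($r, $w, $e, 0, 200000); // wait up to 0.2 s
}

if (is_resource($pipes[0])) {
    fclose($pipes[0]);
}
fclose($pipes[1]);
proc_close($proc);
?>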

It took me a long time (and three consecutive projects) to figure this out. Because popen() and proc_open() return valid process resources even when the command fails, it's awkward to determine whether it really has failed when you're opening a non-interactive process like "sendmail -t".

I had previously guessed that reading from STDERR immediately after starting the process would work, and it does... but when the command is successful, PHP just hangs, because STDERR is empty and it's waiting for data to be written to it.

The solution is a simple stream_set_blocking($pipes[2], 0) immediately after calling proc_open().
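A minimal sketch of that workaround (the command, the brief pause, and the follow-up handling are only illustrative):

<?php
$descriptorspec = array(
    0 => array("pipe", "r"),
    1 => array("pipe", "w"),
    2 => array("pipe", "w"),
);
$proc = proc_open('sendmail -t', $descriptorspec, $pipes);

// Without this, reading stderr hangs forever when the command succeeds,
// because nothing is ever written to it.
stream_set_blocking($pipes[2], 0);

usleep(100000);                     // give the shell a moment to report "not found" etc.
$error = fread($pipes[2], 4096);    // returns "" immediately if stderr is empty

if ($error !== '' && $error !== false) {
    // the command could not be started, or complained right away
}
// ... otherwise continue writing the message to $pipes[0] as usual
?>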

Pipe communication can break your brain, so I want to share some things that help avoid that. For proper control of communication through the "in" and "out" pipes of the opened sub-process, remember to set both of them into non-blocking mode, and note in particular that fwrite may return (int) 0 without it being an error: the process might simply not accept input at that moment.

So, let us consider an example of decoding a gz-encoded file by using funzip as a sub-process (this is not the final version, just to show the important things):
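The original example is not reproduced here; a sketch of the important parts (file names invented) might look like this:

<?php
$in  = fopen('archive.gz', 'rb');
$out = fopen('archive.decoded', 'wb');

$proc = proc_open('funzip', array(
    0 => array('pipe', 'r'),
    1 => array('pipe', 'w'),
), $pipes);

stream_set_blocking($pipes[0], false);
stream_set_blocking($pipes[1], false);

$buffer = '';
while (true) {
    $progress = false;

    // Refill the write buffer from the source file.
    if ($buffer === '' && !feof($in)) {
        $buffer = fread($in, 8192);
    }

    if ($buffer !== '') {
        $n = fwrite($pipes[0], $buffer);
        if ($n > 0) {
            $buffer   = substr($buffer, $n);
            $progress = true;
        }
        // $n === 0 is NOT an error: funzip just isn't accepting input right now.
    } elseif (feof($in) && is_resource($pipes[0])) {
        fclose($pipes[0]);            // everything sent: let funzip see EOF
    }

    $chunk = fread($pipes[1], 8192);
    if ($chunk !== '' && $chunk !== false) {
        fwrite($out, $chunk);
        $progress = true;
    } elseif (feof($pipes[1])) {
        break;                        // no more decoded data will arrive
    }

    if (!$progress) {
        usleep(10000);                // nothing moved this round; don't busy-loop
    }
}

fclose($pipes[1]);
proc_close($proc);
fclose($in);
fclose($out);
?>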

Please note that if you plan to spawn multiple processes, you have to save all the resources in different variables (in an array, for example). If you call $proc = proc_open(...) multiple times, reusing the same variable, the script will block on the second call until the previous child process exits (proc_close is called implicitly when the old resource is released).
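For example (the commands are placeholders):

<?php
$commands = array('sleep 2', 'sleep 3');
$procs    = array();

foreach ($commands as $i => $cmd) {
    $pipes = array();
    $procs[$i] = array(
        'proc'  => proc_open($cmd, array(1 => array('pipe', 'w')), $pipes),
        'pipes' => $pipes,
    );
    // Because each resource is kept in $procs, nothing is garbage-collected
    // here, so no implicit proc_close() blocks the loop.
}

// ... do the actual work, then clean up explicitly:
foreach ($procs as $p) {
    fclose($p['pipes'][1]);
    proc_close($p['proc']);
}
?>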

If you are going to allow data coming from user input to be passed to this function, then you should be using escapeshellarg() or escapeshellcmd() to make sure that users cannot trick the system into executing arbitrary commands.
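A small, hypothetical illustration (the path and parameter are invented for the example): a user-supplied value is quoted with escapeshellarg() before it ever reaches the shell.

<?php
$pattern = $_POST['pattern'];   // untrusted user input
$cmd     = 'grep -c ' . escapeshellarg($pattern) . ' /var/log/app.log';

$proc  = proc_open($cmd, array(1 => array('pipe', 'w')), $pipes);
$count = trim(stream_get_contents($pipes[1]));
fclose($pipes[1]);
proc_close($proc);
?>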

$cmd can actually be multiple commands by separating each command with a newline. However, due to this it is not possible to split up one very long command over multiple lines, even when using "\\\n" syntax.

Note that when you call an external script and retrieve large amounts of data from STDOUT and STDERR, you may need to read from both alternately in non-blocking mode (with appropriate pauses if no data is retrieved), so that your PHP script doesn't lock up. The lock-up happens when you are waiting for activity on one pipe while the external script is waiting for you to empty the other, e.g.:
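For example, a sketch of draining both pipes alternately in non-blocking mode (the command is a placeholder):

<?php
$proc = proc_open('some-noisy-script', array(
    1 => array('pipe', 'w'),
    2 => array('pipe', 'w'),
), $pipes);

stream_set_blocking($pipes[1], false);
stream_set_blocking($pipes[2], false);

$stdout = $stderr = '';
while (true) {
    $a = fread($pipes[1], 8192);
    $b = fread($pipes[2], 8192);
    if ($a !== false) { $stdout .= $a; }
    if ($b !== false) { $stderr .= $b; }

    if (feof($pipes[1]) && feof($pipes[2])) {
        break;
    }
    if ($a === '' && $b === '') {
        usleep(10000);   // neither pipe had anything; don't burn CPU
    }
}

fclose($pipes[1]);
fclose($pipes[2]);
proc_close($proc);
?>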

Apart from that, one caveat is that the child process inherits anything that is preserved over fork from the parent (apart from the file descriptors which are explicitly closed).

Importantly, it inherits the signal handling setup, which at least with apache means that SIGPIPE is ignored. Child processes that expect SIGPIPE to kill them in order to get sensible pipe handling and not go into a tight write loop will have problems unless they reset SIGPIPE themselves.

Similar caveats probably apply to other signals like SIGHUP, SIGINT, etc.

Other things preserved over fork include shared memory segments, umask and rlimits.
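For the SIGPIPE caveat above: if the child happens to be a PHP CLI script itself and the pcntl extension is available, it can restore the default disposition on its own (a sketch, not something from the original note):

<?php
// Restore the default "die on broken pipe" behaviour that was inherited
// from Apache, so writes to a closed pipe terminate the script normally.
if (function_exists('pcntl_signal')) {
    pcntl_signal(SIGPIPE, SIG_DFL);
}
?>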

If script A spawns script B, and script B pushes a lot of data to stdout without script A consuming it, script B is likely to hang (blocked writing into the full pipe), yet the result of proc_get_status on that process will continue to indicate that it's running.

So either don't write to stdout in the spawned process (I write to log files instead now), or try to read the stdout in a non-blocking way if your script A spawns many instances of script B. I couldn't get this second option to work, sadly.

I managed to make a set of functions to work with GPG, since my hosting provider refused to use GPG-ME. Included below is an example of decryption using a higher descriptor to push in a passphrase. Comments and emails welcome. :)
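The author's original functions are not reproduced here; the following is only a sketch of the same idea (the gpg options, file name, and passphrase are placeholders), feeding the passphrase on descriptor 3 via --passphrase-fd:

<?php
$descriptorspec = array(
    0 => array('pipe', 'r'),   // ciphertext in
    1 => array('pipe', 'w'),   // plaintext out
    2 => array('pipe', 'w'),   // gpg diagnostics
    3 => array('pipe', 'r'),   // passphrase in, never on the command line
);

$proc = proc_open('gpg --batch --passphrase-fd 3 --decrypt',
                  $descriptorspec, $pipes);

fwrite($pipes[3], "my secret passphrase\n");
fclose($pipes[3]);

// Small files only; large data needs the non-blocking approach shown above.
fwrite($pipes[0], file_get_contents('message.gpg'));
fclose($pipes[0]);

$plaintext = stream_get_contents($pipes[1]);
$errors    = stream_get_contents($pipes[2]);

fclose($pipes[1]);
fclose($pipes[2]);
proc_close($proc);
?>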

@joachimb: The descriptorspec describes the i/o from the perspective of the process you are opening. That is why stdin is opened for reading: you are writing, the process is reading. So you want to open descriptor 2 (stderr) in write mode so that the process can write to it and you can read it. In your case, where you want all descriptors to be pipes, you should always use:

<?php
$descriptorspec = array(
    0 => array("pipe", "r"), // stdin is a pipe that the child will read from
    1 => array("pipe", "w"), // stdout is a pipe that the child will write to
    2 => array("pipe", "w")  // stderr is a pipe that the child will write to
);
?>

I'm confused by the direction of the pipes. Most of the examples in this documentation open pipe #2 as "r", because they want to read from stderr. That sounds logical to me, and that's what I tried to do. It didn't work, though. When I changed it to "w", as in

<?php
$descriptorspec = array(
    0 => array("pipe", "r"), // stdin
    1 => array("pipe", "w"), // stdout
    2 => array("pipe", "w")  // stderr
);
?>

it works.

If you're writing a function that processes a resource coming from another function, it's a good idea not only to check whether a resource has been passed to your function, but also whether it is of the right type, like so:

<?php
function workingonit($resource)
{
    if (is_resource($resource)) {
        if (get_resource_type($resource) == "resource_type") {
            // resource is a resource and of the right type, continue
        } else {
            print("resource is of the wrong type.");
            return false;
        }
    } else {
        print("resource passed is not a resource at all.");
        return false;
    }

    // do your stuff with the resource here and return
}
?>

This is especially true when working with files and process pipes, so always check what's being passed to your functions.

Here's a small list of a few resource types: files are of type 'file' in PHP 4 and 'stream' in PHP 5; 'process' resources are opened by proc_open(); 'pipe' resources are opened by popen().

By the way, the 'process' resource type was not mentioned in the documentation; I filed a bug report for this.

[This version includes Frederick Leitner's fix from below, and also fixes another bug: if an empty file was piped into the process, the loop would hang indefinitely.]

The following code works for piping large amounts of data through a filtering program. I find it very weird that such a lot of code is needed for this task... On entry, $stdin contains the standard input for the program. Tested on Debian Linux with PHP 5.1.2.
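The poster's code is not reproduced here; the following is a hedged reconstruction of the pattern described (command and data are placeholders, and it assumes the filter consumes all of its input):

<?php
$command = 'tr a-z A-Z';          // the filtering program (example)
$stdin   = "hello\nworld\n";      // the data to feed to it (example)

$descriptorspec = array(
    0 => array('pipe', 'r'),
    1 => array('pipe', 'w'),
);
$proc = proc_open($command, $descriptorspec, $pipes);

stream_set_blocking($pipes[0], false);
stream_set_blocking($pipes[1], false);

$stdout  = '';
$written = 0;
$len     = strlen($stdin);

if ($len === 0) {
    fclose($pipes[0]);            // nothing to send: close right away so the filter sees EOF
}

while (true) {
    $r = array($pipes[1]);
    $w = ($written < $len) ? array($pipes[0]) : null;
    $e = null;

    if (stream_select($r, $w, $e, null) === false) {
        break;
    }

    if ($w) {
        $n = fwrite($pipes[0], substr($stdin, $written, 8192));
        if ($n) {
            $written += $n;
        } else {
            // 0/false right after select said "writable" usually means the
            // filter stopped reading; stop writing to avoid looping forever.
            $written = $len;
        }
        if ($written >= $len) {
            fclose($pipes[0]);    // important: the filter must see EOF to finish
        }
    }

    if ($r) {
        $chunk = fread($pipes[1], 8192);
        if ($chunk === '' || $chunk === false) {
            if (feof($pipes[1])) {
                break;
            }
        } else {
            $stdout .= $chunk;
        }
    }
}

fclose($pipes[1]);
proc_close($proc);
?>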

I found that with stream blocking disabled I would sometimes attempt to read a response line before the external application had responded. So instead I left blocking alone and used a simple function to add a timeout to fgets():
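The poster's helper is not shown; a minimal version of the idea, built on stream_select(), could look like this:

<?php
// Try to read a line from a blocking pipe, but give up after $timeout seconds
// instead of blocking forever.
function fgets_timeout($pipe, $timeout = 5)
{
    $r = array($pipe);
    $w = $e = null;

    // Wait until data is available or the timeout expires.
    if (stream_select($r, $w, $e, $timeout) > 0) {
        return fgets($pipe);
    }
    return false;   // nothing arrived in time
}
?>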

The pty option is actually disabled in the source for some reason via a #if 0 && condition. I'm not sure why it's disabled. I removed the 0 && and recompiled, after which the pty option works perfectly. Just a note.

Since I don't have access to PAM via Apache, suexec on, nor access to /etc/shadow I coughed up this way of authenticating users based on the system users details. It's really hairy and ugly, but it works.

<?php
function authenticate($user, $password) {
    $descriptorspec = array(
        0 => array("pipe", "r"),              // stdin is a pipe that the child will read from
        1 => array("pipe", "w"),              // stdout is a pipe that the child will write to
        2 => array("file", "/dev/null", "w")  // stderr is a file to write to
    );
    // ...

Windows will dutifully pass on additional handles above 2 onto the child process, starting with Windows 95 and Windows NT 3.5. It even supports this capability (starting with Windows 2000) from the command line using a special syntax (prefacing the redirection operator with the handle number).

These handles will be, when passed to the child, preopened for low-level IO (e.g. _read) by number. The child can reopen them for high-level (e.g. fgets) using the _fdopen or _wfdopen methods. The child can then read from or write to them the same way they would stdin or stdout.

However, child processes must be specially coded to use these handles, and if the end user is not intelligent enough to use them (e.g. "openssl < commands.txt 3< cacert.der") and the program not smart enough to check, it could cause errors or hangs.

One can learn from the source code in ext/standard/exec.c that the right-hand side of a descriptor assignment does not have to be an array ('file', 'pipe', or 'pty') - it can also be an existing open stream.

<?php
$p = proc_open('myfilter', array(0 => $infile, ...), $pipes);
?>

I was glad to learn that because it solves the race condition in a scenario like this: you get a file name, open the file, read a little to make sure it's OK to serve to this client, then rewind the file and pass it as input to the filter. Without this feature, you would be limited to <?php array('file', $fname) ?> or passing the name to the filter command. Those choices both involve a race (because the file will be reopened after you have checked it's OK), and the last one invites surprises if not carefully quoted, too.

The behaviour described in the following may depend on the system php runs on. Our platform was "Intel with Debian 3.0 linux".

If you pass huge amounts of data (roughly anything well over 10 kB) to the application you run, and the application for example echoes it directly to stdout (without buffering the input), you will get a deadlock. This is because there are size-limited buffers (the pipes) between PHP and the application. The application will put data into the stdout buffer until it is full, then it blocks waiting for PHP to read from the stdout buffer. In the meantime PHP has filled the stdin buffer and is waiting for the application to read from it. That is the deadlock.

A solution to this problem is to set the stdout stream to non-blocking (stream_set_blocking) and alternately write to stdin and read from stdout.

fwrite($pipes[0], $in);
/* fwrite writes to stdin; 'cat' will immediately write the data from stdin
 * to stdout and block when the stdout buffer is full. Then it will not
 * continue reading from stdin, and PHP will block here. */
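A sketch of the alternating write/read loop suggested above, again with 'cat' as the child (chunk sizes and the payload are arbitrary, not from the original note):

<?php
$proc = proc_open('cat', array(
    0 => array('pipe', 'r'),
    1 => array('pipe', 'w'),
), $pipes);

stream_set_blocking($pipes[1], false);

$in      = str_repeat("0123456789abcdef\n", 10000);   // well over the pipe buffer size
$out     = '';
$written = 0;

while ($written < strlen($in)) {
    // Write a chunk small enough to fit into the stdin pipe buffer...
    $written += fwrite($pipes[0], substr($in, $written, 4096));
    // ...and immediately drain whatever 'cat' has echoed back so far,
    // so its stdout buffer never fills up.
    $out .= fread($pipes[1], 8192);
}
fclose($pipes[0]);

// Collect the rest after stdin is closed.
stream_set_blocking($pipes[1], true);
$out .= stream_get_contents($pipes[1]);

fclose($pipes[1]);
proc_close($proc);
?>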

Note that if you need to be "interactive" with the user *and* the opened application, you can use stream_select to see if something is waiting on the other side of the pipe.

Stream functions can be used on pipes such as:
- pipes from popen() and proc_open()
- pipes from fopen('php://stdin') (or stdout)
- sockets (Unix or TCP/UDP)
- probably many other things, but the most important ones are listed here

I had trouble with this function because my script always hung, as if in a deadlock, until I figured out that I had to strictly keep the following order. Trying to close everything at the end did not work!

proc_open();
fwrite($pipes[0]); fclose($pipes[0]);  # stdin
fread($pipes[1]);  fclose($pipes[1]);  # stdout
fread($pipes[2]);  fclose($pipes[2]);  # stderr
proc_close();
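Spelled out as code (the command and input are only examples):

<?php
$proc = proc_open('wc -l', array(
    0 => array('pipe', 'r'),
    1 => array('pipe', 'w'),
    2 => array('pipe', 'w'),
), $pipes);

fwrite($pipes[0], "one\ntwo\nthree\n");
fclose($pipes[0]);                         // 1) close stdin so the child sees EOF

$stdout = stream_get_contents($pipes[1]);
fclose($pipes[1]);                         // 2) then read and close stdout

$stderr = stream_get_contents($pipes[2]);
fclose($pipes[2]);                         // 3) then read and close stderr

$exit = proc_close($proc);                 // 4) finally close the process itself
?>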