In his most recent post, Joe Watkins talks briefly about concurrency in PHP and some of the issues that can come along with it, including one of the most glaring: the stress that even a small number of threads can put on the host system.

Before we start to cover the topic of how to achieve parallel concurrency in PHP, we should first think about when it is appropriate. You may hear veterans of programming say (and newbies parrot) things like: "Threading is not web scale." This is enough to write off parallelism as something we shouldn't do for our web applications; it seems obvious that there is simply no need to multi-thread the rendering of a template, the sending of email, or any other of the laborious tasks that a web application must carry out in order to be useful. But rarely do you see an explanation of why this is the case: why shouldn't your blog be able to multi-thread a response?

He gives an example of a controller request that spawns off just eight threads and imagines what might happen if that controller was requested even just one hundred times (resulting in 800 threads). He does point out at least one place where it could be useful, though: separating out the portions of the application that need to use the parallelism from the rest.

Parallelism is one of the most powerful tools in our toolbox; multicore and multiprocessor systems have changed computing forever. But with great power comes great responsibility: don't abuse it, remember the story of the controller that created 800 threads with a tiny amount of traffic and, whatever you do, ensure this can never happen.

In this new post to the PHPWomen site, Kim Rowan shows one way that you can effectively handle concurrency in your applications (in her case, a Symfony app).

Concurrent user activity on the web can take many forms. For example, two online shoppers may simultaneously try to buy the last pair of ‘gotta-have-em’ shoes in stock. Presumably one potential outcome in this scenario is to place the shoes on back-order for the slower shopper. The concurrency challenge I faced recently, however, was a bit different...

She uses a "last updated" field in her form to record when the record in question was last changed. When the form is submitted, the script compares that value against the current updated date on the record. If the record has been changed more recently than the value the form carried, another user has saved in the meantime and accepting the submission could silently overwrite their changes, so it fails instead.
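The check she describes is a form of optimistic locking, and can be sketched in a few lines of PDO. The table and column names here (`articles`, `updated_at`) are hypothetical, not taken from her post:

```php
<?php
// A sketch of the "last updated" optimistic check: only apply the
// update when the row still carries the timestamp the form saw.
function saveIfUnchanged(PDO $db, int $id, string $formUpdatedAt, string $newTitle): bool
{
    $stmt = $db->prepare(
        'UPDATE articles
            SET title = :title, updated_at = :now
          WHERE id = :id AND updated_at = :seen'
    );
    $stmt->execute([
        ':title' => $newTitle,
        ':now'   => date('Y-m-d H:i:s'),
        ':id'    => $id,
        ':seen'  => $formUpdatedAt,
    ]);

    // Zero affected rows means someone else saved first: reject the submit.
    return $stmt->rowCount() === 1;
}
```

Doing the comparison inside the `UPDATE` itself (rather than a `SELECT` followed by an `UPDATE`) keeps the check and the write atomic, so two simultaneous submissions cannot both succeed.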

On the TechnoSophos blog there's a recent post looking at how the swapping of a few technologies has made for a huge performance jump for a Drupal-based website.

With a clever hack utilizing Memcache, Nginx, and Drupal, we have been able to speed the delivery time of many of our major pages by 53,900% (from 8,100 msec to 15 msec, according to siege and AB benchmarks). Additionally, we went from being able to handle 27 concurrent requests to being able to handle 3,334 concurrent requests (a 12,248% increase). While we performed a long series of performance optimizations, this article is focused primarily on how we managed to serve data directly from Memcached, via Nginx, without invoking PHP at all.

They describe how, just by swapping in nginx as the web server and adding a highly tuned memcached installation, they could get huge jumps in response times. They pushed it even further by changing the nginx configuration to talk to the memcached server directly instead of having to go through PHP. Details on how to get this setup working and an overall view of how it works are also included in the post.
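The nginx side of such a setup can be sketched with the stock ngx_http_memcached_module. The key scheme and the backend location here are assumptions for illustration, not the exact configuration from their post; the idea is that PHP stores finished page HTML in memcached under a key nginx can compute from the request:

```nginx
# Try memcached first; fall back to Drupal/PHP on a miss.
location / {
    set            $memcached_key $uri;   # PHP must store pages under this same key
    memcached_pass 127.0.0.1:11211;
    default_type   text/html;
    # On a cache miss (404) or memcached error, hand off to the app.
    error_page     404 502 504 = @drupal;
}

location @drupal {
    # Hypothetical PHP-FPM backend that renders the page and writes
    # the finished HTML back into memcached for the next request.
    fastcgi_pass  127.0.0.1:9000;
    include       fastcgi_params;
    fastcgi_param SCRIPT_FILENAME /var/www/drupal/index.php;
}
```

A cache hit never touches PHP at all, which is where the bulk of the speedup they report comes from.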

On the Perplexed Labs blog there's a recent post looking at how to fork processes in PHP with the help of the pcntl_fork function from the Process Control (PCNTL) extension.

Let's say you want to take advantage of more than one core for a given process. Perhaps it performs many intensive computations and on a single core would take an hour to run. Since a PHP process is single threaded you won't optimally take advantage of the available multi-core resources you may have. Fortunately, via the Process Control (PCNTL) extension, PHP provides a way to fork new child processes.

He gives a quick snippet of code showing how to spawn off a few new processes, get their process IDs, and keep a maximum number of children running, starting a new one whenever one dies.

Brandon Savage has posted the next article in his "Scaling Up" series, a look at reducing the amount of "drag" your application creates along its processing path and some tips to help increase its "lift" out of some common problems.

The intuitive will note that many if not most of these suggestions are performance enhancements, not scaling techniques. Why then are they in a series about scaling? Scaling is about more than just adding hardware. It’s also about making sure your system runs better. You can add lots and lots of hardware but you will someday be unable to compensate for bad queries and poor optimization.

Some of his suggestions include taking care of any sort of errors or notices (anything that could slow the script down by writing to a log), defining virtual hosts instead of making excessive use of .htaccess files and installing caching software to maximize code and information reuse.
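On the .htaccess point, the usual fix is to move the rules into the virtual host definition and disable per-directory overrides, so Apache stops checking for .htaccess files in every directory on every request. A minimal sketch in Apache 2.4 syntax, with illustrative paths:

```apache
<VirtualHost *:80>
    ServerName   example.com
    DocumentRoot /var/www/example

    <Directory /var/www/example>
        # Rewrite/access rules live here instead of in .htaccess files,
        # so Apache never has to stat for them at request time.
        AllowOverride None
        Require all granted
    </Directory>
</VirtualHost>
```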