Some performance information from that last link (which appears to be a bit different setup than the others):

So I decided to put a proxy in front
of wordpress to static cache as much
as possible. ALL non-authenticated
traffic is served directly from the
nginx file cache, taking some requests
(such as RSS feed generation) from 6
pages/second to 7000+ pages/second.
Oof. Nginx also handles logging and
gzipping, leaving the heavier backend
apaches to do what they do best: serve
dynamic wordpress pages only when
needed.

...

On nginx – it’s so efficient it’s
scary. I’ve never seen it use more
than 10 to 15 meg of RAM and a blip of
CPU, even under our heaviest load. Our
ganglia graphs don’t lie: we halved
our memory requirements, doubled our
outgoing network throughput and
completely leveled out our load. We
have had basically no problems since
we set this up.

Anyone have any stats on the speed savings of using Nginx?
–
Mike Lee, Aug 18 '10 at 15:07

Mike, I added another link, and some information from that post.
–
tnorthcutt, Aug 18 '10 at 15:14

I moved my main blog from a 1G server running Apache to a 512M server running Nginx. It runs more smoothly, despite the decrease in RAM. Admittedly, the 1G server was also running other services (email, IMAP, mailman, several other low-traffic web sites).
–
Dougal Campbell, Jun 7 '12 at 20:26

NB: running WordPress on nginx is different from using nginx as a proxy cache in front of WordPress.
–
sam, May 2 '13 at 21:08

Set client-side expiries for things like CSS, images, and JavaScript that don't need to be redownloaded for each page view. This, by far, made the biggest difference to my site's loading times. The fastest download is the download that never happened...

You can pre-gzip everything you reasonably can (7-Zip is a good tool for this) and upload each compressed file alongside the original. Then change .htaccess to serve the pre-gzipped files, as below. The caveat is that you need to remember to re-gzip them if/when you update things. This cuts out the compression CPU overhead entirely, apart from parsing .htaccess.
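As a minimal sketch of the pre-compression step, here done with plain gzip rather than 7-Zip (the demo/ directory and style.css file are just placeholder examples):

```shell
# Pre-compress static assets so the web server can hand out
# file.css.gz directly instead of compressing on every request.
mkdir -p demo
echo 'body { margin: 0; }' > demo/style.css

for f in demo/*.css demo/*.js; do
  [ -f "$f" ] || continue       # skip globs that matched nothing
  gzip -9 -c "$f" > "$f.gz"     # maximum compression; keep the original
done
```

Re-run this whenever the originals change, since a stale .gz will otherwise keep being served.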

RewriteEngine on
#If the browser accepts gzip and we have a pre-compressed copy - serve it!
RewriteCond %{HTTP:Accept-Encoding} gzip
RewriteCond %{HTTP_USER_AGENT} !Safari
#make sure there's no trailing .gz on the url already
RewriteCond %{REQUEST_FILENAME} !^.+\.gz$
#check to see if a .gz version of the file exists
RewriteCond %{REQUEST_FILENAME}.gz -f
#All conditions met so add .gz to URL filename (invisibly)
RewriteRule ^(.+) $1.gz [QSA,L]
#Label the payload as gzipped so browsers decompress it, and keep
#the original MIME type (requires mod_headers)
<FilesMatch "\.css\.gz$">
ForceType text/css
Header set Content-Encoding gzip
</FilesMatch>
<FilesMatch "\.js\.gz$">
ForceType application/javascript
Header set Content-Encoding gzip
</FilesMatch>

This is just a raw answer. There are a lot of variations on this theme. I blogged about this and added quite a few references to more in-depth articles at http://icanhazdot.net/2010/03/23/some-wordpress-stuff/. Read that and, more importantly, the references I point to - they are good resources.

Be aware that if you tinker often then users will need to refresh their cache.

A plugin I found very useful is wp-minify. The thing to watch with it is that you should exclude page-specific items (contact form, front-page slider, etc.) so you're not re-downloading the whole set of CSS, JS, etc. for each page. It is a good way to minify, combine and compress your baseline CSS, JS, etc., and it cuts down on HTTP requests a lot. Wp-minify plays well with supercache and also with the expiry headers I detailed above.

Use YSlow in Firebug (Firefox) or a similar tool to monitor your HTTP requests and see what is and isn't compressed. Have a look at the expiry headers in there too. You will soon see what you can improve.

Minimize the number of plugins you run to only what you really need. In particular, be aware of plugins that add JavaScript and CSS to every page load, even when that code isn't being used on the page.

If you are creating your own theme from scratch, break your CSS down so that features only needed for particular page templates or view types (single post, archives, category, etc.) are only loaded when needed.

Configure W3TC to use a CDN (like Amazon CloudFront, or any of the others supported by W3TC).

See if the Minify options work for you (some plugins generate js/css that won't minify nicely, so be sure to test your site after activating the minify feature).

If you have full control of your MySQL server, make sure that you have the query_cache turned on. Use a MySQL tuning script to find other ways to optimize your database config.
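As a sketch, the relevant my.cnf lines might look like this (the sizes are illustrative starting points, not recommendations; note the query cache only exists in MySQL versions before 8.0):

```ini
[mysqld]
query_cache_type  = 1     # cache eligible SELECT results
query_cache_size  = 32M   # total memory reserved for the cache
query_cache_limit = 1M    # don't cache individual results larger than this
```

A tuning script will tell you whether the cache is actually being hit and whether these numbers fit your dataset.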

If using a CDN is problematic for some reason, configure mod_expires in your apache setup. Set expiration times as long as reasonable for static types like images, css, javascript, video, audio, etc.
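A minimal mod_expires sketch, assuming the module is enabled (the types and durations are examples to adjust for your site):

```apache
<IfModule mod_expires.c>
    ExpiresActive On
    ExpiresByType image/png              "access plus 1 month"
    ExpiresByType image/jpeg             "access plus 1 month"
    ExpiresByType text/css               "access plus 1 week"
    ExpiresByType application/javascript "access plus 1 week"
</IfModule>
```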

In addition to using a disk-caching plugin like WP-Cache, put your blog on a host volume mounted with the "noatime" option. Otherwise, SSH into your host (if your webhost provides that) and routinely run this command on your files every few days:

chattr -R +A ~/*

The ~/* means "my files under my home directory". You can change that path as you see fit. You can also set this up as a cron job in cPanel if your webhost provides that.
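If you have cron access, an entry along these lines would re-apply the attribute automatically (the 3 a.m. every-three-days schedule is just an example):

```
0 3 */3 * * chattr -R +A $HOME
```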

For more info about the atime attribute, see this. Turning off atime updates speeds up Linux disk-read performance greatly.

Sometimes your site is being hammered by spiders. You can use a tool like SpyderSpanker or Chennai Central to filter out spiders that don't bring more page rank to your site and merely slow it down, and to throttle good spiders (Google, Bing, etc.) by sending them random HTTP 304 Not Modified responses.

Another thing I see is poorly written plugins. If you learn how plugins are made, you begin to spot ones that are inefficiently coded, or even find time bombs, such as a database table that fills and fills and never gets cleaned out, storing things like incoming connection data.

Beyond all the other solutions here, you can also turn your blog into a WordPress web farm by hosting it on several web nodes that all connect back to a single database and a single disk volume for the files (such as a volume mounted over NFS). Check out Ultra Monkey for how to get that all going.

Use a database class that is trimmed for optimization. We have had good experiences with our own code for reducing memory usage and speeding up database access. Beyond that, you can optimize the database structure itself with some small changes that make a big difference.

Part of the database class code can be found in the WordPress Trac; it did not make it into core (Ticket #11799 and related).

This seems like a really cool idea. I'm having a problem with the code, however: when we're clearing out the transient, $nav_menu_selected_id is a number, while when calling get_cached_menu() the menu_id is a string, because that parameter becomes the CSS ID for the <ul> element.
–
helgatheviking, Dec 28 '12 at 15:40

Thank you very much @helgatheviking. I corrected this mistake and added functionality for theme_position as well.
–
fischi, Dec 28 '12 at 16:25

Awesome! This is working for me now as intended.
–
helgatheviking, Dec 28 '12 at 17:06

Yeah. So this is my late Christmas Present for you ;)
–
fischi, Dec 28 '12 at 17:11

IMHO, you cannot plan good my.cnf settings without knowing the amount of data you are configuring for. You would have to periodically load a current dataset from production into a staging environment, perform your optimizations there, and come away with the numbers to put in the production server's my.cnf.

You could enable global output compression. This will gzip everything going out automatically, provided the browser supports it. It drastically reduces the size of the files transferred, but does increase your CPU load.
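With Apache this is typically done with mod_deflate; a minimal sketch, assuming the module is enabled (the type list is an example):

```apache
<IfModule mod_deflate.c>
    # Compress text responses on the fly; images and other
    # already-compressed formats are left alone
    AddOutputFilterByType DEFLATE text/html text/plain text/css application/javascript application/json
</IfModule>
```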

This will tend to make your site "feel" much slower. The Yahoo! technical documents suggest flushing your output right after the end of the head and before the beginning of the body, so that scripts and styles can start loading early. By buffering the entire page, you prevent this from happening, and the page "feels" slow because the user has to wait for WordPress to render the whole page before seeing anything.
–
WhIteSidE, Aug 12 '10 at 15:53

Scott was not talking about buffering the whole page, but about Apache output compression. That's something different; only PHP output compression via the output buffer would have the deficiencies you describe, and not necessarily even then, because in the end buffering output can make things faster. It has to do with I/O on your server.
–
hakre, Aug 18 '10 at 8:51

I recently spoke about this subject at WordCamp Houston. All of the above recommendations are great. The important thing is to make sure all the front-end stuff is fully optimized first; then you can start working on caching and server performance issues.

Progressive rendering will make your pages feel faster because the user sees page content before it has fully loaded. To achieve this, make sure any blocking JavaScript is at the very bottom of the page and CSS is at the top.
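As a sketch, that layout looks like this (style.css and app.js are placeholder names):

```html
<!DOCTYPE html>
<html>
<head>
  <!-- CSS first, so the browser can style content as it arrives -->
  <link rel="stylesheet" href="style.css">
</head>
<body>
  <p>Page content renders before scripts finish loading.</p>
  <!-- blocking JavaScript last, so it doesn't delay first paint -->
  <script src="app.js"></script>
</body>
</html>
```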

Also, if you use a lot of social-media buttons, you can customize their scripts to load in an iframe after the page has fully loaded. I wrote a tutorial on how to do it with the TweetMeMe retweet button (now obsolete since Twitter released its own retweet button), but the technique can still be applied to other share buttons.

For server performance, look into Nginx as a front-end proxy for static content, with Apache handling the heavy PHP and MySQL lifting.

I've tried this before, but I've never been able to get a stable Apache worker + FCGI environment running. If anybody knows of good setup instructions for this under Ubuntu, please post them. I'd especially be grateful for instructions that detail the Apache config directives affecting FCGI behavior, and explain how tweaking them affects memory usage, performance, etc. Currently, I'm using a prefork Apache with an nginx front-end proxy cache.
–
Dougal Campbell, Feb 28 '11 at 14:51

Define stable. My installation runs very stably, but you would need 2GB of RAM for my config. You just have to read and tweak; Apache's FCGI documentation is fairly extensive.
–
Achmed Durangi, Mar 9 '11 at 10:50