Use Varnish & nginx to Serve WordPress over SSL & HTTP on Debian 8

This is a Linode Community guide.

Varnish is a powerful and flexible caching HTTP reverse proxy. It can be installed in front of any web server to cache its contents, which will improve speed and reduce server load. When a client requests a webpage, Varnish first tries to send it from the cache. If the page is not cached, Varnish forwards the request to the backend server, fetches the response, stores it in the cache, and delivers it to the client.

When a cached resource is requested through Varnish, the request doesn’t reach the web server or involve PHP or MySQL execution. Instead, Varnish reads it from memory, delivering the cached page in a matter of microseconds.

One Varnish drawback is that it doesn’t support SSL-encrypted traffic. You can circumvent this issue by using nginx for both SSL decryption and as a backend web server. Using nginx for both tasks reduces the complexity of the setup, leading to fewer potential points of failure, lower resource consumption, and fewer components to maintain.

Both Varnish and nginx are versatile tools with a variety of uses. This guide uses Varnish 4.0, which comes included in Debian 8 repositories, and presents a basic setup that you can refine to meet your specific needs.

How Varnish and nginx Work Together

In this guide, we will configure nginx and Varnish for two WordPress sites:

www.example-over-http.com will be an unencrypted, HTTP-only site.

www.example-over-https.com will be a separate, HTTPS-encrypted site.

For HTTP traffic, Varnish will listen on port 80. If content is found in the cache, Varnish will serve it. If not, it will pass the request to nginx on port 8080. In the second case, nginx will send the requested content back to Varnish on the same port, then Varnish will store the fetched content in the cache and deliver it to the client on port 80.

For HTTPS traffic, nginx will listen on port 443 and send decrypted traffic to Varnish on port 80. If content is found in the cache, Varnish will send the unencrypted content from the cache back to nginx, which will encrypt it and send it to the client. If content is not found in the cache, Varnish will request it from nginx on port 8080, store it in the cache, and then send it unencrypted to frontend nginx, which will encrypt it and send it to the client’s browser.

Our setup is illustrated below. Please note that frontend nginx and backend nginx are one and the same server:

Before You Begin

This tutorial assumes that you have SSH access to your Linode running Debian 8 (Jessie). Before you get started:

Follow the steps outlined in our LEMP on Debian 8 guide. Skip the nginx configuration section, since we’ll address it later in this guide.

After configuring nginx according to this guide, follow the steps in our WordPress guide to install and configure WordPress. We’ll include a step in the instructions to let you know when it’s time to do this.

Install and Configure Varnish

For all steps in this section, replace 203.0.113.100 with your Linode's public IPv4 address, and 2001:DB8::1234 with its IPv6 address.

Update your package repositories and install Varnish:

sudo apt-get update
sudo apt-get install varnish

Open /etc/default/varnish with sudo rights. To ensure Varnish starts at boot, find the line Should we start varnishd at boot? beneath it, set START to yes:

/etc/default/varnish

START=yes

In the Alternative 2 section, make the following changes to DAEMON_OPTS:
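The modified DAEMON_OPTS block should look similar to the following sketch. Port 80 and custom.vcl are the changes this guide calls for; the management port 6082, the secret file path, and the other flags are the Debian defaults:

```
DAEMON_OPTS="-a :80 \
             -T localhost:6082 \
             -f /etc/varnish/custom.vcl \
             -S /etc/varnish/secret \
             -s malloc,1G"
```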

This will set Varnish to listen on port 80 and will instruct it to use the custom.vcl configuration file. The custom configuration file is used so that future updates to Varnish do not overwrite changes to default.vcl.

The -s malloc,1G line sets the maximum amount of RAM that will be used by Varnish to store content. This value can be adjusted to suit your needs, taking into account the server’s total RAM along with the size and expected traffic of your website. For example, on a system with 4 GB of RAM, you can allocate 2 or 3 GB to Varnish.

When you’ve made these changes, save and exit the file.

Create a Custom Varnish Configuration File

To start customizing your Varnish configuration, create a new file called custom.vcl:
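A minimal custom.vcl begins by declaring the VCL version and pointing Varnish at the nginx backend, which in this setup listens on port 8080:

```
vcl 4.0;

# Backend web server: nginx, listening locally on port 8080.
backend default {
    .host = "localhost";
    .port = "8080";
}
```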

The sub vcl_backend_response directive is used to handle communication with the backend server, nginx. We use it to set the amount of time the content remains in the cache. We can also set a grace period, which determines how Varnish will serve content from the cache even if the backend server is down. Time can be set in seconds (s), minutes (m), hours (h) or days (d). Here, we’ve set the caching time to 24 hours, and the grace period to 1 hour, but you can adjust these settings based on your needs:
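A sketch of such a vcl_backend_response block follows. The URL pattern excluding cookie-dependent pages (the WordPress admin and login pages, plus typical WooCommerce paths) is an example and should be adapted to your site:

```
sub vcl_backend_response {
    # Strip cookies from everything except pages that need them,
    # so those pages can be cached.
    if (bereq.url !~ "wp-admin|wp-login|product|cart|checkout|my-account|/?add-to-cart=") {
        unset beresp.http.set-cookie;
    }
    # Keep objects in the cache for 24 hours; serve stale content
    # for up to 1 hour if the backend is unreachable.
    set beresp.ttl = 24h;
    set beresp.grace = 1h;
}
```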

Remember to include in the exclusion pattern above any page that requires cookies to work, for example phpmyadmin|webmail|postfixadmin, etc. If you change the WordPress login page from wp-login.php to something else, add that new name to the pattern as well.

Note

The “WooCommerce Recently Viewed” widget, which displays a group of recently viewed products, uses a cookie to store recent user-specific actions. This cookie prevents Varnish from caching product pages as visitors browse them. If you want product pages to be cached while they are merely being browsed, before any product is added to the cart, you must disable this widget.

In general, if you want Varnish to cache as many pages as possible, take special care when enabling any widget that uses cookies to store recent user-specific activity.

Change the headers for purge requests by adding the sub vcl_deliver directive:
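One way to write this block, hiding Varnish's internal headers in responses to purge requests (exactly which headers you strip is a matter of preference):

```
sub vcl_deliver {
    if (req.method == "PURGE") {
        # Hide cache-related headers in responses to purge requests.
        unset resp.http.X-Varnish;
        unset resp.http.Via;
        unset resp.http.Age;
    }
}
```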

This concludes the custom.vcl configuration. You can now save and exit the file. The final custom.vcl file will look like this.

Note

You can download the complete sample configuration file using the link above and wget. If you do, remember to replace the variables as described above.

Edit the Varnish Startup Configuration

For Varnish to work properly, we also need to edit the /lib/systemd/system/varnish.service file to use our custom configuration file. Specifically, we’ll tell it to use the custom configuration file and modify the port number and allocated memory values to match the changes we made in our /etc/default/varnish file.

Open /lib/systemd/system/varnish.service and find the two lines beginning with ExecStart. Modify them to look like this:
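The modified lines might look like this, mirroring the port, configuration file, and memory settings from /etc/default/varnish (localhost:6082 and the secret file path are the Debian defaults):

```
ExecStartPre=/usr/sbin/varnishd -C -f /etc/varnish/custom.vcl
ExecStart=/usr/sbin/varnishd -a :80 -T localhost:6082 -f /etc/varnish/custom.vcl -S /etc/varnish/secret -s malloc,1G
```

After editing a systemd unit file, run sudo systemctl daemon-reload so that systemd picks up the changes.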

Install and Configure PHP

Before configuring nginx, we have to install PHP-FPM. FPM is short for FastCGI Process Manager, and it allows the web server to act as a proxy, passing all requests with the .php file extension to the PHP interpreter.

Install PHP-FPM:

sudo apt-get install php5-fpm php5-mysql

Open the /etc/php5/fpm/php.ini file. Find the cgi.fix_pathinfo= directive, uncomment it, and set it to 0. If this parameter is set to 1, the PHP interpreter will try to process the file whose path is closest to the requested path; if it's set to 0, the interpreter will only process the file with the exact path, which is the safer option:

/etc/php5/fpm/php.ini

cgi.fix_pathinfo=0

After you’ve made this change, save and exit the file.

Open /etc/php5/fpm/pool.d/www.conf and confirm that the listen = directive, which specifies the socket used by nginx to pass requests to PHP-FPM, matches the following:

/etc/php5/fpm/pool.d/www.conf

listen = /var/run/php5-fpm.sock

Save and exit the file.

Restart PHP-FPM:

sudo systemctl restart php5-fpm

Open /etc/nginx/fastcgi_params and find the fastcgi_param HTTPS directive. Below it, add the following two lines, which are necessary for nginx to interact with the FastCGI service:
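The two added lines should look similar to this, passing the script's full filesystem path and any trailing path info to PHP-FPM (the alignment is cosmetic):

```
fastcgi_param  SCRIPT_FILENAME    $document_root$fastcgi_script_name;
fastcgi_param  PATH_INFO          $fastcgi_path_info;
```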

Configure nginx

Open /etc/nginx/nginx.conf and comment out the ssl_protocols and ssl_prefer_server_ciphers directives. We’ll include these SSL settings in the server block within the /etc/nginx/sites-enabled/default file:
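A sketch of the two server blocks for the HTTP-only site follows. The document root under /var/www/html is an assumption; adjust it, and the domain names, to match your setup:

```nginx
server {
    # Redirect requests for the bare domain to the www subdomain.
    listen 8080;
    listen [::]:8080;
    server_name example-over-http.com;
    return 301 http://www.example-over-http.com$request_uri;
}

server {
    listen 8080;
    listen [::]:8080;
    server_name www.example-over-http.com;
    root /var/www/html/example-over-http.com/public_html;
    index index.php;
    port_in_redirect off;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    # Proxy PHP requests to PHP-FPM over the FastCGI socket.
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
    }
}
```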

The first server block is used to redirect all requests for example-over-http.com to www.example-over-http.com. This assumes you want to use the www subdomain and have added a DNS A record for it.

listen [::]:8080; is needed if you want your site to also be accessible over IPv6.

port_in_redirect off; prevents nginx from appending the port number to the requested URL.

fastcgi directives are used to proxy requests for PHP code execution to PHP-FPM, via the FastCGI protocol.

To configure nginx for the SSL-encrypted website (in our example we called it www.example-over-https.com), you need two more server blocks. Append the following server blocks to your /etc/nginx/sites-available/default file:

For an SSL-encrypted website, you need one server block to receive traffic on port 443 and pass decrypted traffic to Varnish on port 80, and another server block to serve unencrypted traffic to Varnish on port 8080, when Varnish asks for it.
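A sketch of these two server blocks follows. The certificate paths, document root, and log file names are placeholders; replace them with your own values:

```nginx
server {
    listen 443 ssl;
    server_name www.example-over-https.com;

    # Replace these paths with the location of your certificate and key.
    ssl_certificate     /etc/nginx/ssl/example-over-https.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example-over-https.com.key;
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:20m;
    ssl_session_timeout 60m;

    add_header Strict-Transport-Security "max-age=31536000";
    add_header X-Content-Type-Options nosniff;

    location / {
        # Hand decrypted traffic to Varnish on port 80.
        proxy_pass http://127.0.0.1:80;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-Port 443;
    }

    access_log /var/log/nginx/example-over-https.com.access.log;
    error_log /var/log/nginx/example-over-https.com.error.log;
}

server {
    # Backend block: serves unencrypted content to Varnish on port 8080.
    listen 8080;
    server_name www.example-over-https.com;
    root /var/www/html/example-over-https.com/public_html;
    index index.php;
    port_in_redirect off;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
    }
}
```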

Caution

The ssl_certificate directive must specify the location and name of the SSL certificate file. Take a look at our guide to using SSL on nginx for more information, and update the ssl_certificate and ssl_certificate_key values as needed.

Alternatively, if you don't have a commercially signed SSL certificate (issued by a CA), you can generate a self-signed SSL certificate using openssl, but this should be done only for testing purposes. Self-signed sites will return a “This Connection is Untrusted” message when opened in a browser.

Now, let’s review the key points of the previous two server blocks:

ssl_session_cache shared:SSL:20m; creates a 20MB cache shared between all worker processes. This cache is used to store SSL session parameters to avoid SSL handshakes for parallel and subsequent connections. 1MB can store about 4000 sessions, so adjust this cache size according to the expected traffic for your website.

ssl_session_timeout 60m; specifies the SSL session cache timeout. Here it’s set to 60 minutes, but it can be decreased or increased, depending on traffic and resources.

ssl_prefer_server_ciphers on; means that when an SSL connection is established, the server ciphers are preferred over client ciphers.

add_header Strict-Transport-Security "max-age=31536000"; tells web browsers they should only interact with this server using a secure HTTPS connection. max-age specifies, in seconds, how long browsers should enforce this HTTPS-only policy.

add_header X-Content-Type-Options nosniff; this header tells the browser not to override the response content’s MIME type. So, if the server says the content is text, the browser will render it as text.

proxy_pass http://127.0.0.1:80; this directive proxies all the decrypted traffic to Varnish, which listens on port 80.

proxy_set_header directives add specific headers to requests, so SSL traffic can be recognized.

access_log and error_log indicate the location and name of the respective types of logs. Adjust these locations and names according to your setup, and make sure the www-data user has permissions to modify each log.

fastcgi directives present in the last server block are necessary to proxy requests for PHP code execution to PHP-FPM, via the FastCGI protocol.

Optional: To prevent access to your website via direct input of your IP address into a browser, you can put a catch-all default server block right at the top of the file:
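One way to write such a catch-all block is sketched below, returning nginx's non-standard code 444 (close the connection without responding) for any request whose Host header doesn't match a configured site:

```nginx
server {
    # Catch-all: reject requests that don't match a configured server_name.
    listen 8080 default_server;
    listen [::]:8080 default_server;
    server_name _;
    return 444;
}
```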

This is the point at which you should follow the steps in our WordPress guide to install and configure WordPress. Once WordPress is installed, restart Varnish to clear any cached redirects to the setup page:

sudo systemctl restart varnish

Install the WordPress “Varnish HTTP Purge” Plugin

When you edit a WordPress page and update it, the modification won’t be visible even if you refresh the browser because it will receive the cached version of the page. To purge the cached page automatically when you edit a page, you must install a free WordPress plugin called “Varnish HTTP Purge.”

To install this plugin, log in to your WordPress website and click Plugins on the main left sidebar. Select Add New at the top of the page, and search for Varnish HTTP Purge. When you’ve found it, click Install Now, then Activate.

Test Your Setup

To test whether Varnish and nginx are doing their jobs for the HTTP website, run:
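Use wget with the -SS option to print the server response headers. The output will resemble the following sketch; the timestamps, version numbers, and Age value shown here are illustrative:

```
wget -SS http://www.example-over-http.com
--2016-01-01 12:00:00--  http://www.example-over-http.com/
Resolving www.example-over-http.com... 203.0.113.100
Connecting to www.example-over-http.com|203.0.113.100|:80... connected.
HTTP request sent, awaiting response...
  HTTP/1.1 200 OK
  Server: nginx/1.6.2
  Via: 1.1 varnish-v4
  Age: 467
  ...
```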

The third line specifies the connection port number: 80. The backend server is correctly identified: Server: nginx/1.6.2. And the traffic passes through Varnish as intended: Via: 1.1 varnish-v4. The period of time the object has been kept in cache by Varnish is also displayed in seconds: Age: 467.

To test the SSL-encrypted website, run the same command, replacing the URL:

wget -SS https://www.example-over-https.com

The output should be similar to that of the HTTP-only site.

Note

If you’re using a self-signed certificate while testing, add the --no-check-certificate option to the wget command:
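For example:

```
wget -SS --no-check-certificate https://www.example-over-https.com
```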

An additional configuration option is to enable Varnish logging for the plain HTTP website, since Varnish is now the first to receive client requests, while nginx only receives requests for pages not found in the cache. For SSL-encrypted websites, logging should be done by nginx, because client requests pass through it first. Logging becomes even more important if you use log monitoring or analysis software such as Fail2ban, AWStats, or Webalizer.

More Information

You may wish to consult the following resources for additional information on this topic. While these are provided in the hope that they will be useful, please note that we cannot vouch for the accuracy or timeliness of externally hosted materials.