The Dispatcher is a powerful tool in AEM for caching and load balancing. Nevertheless, it has its limitations, and here you can learn how to overcome them.

The infrastructure used for Adobe Experience Manager (AEM) tends to be quite similar across projects: a couple of authoring and publishing instances, an HTTP server running the Adobe Dispatcher module in front of them, and some load balancing according to your needs. This model is probably the one you have been using in most of your AEM projects, and why shouldn't you? It's what Adobe recommends. The Dispatcher allows you to cache pages as rendered by the publish instance and have them served by the HTTP server directly. You might want to tweak the Dispatcher configuration and the HTTP server, but that's the end of the story, right?

Where does the Adobe dispatcher fall short?

Well, the Dispatcher has many limitations. Its cache is based entirely on the filesystem, and it only caches documents as delivered by the AEM instance. This means some important information is never cached; a perfect example of this is the HTTP headers.

Imagine you want to deliver some files as downloads on a website. You probably want to add a Content-Disposition header to the response to make sure the browser handles it as a download. Unfortunately, the Dispatcher will only preserve the document itself, not the HTTP response, forcing you to create additional rules in your HTTP server to add the header for those requests. This results in twice the effort.
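As a sketch of what such a duplicated rule looks like, here is a minimal Apache httpd fragment that re-adds the header for cached responses. It assumes mod_headers is enabled, and the path pattern is purely hypothetical; adjust it to your own site structure.

```apache
# Hypothetical example: force downloads for PDF assets that the
# dispatcher serves from its file cache. Requires mod_headers;
# the path pattern below is an assumption for illustration only.
<LocationMatch "^/content/dam/.*\.pdf$">
    Header set Content-Disposition "attachment"
</LocationMatch>
```

Note that this duplicates logic that already exists in AEM, which is exactly the maintenance burden described above.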

So how can one overcome these limitations?

This is where an alternative can help: allow me to introduce you to Varnish Cache. This nifty caching system caches HTTP requests and their responses, not just files, and can be extensively configured to cache whatever you need based on the HTTP request. That means Varnish can do some nice tricks:

Cache HTTP headers: solving our previous dilemma.

Cache requests with a query string, and requests without a file extension: great if you ever abused selectors.

Cache redirect responses: this can drastically reduce the load on the instance.

Cache non-AEM requests: useful for integrating external systems.

Highly customizable cache flushing: no need to flush your home page every time you flush something below it.

Edge Side Includes (ESI).

All these features come with great performance: Varnish is an in-memory cache, and its configuration is actually compiled, allowing it to run at blazing speed.
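To make these tricks concrete, here is a minimal, illustrative VCL 4.0 sketch (Varnish 4 or later). The backend address, TTL values, and BAN handling are assumptions for this example, not a production-ready configuration for AEM.

```vcl
vcl 4.0;

import std;

# Assumed publish instance; adjust host and port to your setup.
backend publish {
    .host = "127.0.0.1";
    .port = "4503";
}

sub vcl_recv {
    # Selective invalidation: ban only the exact page that changed,
    # so the home page survives a flush of a page below it.
    if (req.method == "BAN") {
        ban("req.url == " + req.url);
        return (synth(200, "Banned"));
    }
    # Requests with query strings can be cached too; sorting the
    # parameters first avoids duplicate cache entries.
    set req.url = std.querysort(req.url);
}

sub vcl_backend_response {
    # Cache redirects briefly to take load off the publish instance.
    if (beresp.status == 301 || beresp.status == 302) {
        set beresp.ttl = 1h;
    }
    # Enable Edge Side Includes processing for HTML responses.
    if (beresp.http.Content-Type ~ "text/html") {
        set beresp.do_esi = true;
    }
}
```

Because Varnish stores the full HTTP response, headers such as Content-Disposition are cached along with the body, with no extra rules needed in the web server.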

Of course, these features do not come for free. I won't go into the details of setting up a Varnish instance suitable for AEM, but it requires solid technical knowledge of Varnish, AEM, and HTTP, as well as some effort to create a working Varnish configuration. Leveraging Varnish Cache in your infrastructure can be as rewarding as it is challenging.

Does this mean you should just ditch your Dispatcher setup and deploy a Varnish instance? Well, it depends (you were expecting that, weren't you?). The Dispatcher may lack several features available in Varnish, but it is made specifically to work with AEM, so setting it up is easier and cheaper. Analyze your current infrastructure to find the specific problems or opportunities you want to address before making an expensive migration. Also, you might not have the right expertise within your organization, and it can be hard and expensive to obtain.

Where can you go with further questions about Varnish?

As with most important decisions in a project, you need to think about what you are trying to achieve. That said, jumping on the Varnish bandwagon can prove to be an exciting and valuable experience for you and your team.

Netcentric has been very successful in helping clients succeed with Varnish adoption. If you are not sure whether Varnish is a good match for your organization, we would be thrilled to help you figure it out. If you have any questions, please feel free to send them to info@netcentric.biz and one of our team members will reach out to you shortly.