Refer to my earlier posts on the subject for background. Here are some further explorations, which turn out to have little to do with clustering but everything to do with proxying.

In a number of situations you might want to set up encrypted communication between the proxy and backend. For this, Tomcat supplies an HTTPS connector (as far as I know, the only way to encrypt requests to Tomcat).

Connector setup

Setup is actually fairly simple, with just a few surprises. Mainly, the protocol setting on the connector remains "HTTP/1.1", not some version of "HTTPS" – the protocol is the language spoken, while SSL encryption is a layer on top of it, which you specify with sslProtocol. Basically, an HTTPS connector looks like an HTTP connector with some extra SSL properties specifying the encryption:
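A minimal sketch of such a connector in server.xml – the port, keystore path, and password below are placeholder assumptions, not values from a real deployment:

```xml
<!-- Sketch of an HTTPS connector; port, keystore path, and password are placeholders -->
<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
           scheme="https" secure="true"
           keystoreFile="conf/tomcat.keystore" keystorePass="changeit"
           clientAuth="false" sslProtocol="TLS" />
```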

If I really wanted to be thorough I would set clientAuth="true" and set up the proxy with a client certificate trusted by this server's truststore, thereby guaranteeing that only the proxy can even make a request. But not right now.

Note the “scheme” and “secure” properties here. These don’t actually affect the connection; instead, they specify what Tomcat should answer when a servlet asks questions about its request. Specifically, request.getScheme() is likely to be used to create self-referential URLs, while request.isSecure() is used to make several security-related decisions. Defaults are for a non-secure HTTP connector but they can be set to whatever makes sense in context – in fact, AFAICS the scheme can be set to anything and the connector will still serve data fine. Read on for clarity on uses of these properties.

Secure or not

You can secure the client’s connection to the front-end (proxy), and you can secure the proxy’s connection to the back-end. I thought I’d try all combinations of front-end http(s) and back-end http(s) –

http -> http

https -> https

https -> http

http -> https

Any of these except the last might make sense in various contexts. I’ll mention a possible scenario for each and what it looks like.

1. Completely cleartext

If there are no security concerns on a request, then standard, non-encrypted HTTP for both connections makes sense. The configuration would look completely vanilla.
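For illustration, a sketch of what that connector might look like – the host name and ports are placeholders:

```xml
<!-- Plain HTTP connector, aware it sits behind a proxy; names and ports are placeholders -->
<Connector port="8080" protocol="HTTP/1.1"
           proxyName="proxy.example.com" proxyPort="80" />
```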

Actually, let me talk about the part that’s NOT completely vanilla here: the proxyPort and proxyName settings here. When a Tomcat HTTP connector is being accessed from a proxy, by default the request is no different from a request being made directly by a client. When a self-referential redirect needs to be created, the Tomcat server’s host and port are used to construct it via request.getServerName() and request.getServerPort() (unless the app is behaving badly and reinventing that wheel incorrectly – don’t!).

When the proxy sees such a redirect, it can choose to adjust the response via the ProxyPassReverse (and similar) directives to point to itself, since generally the proxy doesn’t want to give out the back-end URL (which indeed may not be available to the client). That’s a completely valid option in many cases.

But there’s another way to accomplish the same effect, which is to use the settings above. In this case, the Tomcat connector is aware that it’s being proxied, and uses the proxyPort and proxyName settings (and, in fact, the scheme setting) to create self-referential URLs (via the same request methods). Then the proxy doesn’t need to make any adjustments to a redirect response. Note there’s no ProxyPassReverse.
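On the Apache HTTPD side, the corresponding fragment could then be as simple as this (back-end host name and path assumed for illustration):

```apache
# Reverse-proxy the app; no ProxyPassReverse needed, since the back-end
# connector already builds redirects against proxyName/proxyPort
ProxyPass /petcare http://backend.internal:8080/petcare
```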

2. Completely secure channel

It’s common to encrypt both connections, from client to proxy and from proxy to back-end. The proxy may live in the DMZ (outside the firewall) and its communications may be considered vulnerable; or, it may be considered wise to keep even internal communications “eyes-only” to minimize potential internal compromises. Whatever the reason, sometimes you need to secure both channels. Here’s how that looks:
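A sketch of the proxy's HTTPS vhost under these assumptions – certificate paths and host names are placeholders:

```apache
# Hypothetical HTTPS vhost on the proxy; paths and host names are placeholders
<VirtualHost *:443>
    ServerName proxy.example.com
    SSLEngine on                       # offer an SSL connection to clients
    SSLCertificateFile    /etc/ssl/certs/proxy.crt
    SSLCertificateKeyFile /etc/ssl/private/proxy.key
    SSLProxyEngine on                  # make an SSL connection to the back-end
    ProxyPass /petcare https://backend.internal:8443/petcare
</VirtualHost>
```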

Notice that you need to turn on two SSL engines on the proxy: one to offer an SSL connection to clients (SSLEngine) and one to make an SSL connection to the back-end (SSLProxyEngine).

3. Secure in the front only

Often the front-end proxy may only exist for performance reasons – perhaps only to handle the SSL overhead on the connection and handle load balancing. The proxy may be on a secured network with the back-end and there may be no need or desire to incur the overhead of encrypting the proxied connection. In this case, you use a plain HTTP connector in Tomcat, but you need to tell Tomcat that it’s actually secure on the front.
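A sketch of that connector – plain HTTP on the wire, but reporting the front-end's secure scheme; host name and ports are placeholders:

```xml
<!-- Plain HTTP connector that reports the front-end's secure scheme;
     host name and ports are placeholders -->
<Connector port="8080" protocol="HTTP/1.1"
           scheme="https" secure="true"
           proxyName="proxy.example.com" proxyPort="443" />
```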

The scheme, quite simply, is for creating self-referential URLs (along with proxyName and proxyPort). The default is “http”, but the proxy is taking https connections, so Tomcat (and the app) need this setting; otherwise, when you make a request for https://proxy/petcare and need to be redirected to /petcare/ (with a trailing slash), you’ll find yourself at an inoperable URL http://proxy:443/petcare/.

The “secure” attribute is a little more subtle. It has two uses that I know of: cookie generation and security constraints.

Cookies sent over a secure connection shouldn’t later be sent on a non-secure connection – that would be leaking secure data. Standard Tomcat session creation uses a cookie that’s marked “Secure” if the connector is marked secure. The response header includes a line like this:

Set-Cookie: JSESSIONID=[...]; Path=/petcare; Secure

The "Secure" flag on the cookie means the browser will only send it back over secure (HTTPS) connections. Tomcat doesn't mark cookies secure on a plain HTTP connector, since then the browser would never return them; but here the connection with the browser really is secure, so you do want the flag – hence secure="true", so Tomcat knows to set it.

An application may specify security constraints in web.xml for Tomcat to enforce (or implement its own) requiring a secure connection; if a request is made on a non-secure connection, it should be refused or redirected to a secure connection. As an example, I modified petcare’s web.xml to add such a constraint:
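A minimal web.xml constraint of that sort, requiring HTTPS for every URL in the app (the resource name is arbitrary):

```xml
<!-- Require a confidential (HTTPS) transport for the whole application -->
<security-constraint>
    <web-resource-collection>
        <web-resource-name>Everything</web-resource-name>
        <url-pattern>/*</url-pattern>
    </web-resource-collection>
    <user-data-constraint>
        <transport-guarantee>CONFIDENTIAL</transport-guarantee>
    </user-data-constraint>
</security-constraint>
```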

In order to enforce such a constraint, Tomcat of course needs to know whether the connection with the user is secure. Since the plain HTTP connector defaults to secure="false" but we know the user's connection really is secure, we again need to tell Tomcat to treat this as a secure connector with secure="true".

As it happens, this is the only case where you need to specify these properties, as in all other cases the defaults are correct. But let’s look at a final, pathological case:

4. Secure in the back only

Why bother encrypting the back-end connection when the front end is wide open? It creates no security, just extra encryption work, and no one would do this in a real scenario. But it's actually a little interesting to try to set up.
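A sketch of the connector for this case – encrypted on the wire, but claiming to be non-secure; keystore and host details are placeholders as before:

```xml
<!-- HTTPS connector that reports a non-secure scheme; all values are placeholders -->
<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
           scheme="http" secure="false"
           keystoreFile="conf/tomcat.keystore" keystorePass="changeit"
           proxyName="proxy.example.com" proxyPort="80" />
```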

The scheme and secure properties are set as for a plain HTTP connection here. The connector is in fact encrypted (you can visit it with a browser to confirm), but from the point of view of the client, this is a non-secure HTTP connection.

Scheme is set to “http” because I’m having Tomcat generate self-ref URLs based on the proxy, not itself, and the client is accessing the proxy via cleartext HTTP.

I must set secure="false" so that standard Tomcat session creation produces a cookie NOT marked "Secure" – otherwise, since the browser is only accessing the proxy via HTTP, the cookie would never be returned to the server, and a new session would be generated with every request. The browser may not even bother to store the cookie, as it arrived in a non-secure response.

As it turns out, however, this is all for naught. Self-referential redirects come back to me with the "https" scheme, and cookies come back with the "Secure" setting. It appears Tomcat doesn't trust my settings here and overrides them when I set SSLEnabled="true" – which perhaps no one has noticed or cared about, as there's no scenario where you'd want this setup. And so it turns out I didn't need to set these in my second scenario; they're not just the defaults for an SSL connector, they're hardwired.

The secure redirect

The more astute among you may be wondering why I'm cluttering up the back-end configuration in my scenarios with front-end details. Shouldn't the back-end operate unaware that it's being proxied? Ideally, yes – but there's at least one case where you really need it to be proxy-aware: when you have a security constraint like the one in the third scenario, and you want the user to be able to start with a non-secure connection and be redirected to a secure one only as necessary.

Currently I know of no way to handle this kind of redirect with Apache HTTPD and an unaware back-end. You can set up a ProxyPassReverse that rewrites back-end redirects to the same proxy origin, but in this case, you actually want the redirect from the non-secure proxy server to go to a different proxy server (the secure one). It will likely be a vhost on the same proxy instance, but to the browser it is a different server and has to be presented as such. The only way I can see to do this is to have the back-end generate the correct proxy-based secure URL when it’s time to switch from HTTP to HTTPS. The backend can do this if the proper proxyName and redirectPort are specified.
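A sketch of such a back-end connector – non-secure itself, but pointing constraint-driven redirects at the secure front-end; host name and ports are placeholders:

```xml
<!-- Non-secure connector whose security-constraint redirects target the
     secure proxy vhost; host name and ports are placeholders -->
<Connector port="8080" protocol="HTTP/1.1"
           proxyName="proxy.example.com" proxyPort="80"
           redirectPort="443" />
```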

That's the first time I've mentioned redirectPort, a setting typically cargo-culted into Tomcat connector configs without any understanding of what it's for. It exists for exactly this scenario (notice there's no point in having it on a secure connector – it would never be used). When a security constraint requires a redirect to HTTPS, the redirect URL is built from the "https" scheme, request.getServerName(), and redirectPort. So you get a session like this: