I created a *.my-domain.ca CA (with my-domain.ca as an alternate name) and cert, and I keep getting HSTS errors in a lot of browsers. It seems as though I have not created the SSL stuff correctly. I also cannot add the CA to my trusted CA list without a "password" in my OS X Keychain, and that is not a parameter in the pfSense cert configurator... I haven't tried adding the CA to Windows machines yet.

I finally got a few backend servers running via one front end from the outside :)

Most of my servers have their own SSL settings… Would this explain the HSTS error from some browsers? I don't mind removing LAN SSL from them, it just takes a bit of time... I hate to see effort wasted if I know it wouldn't help.

2.1) Is it normal for servers to be bound to their specific backend from the WAN? Once I create the "dsm" subdomain for my Synology, I can no longer access it via its old port (5001).

2.2) If you use Synology products at all: I am also having trouble getting mobile applications to work through the rProxy on 443 from outside my LAN, i.e. "on the road". This doesn't surprise me, but I am also positive it did work for me in the early stages... Maybe it is just a cert-based issue that will be solved by proceeding with (1).

HSTS is a header sent by the server, and is cached by the client for a configurable amount of time (1 year is the usual setting..). It can even be made permanent if the domain is submitted to an online preload list; then you might never get rid of it…
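For reference, a minimal sketch of what sets that header, assuming you wanted haproxy itself to add it (the one-year max-age is just the common value mentioned above):

    http-response set-header Strict-Transport-Security "max-age=31536000"

A backend webserver with HSTS enabled sends the same header on its own responses.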

It's actually good to have, as it 'forces' all future connections from clients to be over https with a VALID certificate for the URL they request in the address bar. (You will need a valid certificate; this means installing the CA you used to sign the certificate on all client computers, or getting a certificate from a real CA like Let's Encrypt, or buying one..)

If you already have a valid certificate, then perhaps you forgot to load the intermediate certificate into pfSense; that could cause issues.. Check with, for example, https://www.ssllabs.com/ssltest/ whether the chain is indeed incomplete.

2.1) This is due to the setting "transparent client ip"; sadly the implementation is not 'perfect': all reply traffic from the server is 'captured' and sent to haproxy, even if the initial request did not go through haproxy.. (The webgui does warn about this effect, sorry..)

A workaround is possible by making the server listen on a second port or a second IP, but depending on the machine running the website, that might be difficult to configure on that side..

2.2) I'm not using Synology myself.. But yes, if the certificates are not 'valid', that could cause issues..

I believe that works! (Edit: jk, tried IE and no luck…) I used the advanced option and am listening on :80; I'm not sure how to do the 'action' one... I know the advanced statement includes "if", but does this allow HTTP traffic through when I eventually have some, or does it force-redirect all URLs to HTTPS?

Is there any way to have a single frontend do SSL offloading as well as pass HTTPS through so the SSL handshake is done by the specific servers (via SNI, I believe)?

Is the new frontend1 shared with frontend2, with frontend1 being the primary?
1.1) If so, the backend2 "forwarder" only sees frontend1, rather than frontend2... so I'm stuck in that case. Or,
1.2) If not, do I set frontend2 to listen on 10443 only and frontend1 to be the main :443 :80 listener? When I tried this, it allows SNI to work, but the forwarding to SSLfrontend2 does not work.
1.25) In the backend that forwards to frontend2, does the SSL box to the right of the "Forwardto" box have to be checked? Everything seems to stop working when it is checked.

1.3) It's just a chain in my small mind... outside-https-request -> SNIfrontend1 -> backend "forwarder" -> SSLfrontend2 -> server (not working after 1.2, or 1.25)
or the non-offloading scenario, still HTTPS... outside-https-request -> SNIfrontend1 -> server (working after 1.2, not after 1.25)

Is that the correct way of looking at it?

Second issue
2.1) When attempting to make an HTTP request, it says "Server Hangup", which leads me to believe that my Frontend2 is sort of working and your advanced config code is doing something.

2.2) This may resolve itself once the first mess is fixed up. We can work on issue one first… cause it seems like a doozie :)

2.3) Again, I really appreciate all the help with my complex desires ;D This is probably the most ridiculous thing it's been used for ;)

All 3 frontends should be 'primary'.
Using 1 frontend for both 80 and 443, while using them both in TCP mode, means the backend will receive mixed connections.. some with plain http, others with ssl traffic.. That won't work..
1.1/1.2) Check out my new wiki page :)
1.3) It's indeed a chain.

2.1) When requesting an HTTP page, it will first wait 5 seconds in the first frontend for the SSL 'hello'.. Then it's forwarded to the second frontend, which also waits for the client to send the SSL 'hello'.. The client never sends this, and haproxy cannot 'decrypt' the traffic.. This is caused by 1).
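For anyone trying to reproduce this chain, here is a rough sketch of what the pieces boil down to in raw haproxy config (the names, the loopback address, and the cert path are placeholders, not the exact generated config):

    frontend shared-frontend1
        bind :443
        mode tcp
        tcp-request inspect-delay 5s
        tcp-request content accept if { req_ssl_hello_type 1 }
        use_backend forward-to-frontend2 if { req_ssl_sni -i photo.my-domain.ca }
        default_backend sni-passthrough-servers

    backend forward-to-frontend2
        mode tcp
        # hands the still-encrypted stream to the offloading frontend below
        server frontend2 127.0.0.1:10443

    frontend ssl-offload-frontend2
        bind 127.0.0.1:10443 ssl crt /var/etc/haproxy/mycert.pem
        mode http
        default_backend photo-http

The 5 second inspect-delay is the wait for the SSL 'hello' described above; a plain-http client never sends one, which is why such requests stall in this chain.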

It does exactly what I want it to, but I couldn't quite make mine do it. Very close though!

SSL and SNI work. HTTP does not.

If I make a request in a fresh browser (say pfsense.my-domain.com), it does not forward to https, but rather shows the 503 'service not available' page.

I cannot get one of my servers to show UP: photo.my-domain.com, my http test server. I think it has to do with which frontends it is in.
You kept your www (http) page as the default in http-frontend1 and in the SSL-offloading Frontend3… My default www page (called webroot) is HTTPS, so I wasn't quite sure how to add the HTTP page to the frontends without making it the default... Regardless, it needs to be UP first.

Only photo.my-domain.ca is redirected to https; all other requests go to backend photo-http_http_ipvANY,
which doesn't seem logical to me.

Perhaps you should add a ! before the ACL name?
http-request redirect scheme https if !httpRedirectACL
That way 'photo' can be retrieved over http, and everything else, like pfsense.my-domain.com, causes the redirect?
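Spelled out with the ACL included (assuming httpRedirectACL matches the photo hostname, as the behaviour above suggests):

    acl httpRedirectACL hdr(host) -i photo.my-domain.ca
    http-request redirect scheme https if !httpRedirectACL

With the !, a Host header of photo.my-domain.ca skips the redirect and everything else is sent to https.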

I put a "!" before "httpRedirectACL" under Condition acl names (in Frontend1-http) and now they all seem to be redirecting to https… which is better than before as the majority of my servers are https only.

Changing to GET didn't help, although I didn't understand what you meant by "send a version+host header"... so I changed to a "basic" check, and that makes the server turn green. To get to it I had to remove the recently added "!" from 1). But now it seems to be stuck as HTTPS again.

....hold on... after getting the server UP (with basic health check) and keeping the "!" everything seems to be working!!!! ;D though I haven't tested fully yet.

It's the default theme of the pfSense 2.3 beta snapshots. I've only recently converted haproxy to Bootstrap for use on 2.3..

If you have some time, install pfSense 2.3 on a virtual machine, add the haproxy package, and report any issues that might still exist in the package :).

P.S. The 5 millisecond timeout on the offloading backend was intended to be a 5000 millisecond timeout.. It might currently be eating a bit more CPU than needed.. (Going to change my wiki screenshot as well..)

It's just a web service running on an rPi2. I turned off https to test http with the rProxy, so it shouldn't be redirecting to https itself.
It appears that the issue is resolved (now that I've moved to Firefox… Chrome loves remembering broken things)... I can access photo (when it's UP) via http without it redirecting. It appears to use my SSL offloading when I type "https://" into the URL, though... I'm not sure that I want it to do that. All of my other servers are redirecting to https fine, even if I try http :)

With the "Http check version" set to:

HTTP/1.1\r\nHost:\ photo.my-domain.ca\r\nAccept:\ */*

the server goes to "down".

I cannot find chkresult, but here are a few stats that stuck out on the down server:

under photo-http_http_ipv4 and photo-http_http_ipvANY (they are red)
Server Lastchk=L7STS/401 in 10ms
Server chk=1

Are there any issues with running the check method as basic?

"Perhaps its a 'permission denied' response? Workaround for that could be checking a different url or accepting 404 as a 'valid' response.. http-check expect status 404"

I'm not sure if I need to worry about this anymore, as the http request does go through the frontend and looks for a backend.

Bootstrap FTW!

I'll deploy another VM and give it a go. I'm not too certain how I'll test it all without interrupting others in my house… I need to get better at running a "network lab"

Q1) Can I make https requests to http servers dead-end into nothing or an error page?
Q2) Can I have no defaults so that incorrect domain names also go nowhere, or is this poor practice?

If you don't want photo to be reachable over https, then remove it from 'Frontend3'.

Ok, so the issue indicated by LastChk is that haproxy gets a 401 response, which is by default considered invalid. But you could configure it to expect that status: put "http-check expect status 401" into the advanced settings of the backend. 'Chkresult' indeed does not exist; I meant the one you found..

Even though the basic health check works, it doesn't check whether the webserver is 'properly working'; it only checks whether a connection can be made. If a CGI or database backend is not working, that might go unnoticed, and haproxy would declare the backend healthy while it is unable to respond to requests.. Even though the impact is probably small, it can mean the difference between no response at all, an error response from the backend (depending on how functional it is), and a 'no server available' page from haproxy, which could trigger an email alert and is easy to diagnose. In environments that load-balance the same domain across multiple servers, it is more important to properly detect whether a server is in a bad state, so it can be taken offline and requests balanced across the remaining servers.
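In raw config terms, the suggested combination would look roughly like this in the backend's advanced settings (using the same host header as the check string earlier in the thread):

    option httpchk GET / HTTP/1.1\r\nHost:\ photo.my-domain.ca
    http-check expect status 401

That keeps a real layer-7 check in place while teaching haproxy that this particular server's 401 means 'alive'.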

Q1) It is possible to deny requests using ACLs + actions. The error returned by haproxy can be changed using the errorfiles. (Example on the Templates tab.)
Q2) It's possible to have no default. Whether or not that is good practice, I don't really know; it might confuse search-engine crawlers if they find 10 (misspelled) URLs leading to the same website, but I don't have much experience with that. I personally kinda like to always return 'some' response. Perhaps put a 'redirect location' action at the bottom, pointing towards the main webroot URL..
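As a sketch of both answers in one place (the ACL names and hostnames are made up for illustration; 'webroot' is the main site mentioned earlier in the thread):

    # Q1: refuse https requests for a host that should be http-only
    acl photo-host hdr(host) -i photo.my-domain.ca
    http-request deny if photo-host

    # Q2: instead of having no default, send unknown hosts to the main webroot page
    acl known-host hdr(host) -i webroot.my-domain.ca photo.my-domain.ca
    http-request redirect location https://webroot.my-domain.ca if !known-host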

I can see how the health check can be very important, but at this point I do not desire notifications, or, for that matter, for the health to even show up correctly.

This may just be my last post's Q1, but is there a reasonably easy way to disable servers so they resolve to an error page? I honestly haven't checked the docs, but I've got a bunch of rProxy config, and I want to simply disable a few backends without affecting the others running… Is there an error page for "temporarily unavailable" I can quickly deploy? You can point me to the docs if it's a lot to explain here, and we may be getting a bit off topic 8)

Q1) 'Disabling' a server can easily be done from the haproxy widget or the stats page.
By default haproxy will then send a 503 error, but you can change that page using the error files.
On the 'Files' tab, add a new errorfile (or use the example from the 'Templates' tab as a starting point).
Then assign the errorfile to the backend by adding a line with code 503 and selecting the file you made available.
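In the resulting config, that assignment comes out as a single line in the backend, something like the below (the path is whatever the Files tab stores; this one is a guess):

    backend photo-http
        errorfile 503 /var/etc/haproxy/errorfile_maintenance.http

Note the errorfile contents are returned verbatim, so the file has to be a complete raw HTTP response including the status line and headers, which is why starting from the Templates example is the easy route.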

There are other, more 'advanced' solutions involving stick-tables and ACLs, but those are probably not easy to recreate through the current webgui options.

As haproxy is only listening on one port, firewall rules cannot make much of a difference based on the domain used.. (Maybe that's not entirely correct ;) if using transparent-client-ip, you could technically block haproxy from reaching the backend with some floating rules..)

Another way would be to use an ACL in haproxy ('source matches ip or alias') and then perform an 'http-request block' action on that ACL.
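For example, a sketch of that ACL + block combination (the subnet is a stand-in for whatever source you want to match):

    acl lan-clients src 192.168.1.0/24
    http-request deny if lan-clients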

Is my rProxy config mirrored within my network at all? As in: I have 4 web addresses running on one server via different ports (not vhosts), and I'd like their public domain names, from within my LAN, to resolve to LANIP:WANPUBLICPORT… Is this crazy talk? I don't need this feature; it's just something that would make HAProxy very seamless for me.

HAProxy can be used from the LAN network, but do make sure you keep routing both request and response traffic through pfSense.. This is especially required when using 'transparent client ip'; when doing so, the client and server may not be on the same subnet.

I was having a bunch of issues accessing my Synology NAS, as it was using vhosts to redirect standard websites… After much grief, it appears that disabling vhosts and applying your first example to the "Backend pass thru" works!

# rewrite the request line, prefixing the path with /folder-name/ (e.g. "GET /x HTTP/1.1" becomes "GET /folder-name/x HTTP/1.1")
reqirep ^([^\ :]*)\ /(.*) \1\ /folder-name/\2

From the WAN side:

Now, by accessing photo.root-domain.ca, I get redirected by the index.html in the root folder, which is how I get redirected to PhotoStation, for all you Synology fanboys out there.

By accessing root-domain.ca I get direct access to the index.html file found in the /app1 folder without having to specify it in the original URL. It is sort of hidden, if you will. I'm not sure if navigation to other folders is possible now, but I would like to explore for any introduced security/functionality issues.

In a business situation, if I were running a reverse proxy like this, I would most definitely run it on a VM in a completely different subnet with all of my backend clients in that subnet. I would use routing to make my production LAN talk with the rProxy server in the other LAN. I would imagine VLANs could be configured to do this as well, but I do not know much about configuring them. Maybe someday.

Just got most of the web services working on the Synology (DSM 5.2, latest as of this post). Then I upgraded it to the newest software (DSM 6.0 Beta2) and most of my web services on that box behind HAProxy broke. :o

I sort of assumed it was due to settings not porting over after upgrading… Now I've been fiddling with it for 2 days and still have had no luck getting things back online. I'm sort of glad I did have it working a couple of days ago, because that made me understand that my crazy setup did indeed work as intended.

To my point,

I can access my websites by going to LAN-IP:443 and LAN-IP:443/sub-root-dir, and they take me to the document roots on the NAS and execute the appropriate index.html files in the specified directory. That's good.

The stats show the websites as DOWN, and when attempting to access them from the WAN, I get "503 Service Unavailable". I have two LAN IPs on the NAS. Both work identically from the LAN, but only one is used behind this rProxy, to prevent the weird DNS issues I was getting at the top of this thread. Really just brute force and ignorance there.

Is there any way to see logging for such issues? Note: the main NAS landing page is on :5001, and it continues to work fine behind HAProxy:443 from the outside.
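(Side note on the logging question: haproxy logs through syslog. A hand-written config would enable it with roughly the lines below; the socket path is the usual FreeBSD one and is an assumption on my part, and on pfSense the package normally wires this up for you, with the output landing under Status > System Logs.)

    global
        # send haproxy log lines to the local syslog daemon
        log /var/run/log local0 info
    defaults
        log global
        option httplog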

Any other advice to get these pages rolling again? There isn't really anything special about the two troublesome websites, other than the box they are on. vhosts are disabled (as far as I know...), but maybe there are some issues there in this beta version.

This works fine for a simple website, but I have one strange creature that only works by landing in the root folder. I'll try to give a brief background. I have a photo website that can be accessed at photo.my-domain.com, which redirects to photo.my-domain.com/~user1/photo. Currently, the index.html redirects to only the one username, and simply swapping it out allows it to work with other users.

Is there an easy way to choose an index.html by URL in the same root, such that user1.photo.my-domain.com will look at root/index1.html and user2.photo.my-domain.com will look at root/index2.html? I tried this yesterday with the above code, placing the index.html inside root/user1/ (redirecting to */~username/photo), but for some reason the server responds with its own unavailable page, thus making HAProxy keep the 'Online' status for that server… not exactly what I am going for.
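One sketch of that idea using the same reqirep mechanism as above (the ACL names are hypothetical and this is untested; it only rewrites a bare "/" request line):

    acl user1-host hdr(host) -i user1.photo.my-domain.com
    acl user2-host hdr(host) -i user2.photo.my-domain.com
    # "GET / HTTP/1.1" becomes "GET /index1.html HTTP/1.1" for user1's hostname, etc.
    reqirep ^([^\ :]*)\ /\ (.*) \1\ /index1.html\ \2 if user1-host
    reqirep ^([^\ :]*)\ /\ (.*) \1\ /index2.html\ \2 if user2-host

Whether the backend then serves those files correctly is a separate question, which is what the 'unavailable page' symptom suggests is going wrong.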

I was wondering if you can comment on any updates to this setup. I have a DSM, Photo Station, a web server, etc… Things are working OK, but I'd like to do certificate authentication, which led me to look at getting a wildcard SSL cert, which then made me start wondering about my setup.

Are things still working for you OK? If possible, can you post your latest configs so I can use them as an example?

@wiz561: I've moved away from Synology due to the lack of integration between applications. Nothing against them; I do really like their products out of the box. I've moved to NextCloud, Google Photos, and dedicated web servers for hosting. I still use HAProxy as discussed in this thread.

I do believe Synology has since integrated a reverse proxy server right into DSM. You might want to check it out; I have never used it, so I can't officially vouch for it.