There's recently been some discussion (to be generous) about the pros and cons of having a firewall deployed in front of servers. The main con being that it becomes a point of failure in case of a DDoS. Microsoft has been talking about doing away with the standalone firewall and instead basing security on PKI and IPsec, meaning you can't talk to the server if you don't have an appropriate certificate (no source for this, it was a talk I attended).

So, given a simplistic scenario where you have, say, a web server, a mail server and a database server backing the web server, what's the current best-practice firewall/network design, in your opinion? I'm well aware most deployments aren't this straightforward, but I think the example works well enough to base a discussion on.

I'm also aware that the traditional, theoretical best practice would be to add as many layers of defense as possible -- but is this the design that results in the maximum availability, all things considered?

(Assume that the servers are appropriately hardened and don't expose any services they shouldn't be running, that the switches don't add anything to the security equation, and that management is either out-of-band or from the cloud part of the drawing. Whichever you prefer, since it adds its own complications either way...)

Scenario #1, simple routed

A clear, routed, non-firewalled design. The edge router may apply an ACL and some rate limiting. The DB server might be on private addresses that aren't routable from the internet. The hosts would each run their own firewall, such as the Windows built-in firewall or iptables. The admin running the hosts should consider the environment around the hosts hostile.
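For illustration, a minimal sketch of the kind of default-deny host firewall each box might run in this design (iptables; the web-server role, ports and the admin network 198.51.100.0/24 are placeholders, not part of the scenario):

    # Default-deny inbound, allow outbound (hypothetical web server host).
    iptables -P INPUT DROP
    iptables -P FORWARD DROP
    iptables -P OUTPUT ACCEPT

    # Loopback and return traffic for connections this host initiated.
    iptables -A INPUT -i lo -j ACCEPT
    iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

    # Only the services this host is meant to expose.
    iptables -A INPUT -p tcp --dport 80 -j ACCEPT
    iptables -A INPUT -p tcp --dport 443 -j ACCEPT

    # Management (SSH) only from a known admin network.
    iptables -A INPUT -p tcp -s 198.51.100.0/24 --dport 22 -j ACCEPT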

Scenario #2, traditional firewalled

The router may or may not provide any functionality above simply forwarding packets. The firewall implements the complete policy and segmentation. The hosts may, optionally, have some kind of firewalling, but probably won't. The admin running the hosts will probably consider the environment safe-ish.
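As a rough sketch of what "the firewall implements the complete policy" could look like, here is a forwarding ruleset for a Linux box acting as that firewall (the interfaces, addresses and the choice of PostgreSQL on 5432 are assumptions for the example):

    # Hypothetical perimeter firewall: eth0 = internet, eth1 = web/mail segment, eth2 = DB segment.
    iptables -P FORWARD DROP
    iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

    # Internet -> web server: HTTP/HTTPS only.
    iptables -A FORWARD -i eth0 -o eth1 -d 203.0.113.10 -p tcp -m multiport --dports 80,443 -j ACCEPT

    # Internet -> mail server: SMTP only.
    iptables -A FORWARD -i eth0 -o eth1 -d 203.0.113.20 -p tcp --dport 25 -j ACCEPT

    # Web server -> DB server only, and only on the database port; nothing else reaches the DB segment.
    iptables -A FORWARD -i eth1 -o eth2 -s 203.0.113.10 -d 10.0.0.30 -p tcp --dport 5432 -j ACCEPT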

Scenario #3, we really like firewalls

As #2, but more segmented.

I've traditionally been building either #2 or #1, and I'm more and more leaning towards #1. I'm probably lured by the "simplicity" of the design and the "purity" of wanting the hosts to be able to survive in a hostile environment. I'm aware this is in contrast to defense in depth. :)

5 Answers

I am sure you are aware that for many security threats and/or compliance requirements there exist both network-based and host-based mitigation controls.

Microsoft, being essentially a host-oriented company, is naturally going to be on the side of host-based security solutions. And when dealing with people and applications, one would traditionally side with a host-based solution. But firewalls have changed. "Next Generation Firewalls," as defined by Gartner, are designed specifically to enable policy control by user and application as well as by traditional IP, port, and protocol. (I am not always a fan of Gartner, but they have it right on NGFWs.) So an NGFW that can control access from Layer 3 through Layer 7 is going to be a better solution than a host-based approach, which is of course blind to the lower levels of communication.

My home state, Massachusetts, passed a very strict privacy law. Organizations are placing NGFWs between users and servers in order to meet its requirements. Here are a few of the law's requirements and how a NGFW meets them:

Control access to private data based on applications - Define policies to control which applications are allowed to access the servers containing private data.

Control which users have access to private data - Integrate the firewall with your LDAP servers so you can define policies to control which groups have access to private data.

Detect and block threats - This firewall should also have a threat protection capability to monitor the allowed traffic and block detected threats.

BTW, depending on the firewall you are using, you can create scenario #3 with one physical firewall.
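To illustrate that, on a Linux-based firewall you could carve one leg per zone out of a single trunk port with 802.1Q sub-interfaces and apply a per-zone forwarding policy (the VLAN IDs and addressing below are made up; commercial firewalls do the same thing with virtual systems or zones):

    # One physical trunk (eth1) carrying three zones as 802.1Q VLANs.
    ip link add link eth1 name eth1.10 type vlan id 10   # web DMZ
    ip link add link eth1 name eth1.20 type vlan id 20   # mail DMZ
    ip link add link eth1 name eth1.30 type vlan id 30   # database segment
    ip addr add 10.10.0.1/24 dev eth1.10
    ip addr add 10.20.0.1/24 dev eth1.20
    ip addr add 10.30.0.1/24 dev eth1.30
    ip link set eth1.10 up; ip link set eth1.20 up; ip link set eth1.30 up

    # Per-zone policy: only the web DMZ may reach the database segment, and only on 5432.
    iptables -P FORWARD DROP
    iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    iptables -A FORWARD -i eth1.10 -o eth1.30 -p tcp --dport 5432 -j ACCEPT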

Having said this, if the goal is protecting Personally Identifiable Information (PII), a NGFW by itself is not the complete answer. First you need to find where PII is located.

Second, once you have found all the PII, you may want to take steps to consolidate it to as few servers as possible.

Third you may want to implement controls to prevent PII from "resting" on end user workstations and being transmitted from a user to someone else.

Fourth, you may want to add a specific database access control which monitors 100% of database accesses. This must be a host-based control. There is no other way to make sure you are monitoring everything including Views, Stored Procedures, and Triggers.

My experience is mainly with large global banks, so this may not be appropriate for all, but the typical scenario for them is very like #3, except more segregated, so there are multiple DMZs, usually segregated by function, risk profile, department owner or other criteria.

Also, everything that connects to the internet is replicated, so you would have at least two connections, typically MPLS services from different providers, with choke routers joining at a load balancer and VPN acceleration either immediately inside or outside the load balancer, depending on structure.

The external firewall set then provides segregated DMZs for web servers etc., with an internal firewall providing protection for databases. In some cases there is an extra layer, with app servers, but as the applications are being built more and more into the web server these are slowly vanishing.

A separate set of connections, utilising separate firewalls or reverse proxies, would be used for internal users to connect outwards, and yet another set of firewalls for remote access connectivity.

If I had to review a global bank that didn't have at least these components, I would be very concerned as to why they weren't taking appropriate care, and an IT audit would have interesting comments as to why the business couldn't rely on the environment being secure.

I see this across industries, not just banks. The management network is usually firewalled from the other layers too in addition to being a separate physical network.
– AngerClown Feb 2 '11 at 2:48

I think my simplified example might have been too simplified, and of course the requirements will vary with the kind of business. I would also be concerned if a bank didn't utilize as many layers of protection as possible, whereas I could see a trade-off for a smaller business with less critical data. In the latter case, increased complexity (for example correctly configuring redundant firewalls) might result in worse availability than a less secure but more easily understood solution.
– Jakob Borg Feb 2 '11 at 8:46

The biggest issue I have with that theory (I've heard similar arguments) is that if you remove the firewall, you can still DoS the application server(s).

A follow-up question to that, though, is: can you successfully use IPsec and PKI on internet-facing web servers?

I would also argue that by adding defense-in-depth measures you are by definition increasing availability, but only for those who should have it, provided you do it properly.

With that being said, I'm partial to scenario #2, but not separating the Mail/Web and DB servers by switches. If there are other servers and clients on the network, I would also add a firewall between them, creating a DMZ. Following that, I would also recommend adding redundant firewalls and switches.

Every server OS I've ever configured in recent years has some facility for firewalling, and so does nearly every router. It's a no-brainer that it's a good idea to restrict access on the box itself. And of course it's good practice not to run unnecessary services.

However, there are things which might be considered functions of a firewall that cannot easily be implemented at this level (e.g. AV filtering). But again there are tools which will run on the server at the expense of extra CPU, versus the expense of another network hop. And if you're in the game of running services which may be compromised and need to maintain availability, then you should already be running a cluster rather than a single node.

IMHO, the MS winsock.dll still exposes an awful lot of complexity on the outside of the firewall - but this is probably a special case.

There's a strong argument for maintaining a bottleneck as protection against a DoS, particularly a DDoS - it makes real-time identification and blocking a lot simpler. It doesn't have to be a SPOF.
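As a small example of why a single chokepoint makes this easier, per-source throttling like the rule below is a couple of lines on the bottleneck device, but awkward to coordinate across many individual hosts (the thresholds and port are arbitrary):

    # Drop sources that open new HTTP connections faster than the threshold,
    # so a flood is identified and shed before it reaches the servers.
    iptables -A FORWARD -p tcp --dport 80 -m conntrack --ctstate NEW \
        -m hashlimit --hashlimit-name http-flood --hashlimit-mode srcip \
        --hashlimit-above 50/second --hashlimit-burst 200 -j DROP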

In larger organisations it is often seen as a benefit to have the security box owned and managed independently of the servers. IMHO that's a false premise: security should be pervasive and everywhere. There are lots of people wanting to sell you a magic box which makes your systems secure, but they won't help with most problems at the application level.

If it were me setting up the servers you describe (leaving aside the fact that each server appears to be a single point of failure), I wouldn't connect the DB to the same network as the router - I'd run 2 layers of network with the DB on the inside.

+1 for not (necessarily) trusting your network security to the salesman in the swish automobile and fancy jacket trying to sell you a black box you've never heard of.
– Cosmic Ossifrage Jul 2 '14 at 13:41

If you are worried about a single point of failure against a DDoS, you can set up multiple firewalls in a load-balanced system where you can process traffic at line speed. With this type of setup, the only DDoS attack that will succeed is one where the incoming line is saturated. At that point, whether you have a firewall or not won't matter.

I would recommend solution 2, as that will still allow communication between hosts even if there is a DDoS attack in place. In solution 1, any unwanted traffic still passes through the switch and has to be processed by each host, possibly reducing speed and adding unnecessary traffic.