I am wondering how many of you who work for LARGE companies have a network architecture that enforces the use of three separate firewalls to get at the data. In other words:
* Separation of external (internet) parties and a presentation tier by a firewall
* Separation of presentation and application tier by a firewall
* Separation of application and data tier by a firewall

In short: Public->Presentation->Application->Data (where each arrow is a firewall)

Here is my problem: I work for a very large US company (75K+ employees) where each environment seems to have a different number of segmentation firewalls. We wanted to standardize our firewall architecture, but:
1) We can't find any real material to justify the need for three firewalls (as opposed to, say, just a single perimeter firewall)
2) We can't qualify the value-add of three layers of firewalls.
3) We can't sort out if this should be an architecture for just internet facing apps, or for ALL applications/appliances/gear.

3 Answers

What you're looking for, re: justification for a three firewall architecture, sounds like a bit of a fantasy world that isn't going to map well onto reality. Unless you control all the applications, the harsh reality is that most application vendors are assuming unfiltered and unfettered access between the software components from each tier to the adjacent tier (and, possibly, to the non-adjacent tier, too).

I've done some work in environments where management-mandated "security" by way of firewalling server computers away from the LAN and minimizing the number of exposed services was employed. It was a challenge every time any new software, hardware, or vendor became involved because all the "traditional" assumptions of unfettered end-to-end connectivity within the LAN were turned on their ear. Implementing anything ended up costing more in such an environment.

My strategy and recommendation for limiting communication and exposure within a LAN has been as follows:

Use access-control lists / firewall rules on internal routers / firewalls to "paint with a broad brush" and exclude types of traffic that are very apparently undesirable (access to the subnet / VLAN that the IP security cameras are attached to from anywhere but the VLAN where the video aggregation servers are installed, access to the Internet from a subnet where only internal-facing server computers are installed, etc).

Enforce more specific access-control rules from firewall software running on the various server computers themselves (Windows Firewall, iptables). Ensure that servers have only the required software installed and running, and that only the desired services / daemons are listening for network traffic on only the desired interfaces. Common-sense approaches to change-control, password / SSO security, and keeping operating systems and applications updated rule the day here.
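On the host side, a minimal sketch of that second layer (assuming a Linux web server and a hypothetical admin subnet of 10.0.5.0/24) could be:

```shell
# Default-deny inbound; allow return traffic and loopback.
iptables -P INPUT DROP
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -i lo -j ACCEPT

# SSH only from the admin subnet; HTTP/HTTPS from anywhere.
iptables -A INPUT -p tcp --dport 22 -s 10.0.5.0/24 -j ACCEPT
iptables -A INPUT -p tcp -m multiport --dports 80,443 -j ACCEPT

# Audit what is actually listening, and on which interfaces.
ss -tlnp
```

The `ss` check at the end is the point of the exercise: the rules only need to be as complicated as the list of daemons that are genuinely supposed to be listening.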

Firewalls allow you to quantify and arbitrate traffic flows. So-called "layer 7" firewalls stick their nose into the application-layer traffic (and even then, at some arbitrary layer of depth into that traffic) and can enforce even more specialized arbitration rules than "traditional" firewalls. Firewalls do not "provide security", though, and are only as effective as the humans designing the rule sets or monitoring the logs. Invariably, the more tightly constrained the rules are initially, the more compromises end up being made to make the applications work.

I'd be dubious of an effort to add firewalls to "add security", personally. I see increased maintenance cost for all applications on the network without any guarantee of a quantifiable improvement in the environment's resistance against attack or diminished risk profile.

I've only found MS and a very small number of other vendors who assume access to everything.
– Cian Sep 28 '09 at 15:12


@Cian: That's been the opposite of my experience. Pretty well every little "mom and pop" application that I've dealt with assumes unfettered LAN access, so much so that often my inquiries are met with "blank stares" when I ask for a list of the protocols / ports necessary for a client computer to communicate with a server computer for their product. Microsoft, to their credit (and to the EU anti-trust action's credit), has done a reasonably good job enumerating the traffic flows for their applications over the last few years.
– Evan Anderson Sep 28 '09 at 15:22

@Evan Thankfully, I don't have to deal with a large number of "mom and pop" applications, and with the few that I do, we've had a reasonable amount of success getting lists of ports from them (generally 80/443 and a database port these days, whatever they're actually doing over them, unfortunately)
– Cian Sep 28 '09 at 15:32

It's about limiting the damage things can do. Your firewall rules will rarely prevent an attack outright, but if something does get compromised, they should prevent it from doing too much damage (as any given box should only be able to access the list of services that you've verified it needs access to). As well as that, they give you the benefit of knowing what traffic is on your network at all times.

Standardising your firewall setup, however, may be silly. You don't need the same number of firewalls at every site, and the firewall architecture at each site should be a trade-off between ease of deploying new services and security. If your environment is constantly changing, or is relaxed about what employees are allowed to do, extensive firewalling is going to be far too much of a pain for very little benefit. If you're willing to limit your employees' freedom, then it's well worth doing, with firewalls between the Internet, Internet-accessible services, and your LAN (possibly LANs; if one LAN deals with sensitive data, you probably want it on a separate firewalled subnet if possible).

Essentially, the point of firewalling should be to allow only the traffic that you know you need between servers that you know need to talk to one another. That doesn't mean that you should necessarily have multiple firewalls, although these can be useful (if nothing else, you run out of ports on a single firewall eventually). It does mean that you should group together systems that require similar levels of trust and access to similar resources. If you are going with multiple levels of firewall, it's worth looking at using heterogeneous systems (e.g. Checkpoint in front of your webservers, with ASAs between webservers and database servers), so one vendor's vulnerability doesn't open every layer at once.
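As an illustration of "only the traffic you know you need," an inner firewall between a web tier and a database tier can often be reduced to a handful of rules. The subnets and the PostgreSQL port here are hypothetical, and the syntax assumes a Linux box doing the filtering:

```shell
# Hypothetical tiers: 10.1.2.0/24 = web servers, 10.1.3.0/24 = DB servers.

# Allow return traffic for established sessions.
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT

# Web tier may reach the DB tier only on the database port (5432 here).
iptables -A FORWARD -s 10.1.2.0/24 -d 10.1.3.0/24 -p tcp --dport 5432 -j ACCEPT

# Everything else between the tiers is dropped (and worth logging).
iptables -A FORWARD -s 10.1.2.0/24 -d 10.1.3.0/24 -j LOG --log-prefix "web->db denied: "
iptables -A FORWARD -s 10.1.2.0/24 -d 10.1.3.0/24 -j DROP
```

The LOG rule before the final DROP gives you the "knowing what traffic is on your network" benefit mentioned above, even when the answer is "traffic we're refusing."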

We run one firewall outside of even the DMZ, just because, what the hell, why not? The DMZ is meant to be exposed, but it's foolish to allow unlimited exposure.

Then we run one between the DMZ and the servers we allow to contact the DMZ. Everything that we allow to connect to the internet connects through a proxy in the DMZ; it's a quick cut-out in the event of an attack, and a machine that can be compromised without compromising the server itself.
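That proxy-only egress pattern can be sketched with a couple of rules on the inner firewall. The addresses and the proxy port (Squid's default 3128) are hypothetical:

```shell
# Hypothetical layout: 10.2.0.0/16 = internal hosts,
# 192.0.2.10 = proxy in the DMZ listening on 3128.

# Internal hosts may talk to the DMZ proxy...
iptables -A FORWARD -s 10.2.0.0/16 -d 192.0.2.10 -p tcp --dport 3128 -j ACCEPT

# ...but any direct egress to non-internal addresses is dropped.
iptables -A FORWARD -s 10.2.0.0/16 ! -d 10.0.0.0/8 -j DROP
```

Deleting the single ACCEPT rule (or simply stopping the proxy daemon) is the "quick cut-out" during an attack: internal hosts lose Internet access instantly without touching anything else.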

Then, finally, we run a firewall between everything that connects out and our internal network, so if something is compromised in the DMZ, it can't spread internally.

There is an exception: the corporate WAN is on the same tier as the internal servers, so an exploit in the WAN is only one firewall away from the internal systems. However, the other sites have the same firewall layout as we do, so there are still two firewalls between the outside world and our internal systems on that vector.

In the sense that you are talking about (vis-à-vis a specific application allowed external access), we would have the external firewall, a proxy, a firewall, and then an application server and a database server on the same tier. The point of a firewall (other than the usual host firewalls) between a database server and an application server is hard to fathom: access between the two is required, and that required access is already sufficient for a breach regardless of your firewall.

I think standardization between SITES is silly, but you should be standardized as far as exposed applications/servers go. You don't want to leave that up to local discretion: someone WILL make a bad decision. For example, our satellite offices have only one firewall and then a VPN to the regular network, but they have nothing to protect, whereas our financial systems aren't accessible except locally.