Designing and Using DMZ Networks to Protect Internet Servers

Mick explains how to care for services that come into contact with untrusted networks.

One of the most useful tools in firewall
engineering today is the DMZ, or DeMilitarized Zone, a network
where all publicly accessible services are placed so they can be
more closely watched and isolated from one's internal
network. DMZs, bastion servers and Linux make a particularly good
combination.

But what, really, is a DMZ? Is there more than one correct
way to design one? Does everyone who hosts Internet services need a
DMZ network? These are issues I really haven't addressed yet, so
this month we're going to take a higher-level look at DMZ
security.

By the way, you may decide that your current DMZ-less
firewall system is reasonable for your needs. I hope you keep
reading, regardless: any host or service (whether on a DMZ or not)
that has direct contact with untrusted networks demands particular
care, and many of the techniques and considerations discussed in
this article apply to both non-DMZ and DMZ environments.

Some Terminology

Let's get some definitions cleared up before we proceed.
These may not be the same definitions you're used to or prefer, but
they're the ones I use in this article:

DMZ (DeMilitarized Zone): a
network containing publicly accessible servers that is isolated
from the “internal” network proper but not necessarily from the
outside world.

Internal Network: that which
we're trying to protect: end-user systems, servers containing
private data and all other systems with which we do not wish the
outside world to initiate connections. Also called the protected
network.

Firewall: a system or network
that isolates one network from another. This can be a router, a
computer running special software in addition to or instead of its
standard operating system, a dedicated hardware device (although
these tend to be prepackaged routers or computers), or any other
device or network of devices that performs some combination of
packet filtering, application-layer proxying and other access
control. In this article the term will generally refer to a single
multihomed host.

Multihomed Host: any computer
having more than one network interface.

Bastion Host: a system that
runs publicly accessible services but is not itself a firewall.
Bastion Hosts are what we put on DMZs (although they can be put
anywhere). The term implies that a certain amount of OS-hardening
has been done, but this (sadly) is not always the case.

Packet Filtering: inspecting
the IP headers of packets and passing or dropping them based on
some combination of their Source IP Address, Destination IP
Address, Source Port (Service) and Destination Port (Service).
Application data is not considered, i.e., intentionally malformed
packets are not necessarily noticed, assuming their IP headers can
be read. Packet filtering is part of nearly all firewalls'
functionality but is not considered, in and of itself, to be
sufficient protection against any but the most straightforward
attacks. Most routers (and many low-end firewalls) are limited to
packet filtering when it comes to network security.
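
To make this concrete, here is a minimal packet-filtering sketch
using ipchains (the filtering tool of the Linux 2.2 kernel, which
we'll meet again shortly). The addresses are hypothetical, and note
that these rules consult nothing but IP headers:

    # Drop all forwarded packets unless a rule below accepts them
    ipchains -P forward DENY

    # Allow inbound SMTP (TCP port 25) to a single, hypothetical
    # mail server; packet contents are never examined
    ipchains -A forward -p tcp -s 0.0.0.0/0 -d 192.0.2.25 25 -j ACCEPT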

Proxying: to act as an
intermediary in all interactions of a given service type (FTP,
HTTP, etc.) between internal hosts and untrusted/external hosts.
This implies, but does not guarantee, sophisticated inspection of
Application-Layer data (i.e., more than simple packet filtering).
Some firewalls possess, and are even built around,
Application-Layer Proxies. Each service to be proxied must be
explicitly supported (i.e., “coded in”); firewalls that rely on
Application-Layer Proxies tend to use packet filtering or rewriting
for services they don't support by default.

Stateful Inspection: at its
simplest, this refers to the tracking of the three-way handshake
(host1:SYN, host2:SYNACK, host1:ACK) that occurs when each session
for a given TCP service is initiated. At its most sophisticated, it
refers to the tracking of this and subsequent (including
application-layer) state information for each session being
inspected. The latter is far less common than the former.
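
As a minimal sketch of the simpler variety, here is how
netfilter/iptables (the Linux 2.4 successor to ipchains, which
unlike ipchains tracks connection state) might be told to admit
only packets belonging to established sessions. The interface names
are hypothetical: eth0 faces the Internet, eth1 the internal
network:

    # Internal hosts may initiate sessions outward
    iptables -A FORWARD -i eth1 -m state --state NEW,ESTABLISHED -j ACCEPT

    # Inbound packets are admitted only if they belong (or relate)
    # to a session an internal host already initiated
    iptables -A FORWARD -i eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT

    # Everything else is dropped
    iptables -P FORWARD DROP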

That's a mouthful of jargon, but it's useful jargon (useful
enough, in fact, to make sense of the majority of firewall-vendors'
propaganda). Now we're ready to dig into DMZ architecture.

Types of Firewall and DMZ Architectures

In the world of expensive commercial firewalls (the world in
which I earn my living), the term firewall nearly always denotes a
single computer or dedicated hardware device with multiple network
interfaces. Actually, this definition can apply to much lower-end
solutions as well: network interface cards are cheap, as are PCs in
general.

Regardless, this is different from the old days when a single
computer typically couldn't keep up with the processor overhead
required to inspect all ingoing and outgoing packets for a large
network. In other words, routers, not computers, used to be the
first line of defense against network attacks.

This is no longer the case. Even organizations with
high-capacity Internet connections typically use a multihomed
firewall (whether commercial or OSS-based) as the primary tool for
securing their networks. This is possible thanks to Moore's law,
which has provided us with inexpensive CPU power at a faster pace
than the market has provided us with inexpensive Internet
bandwidth. In other words, it's now feasible for even a relatively
slow PC to perform sophisticated checks on a full T1's worth
(1.544Mbps) of network traffic.

The most common firewall architecture one tends to see
nowadays, therefore, is the one illustrated in Figure 1. In this
diagram, we have a packet-filtering router that acts as the initial
but not sole line of defense. Directly behind this router is a
proper firewall, in this case a Sun SparcStation running, say, Red
Hat Linux with IPChains. There is no direct connection from the
Internet or the external router to the internal network: all
traffic to it or from it must pass through the firewall.

Figure 1. “Multihomed Host” Firewall

By the way, in my opinion, all external routers should use
some level of packet filtering (aka “Access Control Lists” in the
Cisco lexicon). Even when the next hop inward from such a router is
an expensive and/or carefully configured and maintained firewall,
it never hurts to have redundant enforcement points. In fact, when
several Check Point vulnerabilities were demonstrated at the most
recent Black Hat Briefings, no less a personage than a Check Point
spokesperson mentioned that it's foolish to rely solely on one's
firewall!

What's missing or wrong in Figure 1? (I said this
architecture is common, not perfect!) Public services such as SMTP
(e-mail), Domain Name Service (DNS) and HTTP (WWW) must either be
sent through the firewall to internal servers or hosted on the firewall
itself.
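
“Sending through the firewall”, in practice, usually means port
forwarding (destination rewriting). A minimal sketch using Linux
2.4's iptables, with hypothetical addresses and interface name:

    # Rewrite inbound HTTP arriving on the external interface so
    # that it lands on an internal web server
    iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
        -j DNAT --to-destination 10.0.0.80:80

    # ...and permit the rewritten packets to be forwarded
    iptables -A FORWARD -p tcp -d 10.0.0.80 --dport 80 -j ACCEPT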

Passing such traffic doesn't automatically expose other
internal hosts to attack, but it does magnify the consequences of
such a server being compromised. Hosting public services on the
firewall isn't necessarily a bad idea on the face of it, either
(what could be a more secure environment than a firewall?), but the
performance issue is obvious: the firewall should be allowed to use
all its available resources for inspecting and moving packets.
(Although there are some possible exceptions that we'll examine
shortly.)

Where, then, to put public services so that they don't
directly or indirectly expose the internal network or overtax the
firewall? In a DMZ network, of course! At its simplest, a DMZ is
any network reachable by the public but isolated from one's
internal network. Ideally, however, a DMZ is also protected by the
firewall. Figure 2 shows my preferred firewall/DMZ
architecture.

Figure 2. “Multihomed Host” Firewall with DMZ

In Figure 2 we have a three-homed host
as our firewall, placed so that hosts providing publicly accessible
services are in their own network with a
dedicated connection to the firewall, with the rest of the
corporate network facing a different firewall interface. If
configured properly, the firewall uses different rules in
evaluating traffic from the Internet to the DMZ, from the DMZ to
the Internet, from the Internet to the internal network, from the
internal network to the Internet, from the DMZ to the internal
network and from the internal network to the DMZ.
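
As a sketch of what per-direction rules can look like, here are a
few ipchains rules for such a three-homed firewall. All names and
addresses are hypothetical: eth0 faces the Internet, eth1 a DMZ
with routable addresses (192.0.2.0/24) and eth2 an internal network
(10.0.0.0/24). Note that on the ipchains forward chain, -i matches
the interface a packet will leave by:

    # Deny everything not explicitly allowed
    ipchains -P forward DENY

    # Internet -> DMZ: HTTP to the public web server only
    ipchains -A forward -i eth1 -p tcp -d 192.0.2.80 80 -j ACCEPT

    # DMZ -> Internet: the mail relay may send SMTP out
    ipchains -A forward -i eth0 -p tcp -s 192.0.2.25 -d 0.0.0.0/0 25 -j ACCEPT

    # Internal -> Internet: masquerade outbound traffic
    ipchains -A forward -i eth0 -s 10.0.0.0/24 -j MASQ

No rule permits traffic from the Internet or the DMZ to reach the
internal network, so the default policy silently drops it.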

This may sound like more administrative overhead than with
internally-hosted or firewall-hosted services, but actually, it's
potentially much simpler because the DMZ can be treated as a single
entity. In the case of internally hosted services, each host must
be considered individually unless they're all located on a single
IP network otherwise isolated from the rest of the internal
network.
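
Because the DMZ occupies its own IP network, a single rule can
speak for every host on it at once. Continuing the hypothetical
addressing above, one explicit rule (redundant with, but clearer
than, the default-deny policy) forbids all DMZ hosts from
initiating connections to the internal network:

    # No DMZ host may open connections to the internal network
    ipchains -A forward -i eth2 -s 192.0.2.0/24 -j DENY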

Other architectures are sometimes used, and Figure 3
illustrates two of them. The Screened Subnet architecture is
completely dependent on the security of both the external and
internal routers. There is a direct physical path from the outside
to the inside, a path controlled by nothing more sophisticated than
the router's packet-filtering rules.

The right-hand illustration in Figure 3 shows what I call the
“Flapping in the Breeze” DMZ architecture, in which there
is a full-featured firewall between the
Internet and the internal network but not
between the Internet and the DMZ, which is placed
outside of the firewall and is protected only
by a single packet-filtering router.

Both the Screened Subnet and Flapping in the Breeze
architectures still show up in firewall textbooks (albeit with
different names), but in my opinion, they both place too much trust
in routers. Such trust is problematic for several reasons: first,
in many organizations routers are under a different person's control
than the firewall is, and this person may insist that the router
have a weak administrative password, weak access-control lists or
even a modem attached so that the router's vendor can maintain it;
second, routers are considerably more hackable than well-configured
computers (for example, by default they nearly always support
remote-administration via Telnet, a highly insecure service); and
third, packet filtering is a crude and incomplete means of
regulating network traffic.

Even an OSS/freeware-based firewall can support IPSEC,
application-layer proxies, stateful inspection, RADIUS
authentication and a variety of other sophisticated controls
unavailable on most routers. When all is said and done, routers are
designed to route, not to protect.

What about Cisco PIX? The PIX firewall
is a router but with a hardened and
security-focused version of the Cisco IOS operating system.
Although it relies heavily on simple packet filtering, it supports
enough additional features to be a good firewall if properly
configured. When I question the viability of routers as firewalls,
I'm referring to nonhardened, general-purpose routers.

In summary, what one's DMZ architecture looks like depends on
what one's firewall architecture looks like. A firewall design
built around a multihomed host lends itself to the DMZ architecture
I recommend (see Figure 2), in which the DMZ is connected to its
own interface on the firewall host and, thus, is isolated from both
the Internet and one's internal network.
