So-called Next Generation Firewalls (NGFWs) extend “traditional port firewalls” with added policy context: application visibility and control plus user identity, used to enforce security, compliance and productivity decisions on flows from internal users to the Internet.

NGFW, as defined, is a campus and branch solution. Campus and Branch NGFW solves the “inside-out” problem — applying policy from a finite set of known/identified users on the “inside” to a potentially infinite number of applications and services “outside” the firewall, generally connected to the Internet. They function generally as forward proxies with various network insertion strategies.

Campus and Branch NGFW is NOT a Data Center NGFW solution.

Data Center NGFW is the inverse of the “inside-out” problem. It solves the “outside-in” problem: applying policy from a potentially infinite number of unknown (or potentially unknown) users/clients on the “outside” to a nominally diminutive number of well-known applications and services “inside” the firewall that are generally exposed to the Internet. These firewalls function generally as reverse proxies with various network insertion strategies.

Campus and Branch NGFWs need to provide application visibility and control across potentially tens of thousands of applications, many of which are evasive.

Data Center NGFWs need to provide application visibility and control across a significantly smaller number of well-known, managed applications, many of which are bespoke.

There are wholesale differences in performance, scale and complexity between “inside-out” and “outside-in” firewalls. They solve different problems.
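The directional asymmetry described above can be sketched as two policy shapes. This is a minimal, purely illustrative sketch; the rule sets, user names, and application names are hypothetical and do not correspond to any vendor’s policy language:

```python
# Illustrative sketch of "inside-out" vs. "outside-in" policy asymmetry.
# All names here are hypothetical, not any vendor's syntax.

# "Inside-out" (Campus & Branch): a known, finite set of internal users,
# an effectively unbounded universe of external applications.
cb_policy = {
    "known_users": {"alice", "bob", "carol"},   # identified via user auth
    "direction": "inside-out",
    "default_action": "inspect-and-control",    # AVC on everything outbound
}

# "Outside-in" (Data Center): an unbounded set of external clients, a small
# set of well-known internal applications exposed to the Internet.
dc_policy = {
    "exposed_apps": {"web-frontend", "api-gateway"},  # small, well-known set
    "direction": "outside-in",
    "default_action": "deny",                   # only named apps are reachable
}

def allowed_outbound(user, app):
    """C&B: any identified user may reach any app, subject to AVC policy."""
    return user in cb_policy["known_users"]

def allowed_inbound(client, app):
    """DC: any client at all, but only toward the small set of exposed apps."""
    return app in dc_policy["exposed_apps"]
```

The point of the sketch is where the unbounded set sits: on the application side for C&B, and on the client side for the DC — which is why the two demand different identification, scale, and enforcement machinery.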

The things that make a NGFW supposedly “special” and different from a “traditional port firewall” in a Campus & Branch environment are largely irrelevant in the Data Center. Speaking of which, you’d be hard-pressed to find solutions today that are simply “traditional port firewalls”; the notion that firewalls integrated with IPS, UTM, ALGs, proxies, integrated user authentication, application identification/granular control (AVC), etc., are somehow incapable of providing the same outcome is now largely a marketing distinction.

While both sets of NGFW solutions share a valid deployment scenario at the “edge” or perimeter of a network (C&B or DC,) a further differentiation in DC NGFW is the notion of deployment in the so-called “core” of a network. The requirements in this scenario mean that comparing the two deployments is comparing apples and oranges.

Firstly, the notion of a “core” is quickly becoming an anachronism from the perspective of architectural references, especially given the advent of collapsed network tiers and fabrics as well as the impact of virtualization, cloud and network virtualization (née SDN) models. Shunting a firewall into these models is often difficult, no matter how many interfaces it has. Flows are also asymmetric and oftentimes stateless.

Traditional Data Center segmentation strategies are becoming a blended mix of physical isolation (usually for compliance and/or peace of mind o_O) with a virtualized overlay provided in the hypervisor and/or virtual appliances. Traffic patterns are shifting as well: machine-to-machine flows in the east-west direction via intra-enclave “pods” are now far more common. Dumping all flows through one firewall (or a cluster of them) at the “core” does what, exactly, besides adding latency and oftentimes obscured or unnecessary inspection?

Add to this the complexity of certain verticals in the DC where extreme low-latency “firewalls” are needed, with latency requirements of 5 microseconds or less. The sorts of things people care about enforcing from a policy perspective there aren’t exactly “next generation.” Or, then again, how about DC firewalls that work at the mobile service provider eNodeB, mobile packet core or Gi, with specific protocol requirements not generally found in the “Enterprise?”

In these scenarios, the claim that a Campus & Branch NGFW is tuned to defend against “outside-in” application-level attacks on workloads hosted in a Data Center is specious at best. Slapping a bunch of those Campus & Branch firewalls together in a chassis and calling it a Data Center NGFW invokes ROFLcoptr.

Show me how a forward-proxy optimized C&B NGFW deals with a DDoS attack (assuming the pipe isn’t flooded in the first place.) Show me how a forward-proxy optimized C&B NGFW deals with application level attacks manipulating business logic and webapp attack vectors across known-good or unknown inputs.

They don’t. So don’t believe the marketing.

I haven’t even mentioned the operational model and expertise deltas needed to manage the two. Or integration between physical and virtual zoning, or on/off-box automation and visibility to orchestration systems such that policies are more dynamic and “virtualization aware” in nature…

In my opinion, NGFW is being redefined by the addition of functionality that again differentiates C&B from DC based on use case. Here are JUST two of them:

C&B NGFW is becoming what I call C&B NGFW+, specifically with the addition of advanced anti-malware (AAMW) capabilities at the edge to detect and prevent infection as part of the “inside-out” use case. This extends to adjacent solutions that incorporate other components and delivery models.

DC NGFW is becoming DC NGFW+, specifically with the addition of (web) application security and DoS/DDoS capabilities to prevent (generally) externally-originated attacks against internally-hosted (web) applications. This, too, requires the collaboration of other solutions specifically designed to enable security in this use case.

There are hybrid models that often require BOTH solutions: protecting against client infection, distribution and exploitation in the C&B in order to prevent attacks against DC assets connected over the WAN or a VPN.

Pretending both use cases are the same is farcical.

It’s unlikely you’ll see a shift in analyst “Enchanted Dodecahedrons” relative to functionality/definition of NGFW because…strangely…people aren’t generally buying Campus and Branch NGFW for their datacenters because they’re trying to solve different problems. At different levels of scale and performance.

A Campus and Branch NGFW is “No Good For Workloads” in the Data Center.

Here’s an interesting resurgence of a security architecture and an operational deployment model that is making a comeback:

Requiring VPN tunneled and MITM’d access to any resource, internal or external, from any source internal or external.

While mobile devices (laptops, phones and tablets) are often deployed with client or client-less VPN endpoint solutions that enable them to move outside the corporate boundary and access internal resources, there’s a marked uptick in requirements that all traffic from all sources utilize VPNs (SSL/TLS, IPsec or both) and that ALL sessions terminate on security infrastructure, regardless of ownership or location of either the endpoint or the resource being accessed.

Put more simply: require VPN for (id)entity authentication, access control, and confidentiality and then MITM all the things to transparently or forcibly fork to security infrastructure.

Why?

The reasons are pretty easy to understand. Here are just a few of them:

The user experience shouldn’t change regardless of the access modality or location of the endpoint consumer; the notion of who, what, where, when, how, and why matters, but the user shouldn’t have to care

Whether inside or outside, the notion of split tunneling on a per-service/per-application basis means that we need visibility to understand and correlate traffic patterns and usage

Because the majority of traffic is encrypted (usually via SSL,) security infrastructure needs the capability to inspect traffic (selectively) using a coverage model that is practical and can give a first-step view of activity

Information exfiltration (legitimate and otherwise) is a problem.

…so how are folks approaching this?

Easy. They simply require that all sessions terminate on a set of [read: clustered & scalable] VPN gateways, selectively decrypt based on policy, forward (in serial or parallel) to any number of security apparatus, and in some/many cases, re-encrypt sessions and send them on their way.
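The per-session decision flow at such a gateway can be sketched roughly as follows. This is a hypothetical, simplified model of the terminate/selectively-decrypt/inspect/re-encrypt loop; the policy categories, inspector names, and session fields are all illustrative assumptions, not any product’s API:

```python
# Hypothetical sketch of a VPN gateway's per-session disposition logic:
# terminate, selectively decrypt by policy, fan out to inspection services,
# then re-encrypt and forward. All names here are illustrative.

DECRYPT_POLICY = {
    # destination category -> decrypt?  Real policies would be far richer
    # (user, device posture, app, compliance scope, etc.).
    "internal-web": True,
    "saas": True,
    "banking": False,      # commonly exempted categories (privacy/compliance)
    "healthcare": False,
}

INSPECTORS = ["ips", "dlp", "anti-malware"]  # chained security apparatus

def handle_session(session):
    """Return the disposition of one VPN-terminated session."""
    category = session["category"]
    if not DECRYPT_POLICY.get(category, True):
        # Policy exemption: pass the encrypted session through uninspected.
        return "bypassed"
    # Decrypt and hand cleartext to each inspector in series; any veto drops it.
    for inspector in INSPECTORS:
        if session.get("verdicts", {}).get(inspector) == "block":
            return "dropped"
    # All inspectors passed: re-encrypt and send it on its way.
    return "re-encrypted-and-forwarded"
```

Note the design choice embedded in `DECRYPT_POLICY.get(category, True)`: unknown categories default to decryption and inspection, which matches the “MITM all the things” posture described above.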

We’ve been doing this “forever” with the “outside-in” model (remote access to internal resources,) but the notion that folks are starting to do this ubiquitously on internal networks is the nuance. AVC (application visibility and control) is the inside-out component (usually using transparent forward proxies with trusted PAC files on endpoints) with remote access and/or reverse proxies like WAFs and/or ADCs as the outside-in use case.

These two ops models were generally viewed and managed as separate problems. Now thanks to Cloud, Mobility, virtualization and BYOE (bring your own everything) as well as the more skilled and determined set of adversaries, we’re seeing a convergence of the two. To make the “inside-out” and “outside-in” more interesting, what we’re really talking about here is extending the use case to include “inside-inside” if you catch my drift.

Merging the use case approach at a fundamental architecture level can be useful; this methodology works regardless of source or destination. It does require all sorts of incidental changes to things like IdM, AAA, certificate management, etc. but it’s one way that folks are trying to centralize the distributed — if you get what I mean.

I may draw a picture to illustrate what I mean, but do let me know if you’re doing this (many of the largest customers I know are) and whether it makes sense.

I could probably just ask this of some of my friends — many of whom are the best in the business when it comes to Red Teaming/Pen Testing, but I thought it would be an interesting little dialog here, in the open:

When a Red Team is engaged by an entity to perform a legally-authorized pentest (physical or electronic) with an explicit “get out of jail free card,” does that change the tactics, strategy and risk appetite of the team versus if they did not have that parachute?

Specifically, does the team dial-up or dial-down the aggressiveness of the approach and execution KNOWING that they won’t be prosecuted, go to jail, etc.?

Blackhats and criminals operating outside this envelope don’t have the luxury of counting on a gilded escape should failure occur and thus the risk/reward mapping *might* be quite different.

To that point, I wonder what the gap is between an authorized Red Team action versus those that have everything to lose? What say ye?

The results of a long-running series of extremely scientific studies have produced a Metric Crapload™ of anecdata.

Namely, hundreds of detailed discussions (read: lots of booze and whining) over the last 5 years have resulted in the following:

Most in-line security appliances (excluding firewalls) with the ability to actively dispose of traffic — services such as IPS, WAF, anti-malware — are deployed in “monitor” or “learning” mode and are rarely, if ever, enabled with automated blocking. In essence, they are deployed as detective rather than preventative security services.
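The detective-versus-preventative distinction boils down to what the appliance actually does with a verdict. A minimal sketch, with hypothetical mode and verdict names:

```python
# Minimal sketch of detective vs. preventative disposition: the same engine
# verdict produces different outcomes depending on deployment mode.
# Mode and verdict names are illustrative assumptions.

def disposition(mode, verdict):
    """What an in-line appliance does with an engine verdict in a given mode."""
    if verdict != "malicious":
        return "forward"
    if mode == "block":
        return "drop"               # preventative: traffic actively disposed of
    # "monitor" / "learning" mode: detective only — raise an alert,
    # but let the traffic through anyway.
    return "alert-and-forward"
```

In the deployments described above, the second branch almost never executes: the box sits in-line with the *capability* to drop, but the operational mode turns it into an alerting sensor.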

I have many reasons compiled for this.

I am interested in hearing whether you agree/disagree and your reasons for such.

Intel’s implementation of the TCG-driven TPM — the Trusted Platform Module — often described as a hardware root of trust, is essentially a cryptographic processor that allows for the storage, retrieval and attestation of keys. There are all sorts of uses for this technology, including things I’ve written of and spoken about many times prior. Here are a couple of good links:

But here’s something that ought to make you chuckle, especially in light of current news and a renewed focus on supply chain management relative to security and trust.

The Intel TPM implementation that is used by many PC manufacturers, the same one that plays a large role in Intel’s TXT and Mt. Wilson Attestation platform, is apparently…wait for it…manufactured in…wait for it…China.

In the networking world, we’ve seen how virtualization technologies and operational models such as cloud have impacted the market, vendors and customers in what amounts to an incredibly short span of time.

What’s popped out of that progression is the hugely disruptive impact of Software Defined Networking and corresponding Network Function Virtualization. These issues are forcing both short and long term disruption in the networking space. Behemoths have had to pivot…almost overnight. We haven’t seen this behavior for a while.

I’m curious as to what people see in terms of technology that they feel is truly disruptive to the Security industry. That means you. 🙂

I understand many use cases, trends and operational shifts such as BYOD, Mobility, Cloud, etc., as well as amplification of “older” issues such as DDoS, malware and web-app attacks, but I’m curious whether you think we are really seeing truly disruptive security technology that is innovative rather than incremental advancement (on either the offensive or defensive side of the coin.)