SanDisk is bringing to market a set of high-capacity USB flash drives that feature built-in filesystem encryption as well as strong authentication and access control. If the device gets lost with the data on it, it's "safe and secure" because it's encrypted. They are positioning this as an "endpoint security" solution.

I'm not going to debate the merits/downsides of that approach because I haven't seen their pitch, but suffice it to say, I think it's missing a "couple" of pieces to solve anything other than a very specific set of business problems.

Larry's dilemma stems from the fact that he maintains that this capability and functionality is really about data loss protection and doesn't have much to do with "endpoint security" at all:

We debated that in my office for a few minutes. From my perspective, this solution seems more like a data loss prevention solution than endpoint security. Admittedly, there are many flavors of endpoint security. When I think of endpoint security, I think of network access control (NAC), configuration management, vulnerability management and security policy enforcement. While this solution is designed for the endpoint client, it doesn't do any of the above tasks. Rather, it forces users to use one type of portable media and transparently applies security protection to the data. To me, that's DLP.

In today's market taxonomy, I would agree with Larry. However, what Larry is struggling with is not really the current state of DLP versus "endpoint security," but rather the future state of converged information-centric governance. He's describing the problem that will drive the solution as well as the inevitable market consolidation to follow.

This is actually the whole reason Mogull and I are talking about the evolution of DLP as it exists today to a converged solution we call CMMP -- Content Management, Monitoring and Protection. {Yes, I just added another M for Management in there...}

What CMMP represents is the evolved and converged end-state technology integration of solutions that today provide a point solution but "tomorrow" will be combined/converged into a larger suite of services.

Off the cuff, I'd expect that we will see at a minimum the following technologies being integrated to deliver CMMP as a pervasive function across the information lifecycle and across platforms in flight/motion and at rest:

Data leakage/loss protection (DLP)

Identity and access management (IAM)

Network Admission/Access Control (NAC)

Digital rights/Enterprise rights management (DRM/ERM)

Seamless encryption based upon "communities of interest"

Information classification and profiling

Metadata

Deep Packet Inspection (DPI)

Vulnerability Management

Configuration Management

Database Activity Monitoring (DAM)

Application and Database Monitoring and Protection (ADMP)

etc...

That's not to say they'll all end up as a single software install or network appliance, but rather a consolidated family of solutions from a few top-tier vendors who have coverage across the application, host and network space.

If you were to look at any enterprise today struggling with this problem, they likely have or are planning to have most of the point solutions above anyway. The difficulty is that they're all from different vendors. In the future, we'll see larger suites from fewer vendors providing a more cohesive solution.

This really gives us the "cross domain information protection" that Rich talks about.

We may never achieve the end-state described above in its entirety, but it's safe to say that the more we focus on the "endpoint" rather than the "information on the endpoint," the bigger the problem we will have.

June 01, 2007

Now that I've annoyed you by suggesting that network security will, over time, become irrelevant given the visibility lost to advances in OS protocol transport and operation, allow me to give you another nudge toward the edge and further reinforce my theories with some additional, practical data-centric security perspectives.

If any form of network-centric security solution is to succeed in adding value over time, decisions about applying policy and effecting disposition on flows as they traverse the network must be made on content in context. That means we must get to a point where we can make "security" decisions based upon information and its "value" and classification as it moves about.

It's not good enough to make decisions on how flows/data should be characterized and acted upon using criteria focused solely on the 5-tuple (header), signature-driven profiling, or even behavioral analysis that doesn't characterize the content in the context of where it's coming from, where it's going, and who (machine, virtual machine and "user") or what (application, service) intends to access and consume it.

In the best of worlds, we'd like to be able to classify data before it makes its way through the IP stack and enters the network, and to use this metadata as an attached descriptor of the 'type' of content the data represents. We could do this as the data is created by applications (thick or thin, rich or basic), either using the application itself or an agent (client-side) that profiles the data prior to storage or transmission.
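As a toy illustration of such a client-side profiling agent, here's a minimal sketch -- the patterns, labels and function names are all hypothetical, invented purely for the example; a real agent would use far richer signatures, dictionaries and behavioral analysis:

```python
import re

# Hypothetical classification rules (pattern -> label), invented for illustration.
CLASSIFIERS = {
    r"\b\d{3}-\d{2}-\d{4}\b": "HIPAA",        # SSN-like pattern
    r"\b(?:\d[ -]?){13,16}\b": "PCI",         # card-number-like digit run
    r"(?i)\bconfidential\b": "Confidential",  # explicit marking in the text
}

def profile(content: str) -> dict:
    """Profile data at creation time and return a metadata descriptor
    intended to travel with the content (e.g. stored alongside the file)."""
    labels = sorted({label for pattern, label in CLASSIFIERS.items()
                     if re.search(pattern, content)})
    return {"classification": labels or ["Public"]}

# A document containing an SSN-like string gets tagged before it ever
# touches the network; anything unmatched falls back to "Public".
descriptor = profile("Patient record, SSN 123-45-6789")
```

The point isn't the regexes -- it's that the descriptor is produced at creation time, client-side, and attached to the content before transmission.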

Since I'm on my Jericho Forum kick lately, here's how they describe how data ought to be controlled:

Access to data should be controlled by security attributes of the data itself.

Attributes can be held within the data (DRM/Metadata) or could be a separate system.

Access / security could be implemented by encryption.

Some data may have “public, non-confidential” attributes.

Access and access rights have a temporal component.
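Those principles can be sketched in a few lines -- all class and attribute names below are invented for illustration; the point is that the access decision is driven by attributes carried with the data itself (including Jericho's temporal component), not by where the data happens to sit on the network:

```python
from datetime import datetime, timezone

# Hypothetical self-describing data object: security attributes ride with
# the data (the DRM/metadata model), not with the network location.
class ProtectedData:
    def __init__(self, payload, classification, allowed_roles,
                 not_before=None, not_after=None):
        self.payload = payload
        self.classification = classification     # e.g. "public", "confidential"
        self.allowed_roles = set(allowed_roles)  # who may access
        self.not_before = not_before             # temporal component of access
        self.not_after = not_after

    def can_access(self, role, when=None):
        """Decide access from the data's own attributes, not its location."""
        if self.classification == "public":      # "public, non-confidential" data
            return True
        when = when or datetime.now(timezone.utc)
        if self.not_before and when < self.not_before:
            return False
        if self.not_after and when > self.not_after:
            return False
        return role in self.allowed_roles

doc = ProtectedData("Q3 numbers", "confidential", {"finance"},
                    not_after=datetime(2007, 12, 31, tzinfo=timezone.utc))
```

Encryption would be the natural way to actually *enforce* `can_access` (per the Jericho points above); this sketch only shows the decision logic.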

You would probably need client-side software to provide this functionality. As an example, we do this today with email compliance solutions that have primitive versions of this sort of capability, forcing users to declare the classification of an email before they can hit the send button, or with the document info that can be created when one authors a Word document.

There are a bunch of ERM/DRM solutions in play today that are bandied about and sold as "compliance" solutions, but their value goes much deeper than that. IP leakage/extrusion prevention systems (with or without client-side tie-ins) try to do similar things as well.

Ideally, this metadata would be used as a fixed descriptor of the content -- one that permanently attaches itself and follows that content around -- so it can be used to decide how content should be "routed" based upon policy.

If we're not able to use this file-oriented static metadata, we'd like then for the "network" (or something in/on it) to be able to dynamically profile content at wirespeed and characterize the data as it moves around the network from origin to destination in the same way.

So, this is where Applied Data & Application Policy Tagging (ADAPT) comes in. ADAPT is an approach that uses new technology to profile and characterize content (using signatures, regular expressions and behavioral analysis in hardware) and then apply policy-driven traffic "routing" functionality as flows traverse the network, attaching an ADAPT tag-header to each flow as a descriptor.

The ADAPT tag could be fed by interpreting metadata attached to the data itself (if in file form) or dynamically by on-the-fly profiling.

Think of it like a VLAN tag that describes the data within the packet/flow.

This ADAPT tag is user-defined and can use any taxonomy that best suits the types of content that are interesting; one might use an asset classification such as "confidential" or taxonomies such as "HIPAA" or "PCI" to describe what is contained in the flows. One could combine and/or stack the tags, too.
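A rough model of such a tag -- flow fields, zone names and the "HIPAA"/"Confidential" taxonomy are purely illustrative -- showing how user-defined tags stack on a flow the way a VLAN tag attaches to a frame:

```python
# Hypothetical ADAPT-tagged flow: a user-defined, stackable descriptor
# attached to traffic as it moves around the network.
class Flow:
    def __init__(self, src_zone, dst_zone, payload):
        self.src_zone = src_zone
        self.dst_zone = dst_zone
        self.payload = payload
        self.adapt_tags = []      # stacked tags: a "VLAN tag" for content

    def push_tag(self, tag):
        """Stack another taxonomy label onto the flow (no duplicates)."""
        if tag not in self.adapt_tags:
            self.adapt_tags.append(tag)

flow = Flow("VLAN-D", "VLAN-A", "patient record ...")
flow.push_tag("HIPAA")         # from a compliance taxonomy
flow.push_tag("Confidential")  # from an asset classification -- tags combine
```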

Then, as data moves across the network and across what we call boundaries (zones) of trust, the policy tags are parsed and disposition effected based upon the language governing the rules.

Just like an ACL for IP addresses of VLAN policies, ADAPT does the same thing for content routing.
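Pushing the ACL analogy one step further, a content ACL might be sketched like this -- a hypothetical first-match-wins rule table; the zone names, tag names and actions are all invented for the example:

```python
# Hypothetical content ACL, evaluated as a flow crosses a trust boundary.
# Each rule: (source zone, destination zone, required tag, action); "*" is a
# wildcard, and the first matching rule wins, like a conventional ACL.
POLICY = [
    ("VLAN-D", "VLAN-A", "HIPAA", "deny"),    # HIPAA data may not leave HR zone
    ("*",      "*",      "*",     "permit"),  # default permit, final ACL entry
]

def disposition(src_zone, dst_zone, tags):
    """Return the action for a tagged flow crossing a zone boundary."""
    for rule_src, rule_dst, rule_tag, action in POLICY:
        if (rule_src in ("*", src_zone)
                and rule_dst in ("*", dst_zone)
                and (rule_tag == "*" or rule_tag in tags)):
            return action
    return "deny"   # implicit deny if nothing matches
```

Swap IP addresses for zones and ports for content tags and this is exactly the mental model of an interface ACL, applied to information instead of packets.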

To enable this sort of functionality, either every switch/router in the network would need to be ADAPT enabled (which would be difficult since you'd need every network vendor to support the protocols) OR you could use an overlay UTM security services switch sitting on top of the network plumbing through which all traffic moving from one zone to another would be subject to the ADAPT policy.

Since the only device that needs to be ADAPT-aware is this UTM security services switch, you can let the network do what it does best and use this solution to enforce the policy for you across these boundary transitions. Said UTM security services switch needs an extremely high-speed content security engine that can characterize the data at wirespeed and add a tag to the frame as it moves through the switching fabric and is processed prior to popping out onto the network.

I'm going to be self-serving here and demonstrate this "theoretical" solution using a Crossbeam X80 UTM security services switch plumbed into a very fast, reliable, and resilient L2/L3 Cisco infrastructure. It just so happens to have a wire-speed content security engine installed in it. The reason the X-Series can do this is because once the flow enters its switching fabric, I own the ultimate packet/frame/cell format and can prepend any header functionality I like onto the structure to determine how it gets "routed."

Take the example below where the X80 is connected to the layer-3 switches using 802.1q VLAN trunked interfaces. I've made this an intentionally simple network using VLANs and L3 routing; you could envision a much more complex segmentation and routing environment, obviously.

This network is chopped up into 4 VLAN segments:

General Clients (VLAN A)

Finance & Accounting Clients (VLAN B)

Financial Servers (VLAN C)

HR Servers (VLAN D)

Each of the clients/servers in the respective VLANs default-routes out to a firewall cluster IP address proffered by the firewall application modules providing service in the X80.

Thus, to get from one VLAN to another, one must pass through the X80 and be profiled by this content security engine and whatever additional UTM services are installed in the chassis (such as firewall, IDP, AV, etc.)

Let's say then that a user in VLAN A (General Clients) attempts to access one or more resources in the VLAN D (HR Servers.)

Using solely IP addresses and/or L2 VLANs, let's say the firewall and IPS policies allow this behavior as the clients in that VLAN have a legitimate need to access the HR Intranet server. However, let's say that this user tries to access data that exists on the HR Intranet server but contains personally identifiable information that falls under the governance/compliance mandates of HIPAA.

Using rule 1 above, as the client makes the request, he transits from VLAN A to VLAN D. The reply containing the requested information is profiled by the content security engine which is able to characterize the data as containing information that matches our definition of either "HIPAA or Confidential" (purely arbitrary for the sake of this example.)

This could be done by reading the metadata if it exists as an attachment to the content's file structure, in cooperation with an extrusion prevention application running in the chassis, or in the case of ad-hoc web-based applications/services, done dynamically.

According to the ADAPT policy above, this data would then be either silently dropped, depending upon what "deny" means, or perhaps the user would be redirected to a webpage that informs them of a policy violation.
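Those two possible "deny" dispositions can be sketched as follows -- the function name, the `deny_mode` knob and the violation-page URL are all hypothetical, invented for the example:

```python
# Hypothetical enforcement step: what "deny" could mean at disposition time.
def enforce(action, payload, deny_mode="drop"):
    """Apply a policy action to a flow's reply: forward it, silently drop
    it, or redirect the user to a policy-violation page."""
    if action == "permit":
        return ("forward", payload)
    if deny_mode == "redirect":
        return ("redirect", "https://intranet.example/policy-violation")
    return ("drop", None)   # silent discard
```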

You can imagine how one could integrate IAM and extend the policies to include pseudonymity/identity as a function of access, also. Or, one could profile the requesting application (browser, for example) to define whether or not this is an authorized application. You could extend the actions to lots of stuff, too.

In fact, I alluded to it in the first paragraph, but if we back up a step and look at where consolidation of functions/services is being driven with virtualization, one could also use the principles of ADAPT to extend the ACL functionality that exists in switching environments to control/segment/zone access to/from virtual machines (VMs) of different asset/data/classification/security zones.

What this translates to is a workflow/policy instantiation that would use the same logic to prevent VM1 from communicating with VM2 if there were a "zone" mismatch; as we add data classification in context, you could have various levels of granularity that define access based not only on the VM but on the VM and the data trafficked by it.
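The same zone-mismatch logic for inter-VM traffic can be sketched as follows -- the VM names, zone labels and the "restricted" tag are all invented for illustration:

```python
# Hypothetical zone assignments for virtual machines: ADAPT-style logic
# applied to inter-VM traffic instead of physical network segments.
VM_ZONES = {"vm1": "pci", "vm2": "general", "vm3": "pci"}

def vm_may_talk(src_vm, dst_vm, data_tags=()):
    """Deny on a zone mismatch; with data classification in context, even
    same-zone traffic can be restricted per content tag."""
    if VM_ZONES[src_vm] != VM_ZONES[dst_vm]:
        return False             # zone mismatch: e.g. VM1 may not reach VM2
    # Finer granularity: "restricted" data never leaves its source VM,
    # even within the same zone.
    return "restricted" not in data_tags
```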

Furthermore, assuming this service was deployed internally and you could establish a trusted CA with certs that would support transparent MITM SSL decrypts, you could do this (with appropriate scale) with encrypted traffic also.

This is data-centric security that uses the network when needed, the host when it can and the notion of both static and dynamic network-borne data classification to enforce policy in real-time.

/Hoff


February 21, 2007

One of the benefits of living near Boston is the abundance of amazing museums and historic sites available to visit within 50 miles of my homestead.

This weekend the family and I decided to go hit the Museum of Science for a day of learning and fun.

As we were about to leave, I spied an XP-based computer sitting in the corner of one of the wings and was intrigued by the sign on top of the monitor instructing any volunteers to login:

Then I noticed the highlighted instruction sheet taped to the wall next to the machine:

If you're sharp enough, you'll notice that the sheet instructs the volunteer how to remember their login credentials -- and what their password is ('1234') unless they have changed it!

"So?" you say, "That's not a risk. You don't have any usernames!"

Looking to the right I saw a very interesting plaque. It contained the first and last names of the museum's most diligent volunteers who had served hundreds of hours on behalf of the Museum. You can guess where this is going...

I tried for 30 minutes to find someone (besides Megan Crosby on the bottom of the form) to whom I could suggest a more appropriate method of secure sign-on instructions. The best I could do was one of the admissions folks who had stamped my hand upon entry; I ended up with a manager's phone number written on the back of a stroller rental slip.

August 02, 2006

Mike Farnum and I continue to debate the merits of single-sign-on and his opinion that deploying same makes you more secure.

Rothman's stirring the pot, saying this is a cat fight. To me, it's just two dudes having a reasonable debate...unless you know something I don't [but thanks, Mike R., because nobody would ever read my crap unless you linked to it! ;)]

Mike's position is that SSO does make you more secure and when combined with multi-factor authentication adds to defense-in-depth.

It's the first part I have a problem with, not so much the second, and I figured out why. It's the order of things that bugged me when Mike said the following:

But here’s [a] caveat, no matter which way you go: you really need a single-signon solution backing up a multi-factor authentication implementation.

If he had suggested that multi-factor authentication should back up an SSO solution, I'd agree. But he didn't, and he continues not to by maintaining (I think) that SSO itself is secure and SSO + multi-factor authentication is more secure.

My opinion is a little different. I believe that strong authentication *does* add to defense-in-depth, but SSO adds only depth of complexity, obfuscation and more moving parts, but with a single password on the front end. More on that in a minute.

Let me clarify a point which is that I think from a BUSINESS and USER EXPERIENCE perspective, SSO is a fantastic idea. However, I still maintain that SSO by itself does not add to defense-in-depth (just the opposite, actually) and does not, quantifiably, make you more "secure." SSO is about convenience, ease of use and streamlined efficiency.

You may cut down on password resets, sure. If someone locks themselves out, however, most of the time resets/unlocks involve self-service portals or telephone resets, which are just as prone to brute force and social engineering as calling the helpdesk, but that's a generalization and I would rather argue through analogy... ;)

Here's the sticky part of why I think SSO does not make you more secure: it merely transfers the risks involved with passwords from one compartment to the next.

While that's a valid option, it is *very* important to recognize that managing risk does not, by definition, make you more secure...sometimes managing risk means you accept or transfer it. It doesn't mean you've solved the problem, just acknowledged it and chosen to accept the fact that the impact does not justify the cost involved in mitigating it. ;)

SSO just says "passwords are a pain in the ass to manage. I'm going to find a better solution for managing them that makes my life easier." SSO Vendors claim it makes you more secure, but these systems can get very complex when implementing them across an Enterprise with 200 applications, multiple user repositories and the need to integrate or federate identities and it becomes difficult to quantify how much more secure you really are with all of these moving parts.

Again, SSO adds depth (of complexity, obfuscation and more moving parts) but with a single password on the front end. Complex passwords on the back-end managed by the SSO system don't do you a damned bit of good when some monkey writes the single password that unlocks the entire enterprise down on a sticky note.

Let's take the fancy "SSO" title out of the mix for a second and consider today's Active Directory/LDAP proxy functions, which more and more applications tie into. This relies on a single password -- your domain credentials -- to authenticate directly to an application. This is a form of SSO, and the reality is that all we're doing when adding an SSO system is supporting web and legacy applications that can't use AD and proxying that function through SSO.

It's the same problem all over again except now you've just got an uber:proxy.

Now, if you separate SSO from the multi-factor/strong authentication argument, I will agree that strong authentication (not necessarily multi-factor -- read George Ou's blog) helps mitigate some of the password issue, but the two are independent of one another.

Maybe we're really saying the same thing, but I can't tell.

Just to show how fair and balanced I am (ha!) I will let you know that prior to leaving my last employ, I was about to deploy an Enterprise-wide SSO solution. The reason? Convenience and cost.

Transference of risk from the AD password policies to the SSO vendor's and transparency of process and metrics collection for justifying more heads. It wasn't going to make us any more secure, but would make the users and the helpdesk happy and let us go figure out how we were going to integrate strong authentication to make the damned thing secure.

August 01, 2006

I've been following with some hand-wringing the on-going debates regarding the value of two-factor and strong authentication systems in addition to, or supplementing, traditional passwords.

I am very intent on seeing where the use cases that best fit strong authentication ultimately surface in the long term. We've seen where they are used today, but I wonder if we, in the U.S., will ever be able to satisfy the privacy concerns raised by something like a smart-card-based national ID system and so realize the benefits of this technology.

[Editor's Note: George Ou from ZDNet just posted a really interesting article on his blog relating how banks are "...cheating their way to [FFIEC] web security guidelines" by just using multiple instances of "something the user knows" and passing it off as "multifactor authentication." His argument regarding multi-factor (supplemental) vs. strong authentication is also very interesting.]

I've owned/implemented/sold/evaluated/purchased every kind of two-factor / extended-factor / strong authentication system you can think of:

Tokens

SMS Messaging back to phones

Turing/image fuzzing

Smart Cards

RFID

Proximity

Biometrics

Passmark-like systems

...and there's very little consistency in how they are deployed, managed and maintained. Those pesky little users always seemed to screw something up...and it usually involved losing something, washing something, flushing something or forgetting something.

The technology's great, but like Chandler Howell says, there are a lot of issues that need reconsideration when it comes to implementation -- issues that go well beyond what we think of today as simply the tenets of "strong" authentication and the models of trust we surround them with:

So here are some Real World goals I suggest we should be looking at.

Improved authentication should focus on (cryptographically) strong Mutual Authentication, not just improved assertion of user Identity. This may mean shifts in protocols, it may mean new technology. Those are implementation details at this level.

We need to break the relationship between location & security assumption, including authentication. Do we need to find a replacement for “somewhere you are?” And if so, is it another authentication factor?

How does improved authentication get protection closer to the data? We’re still debating types of deadbolts for our screen door rather than answering this question.

All really good points, and ones that I think we're just at the tip of discussing.

Taking these first steps is usually an ugly and painful experience, and I'd say that the first footprints planted along this continuum do belong to the token authentication models of today. They don't work for every application, and there's a lack of cross-pollination when you use one vendor's token solution and wish to authenticate across boundaries (this is what OATH tries to solve).

For some reason, people tend to evaluate solutions and technology in a very discrete and binary modality: either it's the "end-all, be-all, silver bullet" or it's a complete failure. It's quite an odd assertion, really, but I suppose folks always try to corral security into absolutes instead of relativity.

That explains a lot.

At any rate, there's no reason to re-hash the fact that passwords suck and that two-factor authentication can provide challenges, because I'm not going to add any value there. We all understand the problem. It's incomplete and it's not the only answer.

Defense in depth (or should it be wide and narrow?) is important and any DID strategy of today includes the use of some form of strong authentication -- from the bowels of the Enterprise to the eCommerce applications used in finance -- driven by perceived market need, "better security," regulations, or enhanced privacy.

However, I did read something on Michael Farnum's blog here that disturbed me a little. In his blog, Michael discusses the pros/cons of passwords and two-factor authentication and goes on to introduce another element in the Identity Management, Authentication and Access Control space: Single-Sign-On.

Michael states:

But here’s [a] caveat, no matter which way you go: you really need a single-signon solution backing up a multi-factor authentication implementation. This scenario seems to make a lot of sense for a few reasons:

It eases the administrative burdens for the IT department because, if implemented correctly, your password reset burden should go down to almost nil

It eases (possibly almost eliminates) password complaints and written down passwords

It has the bonus of actually easing the login process to the network and the applications

I know it is not the end-all-be-all, but multi-factor authentication is definitely a strong layer in your defenses. Think about it.

Okay, so I've thought about it and playing Devil's Advocate, I have concluded that my answer is: "Why?"

How does Single-Sign-On contribute to defense-in-depth (besides adding another hyphenated industry slang term), short of lending itself to convenience for the user and the help desk? Security is usually 1/convenience, so by that algorithm it doesn't.

Now instead of writing down 10 passwords, the users only need one sticky -- they'll write that one down too!

Does SSO make you more secure? I'd argue that in fact it does not -- not now that the user has a singular login to every resource on the network via one password.

Yes, we can shore that up with a strong-authentication solution, and that's a good idea, but I maintain that SA and SSO are independent of one another, and one is not a must for the other. The complexity of these systems can be mind-boggling, especially when you consider the types of privileges these mechanisms often require in order to reconcile this ubiquitous access. It becomes another attack surface.

There's a LOT of "kludging" that often goes on with these SSO systems in order to support web and legacy applications and in many cases, there's no direct link between the SSO system, the authentication mechanism/database/directory and ultimately the control(s) protecting as close to the data as you can.

This cumbersome process still relies on the underlying OS functionality and some additional add-ons to mate the authentication piece with the access control piece with the encryption piece with the DRM piece...

Yet I digress.

I'd like to see the RISKS of SSO presented along with the benefits if we're going to consider the realities of the scenario in terms of this discussion.

That being said, just because it's not the "end-all-be-all" (what the hell is with all these hyphens!?) doesn't mean it's not helpful... ;)