Thursday, March 04, 2010

A nice blog post by Jerome Wendt on how tying a file's access history to the ability to classify that file as sensitive is a big step toward tracking down how data is leaking out of an organization.

I've seen demos of this technology (our team worked on it), and it's quite cool. A big problem our customers have is "OK, I've scanned all my data, and I see there is sensitive data out there; now what?" Having a good sense of who accesses the data most frequently, and therefore who the likely "owner" of the data is, goes a long way toward answering this question.
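To make the idea concrete, here's a minimal sketch of the "most frequent accessor is the likely owner" heuristic. This is purely illustrative; the event format and function name are my own invention, not how Data Insight actually works internally.

```python
from collections import Counter

def likely_owner(access_events):
    """Guess a file's likely business owner from its access history.

    access_events: list of (user, action) tuples pulled from an
    audit log. The user who touched the file most often is treated
    as the probable owner -- a heuristic, not a guarantee.
    """
    counts = Counter(user for user, _action in access_events)
    if not counts:
        return None  # no recorded activity, no guess
    user, _count = counts.most_common(1)[0]
    return user

# Hypothetical audit trail for one file:
events = [
    ("alice", "read"), ("bob", "read"),
    ("alice", "write"), ("alice", "read"),
]
print(likely_owner(events))  # alice
```

Once you have a likely owner, you have someone to ask "should this data really be here?" -- which is exactly the "now what?" step.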

2 comments:

David, have you heard of Global Velocity (globalvelocity.com)? They have a network-speed hardware box that addresses this area. It was started by a couple of friends of mine - I'd be interested to hear what you think of it.

It looks like these guys are focused on "data in motion" - content flying across the network that needs to be detected in flight. Or am I missing something?

The combination of Data Insight and our Discover product is focused on "data at rest" - content sitting in repositories across the enterprise.

The value of Data Insight here is that it gives you a record of who has been accessing a sensitive file, along with the file's permissions, which helps you prioritize the risk around that file.

Since there can be a *lot* of sensitive data, this helps a company focus on the data most at risk (say, SSNs on a public share versus legal documents that only the legal department can view or modify).
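A toy sketch of that prioritization idea: rank files by sensitivity times exposure, so wide-open data with many sensitive hits bubbles to the top. The scoring formula and numbers here are hypothetical, not the product's actual algorithm.

```python
def risk_score(num_sensitive_items, num_readers):
    """Crude prioritization: sensitivity x exposure.

    num_sensitive_items: count of sensitive matches (e.g. SSNs)
    found by a scan. num_readers: how many users the permissions
    grant read access to. Both inputs are illustrative.
    """
    return num_sensitive_items * num_readers

# SSNs on a share readable by 5,000 users vs. legal documents
# readable only by a 12-person legal team:
public_share = risk_score(num_sensitive_items=200, num_readers=5000)
legal_docs = risk_score(num_sensitive_items=200, num_readers=12)
print(public_share, legal_docs)  # 1000000 2400
```

Same amount of sensitive content, wildly different urgency - which is why combining scan results with access and permission data is so useful.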