Richard Bejtlich's blog on digital security, strategic thought, and military history.

Wednesday, February 25, 2009

Asset Management Assistance via Custom DNS Records

In my post Black Hat DC 2009 Wrap-Up, Day 2 I mentioned enjoying Dan Kaminsky's talk. His thoughts on the scalability of DNS made an impression on me. I thought about the way the Team Cymru Malware Hash Registry returns custom DNS responses for malware researchers, for example. In this post I am interested in knowing if any blog readers have encountered problems similar to the ones I will describe next, and if so, whether you did (or could) use DNS to help mitigate them.
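For context, the Malware Hash Registry encodes its answers in ordinary TXT records: you look up a file hash as a hostname under malware.hash.cymru.com, and a hit comes back as a TXT string carrying a last-seen timestamp and an anti-virus detection rate. A small sketch of that convention (the two-field TXT format is my reading of the public service; verify against Team Cymru's documentation):

```python
# Sketch of the Team Cymru MHR lookup convention: the hash becomes a
# hostname, and the TXT answer is assumed to be "<last-seen epoch> <pct>".
import datetime

MHR_ZONE = "malware.hash.cymru.com"

def mhr_query_name(file_hash):
    """Build the DNS name to look up for an MD5/SHA-1 hash."""
    return "%s.%s" % (file_hash.lower(), MHR_ZONE)

def parse_mhr_txt(txt):
    """Parse a TXT answer like '1235555555 53' into (last_seen, detection_pct)."""
    epoch, pct = txt.split()
    return datetime.datetime.utcfromtimestamp(int(epoch)), int(pct)
```

The point is not the malware data itself, but that an edge organization can publish structured facts about an object simply by answering TXT queries for it.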

When conducting security operations to detect and respond to incidents, my team follows the CAER approach. Escalation is always an issue, because it requires identifying a responsible party. If you operate a defensible network it will be inventoried and claimed, but getting to that point is difficult.

The problem is this: you have an IP address, but how do you determine the owner? Ideally you have access to a massive internal asset database, but the problems of maintaining such a system can be daunting. The more sites, departments, businesses, etc. in play, the more difficult it is to keep necessary information in a single database. Even a federated system runs into problems, since there must be a way to share information, submit queries, keep data current, and so on.

Dan made a key point during his talk: one of the reasons DNS scales so well is that edge organizations maintain their own records, without having to constantly notify the core. Also, anyone can query the system, and get results from the (presumably) right source.

With this in mind, would it make sense to internally deploy custom DNS records that identify asset owners?

In other words:

Mandate by policy that all company assets must be registered in the internal company DNS.

Add extensions of some type that provide information like the following, at a minimum:

Asset owner name and/or employee number

Owning business unit

Date record last updated

Periodically sample IP addresses observed via network monitoring to determine whether their custom DNS records exist, and validate that those records are accurate
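As an illustration, the registration scheme above might store the owner data in a TXT record whose payload is a simple key=value string. The owner=/eid=/bu=/updated= field names here are hypothetical, not an established convention; adapt them to your own asset policy:

```python
# Sketch of a possible owner-record convention carried in a TXT record.
# The payload format is an assumption, not a standard.

REQUIRED_FIELDS = {"owner", "bu", "updated"}  # owner, business unit, last update

def parse_owner_record(txt):
    """Parse a TXT payload like 'owner=jdoe;eid=4567;bu=Finance;updated=2009-02-20'."""
    fields = {}
    for part in txt.split(";"):
        if "=" in part:
            key, _, value = part.partition("=")
            fields[key.strip()] = value.strip()
    return fields

def validate_owner_record(txt):
    """Return the required fields missing from a record, sorted."""
    return sorted(REQUIRED_FIELDS - set(parse_owner_record(txt)))
```

A monitoring job could then resolve a sampled IP to a name, fetch its TXT record, and flag records that fail validation or carry a stale updated date.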

These points assume that there is already a way to associate an employee name or number with a contact method such as email address and/or phone number, as would be the case with a Global Address List.

Is anyone doing this? If not, do you have ideas for identifying asset owners when the scale of the problem is measured in the hundreds of thousands?

Richard Bejtlich is teaching new classes in Europe in 2009. Register by 1 Mar for the best rates.

Frequent listener; first time caller. Let's say you implement this within your organization. Now let's say an attacker got a foothold into your organization -- perhaps via some sort of client-side malware.

The attacker then has access to all of these internal inventory records via the queries made by the malware to your internal DNS servers.

The bottom line: Yes, your idea sounds very usable, but I'm not sure it's a great idea to expose all those asset management records to everyone in the organization via DNS without any type of authentication.

Granted, if you already make this information publicly available to anyone in your organization, then there's no problem.

If you could somehow control access to the information, then you've got an ideal balance. But, tacking on this type of control to custom DNS records isn't easy.

One idea would be to protect/encrypt each of the records using a symmetric key or even a PKI cert... but that isn't an easy solution.

This would be fairly easy to implement in a configuration management system like Opscode's Chef[1]. At the core of the infrastructure is a chef-server (or servers) that nodes register with and must be validated by. Nodes can be searched for various data attributes that are indexed and stored centrally, and attributes can be updated through a web interface, too. Recipes could be written to perform various configuration tasks (keeping a sane "known state" on workstations, for example), and the whole central index could also be searched to update DNS.

Currently Chef doesn't support Windows, but it's in the works. Full disclosure: I work for Opscode, but Chef is free and open source, and written in Ruby.

@Darien: Fair point. But if the attacker already has a foothold in your company, client-side, they can already query your DNS server for all kinds of useful information (i.e., look for a record called "exchange" or enumerate all the Active Directory records looking for domain controllers).

The additional asset management records that Richard proposes would provide the attacker with more knowledge of the asset owner, which I guess would only really assist them in a social engineering attack. But, again, they're already in your network. They don't need to socially engineer all that much.

And knowing that machine 3.10.11.12 is owned by employee ID 4567 isn't that much information to give away. Heck, even if you did put in LOC records (as suggested by Michael) you would certainly be tipping off the attacker to your data center's location, but again, they're already in your network.
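For context, a LOC record (RFC 1876) publishes physical coordinates directly in the zone, which is why it tips off location so plainly. A hypothetical entry (made-up coordinates):

```
; Hypothetical LOC record: latitude, longitude, altitude (RFC 1876 syntax)
datacenter.example.com.  IN  LOC  37 46 30.000 N  122 23 30.000 W  10m
```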

I think the much much MUCH greater danger is if you do not employ split DNS and you have these asset management entries in an externally-visible DNS server.

That's just my two cents, Darien, but I'm usually wrong.

I really like this idea of stashing asset owner information in DNS. I don't have an Active Directory DNS server nearby to see how easy it would be to add custom "TXT" records. Anyone know?

This kind of information sounds to me more like an LDAP use case. LDAP can also be distributed, with authority over parts handed to "subsidiaries," but it allows SSL for transport, some authentication, and limits on who has access to what information. True, at the price of greater complexity.

I think there's a tangible difference between LDAP, where things *can* be delegated, and DNS, where everything actually is.

Richard--

Wow. I may have said I hoped DNS (with DNSSEC) could finally solve a bunch of security problems we've been struggling with for years. I hadn't even thought about asset tracking. Yes, that's indeed pretty damn cool.

I'm intrigued by this idea, in general, but I bet it would be too hard to keep up to date in large facilities, not to mention nearly useless in environments that rely heavily on DHCP (unless the asset-specific records were somehow dynamically updatable).

I see more utility in things like SSHFP records (which OpenSSH, at least, supports via the VerifyHostKeyDNS parameter), which are used to store and distribute SSH host key fingerprints.
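To make the SSHFP mechanism concrete, here is a sketch of how the record's data is derived: the fingerprint is a SHA-1 digest of the host's raw public key blob (OpenSSH's `ssh-keygen -r hostname` emits records of this shape). The hostname and key material below are placeholders:

```python
# Sketch of SSHFP record generation from an OpenSSH public key line.
# Algorithm numbers: 1 = RSA, 2 = DSA; fingerprint type 1 = SHA-1.
import base64
import hashlib

ALGO_NUMBERS = {"ssh-rsa": 1, "ssh-dss": 2}

def sshfp_record(hostname, pubkey_line):
    """Render a zone-file SSHFP record from a line like 'ssh-rsa AAAA... comment'."""
    keytype, b64blob = pubkey_line.split()[:2]
    digest = hashlib.sha1(base64.b64decode(b64blob)).hexdigest()
    return "%s IN SSHFP %d 1 %s" % (hostname, ALGO_NUMBERS[keytype], digest)
```

With such records published (ideally DNSSEC-signed), setting VerifyHostKeyDNS lets the client check a new host key against DNS instead of prompting the user blindly.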

Adding TXT records in AD isn't difficult. (Steven Andres) However, using DNS/LDAP for storing this information is really best kept for smaller businesses. There's a threshold where you'll need to leave this behind and use a database and web front-end to store this data.

The data would likely scale without a problem, but this is why I believe it would be best left to smaller businesses:

1. Some large businesses use a database just to manage access to DNS. This database not only controls all input/output into DNS, but it also provides advanced auditing for changes and finer-grained security ACLs. Daunting...yes. Companies that already have significant DB/App expertise probably won't find additional value in using DNS to store asset information.

2. How can you ensure the uniformity of the data? Some information is better than none, but each team would probably come up with their own naming conventions. You need uniformity to provide for useful reporting, especially knowing that other internal applications will begin depending upon this data. You could build an application front-end, using DNS as the database, but it starts to become just as daunting at that point and you may find a DB/App integrated with LDAP to be more effective.

3. Does the team that manages DNS want to support this type of data? Most likely it would prove to be an uphill battle, depending upon the size and structure of the company.

4. Most medium to large businesses are already collecting a lot of information about their assets. Why not leverage the existing database instead of creating and managing multiple? (examples: firmware, hardware models, hardware support agreements, operating system revision/patches/licenses, application revision/patches/licenses, etc.)

5. Accurate asset data provides the foundation for information assurance. If a company doesn't understand what it owns, who's responsible for it, what it's being used for, or what it means to them if it dies, then they are destined for failure. With that said, whether one decides to use a shrink-wrapped app, DNS, or an in-house coded app/DB, it needs to be driven by the upper echelons, all IT teams need to have agreed to its use, and then it needs to be well-funded (scaled to match the company's size, of course).

One thing I think people aren't recognizing is that, to an astonishing extent, if it's on the network it's in DNS. This is true even for dynamic content, which is updated via DHCP helpers or through AD-integrated updates.

Whether DNS is perfect for this is, I agree, up in the air -- what's significant is that we can finally stop pretending that things which aren't really scaling are scaling.

Everyone's bringing up all sorts of awesome stuff that we've seen repeatedly fail in the field. It's a problem.

I'm assuming you're talking about locating a physical owner for the acquisition and/or removal of a machine, and not the identification of the business owner (though our method also works for this, as you'll see below... nothing fancy though!).

What we do is this (200,000 nodes). Organizations are typically very bad at asset control, but very good at employee control (you pay employees but not computers, which seems to be the key difference). It seems like most orgs have a very good database of employees that ties into AD somewhere, correlating with a user ID.

We simply run nbtscan against the entirety of subnets running Windows in intervals during the day, and populate those records into a DB for querying. That way you get machine name, MAC, logged-on ID, and IP, which can all be cross-referenced. You can also show trends (this guy downloaded an app which got him popped; here are all the machines he has used regularly in the last 30 days).

NetBIOS is a very lightweight protocol, and nbtscan is amazingly fast on a Linux box. We scan at a pretty frequent interval during the day (~every 2hrs).

We just cross-reference the ID of the logged on/responsible user with the employee DB, and voila now we not only have who the employee is, but who they report to, what they work on, etc.

If this is what you were talking about, it works for us, and it scales. And yes, we could easily populate this data into DNS records for the machines in question, which I think would add a lot of value for us. And this could be done hierarchically across your org, especially if assets are spread among multiple nets not connected via NetBIOS.
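The scan-and-correlate workflow described above could be sketched roughly like this; the column layout assumed here resembles nbtscan's default tabular output but varies by version, and the employee-DB shape is invented:

```python
# Minimal sketch: parse nbtscan-style output lines
# (IP, NetBIOS name, server flag, logged-on user, MAC -- column layout
# is an assumption) and join against an employee directory by user ID.

def parse_nbtscan(output):
    """Yield dicts from lines like '10.1.2.3 WS42 <server> jdoe 00:11:22:33:44:55'."""
    for line in output.splitlines():
        parts = line.split()
        if len(parts) < 5:
            continue  # skip headers and hosts with no logged-on user
        ip, name, _flag, user, mac = parts[:5]
        yield {"ip": ip, "name": name, "user": user, "mac": mac}

def attribute_hosts(output, employees):
    """Cross-reference scan records with a {user_id: details} employee DB."""
    return [dict(rec, employee=employees.get(rec["user"].lower()))
            for rec in parse_nbtscan(output)]
```

Records that fail to match any employee ID are exactly the hosts worth escalating on, since no one has claimed them.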

We do this already, have been for years, on a public /16. (We're a university.)

Given the distributed nature of system management and support on campus, it's extremely difficult to enforce standards, especially since we allowed individual org units direct access to create and edit DNS data.

That being said, it's much better than nothing, and the cost is virtually nil.

We require contact information on the subnets themselves, so if everything else fails, I know who to pester to get the rest updated.

mish's idea is good, although it wouldn't work for us - I'd be surprised if even half our used IPs belong to Windows boxes.