DHS Funding Open Source Security

The U.S. government's Department of Homeland Security plans to spend $1.24 million over three years to fund an ambitious software auditing project aimed at beefing up the security and reliability of several widely deployed open-source products.

The grant, called the "Vulnerability Discovery and Remediation Open Source Hardening Project," is part of a broad federal initiative to perform daily security audits of approximately 40 open-source software packages, including Linux, Apache, MySQL and Sendmail.

The plan is to use source code analysis technology from San Francisco-based Coverity Inc. to pinpoint and correct security vulnerabilities and other potentially dangerous defects in key open-source packages.

Software engineers at Stanford University will manage the project and maintain a publicly available database of bugs and defects.

Anti-virus vendor Symantec Corp. is providing guidance as to where security gaps might be in certain open-source projects.

I think this is a great use of public funds. One of the limitations of open-source development is that it's hard to fund tools like Coverity. And this kind of thing improves security for a lot of different organizations against a wide variety of threats. And it increases competition with Microsoft, which will force them to improve their OS as well. Everybody wins.

What's affected?

In addition to Linux, Apache, MySQL and Sendmail, the project will also pore over the code bases for FreeBSD, Mozilla, PostgreSQL and the GTK (GIMP Tool Kit) library.

There is this little shop within DHS that is doing something good in "cyber security" at the national level. HSARPA within S&T has several activities funded that will prove of benefit to the country, the world for that matter. Unfortunately they get little press coverage and even less money compared to other portfolios within S&T.

The Coverity work out of Stanford is really quite good. It starts software development QA down the road that folks like Dan Wolf at NSA have called for over the last 5+ years.

Hopefully the effort will continue with a little partnership from the US Government. And I agree it is good to use public funds this way.

Coverity has been working with FreeBSD (and other open source projects) for at least a year now, and it has provided very valuable insights. I don't remember any nasty vulnerability, but interesting stuff nonetheless.
Extending this to even more, and with government backing, is definitely an interesting proposition.

Some vulnerabilities are stumbled upon, and a lot of others are crapshoots, even when analyzing decompiled code. For every vulnerability found, there are dozens or hundreds of attempts against other aspects of a program. This kind of research really does need a mind that finds it challenging; to many people, it's a cure for insomnia.

Open-source software changes this somewhat because the code is open for review. This allows a researcher to examine it for things like unchecked buffers, but there are sometimes still problems in logic that aren't as clearly problematic, and these kinds of dedicated reviews will help to find them. Dedicated security reviews, where a team stops all feature work to focus on finding and dealing with bugs, can often be painfully revealing, but are important if one's product is to be taken seriously.

I suspect that we'll start seeing lists of fixes that stretch into the dozens for many of these projects as the reviews are completed. Hopefully, someone will also be taking notes (to be made public) on how developers can make use of tools like Coverity's to improve their work whether open-source or proprietary.

So, who's looking at source code for closed-source applications that the government depends on? Or are we asserting here that the cost of such investigations be the price of keeping source closed, and that we trust the developers of those apps to do a good enough job?

I'm not sure if spending tax dollars to help out with Coverity's sales is such a good idea. I'd rather see that money spent on a (possibly currently unemployed) open-source developer with the task of finding these problems using her brain.

Actually, I think this is all that is needed. Once a bug has been discovered, it can be published and corrected. If the bug represents a major security problem, at least the disclosure will allow administrators to do a proper security review.

From my standpoint, I regard the existence of unpublished bugs to be a bigger threat than unpatched software - if you know something is vulnerable (and how), you know what you need to restrict until the bug is corrected.

Plus, it seems open source vendors do a good job of quick turnaround on bug fixes. This just creates a third party audit system that exposes problems. I'm generally a fan of third party audit systems...

"My only concern would be if the open source code which undergoes this security review, as a direct result of public funding, would need to contain a new distribution license."

I would expect that open source vendors may be affected by a publicly funded project like this. Also, it is my understanding that the now publicly funded source code would become "public domain", which is certainly different from "open source".

- The individual is less likely to have a hidden agenda.
- The individual is less likely to disregard a good solution to a problem because it does not align with a "business strategy".
- The individual is generally more flexible, since she's not bound to use the in-house product.

And some possibly nice side effects:
- It potentially cuts down unemployment.
- It increases competition (Coverity is already doing it, and won't stop because of it).

As a preamble, note that this is a devil's advocate argument. I'm not opposed to your idea, necessarily, but there are many reasons for contracting a company rather than individuals which you may be overlooking...

> - The individual is less likely to have a hidden agenda

That depends on how well you check the motives of the individual. In this particular instance, this is a highly debatable point, especially given that the contracted company now has a huge vested interest in keeping their reputation clean.

> - The individual is less likely to disregard a good solution
> because it does not align with a "business-strategy"

If you're not looking for solutions, merely looking for problems, I don't know that this is relevant. I also think that you're disregarding individual motivations that can represent a problem.

> - The individual is generally more flexible, since she's not
> bound to use the in-house product.

An individual is also more likely to follow his or her own investigative patterns, which may or may not be the most efficient. A systematic approach by a group may produce better results.

> It potentially cuts down unemployment

If the company expands due to this contract, that can also be the case, no?

> It increases competition

If other companies can get in on this contract, don't you think they'll make efforts to compete?

Also, if you're contracting an existing company, they come with a managerial organization and their overhead and G&A costs are known factors.

If you contract individuals, you need to hire managers, pay for insurance, review your potential employees, develop a process for reviewing their work, etc. etc.

In short, if you contract individuals, you have to create a bureaucracy to manage them...

Absolutely. Sorry for not mentioning that in my earlier response: in my opinion, if you look at a company as a whole, oftentimes you get average performance. With a careful selection process, choosing individuals has the potential to give you results that are way beyond average.

> That depends on how well you check the motives of the individual.
Again, absolutely. There needs to be a selection process. Just throwing money randomly at people won't get us anywhere.

> contracted company now has a huge vested interest in keeping their reputation clean
I can't see how that's different compared to "regular" paying customers.

> Systematic approach by a group may produce better results.
Individuals are just as capable of implementing a systematic approach. Being systematic is not the monopoly of a company.

> If the company expands due to this contract, that can also be the case, no?
Correct. But in this case it sounds like they're using a pre-existing product.

> Also, if you're contracting an existing company, they come with a
> managerial organization and their overhead and G&A costs are known
> factors.
Not necessarily a benefit: http://dilbert.com

> If you contract individuals, you need to hire managers
^^^^^^^^^^^^^
This is highly debatable.

> ... pay for insurance ...
Whether the money comes out pre-paycheck or after should just be an accounting issue.

> In short, if you contract individuals, you have to create a bureaucracy to manage them...
I do not agree with that.

Let me get this straight. You think that if I do straight contracts of slightly more than $400,000/yr. to individual programmers (say, 6 programmers at $65,000/year self employed, no insurance, etc.), give them a list of software but no standard tools, and let them develop their own investigative techniques to find bugs that I'm going to get meaningful results?

I don't know 6 good programmers who would be willing to work for that much as individuals without benefits... especially if the job description was, "find bugs in other people's code" ... a universally abhorrent task for all the programmers I've ever met...

You have a much higher opinion of individuals in general than I do :)

Note, before I get flamed to death on this post, I'm not trying to malign just individual programmers here. I just think that in order to get any sort of useful return on your money for complex projects, you need people to work together. I've never ever seen an instance where a collection of individuals can produce the same results as a team of people working in an organized fashion, at least on anything reasonably complex.

Do you really think that an automated tool can do anything substantial to identify vulnerabilities? The vulnerabilities I usually find are not anything I would be able to write a program for.

Yes, there is a class of really stupid vulnerabilities based on buffer size mismatches, bad printf formats, and the like. But those aren't the interesting vulnerabilities, and getting rid of those doesn't solve subtle or architectural problems.

On the other hand, as long as an attacker can turn one of these automated tools loose against the code, it makes sense to eliminate any problems that tool finds.

But is it a good use of public funds? I'd argue that a better use would be to train people to attack closed-source software. Attacking closed source requires techniques like dissecting network protocols, tracing system calls, and generally figuring out the internal software architecture from scratch. This can lead to the discovery of architectural problems that no automated tool will be able to identify. And once you can attack closed source, open source is a breeze.

I agree. This is great news. The devil may be in the details, but at least public funds are being earmarked with the intent to support open source security audits and raise the quality bar for everyone.

Let me just say that Coverity rocks. I've helped evaluate a number of static analysis tools for my employer. None have had the completeness and low false-positive rate of Coverity, and build integration is extremely simple. Their Kung-Fu is the strongest.

> contracts of slightly more than $400,000/yr. ... that I'm going to get meaningful results?

$400k for a large company translates into 15 minutes' worth of 10k employees heading to a general assembly in order to listen to the announcement. Enough time for everyone to go to the bathroom and fetch a coffee.

I've known of bad corporate environments as well; your points are definitely notable. Remember scaling, though: if this is contracted out to a company, the company likely already provides this sort of service to other customers, and as a result all the customers get the benefit of that scale.

@ antibozo

> Yes, there is a class of really stupid vulnerabilities based on
> buffer size mismatches, bad printf formats, and the like.

Buffer overflows account for more CERT announcements than anything else (I believe; I'm not going to count all of them for last year and check my facts on this). In any event, they certainly account for quite a few. Automated tools should be able to find these rather easily, I'd think.

> But those aren't the interesting vulnerabilities, and getting rid of those doesn't
> solve subtle or architectural problems.

What makes a vulnerability "interesting"? From a security standpoint, anything that allows a privilege escalation should be regarded as "interesting", even if the process required to discover it isn't intellectually astonishing :)

> I'd argue that a better use would be to train people to attack closed-source software.

That depends on what problem you're trying to solve.

This doesn't sound like the beginnings of a Perfect World, by any means. This just means that a relatively trivial amount of money is going to be spent (in the public interest) to do bugchecking on a handful of open source projects.

It's just a band-aid by any estimation, but I agree with Bruce that this is the sort of band-aid I'd like to see (regardless of who's doing the work). It certainly makes more practical sense than having people sit around in conferences coming up with security standards that not even the participants are going to follow...

Given that funding seems a bit light, even with good tools (and Coverity undeniably has good tools), I have to question the usefulness of Symantec siphoning off a portion of it for 'providing guidance'. Do they have the breadth of experience required? Windows protection mechanisms are nothing like PAM, the networking is very different, etc. Surely a better choice might have been made.

I'm of two minds about this project. On the one hand, I like the idea of developing or improving tools that can find vulnerabilities in the existing code base. On the other, I'm concerned that a large portion of the benefit will disappear when the funding stops, at least if regular scanning stops at the same time.

It seems like trying to design a tool to tackle whatever a programmer's creative mind can throw at it might be the hard way to solve the software security problem. It might be more effective to constrain the coding patterns developers use, sort of like the way structured programming got rid of gotos and the resulting spaghetti code.

Since this is about supporting open source security, wouldn't it be better to support an open source auditing tool? That would benefit more than just those projects covered here. I'm not saying this is bad, but isn't it strange to support the development of a proprietary product to make open source better?

Examining Mozilla seems like a good thing. Over the past year or so, vulnerabilities identified in IE have later been found in Firefox and Mozilla, causing me to suspect that the vulnerabilities started out in the Mozilla code.

If this is such a good idea, why not spend the tax money on fixing MS software? MS software is certainly used in many more installations than the open-source software, and thus we'd get better use out of our taxes. Or is this not PC?

I'm all for auditing and suchlike -- indeed, I'm just now starting in with OpenBSD, after trying various Linuxen. But remember, half the point of Open Source is precisely that *anyone* can audit it, with or without permission. That includes Coverity, and Stanford, and even the DHS.

The problem is, the DHS, and the current Administration in general, have a breathtakingly bad record not only for performance in cybersec, but even for concern thereof. Furthermore, they have a well-established pattern of favoring large corporations over... well, anyone else.

So, now they've got a new initiative, supposedly to improve computer security. So, are they targeting Outlook macro viruses? Or spam-phishing? How about defenestrating a few more botnets? Maybe even (gasp) holding Microsoft to some semblance of public responsibility?

No no, it's much more important to find some vulnerabilities in all those Open Source projects! And you're worried about *individuals* having a hidden agenda? Pardon my skepticism, but I'd be much more enthused about this if DHS were *not* involved.

It's definitely a step in the right direction. Finding and fixing OSS bugs won't compensate for uninformed administrators, insider attacks, misconfigured systems, or any of a thousand other risks. However, software vulnerabilities are certainly a piece of the security puzzle. This will strengthen those systems where security is already a priority, and all risks are taken seriously.

I daresay, too, that good practices can protect against unknown or suspected vulnerabilities, OSS or otherwise. Least-privilege access, heterogeneous systems, disabling unnecessary ports and services, multiple reinforced controls, etc.; these and other practices are significant pieces of a good defense.

I think this is a horrible use of tax dollars. You can call it a public service thing, but those open source programs are being used by companies to generate revenues. So this essentially becomes a government subsidy against companies like Apple, Oracle and Microsoft who don't get government money to improve their security.

I think it's a great idea to improve the security of these open source products, but it's not the government's place to get involved in the marketplace.

> those open source programs are being used by companies to generate revenues.

Agreed.

> this essentially becomes a government subsidy against companies
> like Apple, Oracle, and Microsoft who don't get government money
> to improve their security.

Er, I'm going to have to disagree with this assessment. Apple, Oracle, and Microsoft own their software, it's their intellectual property and they make money off of it. The open source software we're talking about here is not sold; the maintainers of GPL'd software do the work for free. As such, Apple, Oracle, and Microsoft have a revenue source they can divert towards security, these projects don't.

"Subsidy" has a pejorative connotation in the U.S. because there is a stigma of unfairness attached to it. Given the amount of money companies like Microsoft, Oracle, and Apple have made off of pumping junk into the market (and offloading the insecurity costs to the consumer through EULAs), I'd argue that the lack of liability for software vendors is a bigger subsidy than this sort of a program could ever be.

Since the Internet is in many ways a utility commodity now (and a large portion of the economy is driven by access to it), it is in the best interests of the general public for it to be more secure. That makes this a good expenditure of taxpayer money (IMO).

The summary that the DHS is "funding the security of open-source products" is a bit misleading. The DHS is apparently funding Coverity, Stanford, and Symantec, not those open source projects, to maintain a database on what they perceive to be problems in open source software, not make them more secure by actually fixing those problems (assuming they are problems) when they are found.

The Apache Software Foundation is a nonprofit charity. The Stanford team did not contact the ASF prior to that press release and still hasn't described to us what their plans will be, though (I am told) they have responded privately to some of the Apache members who have criticized the publicity. Our experience with similar efforts is that the research money is spent on building or demonstrating the tool, with little or no feedback to the products themselves and even less understanding of the product architectures. For example, most such tools fail miserably on the way memory pools are utilized in Apache httpd.

I would love to see more people with security expertise get involved in any of the ASF's 60 or so open source projects, particularly the one that most people talk about when they say "Apache" (the HTTP server project). I will be happily surprised if some folks from Stanford actually show up and suggest patches to our security team: the address is security _at_ httpd.apache.org, or security _at_ apache.org for any of our projects.

OTOH, wouldn't it be nice if the DHS (or others) would help fund the real ASF security efforts? Such as by funding sysadmin tasks for securing our own machines from intruders (we rely on volunteers for that now) and more sophisticated protection from network attacks on our development environment. How about funding the cost of network monitoring like that performed by Counterpane? Even simply providing a license to Coverity's tools for our own volunteer developers would be welcome. One reason we created the Foundation was to make such donations easy.

The article says that the DHS is granting the research group $1.24 million over three years. Compare that to the total operating expenses of the Apache Software Foundation (i.e., less than $40,000/year right now). I think funding general research is a great idea, but I don't like it when popular open source projects are used for PR reasons without the project's involvement or consent.