Category Archives: OSVDB News

[2014-05-09 Update: We’d like to thank both McAfee and S21sec for promptly reaching out to work with us and to inform us that they are both investigating the incident, and taking steps to ensure that future access and data use complies with our license.]

Every day we get requests for an account on OSVDB, and every day we have to turn more and more people away. In many cases the intended use is clearly commercial, so we tell them they can license our data via our commercial partner Risk Based Security. While we were a fully open project for many years, the volunteer model we wanted didn’t work out. People wanted our data, but largely did not want to give their time or resources. A few years back we restricted exports and limited the API due to ongoing abuse from a variety of organizations. Our current model is designed to be free for individual, non-commercial use. Anything else requires a license and paying for the access and data usage. This is the only way we can keep the project going and continue to provide superior vulnerability intelligence.

As more and more organizations rely on automated scraping of our data in violation of our license, we have been forced to restrict some of the information we provide. As the systematic abuse rises, one of our few options is to further restrict the information while trying to strike a balance between helping the end user and crippling commercial (ab)use. We spend about half an hour a week looking at our logs to identify abusive behavior and block offenders from accessing the database, to help curb those using our data without a proper license. In most cases we simply identify and block them, and move on. In other cases, it is a stark reminder of just how far security companies will go to take our information. Today brought us two different cases which illustrate what we’re facing, and why their unethical actions ultimately hurt the community as we further restrict access to our information.

This is not new in the VDB world. Secunia has recently restricted almost all unauthenticated free access to their database while SecurityFocus’ BID database continues to have a fraction of the information they make available to paying customers. Quite simply, the price of aggregating and normalizing this data is high.

In the first case, we received a routine request for an account from a commercial security company, S21sec, that wanted to use our data to augment their services:

I’m working on e-Crime and Malware Research for S21Sec (www.s21sec.com), a lead IT Security company from Spain. I would like to obtain an API key to use in research of phishing cases we need to investigate phishing and compromised sites. We want to use tools like “cms-explorer” and create our own internal tools.

The use you describe is considered commercial by the Open Security
Foundation (OSF).

We have partnered with Risk Based Security (in the CC) to handle
commercial licensing. In addition to this, RBS provides a separate portal
with more robust features, including an expansive watch list capability,
as well as a considerably more powerful API and database export options.
The OSVDB API is very limited in the number of calls due to a wide variety
of abuse over the years, and also why the free exports are no longer
available. RBS also offers additional analysis of vulnerabilities
including more detailed technical notes on conditions for exploitation and
more.

[..]

Thanks,

Brian Martin
OSF / OSVDB

He came back pretty quickly saying that he had no budget for this, and didn’t even wait to get a price quote or discuss options.

We figured that was the end of it really. Instead, jump to today, when we noticed someone scraping our data and trying to hide their tracks to a limited degree. Standard enumeration of our entries, but they were forging the user-agent.

So after requesting data and hearing that it would require a commercial license, they figured they would simply scrape the data and use it without paying: 3,600 accesses between 09:18:30 and 09:43:19.

In the second case, and substantially more offensive, is the case of security giant McAfee. They approached us last year about obtaining a commercial feed to our data, which culminated in a one hour phone call with someone who ran an internal VDB there. On the call, we discussed our methodology and our data set. While we had superior numbers to any other solution, they were hung up on the fact that we weren’t fully automated. The fact that we did a lot of our process manually struck them as odd. In addition to that, we employed fewer people than they did to aggregate and maintain the data. McAfee couldn’t wrap their heads around this, saying there was “no way” we could maintain the data we do. We offered them a free 30 day trial to utilize our entire data set and to come back to us if they still thought it was lacking.

They didn’t even give it a try. Instead they walked away thinking our solution must be inferior. Jump to today…

They made 2,219 requests between 06:25:24 on May 4 and 21:18:26 on May 6. Excuse us, but you clearly didn’t want to try our service back then. If you would like to give it a shot now, we kindly ask you to contact RBS so that you can do it using our API, customer portal, and/or exports as intended.

Overall, it is entirely frustrating and disappointing to see security companies that sell their services based on reputation and integrity, and that claim to have ethics, completely disregard them in favor of saving a buck.

Via Twitter, blogs, or talking with our people, you may have heard us mention the ‘scraping’ problem we have. In short, individuals and companies are using automated methods to harvest (or ‘scrape’) our data. They do it via a wide variety of methods, but most boil down to a couple of approaches involving a stupid amount of requests made to our web server.

This is bad for everyone, including you. First, it grinds our poor server to a stand-still at times, even after several upgrades to larger hosting plans with more resources. Second, it violates our license as many of these people scraping our data are using it in a commercial capacity without returning anything to the project. Third, it forces us to remove functionality that you liked and may have been using in an acceptable manner. Over the years we’ve had to limit the API, restrict the information / tools you see unauthenticated (e.g. RSS feed, ‘browse’, ‘advanced search’), and implement additional protections to stop the scraping.

So just how bad is it? We enabled some CloudFlare protection mechanisms a few weeks back and then looked at the logs.

The attacks against OSVDB.org were so numerous, the logs being generated by CloudFlare were too big to be managed by their customer dashboard application. They quickly fixed that problem, which is great. Apparently they hadn’t run into this before, even for the HUGE sites getting DDoS’d. Think about it.

We were hit by requests with no user agent (a sign of someone scraping us via automated means) 1,060,599 times in a matter of days…

We got hit by 1,843,180 SQL injection attack attempts, trying to dump our entire database in a matter of weeks…

We got hit by ‘generic’ web app attacks only 688,803 times in a matter of weeks….

In the two-hour period we spent chatting about the new protection mechanisms and looking at logs, we had an additional ~130,000 requests with no user-agent.

To put that in perspective, DatalossDB was hit only 218 times in the same time period by requests with no user agent. We want to be open and to help everyone with security information, but we also need them to play by the rules.

If you didn’t catch the tweet, OSVDB pushed its 100,000th vulnerability on December 25, 2013.

This goal was on our minds the last quarter of 2013, with the entire team working to push an average of 36 vulnerabilities a day to reach it. That is quite the difference from when I started on the project ten years ago, when one day might bring 10 new vulnerabilities. Factor in the years where only one or two people worked on the project, and 100k in 10 years is substantial. In addition to the numbers, we track considerably more data about each vulnerability than we did at the start, and every entry that goes out is 100% complete.

While this is a landmark number of sorts, as no other vulnerability database has that many entries, it is still a bit arbitrary to us. That’s because we know there are tens of thousands more vulnerabilities out there, already disclosed in some manner, that are not in the database yet. As time permits when doing our daily scrapes for new vulnerabilities, we work on backfilling the previous years. It is a bit scary to know that there are so many vulnerabilities out there that are not cataloged by any vulnerability database. It doesn’t matter if they were disclosed weeks ago, or decades ago. Thousands of pieces of software are used as libraries in bigger packages these days, and what may seem like a harmless crash to one could lead to code execution when bundled with additional software. It is critical that companies have vulnerability information available to them, even if it is older. Better late than never may sound rough, but it certainly is the truth.

In 2014, the only goal we have right now is to continue pushing out high-quality data that comes from a comprehensive list of sources. Over 1,500 sources, in fact, with more being added every day. Now that the project is funded by Risk Based Security, we have an entire team that ensures this coverage. Now more than ever, we are in a position to slowly make the goal of cataloging every public vulnerability a reality.

We are occasionally asked how many people work on OSVDB. This question comes from those familiar with the project, and potential customers of our vulnerability intelligence feed. Back in the day, I had no problem answering it quickly and honestly. For years we limped along with one “full time” (unpaid volunteer) and a couple part-timers (unpaid volunteers), where those terms were strictly based on the hours worked. Since the start of 2012 though, we have had actual full-timers doing daily work on the project. This comes through the sponsorship provided by Risk Based Security (RBS), who also provided us with a good amount of developer time and hosting resources. Note that we are also frequently asked how much data comes from the community, to which we giggle and answer “virtually none” (less than 0.01%).

These days however, I don’t like to answer that question because it frequently seems to be a recipe for critique. For example, on one potential client call we were asked how many employees RBS had working on the offering. I answered honestly that it was only three at the time, because that was technically true. That didn’t represent the actual headcount, as one was full-time but not with RBS, and two were not full-time. Before that could be qualified, the potential client scoffed loudly, “there is no way you do that much with so few people”. Despite explaining that we had more than three people, I simply offered them a 30 day free trial of our data feed. Let the data answer his question.

To this day, if we say we have #lownumber, we get the response above. If we say we have #highnumber, which includes part-timers and drive-by employees (who are not tasked with this work but can dabble if they like), then we face criticism that we don’t output enough. Yes, despite aggregating and producing over twice as much content as any of our competitors, we face that silly opinion. The number of warm bodies also doesn’t speak to the skill level of everyone involved. Two of our full-time workers (one paid, one unpaid) have extensive history managing vulnerability databases and have continually evolved the offerings over the years. While most VDBs look the same as they did 10 years ago, OSVDB has done a lot to aggregate more data and more metadata about each vulnerability than anyone else. We have been ahead of the curve at almost every turn, understanding and adapting to the challenges and pitfalls of VDBs.

So to officially answer the question, how many people work on this project? We have just enough. We make sure that we have the appropriate resources to provide the services offered. When we get more customers, we’ll hire more people to take on the myriad additional projects and data aggregation we have wanted to do for years. Data that we feel is interesting and relevant, but no one is asking for yet. Likely because they haven’t thought of it, or haven’t realized the value of it. We have a lot more in store, and it is coming sooner rather than later now that we have the full support of RBS. If you are using any other vulnerability intelligence feed, it is time to consider the alternative.

Anyone who knows me in the context of vulnerability databases will find this post a tad shocking, even if they have endured my rants about it before.

For the first time ever, I am making it policy that we will no longer put any priority on Vulnerability Lab advisories. For those unfamiliar with the site, it is run by Benjamin Kunz Mejri, who now has a new company, Evolution Security.

If you read that web site, and even a history of his/VL disclosures, it looks impressive on the surface. Yes, they have found some legitimate vulnerabilities, even in high-profile vendors. Most, if not all, are pedestrian web application vulnerabilities such as cross-site scripting, traversals, or file upload issues. More complex vulnerabilities like overflows typically end up being what we call “self hacks”, and do not result in the crossing of privilege boundaries. Many of their published vulnerabilities require excessive conditions and offer no real exploit scenario.

During the past 10 months, I know of three other vulnerability databases that officially gave up on adding their advisories. Nothing public, but the internal memo was “don’t bother”. OSVDB was the holdout. We did our best to keep up with their stream of horrible advisories. I personally offered several times to help them re-write and refine their advisory process. I started out nicely, giving a sincere offer of my time and experience, and it went unanswered. I slowly escalated, primarily on Twitter, giving them grief over their disclosures. Eventually, their advisories became nothing but an annoyance and incredible time sink. Then I got ugly, and I have been to this day. No, not my proudest moment, but I stand by it 100%.

As of tonight, we are giving in as well. Vulnerability Lab advisories are such a time sink, trying to decipher their meaning, that they simply aren’t worth adding. For cases where the software is more notable, we will continue to slam our head against the wall and figure them out. For the rest, they are being deprioritized in the form of a “to do when we run out of other import sources”. Since we monitor over 1,100 sources including blogs, web sites, changelogs, and bug trackers, that will not happen for a long time.

I truly regret having to do this. One of my biggest joys of running a vulnerability database is in cataloging all the vulnerabilities. ALL OF THEM.

So this also serves as my final offer Benjamin. Search the VDBs out there and notice how few of your advisories end up in them. Think about why that is. If you are as smart as you think you are, you will choke down your pride and accept my offer of help. I am willing to sink a lot of time into helping you improve your advisories. This will in turn help the rest of the community, and what I believe are your fictitious customers. As I have told you several times before, there is no downside to this for you, just me. I care about helping improve security. Do you?

UPDATE: Shortly after the initial draft of this blog was written (but days before it was published), David mailed again shortly after my reply to apologize and clear up that any notion of a legal threat was not intended. Note that his reply was not sent to the same addresses he originally mailed, or the ones that were added in our reply, so it was not immediately seen. He went on to say that he “fired off an email quickly on my own in frustration without talking to anyone before hand or letting anyone else preview it”. As such, we have edited this post to mostly redact the company name as well as fully redact David’s last name. It is not our intent to punish anyone, and we understand and appreciate that such actions are often misunderstood and not intended. We now hope that this blog post can serve as a lesson to everyone, ourselves included, about how emails can be perceived by both vendors and vulnerability databases.

As most people who follow the OSVDB project know, we strive for the most complete and accurate information about vulnerabilities. We take it very seriously, almost to a fault. We actively seek out information from the community and routinely contact vendors and researchers directly to confirm we have a clear understanding of the information published. When we are provided more clarity we update our entries without hesitation. However, when we receive an email from a vendor with a “legal issue” in the subject and it tells us to change an entry without new evidence, this concerns us as it goes against the core of the project to provide accurate, detailed, current, and unbiased technical security information.

In keeping with our mission to help educate both vendors and researchers on how best to handle the vulnerability disclosure process, we believe it is in the interest of the community to publish details if a software vendor uses legal action, or the implied threat of legal action, to silence vulnerability information. Typically, when vendors contact us they want an entry removed completely, but that was not the case in this situation. Here, rather than trying to work with us to ensure the entry is accurate, a large medical vendor sent us an email that “suggested” we change published information so that it would no longer be factual.

David, who sent the mail, said he would follow up the next day with us but did not. As we shared with him on Friday in our reply, we would write a blog about the incident on Monday to ensure that everyone was made aware of the situation. Below are the two emails exchanged, only edited for formatting. No content has been removed or altered.

We respect vendor concerns about entries, and will flag an entry “Vendor Disputed” immediately when we are contacted. We then examine the concerns and make changes as appropriate. In this case, the vendor has verified the vulnerability itself but is disputing the access vector. This may not seem like a big deal, but we take the “accurate information” guiding principle very seriously. Our vulnerability entry currently uses NVD’s public CVSSv2 score of 4.3. The associated CERT/VU scoring has it at 7.4. We believe the score should really be 10.0, because the vulnerability is remote default hardcoded credentials that allow full access to the database. Changing the access vector from remote to local, as the vendor requested, could result in a score as low as 1.9. Remember, while CVSSv2 has some faults, base scores are still meant to be “constant with time and across user environments”. That means third-party protection mechanisms like firewalls, routers, or other screening devices are not factored into scoring.
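To see how much the access vector drives the score, the CVSSv2 base equation can be computed directly. Here is a sketch using the standard CVSSv2 base metric weights; the specific vector strings are our own illustration of the scores discussed above (AV:N/AC:L/Au:N/C:C/I:C/A:C as the remote full-compromise 10.0, and AV:L/AC:M/Au:N/C:P/I:N/A:N as one way a local recasting lands at 1.9), not vectors quoted from NVD or CERT:

```python
# CVSSv2 base metric weights, per the CVSSv2 equation.
AV = {"L": 0.395, "A": 0.646, "N": 1.0}    # Access Vector
AC = {"H": 0.35, "M": 0.61, "L": 0.71}     # Access Complexity
AU = {"M": 0.45, "S": 0.56, "N": 0.704}    # Authentication
CIA = {"N": 0.0, "P": 0.275, "C": 0.660}   # Conf./Integ./Avail. impact

def cvss2_base(vector):
    """Compute the CVSSv2 base score from a vector like 'AV:N/AC:L/Au:N/C:C/I:C/A:C'."""
    m = dict(part.split(":") for part in vector.split("/"))
    impact = 10.41 * (1 - (1 - CIA[m["C"]]) * (1 - CIA[m["I"]]) * (1 - CIA[m["A"]]))
    exploitability = 20 * AV[m["AV"]] * AC[m["AC"]] * AU[m["Au"]]
    f_impact = 0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f_impact, 1)

# Remote default hardcoded credentials, full database compromise:
print(cvss2_base("AV:N/AC:L/Au:N/C:C/I:C/A:C"))  # 10.0
# The same issue recast as local with reduced impact:
print(cvss2_base("AV:L/AC:M/Au:N/C:P/I:N/A:N"))  # 1.9
```

Note that nothing in the equation accounts for firewalls or network segmentation; only the metric values change the score, which is exactly why swapping the access vector from Network to Local has such a dramatic effect.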

If any entry containing technically inaccurate information needs to be updated, we are happy to do so immediately provided there is sufficient evidence available. This has been our policy for almost 10 years now, and it will not change. Threatening legal action over something so trivial, without trying to resolve it amicably, seems counterproductive.

2) Contains a fundamental misrepresentation of the original CERT posting which we will dispute with any and all means necessary. Please correct the post to indicate that the issue is not remotely exploitable which is clearly evident in the CERT post description, the remediation steps and is evident in the CVSS score itself. Ex: http://www.kb.cert.org/vuls/id/948155 “…the attacker would need network access to the database in order to obtain sensitive patient information.”

Please correct it immediately and ensure any other entities that receive a feed from your site also have corrected this misrepresentation. I will make our security response team aware of this posting and we will follow up with you tomorrow to ensure its corrected.

Thanks,
David

[..]

Please consider the environment before printing this email.

E-mail messages may contain viruses, worms, or other malicious code. By reading the message and opening any attachments, the recipient accepts full responsibility for taking protective action against such code. Henry Schein is not liable for any loss or damage arising from this message.

The information in this email is confidential and may be legally privileged. It is intended solely for the addressee(s). Access to this e-mail by anyone else is unauthorized.

Subsequent to this email, we had two comments left on other entries, meaning the problem with comments causing a 500 error is intermittent. Regardless, email is always a better way to reach us to discuss an issue.

: 1) When clicking on comment the web site returns a 500 error. As a
: result comments are not allowed on VBD items and owners of the software
: are not able to dispute misrepresentations in the posts.
:
: http://osvdb.org/92817

Please note that in emailing the moderators, you are in fact disputing the
entry. This is a faster and more reliable method of raising a question
with us. During a recent upgrade, the comment functionality broke, and you
are the first to notice. That is why it has remained unfixed, as it is
considered very low priority to us.

: 2) Contains a fundamental misrepresentation of the original CERT
: posting which we will dispute with any and all means necessary. Please
: correct the post to indicate that the issue is not remotely exploitable
: which is clearly evident in the CERT post description, the remediation
: steps and is evident in the CVSS score itself. Ex:
: http://www.kb.cert.org/vuls/id/948155 “…the attacker would need
: network access to the database in order to obtain sensitive patient
: information.”

Between your subject line calling this a “legal issue” and including “any
and all means necessary” in the body, the Open Security Foundation (OSF)
is considering this email a threat of intended legal action and will reply
accordingly. We already strive for accuracy in our data and have a long
history of going out of our way to ensure it, frequently contacting
vendors for additional information, bringing issues to their attention,
and engaging in emails such as this to figure out details.

First, our entry does not misrepresent the CERT posting at all. Looking at
CERT VU 948155, specifically the Solution section:

As a general good security practice, only allow connections from trusted hosts and networks. Restricting access would prevent an attacker from using the hard-coded credentials from a blocked network location.

Do not allow the Dentrix G5 database to be accessed by unauthorized users on an insecure wireless network. If the Dentrix G5 database is accessible from an insecure wireless network, a remote attacker may be able to gain access using the hard-coded credentials.

Further, looking at the CERT page that includes what they call a “vendor
statement”, implying it came from Henry Schein Practice Solutions:

It is important to note, however, that the disclosure of the internal database password only posed a vulnerability for practices whose network was unprotected (i.e. practices who lacked a firewall and/or other basic network safeguards).

Between CERT and your company statement, it is abundantly clear that our
classification of this issue is accurate. In both cases, it explicitly
says that this may be a remote issue, and it relies on having third-party
hardware and software installed to protect the database from a remote
attacker. While most companies would follow these guidelines as part of a
regular security posture, we cannot make that assumption because history
has shown us that companies routinely fail to practice the most basic of
security measures. Our entries are added and updated with _factual
information_ pertaining to the issue. We do not account for network
configurations or the possible presence of third-party devices because
that does not happen 100% of the time.

With that, I have updated the entry to reflect that Henry Schein Practice
Solutions stresses that proper network protection be implemented to help
mitigate this issue. It does not change the fact that this can be remotely
exploited in some circumstances.

: Please correct it immediately and ensure any other entities that receive
: a feed from your site also have corrected this misrepresentation.

Now that the information is updated in our site, anyone viewing or
accessing the information has the latest updates.

: I will make our security response team aware of this posting and we will
: follow up with you tomorrow to ensure its corrected.

Likewise. I have made the other moderators aware of this situation and I
will be authoring a blog post on this entire matter (which will also be
Tweeted to our followers, and included on the ISN mail list that goes out
to ~ 6,000 security professionals), including the implied threat of legal
action to be posted Monday during business hours. We feel it is important
for the industry to know when a vendor uses such tactics in an attempt to
stifle vulnerability disclosure, and to unfairly pressure an organization
into displaying inaccurate information, which you are attempting to do.

For years, we have used Typo3 for our blog, hosted on one of our servers. It isn’t bad software at all; I actually like it. That changes entirely when it sits behind CloudFlare. Despite our server being up and reachable, CloudFlare frequently reports the blog offline. When logged in as an administrator and posting a new blog or comment, CloudFlare challenges me with a CAPTCHA, despite no similar suspicious activity. Having to struggle with a service that is designed to protect but instead becomes a burden is bad. Add to that the administrative overhead of managing servers and blog software, and it only takes away from time better spent maintaining the database.

With that, we have migrated over to the managed WordPress offering to free up time and reduce headaches. One downside is that Typo3 does not appear to have an export feature that WordPress recognizes. We have backfilled blogs back to early 2007 and will slowly get to the rest. The other downside is that in migrating, comments left on previous blogs can’t be preserved in any semblance of their former selves. Time permitting, we may cut/paste them over as a single new comment to preserve community feedback.

The upside: we’re much more likely to resume blogging, and with greater frequency. I currently have 17 drafts going, some dating back a year or more. Yes, the old blogging setup was that bad.

We had the best intentions to post more frequently on this blog, but haven’t had an update since August. While we would have loved to post more often, quiet on the blog is actually of great benefit to you. Every minute we don’t update here, we’re updating the database and adding more vulnerability information. On top of adding new vulnerabilities every day (including X-mas!), we typically update between 100 and 400 existing entries with new references, updated solution information, and more. Anyone monitoring vulnerability disclosure sources knows the number of new vulnerabilities is approaching crazy levels. Some of the other changes and news:

Even after doing server upgrades to handle increased traffic we have still been experiencing some site availability issues. After doing more research, it appears that this is due to an absolutely incredible amount of hits on the web site, primarily from automated scrapers. We are currently testing various technical solutions to help ensure this doesn’t affect site availability. Please note that customers of Risk Based Security (RBS), who we have partnered with for vulnerability intelligence, are not affected by any of these hiccups. For companies that rely on timely vulnerability data delivered in a standard format and are tired of trying to keep up on their own (or tired of their current provider delivering sub-par information), send an inquiry to RBS to discuss the numerous services available.

The Open Security Foundation, and thus OSVDB, has recently gained a new sponsor, High-Tech Bridge. In addition, both Jake Kouns and Brian Martin have joined HTB’s advisory board to give advice and recommendations on further developing and driving their vulnerability research efforts. HTB has spent a considerable amount of time not only performing pro bono research for open source projects, but they have put serious effort into ensuring their research and advisories are at the top of the industry.

Risk Based Security has also been funding the day-to-day import of vulnerability data by sponsoring two full-time employees and one part-time employee, and by lending out Carsten Eiram to assist us with problematic entries (e.g. vague disclosures). Carsten is also using his experience with VDB management and vulnerability research to help OSVDB refine our templates, enhance our title scheme to be more descriptive, and provide guidance moving forward.

Finally, we’d like to give a big shout out to several vendors that go above and beyond. Another ‘behind the scenes’ thing we do is frequently pester vendors for more information about third-party disclosures. We often ask for additional details for exploitation, solution information, and clarification if there is anything left to question. In the past month, there have been several occasions where our mail was answered incredibly fast and all of our questions were addressed. This includes a day-long thread on a Sunday that included Foswiki and TWiki, replies from the Microsoft Security Response Center (MSRC) on Christmas day (about 5+ year old CVE assignment questions), and quick responses from Mozilla, Cisco Security, and Symantec’s Security Response team. We can’t emphasize enough how much we appreciate their attention to these questions, as it ultimately helps their customers and ours.

As always, we encourage you to follow us on Twitter (@OSVDB), for news, quips, and status updates about vulnerabilities.

Our dev team tackled some of the ticket backlog on the OSVDB project. While many changes are ‘behind the scenes’ and only affect the daily manglers, there are a few that are helpful to anyone using the database:

Metasploit links have been fixed. At some point, the Metasploit project changed the URL scheme for the search engine. Our incoming links stopped matching the format and resulted in landing at the main search page. We now use the new URL scheme, so links from OSVDB will directly load the Metasploit module again.

Microsoft changed their URL scheme yet again. Our links for MS bulletins were redirecting, but sometimes 2 or 3 times on Microsoft’s side. It’s cool that they kept up the redirects, but our links have been updated to be more efficient and land without the 30x magic.

Immunity CANVAS references have been added. In our quest to add as much vulnerability information to each entry as possible, we have used Immunity’s API to pull in data about their exploit availability. While it is a commercial offering, such exploit frameworks are invaluable to pen-testing teams, as well as administrators that mitigate based on the availability of exploits. An example of an OSVDB entry with a CANVAS reference is OSVDB 60929.

Continued backfilling: we are still pushing to backfill vulnerability data from prior years, currently focusing on 2011. The data is coming from a variety of sources including bug trackers, changelogs, and Exploit-DB. We have been working with EDB so that each site has a more thorough cross-reference available. The EDB team has been outstanding to work with and continues to show diligence in their data quality and integrity. Moving forward, we will continue to focus on more vulnerability data imports and more information backfill.

At a glance, it may appear as if the OSVDB project has fallen by the wayside. Some of our public-facing pages have not been updated in several years, the last string of blog posts was over a year ago, and a recent update caused a few functions to fail (e.g., data exports). On the other hand, anyone paying attention to the data has noticed we are certainly present and moving forward. We have had one person working full time on OSVDB for over a year now. He is responsible for the daily push of new vulnerabilities and for scouring additional sources for vulnerabilities that didn’t appear through the normal channels. Given the nature of the project, we place data completeness and integrity as the top priority.

The OSVDB project is coming up on its tenth anniversary. The last ten years have seen some big changes, as well as many things that have not changed one bit. The biggest thing that hasn’t changed is the lack of support we receive from the community. The top ten all-time contributors are either core members of OSF, part of the handful of longstanding dedicated volunteers we have had over the years, or people we have been able to pay to help work on the project. Beyond those ten people, the volunteer support we lobbied for over the years never materialized. We still enjoy a couple dozen volunteers who primarily mangle their own disclosures or add CVE references, which we appreciate greatly. Unfortunately, the rate of vulnerability disclosures demands a lot more time and attention. In addition to the lack of volunteers, community support in the form of sponsorship and donations has been minimal at best. Tenable Network Security and Layered Technologies have been with us for many years and have largely been responsible for our ability to keep up with the incoming data.

Other than those two generous companies, we have had a few other sponsors and donations over the years, but nothing consistent. In the last year, we have spent most of our time trying to convince companies that are using our data in violation of our posted license to come clean and support our project. In a few cases, these companies have built full products and services that are entirely based on our data. In other cases, companies use our data for presentations, marketing, customer reports, and more while trying to sell their products and services. Regardless, the one thing they aren’t doing is supporting the project by helping to update data, properly licensing the data, or at least throwing us a few bucks as an apology. In short, several security companies, both new and well established, that sell integrity in one form or another, appear to have little integrity of their own. After a recent server upgrade broke our data export functionality, it was amazing to see the number of companies that came out of the woodwork complaining about the lack of exports. Some of them were presumptuous and demanding, as if it were a Constitutional right to have unfettered access to our data. Because of these mails, and because none of these companies want to license our data, we are in no hurry to fix the data exports. In short, they don’t get to profit heavily off the work of our small group of volunteers, many of whom are no longer with us.

Even as an officer of OSF and data manager of OSVDB, I honestly couldn’t tell you how we have survived this long as a project. I can tell you that it involved a lot of personal time, limping along, and the hardcore dedication of fewer than a dozen individuals over ten years. With almost no income and no swarm of volunteers, the project simply isn’t sustainable moving forward while maintaining our high standards for data quality. We gave the community ten years to adopt us, and many did. Unfortunately, they largely did so in a completely self-serving manner that did not contribute back to the project. That will be ending shortly. In the coming months, there will be big changes to the project as we are forced to shift to a model that allows us not only to make the project sustainable, but to push for the evolution we have been preaching about for years. This will involve making the project less open in some aspects, such as our data exports, and has required us to seek a partnership to financially support our efforts.

For ten years we have had a passion for making OSVDB work in an open and free manner. Unfortunately, the rest of the community did not share that passion, and these changes have become a necessity. The upside to all of this is that our recent partnership has allowed us to develop a subscription data feed, which we will be offering with better vulnerability coverage than other solutions, at a considerably better price point. That said, the data will remain open via HTTP, and for 99% of our users this is all that is required. When exports are fixed, we will offer a free export to support the community, but approval will be required and it will contain a limited set of fields for each entry. We are still working out the details and considering a variety of ideas to better support a wide range of interest in the project, but doing so in a sustainable manner. In the end, our new model will help us greatly improve the data we make available, free or otherwise, and ensure OSVDB is around for the next 10 years.