Monday, June 22, 2009

We are growing, fast, and need more exceptional people to join our team. The things we work on, the people we work with, and the organizations we work for are completely unique. Check out our job listings, and if something looks like it's for you, let us know!

Application Security Specialist: For the up-and-coming security specialist, this is an amazing opportunity to flex your creative muscles as you take a crack at poking holes in websites, actively searching for security vulnerabilities that you can exploit.

Customer Support Engineer: Looking for an experienced customer support engineer to provide exceptional service to our customers and business partners in supporting our Sentinel Service offering. We are a Software as a Service (SaaS) provider.

Wednesday, June 17, 2009

Update 07.28.2009: Salesforce.com publishes its "Vulnerability Reporting Policy" and becomes the latest large corporation to "legalize it." I was privy to early drafts for feedback and I must say, the final product looks pretty good! They even have a safe-for-testing playground for security researchers. Nice touch! Hopefully more organizations will follow suit.

Update 06.20.2009: Jack Mannino offers very well-thought-out and persuasive counterpoints to my suggestion below. I'll have to take some time to consider his arguments and respond accordingly. Boiled down, Jack is reminding us that private websites and government systems exist for very different purposes, so government systems may not automatically benefit from pen-test crowd-sourcing. His second main point is the risk of saturating already limited incident response resources.

I’d wager fewer than ten percent of United States .GOV and .MIL websites are professionally tested for custom Web application vulnerabilities. The reasons why are probably the same as in the private sector. Those responsible don’t know or don’t want to know that problems exist. Statistically of course they do, and our statistics are validated by a recent Federal Aviation Administration security report indicating that 70 of its websites tallied thousands of vulnerabilities. Those who do acknowledge and wish to address the problem often lack the budget or authority to initiate a project. Consequently, enemies both foreign and domestic are likely to know more about where our government’s website vulnerabilities are located than the defenders do.

This is a vital concern and an issue I think could be solved through policy or legislation. I believe there are hundreds, maybe thousands, of vulnerability researchers ready and willing to volunteer themselves to find and disclose vulnerabilities -- for free -- if only allowed to do so. What every information security professional knows is there are the penetration tests you pay for and those that you get every day for free, no matter who you are. What they also know is that testing anything you don’t own or have written consent to test runs the risk of legal prosecution. You are especially not supposed to touch government or military systems, as these organizations have an effectively infinite amount of time and money to go after someone. This is vastly different from the approach of a private enterprise, whose investigation eventually has a cost-benefit analysis attached. Generally, no more is invested than the amount lost.

Even so, some researchers are comfortable with harmlessly poking at private sector websites for Cross-Site Scripting, Cross-Site Request Forgery, SQL Injection, and other bugs. XSSed.com serves as a good example of open disclosure, and it also demonstrates that no top-level domain is off limits despite the legal consequences. It is time to do something new, because we know Web applications are the biggest InfoSec risk we face. This is an extraordinarily large problem. And so, what if, to meet this challenge, we leveraged people’s willingness to find vulnerabilities on their own time, eliminated their risk of prosecution, and instead provided a mechanism for disclosure like a government version of the MSRC? That’s right, let us hack .GOV and .MIL as a veritable army of volunteer pen-testers. How cool would that be! It is not like anyone is being prosecuted for simply finding a government website vulnerability, so no loss there. Sound crazy? Maybe, but hear me out first.

Consider that several prominent websites such as PayPal, Microsoft, and Google have already successfully taken such measures. I have firsthand knowledge that more are on the way. Their policies state that as long as researchers follow the rules of engagement -- essentially not doing any damage or defrauding the system, and discreetly disclosing their findings so the companies can create a fix -- no legal measures will be taken. These organizations have matured and learned to work with the community. After a fix has been issued, the researcher may tell the world to bolster his reputation in the security community. No dollars are exchanged, but impressive work has led companies to single out specific researchers and thank them. Yes, there are potential downsides, but in my humble opinion the gain more than justifies the risks.

Assuming reported vulnerabilities are fixed promptly, a similar approach would benefit the government while measurably raising the bar for the bad guys. Currently, it seems that for many governmental Internet-connected systems the bar is set quite low. By allowing the good guys to assist them, the government could get access to a qualified pool of security talent to fill their internal security positions. Wasn’t the Pentagon looking to hire high school students for this sort of thing anyway? Open source and commercial vendors could get a new playground to test and improve their vulnerability scanning products. Hard to beat free pen-tests. College students and security training professionals could apply and safely hone their skills. Fake websites are nice and all, but nothing in Web application security compares to experience on real systems. Everyone wins!

Arian promised to get back to 3APA3A after scanning several hundred production websites using WhiteHat Sentinel, a huge R&D benefit of the platform. Two years later there is data to share. We’ve been busy, but hey, better late than never, right? :) As it turned out, 3APA3A was correct! Arian discovered a small number of Web applications vulnerable to the encoding technique, and they add up if the sample pool is large enough. Sample pools ranged from 300 to roughly 1,000 websites. Remember, these are collapsed numbers, meaning multiple vulnerable inputs on the same Web application are grouped together.

These are exploitable conditions where the encoding was the ONLY way that arbitrary HTML could be created. There were many more sites that normalized these encodings, where the same technique could be used for filter evasion and exploitation, but where it was not the ONLY way to create arbitrary HTML in the application. Unfortunately the dataset does not count all of the ANDs/combinations right now, just the ONLYs. So if there was a simpler way to create arbitrary HTML, the site was counted only under that simpler way. The rabbit hole goes much deeper: dozens of combinations and permutations lead to exploitation, and not just for XSS but for many types of syntax attacks. Research continues.

There are also MANY more of these in international language code pages. Browser behavior gets really unpredictable with foreign-language character sets, which increases XSS and HTTP Response Splitting exploit options even more. There are also many more ways to use these once you start layering your encoding techniques. Yosuke Hasegawa gave a great presentation on Japanese/Kanji character sets at BlackHat Tokyo 2008. For example, I found many of these attack vectors work at an even higher percentage when URI-escaped or combined with other hex-encoding formats (or decimal, Base64, etc.).
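To make the layering idea concrete, here is a minimal sketch of how a single XSS probe can be wrapped in several of the encodings mentioned above. This is my own illustration, not WhiteHat Sentinel's actual test logic; the probe string and the particular encoding choices are assumptions for demonstration:

```python
# Hedged sketch: layering encodings around one XSS probe. A filter that
# decodes an input only once (or not at all) may never see the raw '<'.
import base64
import urllib.parse

probe = '<script>alert(1)</script>'

# Single URI-escaping: '<' becomes '%3C', '>' becomes '%3E', etc.
once = urllib.parse.quote(probe, safe='')

# Double URI-escaping: the '%' itself becomes '%25', so '<' arrives
# as '%253C' and survives a filter that decodes only one layer.
twice = urllib.parse.quote(once, safe='')

# Decimal HTML entities: '<' becomes '&#60;'.
decimal = ''.join('&#%d;' % ord(c) for c in probe)

# Base64, for parameters an application decodes server-side.
b64 = base64.b64encode(probe.encode('ascii')).decode('ascii')

for label, variant in [('raw', probe), ('uri', once), ('uri x2', twice),
                       ('decimal', decimal), ('base64', b64)]:
    print('%-8s %s' % (label, variant))
```

A real scanner would submit each variant and observe whether the application reflects it back as live markup; the point here is only how quickly the variant space multiplies once encodings are stacked.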

3APA3A, thanks for opening my mind up to some new angles on filter-evasion tricks! :)

Wednesday, June 03, 2009

The future: Long-standing Web application security scourges such as SQL Injection (SQLi), Cross-Site Scripting (XSS), and Cross-Site Request Forgery (CSRF) are finally under control. Remaining buffer overflow issues are considered fossilized evidence of a prior era. Cyber criminals, out of necessity, have evolved their attack portfolios to include Clickjacking as a preferred method for tricking victims into propagating malware, defrauding themselves, and initiating other forms of malicious acts. Clickjacking, a long-known and fundamental design problem in the way the Web works, had not until 2017 garnered the respect necessary to be taken seriously. Now, with significant damage increasing and losses mounting, the issue has forced website owners and browser developers to scramble for solutions to a problem nearly a decade in the making. Or so the story may go, should history repeat itself.

By tracking the seminal papers/events of the more widely used attack techniques, it takes somewhere between 6 and 9 years for the bad guys to scale their exploits and cause enough damage that defenders are compelled to react. For example, Aleph One’s “Smashing The Stack For Fun And Profit” was published in 1996, but it wasn’t until 2002 that Microsoft’s then-CEO Bill Gates issued the famous “Trustworthy Computing” memo. A six-year gap sparked the software security revolution. XSS experimentation began around 1997, with few appreciating its true power until 2005 (8 years), when the Samy Worm, the first mass-scale JavaScript malware Web worm, infected over 1 million MySpace users in under 24 hours. In 1998 rain.forest.puppy published the first research into SQL Injection. Nine years later marked the beginning of mass Web page malware infections, proving how truly vulnerable websites were. The first CSRF papers began appearing around the turn of the century, but no convincing evidence of catastrophic attacks has yet appeared to justify remediation investment. So we wait, knowing full well it is only a matter of time.

Clickjacking, an issue known to some for at least several years as UI Redressing, was not fully explored or publicized until 2008 with the Flash videojacking demonstration. While non-malicious experimentation is taking place against sites such as Twitter, no major damaging incidents can be referenced. And perhaps there won’t be until sometime between 2014 and 2017, if historical timelines hold. If so, the upside is that we have time to deal with the issue, but I doubt we will be any more prepared by then. More likely the problem will scale well beyond our control, just like the others, as Web-enabled devices increase exponentially, built upon a system whose security fundamentals are difficult to change. In the meantime I’m sure we will have plenty of fun dealing with XSS, SQLi, CSRF, Intranet Hacking, Flash Malware, Business Logic Flaws, and so on.

About Me

Jeremiah Grossman's career spans nearly 20 years; he has lived a lifetime in computer security and become one of the industry's biggest names. He has received a number of industry awards and been publicly thanked by Microsoft, Mozilla, Google, Facebook, and many others for his security research. Jeremiah has written hundreds of articles and white papers. As an industry veteran, he has been featured in hundreds of media outlets around the world and has been a guest speaker on six continents at hundreds of events, including many top universities. All of this came after Jeremiah served as an information security officer at Yahoo!