Monday, 8 August 2011

The purpose of this post is to demonstrate that usable security tools don’t need to be rocket science, and hopefully to inspire those of you with other ideas to write your own browser extensions. The reason we wrote the Chrome extension in the first place was that we felt there were a number of low-hanging-fruit security attributes that should be accessible to development and quality assurance processes in a clear and easily understandable manner. We've also had feedback from security professionals that they sometimes forget to look for these issues, and that simple functions they can access in their browser are a good way to ensure they are consistent.

Why a Chrome extension?

The reason we chose to write a Chrome extension was really twofold. The first is that development and QA already use a browser, so why not make the tool accessible via something users are already familiar with? The second was the thinking that Google’s Chromium team have done all of the heavy lifting, implementing HTTP, SSL/TLS, the DOM, the JavaScript engine, cookies etc. and provided programmatic access to the browser. So why would we want to reinvent the wheel, no doubt in a lacklustre fashion, over many months (if not years) and be plagued with corner cases and related dramas? The ideas we discounted included:

Writing a web security proxy, or a plug-in for an existing one. Discounted as we thought it would not be as intuitive, or as quick to install, set up and use on a daily basis by non-dedicated staff.

Writing it in C, Java, C#, or Python. Discounted as we thought parsing everything and building a UI would be a lot of work for no benefit. We did look at some Java / C# web browser libraries and felt they would be filled with the corner cases we mentioned, due to their inability to keep up with the development speed of modern browsers. The Internet Explorer COM object, meanwhile, is a pig to work with that doesn't always easily expose everything we wanted (that prototype is in the hours-wasted bin).

The extension’s components

So, the extension is made up of three components: the background element, the content element and the main popup element. All three perform distinct functions, which are described in detail below.

The background element

The background element is used to register the right-click context menus and the call-back that handles these events. When the user clicks on a context-menu entry the call-back is executed, the type of request is determined and the URL obtained. A new tab is then created containing an instance of the popup element, at which point all control is handed to the popup for further processing. We encode the Chrome tab ID and the URL of the page, frame or link being analysed into the popup's request parameters as a simple method of IPC.
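A minimal sketch of that flow might look something like the code below. The function names, menu title and query-string keys are our illustration, not the extension's actual source:

```javascript
// Hypothetical sketch of the background element; names are illustrative.

// Encode the tab ID and target URL into the popup's query string --
// the simple request-parameter IPC described above.
function buildPopupUrl(tabId, targetUrl, requestType) {
  return 'popup.html' +
         '?tab=' + encodeURIComponent(tabId) +
         '&type=' + encodeURIComponent(requestType) +
         '&url=' + encodeURIComponent(targetUrl);
}

// Register the right-click context menus; the chrome.* calls only
// exist inside the browser, so this runs at extension start-up.
function registerContextMenus() {
  chrome.contextMenus.create({
    title: 'Analyse with Recx',
    contexts: ['page', 'frame', 'link'],
    onclick: function (info, tab) {
      // Work out what was clicked: a link, a frame or the page itself.
      var target = info.linkUrl || info.frameUrl || info.pageUrl;
      chrome.tabs.create({ url: buildPopupUrl(tab.id, target, 'context') });
    }
  });
}
```

Keeping the URL-building logic in its own small function also makes it trivial to unit test outside the browser.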

The page content element

The page content element is used to parse the DOM of the requested page (as seen by the user). This is broken out into two main functions. The first enumerates all of the page’s meta headers looking for security-related attributes. The second parses the page looking for forms and form elements. These two functions build an object containing the results, ready to pass to the popup element. In order to reduce non-essential page-load overhead we don’t execute this on every page load; instead we inject and execute this extra code on a case-by-case basis, only when the user requests it. We found this had a profound impact on general Chrome performance and seemed to make us good browser citizens. Finally, we register an IPC event listener that waits for the popup element to request the newly built results object; when a request comes in from the popup element we return the results object as instructed.
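By way of illustration, the meta-header scan and the listener might be sketched as below. Which http-equiv values count as "security related", and all function names, are our assumptions rather than the extension's real code; the document handle is passed in so the scanning logic stays plain JavaScript:

```javascript
// Illustrative sketch of the content element's meta-header scan.
// The list of "security related" http-equiv values is our guess.
var SECURITY_META = ['x-frame-options', 'content-security-policy',
                     'x-xss-protection', 'x-content-type-options'];

// Walk every <meta> tag and keep those with a security-related
// http-equiv attribute; 'doc' is the page's document object.
function collectSecurityMeta(doc) {
  var results = [];
  var metas = doc.getElementsByTagName('meta');
  for (var i = 0; i < metas.length; i++) {
    var name = (metas[i].httpEquiv || '').toLowerCase();
    if (SECURITY_META.indexOf(name) !== -1) {
      results.push({ header: name, value: metas[i].content });
    }
  }
  return results;
}

// Registered once; hands the results object back when the popup asks
// (chrome.extension.onRequest was the extension IPC API of the day).
function registerResultsListener(results) {
  chrome.extension.onRequest.addListener(function (req, sender, sendResponse) {
    if (req.action === 'getResults') { sendResponse(results); }
  });
}
```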

The popup element

The popup is the main component and holds the majority of the logic and all of the user interface. It appears beneath the Recx icon when the user clicks it, or in a new tab when invoked via the right-click context menu. The popup then does the following things:

Works out how it was called (browser icon versus right-click context menu), the URL being analysed and the tab ID.

Performs an XMLHttpRequest to the URL requested, thereby inheriting any session cookies, in order to obtain the HTTP headers from the server. Analyses any security-related HTTP headers and builds the UI with the results. It also builds the ‘All HTTP headers’ hidden DOM element for advanced users.

Enumerates all cookies within Chrome looking for those that relate to the URL requested. Analyses them and builds the UI with the results. It also builds the ‘All cookies’ hidden DOM element for advanced users.

Injects our content element into the Chrome tab for the page requested, using the tab ID. Chrome injects this content element not into the actual page but into a container which has full access to the page’s DOM. This then performs the operations described previously to analyse the DOM for security issues.

Sends a request via Chrome IPC to our content element for a copy of the results object and then receives it via an asynchronous call-back. Analyses the returned object for any security issues and builds the UI with the results.

Finally the popup element builds the rest of the UI using a mixture of JavaScript and DHTML and makes it visible to the user.
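The header and cookie analysis steps above can be sketched roughly as follows. The header list, issue strings and function names are our own invention, not the extension's real code:

```javascript
// Illustrative sketch of two of the popup's analysis steps.
// The set of headers treated as "security related" is our assumption.
var SECURITY_HEADERS = ['strict-transport-security', 'x-frame-options',
                        'x-xss-protection', 'x-content-type-options'];

// Parse XMLHttpRequest.getAllResponseHeaders() output (CRLF-separated
// "Name: value" lines) into an object keyed by lowercase header name.
function parseHeaders(raw) {
  var headers = {};
  var lines = raw.split(/\r?\n/);
  for (var i = 0; i < lines.length; i++) {
    var idx = lines[i].indexOf(':');
    if (idx > 0) {
      headers[lines[i].slice(0, idx).toLowerCase()] =
        lines[i].slice(idx + 1).trim();
    }
  }
  return headers;
}

// Which of the security-related headers did the server not send?
function missingSecurityHeaders(headers) {
  return SECURITY_HEADERS.filter(function (h) { return !(h in headers); });
}

// Screen the cookie objects Chrome hands back; chrome.cookies.getAll
// supplies 'secure' and 'httpOnly' booleans on each cookie.
function analyseCookies(cookies) {
  return cookies.map(function (c) {
    var issues = [];
    if (!c.secure)   { issues.push('missing Secure flag'); }
    if (!c.httpOnly) { issues.push('missing HttpOnly flag'); }
    return { name: c.name, issues: issues };
  });
}
```

Both functions take plain data in and hand plain data out, which keeps the chrome.* plumbing at the edges and the analysis itself easy to test.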

And ‘ta-da’, the user is presented with the results. All the strings are pulled in from a locale file, so if we ever want to dust off our schoolboy German, Spanish or French (or put our faith in Google Translate) we can, in theory, easily support different languages.
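For the curious, Chrome keeps those strings in a _locales/&lt;lang&gt;/messages.json file per language and they are looked up at runtime with chrome.i18n.getMessage(). The keys and messages below are invented purely for illustration:

```json
{
  "ext_name":   { "message": "Recx Security Analyser" },
  "hdr_report": { "message": "Security header report" }
}
```

In code that becomes, for example, chrome.i18n.getMessage('hdr_report'), with Chrome picking the locale directory for you.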

Time and effort spent
We thought it might be interesting to provide a breakdown of the time we spent developing the extension (not counting the upfront research and wasted prototype code). We'll caveat all of this with the fact that we'd never written a browser extension before, hadn't written a tremendous amount of JavaScript recently, and we're not intimately familiar with all the properties of every DOM object. The breakdown below covers all effort to date across the team. In this time we've had three releases: the initial release, a spelling patch (doh!) and a new feature release.

While we were writing this extension we fell afoul of a security issue (which our SDLC code review caught before release), which is always amusing when you’re in the business of software security consultancy and writing security software. The root cause was that we were using innerHTML instead of innerText when building the results DOM with untrusted data. The net result is that we would have been vulnerable to Cross-Site Scripting had we shipped with it. There was a surprising amount of re-factoring required to move away from innerHTML to innerText, as you end up doing a lot of DOM building. But hey, who said security was free (aka don't take short cuts)? Anyway, for those of you considering writing your own extensions, we recommend you read both the Chrome extensions documentation in detail (which does warn you in numerous places) and the recent blog post by the Chromium team titled ‘Writing Extensions More Securely’.
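To make the distinction concrete, here is a minimal sketch of the kind of refactoring involved. The function names are ours, and the document handle is passed in so the comparison stays self-contained:

```javascript
// Vulnerable pattern: untrusted data concatenated into markup, so a
// value like "<script>...</script>" gets parsed as HTML.
function addResultUnsafe(container, untrusted) {
  container.innerHTML += '<li>' + untrusted + '</li>';
}

// Safe pattern: build the node, then assign the data as *text* only.
// The price is more verbose DOM building, hence the re-factoring effort.
function addResultSafe(doc, container, untrusted) {
  var item = doc.createElement('li');
  item.innerText = untrusted;      // rendered as text, never as markup
  container.appendChild(item);
}
```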

The browser as the next web security tool platform

We’re pretty passionate in our belief that the browser will make a solid foundation for development- and QA-friendly web security vulnerability testing and regression tools. There are already examples of other extensions in the Chrome web store that provide a more penetration-tester-centric tool-set, which demonstrates to us that others clearly feel the same about the power of the browser. We already have our eye on a number of experimental Chrome extension APIs that, once mainlined by Google, will allow us to bring other, more powerful tool-sets to market. In the meantime we expect to further refine, polish and extend the existing extension.

Getting the extension, taking it for a spin and wrap-up

The extension is available free from the Chrome web store (23 users and counting – several of whom we suspect are our parents and siblings); please send us feedback or feature requests.

If you want to see why we wrote this tool and the somewhat bi-polar nature of web security try running our plugin against (don't check ours out.. as err..):

Thursday, 4 August 2011

Yesterday the World Anti-Doping Agency (WADA) issued a press release regarding the McAfee Operation Shady RAT report. Nothing obliged WADA to make a public announcement, but in doing so they have at least recognised the analysis performed by McAfee Labs. McAfee uniquely identified 72 organisations, which it broke down into 6 sectors and 32 categories. Of those organisations, four were named; at the time of writing (August 4th, 2011) only WADA has released a public statement. We reviewed both the statement and McAfee's white paper, performed some high-level analysis and drew some rudimentary but fair conclusions.

Information disclosure

WADA, having been named by McAfee, have done the responsible thing: acknowledged the white paper and communicated that they're looking into it. Unfortunately, that's not all they said. Their six-paragraph press release goes on to reveal information about:

Their current defences (they use a managed solution from ISS (IBM)).

A previous apparently unrelated security breach (in February 2008, they don't appear on McAfee's radar until August 2009).

Their response to a breach of their email system (they upgraded their firewalls).

That they escalate attacks to both national and international law enforcement agencies.

That McAfee have not provided them with any information on the attack, its extent or the systems involved.

Openly disclosing information about the defences you have in place is poor security practice and, to the technically savvy reader, potentially undermines your good intentions. Although privacy of one's security operations is only a minor control, the more private you can keep your operations, the less informed an attacker will be. Information commonly leaks anyway, through poor server configuration, vendor press releases and so on, but keeping as much of it private as possible is solid security advice.

The statement gives away far too much information. Although essentially a public relations exercise by WADA, it would be fair to conclude that they were poorly advised by their representatives on what to say. Acknowledge the white paper; say that you're taking it seriously; say that you're conducting an investigation into McAfee's analysis; and welcome McAfee's involvement, while neither confirming nor refuting their claims. Releasing a 'knee jerk' press release is, in this case, not the best course of action and shows a lack of preparedness.

How would we have advised WADA?

We took the WADA press release and the material released by McAfee and authored the following response. This is how WE would have done it:

"Following the release of the McAfee white paper on Operation Shady Rat, WADA can confirm that we are in dialogue with McAfee and are investigating thoroughly the reported intrusions. This includes actively working with its retained security experts pending further specific information. We have already taken steps to further bolster the operational security of our systems by working with our security technology and service providers. We will continue to work with all parties concerned to ensure an appropriate and timely response until resolved in a satisfactory manner."

Issuing a press release similar to the above would acknowledge McAfee's report while outlining, at a high level, the steps being taken to investigate the specific claims. Additionally it demonstrates, without giving specific details, that immediate reactionary and remedial actions have been taken, and thus the seriousness with which the matter is being treated.

So why only four?
The white paper details intrusions of 72 organisations. Of those 72, only four were named explicitly in the paper:

McAfee does not detail why these organisations were selected to be named, and certainly from the WADA press release the conclusion could be drawn that they neither gave their permission to be disclosed nor were informed in advance of the disclosure. Interestingly, even though 68% of the organisations listed were in the United States, none were named. The author believes that naming the four organisations above was warranted to "reinforce the fact that virtually everyone is falling prey to these intrusions". Naming fewer than 6% of the total organisations adds little to the weight of the white paper (the remaining 68 organisations provide an equally powerful message).

The analysis presented is relatively lightweight and is offered without references, correlation with significant events along the timeline, or analysis of the countries notably absent from the list. The author alludes to the fact that further analysis would be interesting, but without access to the raw data we must rely on McAfee potentially performing that analysis in the future.

Throwing stones in glass houses

Of course, it shouldn't go unnoticed that the author states:

"I am convinced that every company in every conceivable industry with significant size and valuable intellectual property and trade secrets has been compromised (or will be shortly)"

And then goes on to say:

"In fact, I divide the entire set of Fortune Global 2000 firms into two categories: those that know they've been compromised and those that don’t yet know."

Intel (the owner of McAfee) falls squarely within these categories, and it's also likely that McAfee security software is running within a significant portion of the other organisations similarly categorised.

We don't dispute the quotes above; the threat posed to organisations is considerable, and credit should be given to McAfee Labs for not sugar-coating the information or the statistics presented.

Conclusions

Were WADA right to release a press statement? Yes.

How ethical were McAfee in naming some organisations and not others? Without knowing the reasons behind this it's hard to produce a definitive conclusion, however it would appear that not all organisations were treated equally.

Did WADA release too much information in their press release? Yes, without question. A more succinct response, concentrating on the McAfee release would have been a more appropriate announcement.

All of this goes to show that organisations should be prepared for such disclosures. Having a pre-planned response for a variety of scenarios will ensure that messaging is clear and concise, without further undermining your organisation's security. As with all reactionary events, it is also good to run a fire drill, so that the organisation's response processes are well known and second nature, even if they are hopefully never required.

Tuesday, 2 August 2011

FOI and the release
On July 29th the Department for Transport (DfT) released the list of their top 1,000 visited sites. Although there have been articles written about the list, they mostly concentrate on the sites themselves and the browsing habits of the civil servants within the department. It occurred to us, however, that whilst released under the Freedom of Information Act, the list itself presents a significant risk, not only to the DfT but to other Government sites with which there is likely to be a strong correlation of browsing habits. Whether the list was edited before release is a subject of debate, but we would expect a degree of filtering to have been applied in order to remove sensitive sites (although the four sites on the Government Secure Intranet (GSI) were retained).

The increased risk

There is an increased rate of technical attacks against Government systems, particularly using browser based or client side attacks. Knowing the browsing habits of your intended victims provides a potential attacker with a list of sites to target, and seed with malicious content. This approach would reduce the footprint within the target organisation of an attack. A typical approach first requires the user to navigate to a malicious site; this is ordinarily achieved through enticement or social engineering (embedded links or terms in a targeted email for example). However, by directly compromising sites this additional step and therefore log imprint at the target environment is avoided.

Seeding the target sites for an increased attack conversion rate is one use of the information. The servers themselves, and the logs they maintain, may also contain information which is useful to an attacker. For example, analysis of the logs contained on the published web servers, would likely reveal users who view the same content at work and at home. A work laptop or mobile device in the home environment can in some cases present a softer target for attack. If it's the same user on a different home machine, there is the potential for information gathering to inform more complex attacks against the Government department or to capture information stored outside the DfT network perimeter.

Taking the list to automation
So, say an aggressor wanted to automate the analysis of these sites: how hard would it be? In short, not very. We can use the Python PDFMiner to extract the contents of the PDF like so:

pdf2txt.py -o output.txt f0007532-table.pdf

Tidy up the output a little to just get the hosts and remove some blank lines:

cat output.txt | awk 'NF {print $2}' | awk '$0!~/^$/ {print $0}' > tidy.txt

The result? A list of host names and IP addresses, all tidied and ready for feeding into an automated analysis system looking for easy-to-exploit web application and server configuration vulnerabilities in the target sites. Given the number of sites, the range of material and the potential for vulnerabilities, the likelihood of accurate seeding of malicious content is significant.
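To see the tidy-up in action on fabricated sample lines (the real input being whatever pdf2txt.py extracts from f0007532-table.pdf; the hosts below are invented for illustration):

```shell
# Fabricated sample standing in for the pdf2txt.py output; each line is
# "rank host", with the odd blank line, as in the extracted table.
printf '1 www.example.com\n\n2 192.0.2.10\n3 www.example.org\n' > output.txt

# Same tidy-up as above: keep field 2 of non-blank lines.
cat output.txt | awk 'NF {print $2}' | awk '$0!~/^$/ {print $0}' > tidy.txt

cat tidy.txt
```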

Conclusion
Should the DfT have released the information? In our opinion, no. The value of the information in the public domain is relatively insignificant, beyond titillation of the reader (someone likes their expensive cars). The value to the attacking population is significant, both in the potential for increased accuracy of direct attacks, and in the availability of user-specific data through the correlation of site access across multiple source addresses.