No Malicious Code

Developer warrants that the software shall not contain any code that does not support a software requirement and weakens the security of the application, including computer viruses, worms, time bombs, back doors, Trojan horses, Easter eggs, and all other forms of malicious code.

If one of us [Mac developers] were, deliberately or accidentally, to distribute malware to our users, they would be (rightfully) annoyed. It would severely damage our reputation; in fact some users would probably choose never to trust software from us again. Now Mac OS X allows us to put our identity to our software using code signing. Why not use that to associate our good reputations as developers with our software? By using anti-virus software to improve our confidence that our development environments and the software we’re building are clean, and by explaining to our customers why we’ve done this and what it means, we effectively pass some level of assurance on to our customers. Applications signed by us, the developers, have gone through a process which reduces the risk to you, the customers. Provided your customers trust you as a source of good applications, and can verify that you were indeed the app provider, they can believe in that assurance. They can associate that trust with the app, and the trust represents some tangible value.

Now what the draft contract seems to propose (and I have good confidence in this, due to the wording) is that if a logic bomb, back door, Easter egg or whatever is implemented in the delivered application, then the developer who wrote that misfeature has violated the contract, not the vendor. Taken at face value, this seems reasonable. In the subset of conditions listed above, the developer has introduced code into the application that was not part of the specification. It either directly affects the security posture of the application, or is of unknown quality because it’s untested: the testers didn’t even know it was there. This is clearly the fault of the developer, and the developer should be accountable. In most companies this would be a sacking offence, but the proposed contract goes further and says that the developer is responsible to the client. Fair enough, although the vendor should take some responsibility too, as a mature software organisation should have a process such that none of its products contain unaccounted code. This traceability from requirement to product is the daily bread of some very mature development lifecycle tools.

But what about the malware cases? It’s harder to assign blame to the developer for malware injection, and I would say that actually the vendor organisation should be held responsible, and should deal with punishment internally. Why? Because there are too many links in a chain for any one person to put malware into a software product. Let’s say one developer does decide to insert malware.

1. The developer introduces malware to his workstation. This means that any malware prevention procedures in place on the workstation have failed.

2. The developer commits the malware to the source repository. Any malware prevention procedures in place on the SCM server have failed.

3. The developer submits a build request to the builders.

4. The builder checks out the build input, does not notice the malware, and constructs the product.

5. The builder does not spot the malware in the built product.

6. The testers do not spot the malware in final testing.

7. The release engineers do not spot the malware, and release the product.

Of course there are various points at which malware could be introduced, but for a developer to do so in a way consistent with his role as developer requires a systematic failure in the company’s procedures regarding information security, which implies that the CSO ought to be accountable in addition to the developer. It’s also, as with the Easter Egg case, symptomatic of a failure in the control of their development process, so the head of Engineering should be called to task as well. In addition, the head of IT needs to answer some uncomfortable questions.

So, as it stands, the proposed contract seems well-intentioned but inappropriate. Now what if it’s the thin end of a slippery iceberg? Could developers be held to account for introducing vulnerabilities into an application? The SANS contract is quiet on this point. It requires that the vendor provide a “certification package” consisting of the security documentation created throughout the development process: the package shall establish that the security requirements, design, implementation, and test results were properly completed and all security issues were resolved appropriately, and that security issues discovered after delivery shall be handled in the same manner as other bugs and issues, as specified in the Agreement. In other words, the vendor should prove that all known vulnerabilities have been mitigated before shipment, and if a vulnerability is subsequently discovered and dealt with in an agreed fashion, no-one did anything wrong.

That seems fairly comprehensive, and definitely places the onus directly on the vendor (there are various other parts of the contract that imply the same, such as the requirement for the vendor to carry out background checks and provide security training for developers). Let’s investigate the consequences for a few different scenarios.

1. The product is attacked via a vulnerability that was brought up in the original certification package, but the risk was accepted. This vulnerability just needs to be fixed and we all move on; the risk was known, documented and accepted, and the attack is a consequence of doing business in the face of known risks.

2. The product is attacked via a novel class of vulnerability, details of which were unknown at the time the certification package was produced. I think that again, this is a case where we just need to fix the problem, of course with sufficient attention paid to discovering whether the application is vulnerable in different ways via this new class of flaw. While developers should be encouraged to think of new ways to break the system, it’s inevitable that some unpredicted attack vectors will be discovered. Fix them, incorporate them into your security planning.

3. The product is attacked via a vulnerability that was not covered in the certification package, but that is a failure of the product to fulfil its original security requirements. This is a case I like to refer to as “someone fucked up”. It ought to be straightforward (if time-consuming) to apply a systematic security analysis process to an application and get a comprehensive catalogue of its vulnerabilities. If the analysis missed things, then it was either sloppy, abbreviated, or ignored.

Sloppy analysis. The security lead did not systematically enumerate the vulnerabilities, or missed some section of the app. The security lead is at fault, and should be responsible.

Abbreviated analysis. While the security lead was responsible for the risk analysis, he was not given the authority to see it through to completion or to implement its results in the application. Whoever withheld that authority is to blame and should accept responsibility. In my experience this is typically a marketing or product management decision, made while working backwards from a ship date and dropping tasks to fit the effort available for the product. Sometimes it’s engineering management; it’s almost never project management.

Ignored analysis. Example: the risk of attack via buffer overflow is noted in the analysis, then the developer writing the feature doesn’t code bounds-checking. That developer has screwed up, and ought to be responsible for their mistake. If you think that’s a bit harsh, check this line from the “duties” list in a software engineer job ad:

Write code as directed by Development Lead or Manager to deliver against specified project timescales quality and functionality requirements

I realise now that I didn’t cover this when it happened back at the beginning of March, and that not everyone in either the Apple world or the general infosec community is aware of it. Nearly a month ago, Apple hired a new Security Product Manager (the position was vacant at the time of WWDC 2008, and I think it was being covered by another product manager in the interim): welcome, Window Snyder.

Window has a good history in the infosec world; after working as security design architect at @stake, she moved to Microsoft to act as security sign-off for XP Service Pack 2 (Microsoft’s first OS release focussed solely on security improvements) and Windows Server 2003 (their first completely new OS release after the security push of 2002). It was during her watch that Microsoft became more open about their vulnerability reporting, and introduced “Patch Tuesday” to help systems administrators manage the patch lifecycle. I happen not to like the Patch Tuesday mentality, but at least Microsoft thought about the issue and reacted to it.

After Microsoft, Window became Chief Security Something-or-Other at Mozilla. Here she promoted measurement and tracking of security issues, process improvements and greater transparency, both in terms of Mozilla’s reporting and that of other vendors.

I think that, given the authority to make process and reporting changes regarding Apple’s security procedures, she will be a great addition to Apple’s security teams. Apple typically drop security updates without warning and with minimal information on the content and severity of the vulnerabilities addressed; they maintain what could be charitably described as an “arm’s length” relationship with security vendors and have a history of slow reaction to vulnerabilities discovered in open source components. I have great hope for those facets of Apple’s security work changing soon.

I assume that, my audience being mainly Mac users, you are not familiar with the Microsoft Security Assessment Tool, or MSAT. It’s basically a free tool for CIOs, CSOs and the like to perform security analyses. It presents two questionnaires: the first asks about your company’s IT infrastructure (“do you offer wireless access?”), the second about the company’s current security posture (“do you use WPA encryption?”). The end result is a report comparing the company’s risk exposure to the countermeasures in place, highlighting areas of weakness or overinvestment. The MSAT app itself isn’t too annoying.

Mostly. One bit is. Some of the questions are accompanied by information about the relevant threats, and industry practices that can help mitigate them. Information such as this:

So, how does changing a password reduce the likelihood of a brute-force attack succeeding? Well, let’s think about it. The attacker has to choose a potential password to test. Obviously the attacker does not know your password a priori, or the attack wouldn’t be brute-force; so the guess is independent of your password. You don’t know what the attacker has, hasn’t, or will next test—all you know is that the attacker will exhaust all possible guesses given enough time. So your password is independent of the guess distribution.

Your password, and the attacker’s guess at your password, are independent. The probability that the attacker’s next guess is correct is the same even if you change your password first. Password expiration policies cannot possibly mitigate brute-force attacks.
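The independence argument fits in one line of probability. Assuming the attacker draws guesses from a distribution that doesn’t depend on your current password, and passwords are drawn uniformly from a space of size N:

```latex
% G = attacker's next guess; P = your old password; P' = your new password.
% If G is independent of both P and P', and passwords are uniform over N values:
\Pr(G = P') = \sum_{p} \Pr(G = p)\,\Pr(P' = p) = \frac{1}{N} = \Pr(G = P)
```

Changing P to P' leaves the attacker’s per-guess success probability exactly where it was.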

So why do we enforce password expiration policies? Actually, that’s a very good question. Let’s say an attacker does gain your password.


The window of opportunity to exploit this condition depends on the time for which the password is valid, right? Wrong: as soon as the attacker gains the password, he can install a back door, create another account or take other steps to ensure continued access. Changing the password post facto will defeat an attacker who isn’t thinking straight, but ultimately a more comprehensive response should be initiated.

What is it about government security agencies and, well, security? The UK Intelligence and Security Committee has just published its Annual Report 2008-2009 (pdf, because if there’s one application we all trust, it’s Adobe Reader), detailing financial and policy issues relating to the British security services during that year.

Sounds “riveting”, yes? Well the content is under Crown copyright[*], so I can excerpt some useful tidbits. According to the director of GCHQ:

The greatest threat [to government IT networks] is from state actors and there is an increasing vulnerability, as the critical national infrastructure and other networks become more interdependent.

The report goes on to note:

State-sponsored electronic attack is increasingly being used by nations to gather intelligence, particularly when more traditional espionage methods cannot be used. It is assessed that the greatest threat of such attacks against the UK comes from China and Russia.

and yet:

The National Audit Office management letter, reporting on GCHQ’s 2007/08 accounts, criticised the results of GCHQ’s 2008 laptop computer audit. This showed that 35 laptops were unaccounted for, including three that were certified to hold Top Secret information; the rest were unclassified. We pressed GCHQ about its procedures for controlling and tracking such equipment. It appears that the process for logging the allocation and subsequent location of laptops has been haphazard. We were told:
Historically, we just checked them in and checked them out and updated the records when they went through our… laptop control process.

So our government’s IT infrastructure is under attack from two of the most resourceful countries in the world, and our security service is giving out Top Secret information for free? It sounds like all the foreign intelligence services need to do is employ their own staff to empty the bins in Cheltenham. In fairness, GCHQ have been mandated to implement better asset-tracking mechanisms; if they do so then the count of missing laptops will be reduced to only reflect thefts/misplaced systems. At the moment it includes laptops that were correctly disposed of, in a way that did not get recorded at GCHQ.

[*] Though significantly redacted. We can’t actually tell what the budget of the intelligence services is, nor what they’re up to. How the budget is considered sensitive information, I’m not sure.

Not much movement has occurred on projects like SSHKeychain.app or SSHAgent.app in the last couple of years. The reason is that they’re no longer necessary; you can get all of the convenience of keychain-stored SSH passphrases using the built-in software. Here’s a guide to using the Keychain to store your passphrases.

Create the key pair

We’ll use the default key format, which is RSA for SSH protocol 2.0. We definitely want to enter a passphrase, so that if the private key is leaked it cannot be used.
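The generation step can be sketched as follows. The key path and passphrase here are illustrative; in real use you’d omit -N and let ssh-keygen prompt, so the passphrase never lands in your shell history:

```shell
# Generate an RSA key pair for SSH protocol 2 (the default key format).
# -N sets the passphrase non-interactively for this demonstration only.
ssh-keygen -q -t rsa -N 'correct horse battery staple' -f ./id_rsa_demo

# The private key stays on the client; only the .pub half gets deployed.
ls -l ./id_rsa_demo ./id_rsa_demo.pub
```

Recent OpenSSH versions also print the key’s fingerprint along with the randomart discussed below.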

If you haven’t seen it before, randomart is not a screenshot from nethack; rather it’s a visual hashing algorithm. Two different public keys are not guaranteed to have different randomart fingerprints, but the chance that they are close enough to pass a quick visual inspection is small.

Deploy the public key to the SSH server

Do this using any available route; I choose to use password-based SSH.
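Where ssh-copy-id isn’t available, the manual route amounts to appending the key on the server and fixing permissions. A sketch, with a placeholder key (the key text and user name are not real):

```shell
# Placeholder public key; in reality this is the contents of your id_rsa.pub.
PUBKEY='ssh-rsa AAAAB3NzaC1yc2EFAKEKEY graham@client'

# On the server: append the key, then lock down permissions, because sshd
# ignores authorized_keys files it considers too permissive.
mkdir -p "$HOME/.ssh"
printf '%s\n' "$PUBKEY" >> "$HOME/.ssh/authorized_keys"
chmod 700 "$HOME/.ssh"
chmod 600 "$HOME/.ssh/authorized_keys"
```

Where it exists, `ssh-copy-id -i ~/.ssh/id_rsa.pub user@server` does the same dance for you over a password-authenticated connection.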

There is no step three

Mac OS X automatically runs ssh-agent, the key-caching service, as a launchd agent. When SSH attempts to negotiate authentication using your key-based identity, you automatically get asked whether you want to store the passphrase in the keychain. Like this:

Now the passphrase is stored in the keychain. Don't believe me? Lookie here:

So your SSH key is protected by the passphrase, and the passphrase is protected by the keychain.

Update: Minor caveat

Of course (and I forgot this :-S), if you use FileVault to protect the home folder on the SSH server, the user's home folder isn't mounted until after you authenticate. This means that the authorized_keys file can't be consulted during negotiation. Once you have logged in once (using your password), subsequent logins will use the keys (as PAM automatically mounts the FileVault volumes the first time, so authorized_keys becomes available).

You too can own a piece of the magic. Professional Cocoa Application Security and Enterprise Mac: Mac OS X Snow Leopard Security are both already in pre-order; use the Amazon affiliate links below if you want to give me a little extra kick-back from the sales of each. You’re most kind :).

So what conference was on in this auditorium before NSConference? Well, why don’t we just read the documents they left behind?

Ooops. While there’s nothing at higher clearance than Unrestricted inside, all of the content is marked internal eyes only (don’t worry, feds, I didn’t actually pay too much attention to the content. You don’t need to put me on the no-fly list). There’s an obvious problem though: if your government agency has the word “security” in its name, you should take care of security. Leaving private documentation in a public conference venue does not give anyone confidence in your ability to manage security issues.

I’ll be talking at the US NSConference on Tuesday, with an extended version of my talk on code signing. I’ll cover how it works, what it does, what it doesn’t do, and what it should do. Importantly, there are still a few seats left for the conference so if you can get to Atlanta by Tuesday, come and watch me talk!

I just reviewed a blog post I wrote for Graham Cluley a while back, in which I looked at the impact a common vulnerability on the iPhone and Mac would have. I think in the run-up to the iPad’s release, it’s a risk worth bearing in mind.

A couple of days ago, Daniel Kennett of the KennettNet micro-ISV (in plain talk, a one-man software business) told me that a customer had fallen victim to a scam. She had purchased a copy of his application Music Rescue—a very popular utility for working with iPods—from a vendor but had not received a download or a licence. The vendor had charged her over three times the actual cost of the application, and seemingly disappeared; she approached KennettNet to try and get the licence she had paid for.

Talking to the developer and (via him) the victim, the story becomes more disturbing. The tale starts when the victim was having trouble with her iPod and iTunes, and contacted Apple support. The support person apparently gave her an address from which to buy Music Rescue, which turned out to be the scammer’s address. Now it’s hard to know exactly what happened: perhaps the support person gave her a URL, or perhaps she was told a search term and accessed the malicious website via a search engine. It would be inappropriate to try and gauge the Apple support staff’s involvement without further details, except to say that the employee clearly didn’t direct her unambiguously to the real vendor’s website. For all we know, the “Apple” staffer she spoke to may not have been from Apple at all, and the scam may have started with a fake Apple support experience. It does seem more likely, though, that she was talking to the real Apple, and that their staffer, in good faith or otherwise, gave her information that led to the fraudulent website.

The problem is that if you ask a security consultant for a solution to the problem of being scammed in online purchases, they will probably say “only buy software from trusted sources”. Well, this user clearly trusted Apple as a source, and clearly trusted that she was talking to Apple. She probably was, but still ended up the victim of a scam. Where does this leave the advice, and how can a micro-ISV ever sell software if we’re only to go to stores who’ve built up a reputation with us?

Interestingly the app store model, used on the iPhone and iPad, could offer a solution to these problems. By installing Apple as a gateway to app purchases, customers know (assuming they’ve got the correct app store) that they’re talking to Apple, that any purchase is backed by a real application and that Apple have gone to some (unknown) effort to ensure that the application sold is the same one the marketing page on the store claims will be provided. Such a model could prove a convenient and safe way for users to buy applications on other platforms, even were it not to be the exclusive entry point to the platform.

As a final note, I believe that KennettNet has taken appropriate steps to resolve the problem. As close as I can tell the scam website is operated out of the US, making any attempted legal action hard to pursue as the developer is based in the UK. Instead the micro-ISV has offered the victim a discounted licence and assistance in recovering the lost money from her credit card provider’s anti-fraud process. I’d like to thank Daniel for his information and help in preparing this article.