Archive for July 10th, 2013

By now, you’ve likely seen Google’s announcement that they now support a seven-day timeline for disclosure of critical vulnerabilities. Our CTO Raimund Genes believes that seven days is pretty aggressive and that rushing patches often leads to painful collateral damage.

I agree that in the current environment, many firms would have a hard time understanding a vulnerability, creating a patch, and running quality assurance within that seven-day window. Hopefully, someday we will look back and wonder why it took us so long to get to a 24-hour patch cycle. Today, fixes are rarely given the level of resources that a new feature would receive. But how do we effect change here?

I would like to float a proposal to change the social contract. My proposal is simple: when reporting a vulnerability to a vendor, the individual finding the flaw should wait at least until the day after the next Patch Tuesday before releasing their report publicly. There should be a declaration in the initial report indicating the intent to publish based on this protocol. I should add that if there are fewer than fifteen days until the next Patch Tuesday, one should wait one more cycle. That’s it, clean and simple.

Note that this suggested timeline is a minimum wait period for unilateral action on the part of the reporting party. Discussion and negotiation with the vendor is encouraged. If the vendor fixes it sooner and clears release earlier, then by all means publish. If the vendor asks for more time, it is up to the reporting party to balance the risks to the public and the concerns of the vendor and decide whether to grant the extension or go ahead with publication.

Vendors cover a wide spectrum in terms of responsiveness to reported vulnerabilities. Some are super responsive; others do a good job of emulating /dev/null. My proposal aims to level the playing field. Researchers would have the responsibility to provide notice to the vendor and a reasonable time to repair; in return, they would have the right to publish on a set timeline. Vendors would have the right to expect advance notice and the responsibility to fix within the agreed timeline.

One final thought: I used Patch Tuesday because it is well-known and prevalent. If the vendor has an established patch cycle, it would be good form to use their cycle, provided it is reasonable. If the cycle is too long (e.g. updates are released on January 1st every other year), then I suggest falling back to the Patch Tuesday model.

Last week, security researchers announced a new vulnerability in Android that could allow installed apps to be modified without the user being aware of it. Almost all Android devices are vulnerable, as the flaw has existed since Android 1.6 (Donut); currently, only the Samsung Galaxy S4 has been patched to protect against it.

The vulnerability – known in some quarters as the “master key” vulnerability – has attracted considerable media attention, but it has not always been accurately reported. We have updated Trend Micro Mobile Security to protect our users, but at the same time we wish to clarify what’s going on, what the threat is, and what users can do.

What’s this “master key” vulnerability?

The vulnerability is related to how Android apps are signed. All Android apps have a digital signature from their developer, which verifies that the app actually did come from the developer and was not modified en route. An app can only be updated if the new version has a matching signature from the same developer.
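The update check described above boils down to comparing the signing certificate of the installed app with that of the incoming update. The following sketch models that rule in Python; it is a simplified illustration of the concept, not actual Android (AOSP) code, and the function name is ours:

```python
import hashlib


def update_allowed(installed_cert_der: bytes, update_cert_der: bytes) -> bool:
    """Model of Android's update rule: an update is accepted only if it
    is signed with the same certificate as the installed app.

    Simplified illustration -- real Android compares full signing
    certificate sets, not just one digest.
    """
    installed_digest = hashlib.sha256(installed_cert_der).digest()
    update_digest = hashlib.sha256(update_cert_der).digest()
    return installed_digest == update_digest
```

Under this rule, an attacker without the developer's signing key should never be able to push an update over a legitimate app, which is exactly the guarantee this vulnerability breaks.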

This particular vulnerability is in that last step. What researchers have found is a way for attackers to update an already installed app even if they do not have the original developer’s signing key. In short, any installed app can be updated with a malicious version.

Note that technically, there is no “master key” that has been breached. Yes, any app can be modified and used for malicious purposes, but there’s no “master key” in the first place.
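Concretely, an APK is just a ZIP archive, and publicly reported exploits of this flaw craft archives containing two entries with the same name, so that the signature verifier reads one copy while the installer extracts the other. A minimal heuristic for spotting such archives, using Python's standard zipfile module (an illustrative sketch on our part, not Trend Micro's actual detection logic):

```python
import zipfile
from collections import Counter


def find_duplicate_entries(apk):
    """Return ZIP entry names that appear more than once in an APK.

    Duplicate entries are the hallmark of the "master key" technique:
    with two same-named entries, the component verifying the signature
    and the component installing the app may read different copies.
    `apk` can be a file path or a file-like object.
    """
    with zipfile.ZipFile(apk) as archive:
        counts = Counter(info.filename for info in archive.infolist())
    return [name for name, n in counts.items() if n > 1]
```

Any APK for which this returns a non-empty list deserves closer inspection, although a duplicate entry by itself is not proof of malicious intent.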

What are the risks?

This vulnerability can be used to replace legitimate apps on an Android device with malicious versions. Apps with many permissions – like those from the phone’s manufacturer or the user’s service provider – are at particular risk.

Once on the device, such a modified app can behave the way any malicious app would, except that the user would believe it was completely legitimate. For example, a Trojanized banking app would continue to work for the user, but the user’s credentials would be sent to an attacker.

What can users do to protect themselves?

We’ve updated our Trend Micro Mobile App Reputation Service to detect apps that abuse this vulnerability, but so far we have not found any. Nonetheless, for users of Trend Micro Mobile Security, we have released a pattern update to ensure that we will detect apps that target this particular vulnerability. (All users with pattern version 1.513.00 or later are covered; apps found exploiting the vulnerability will be detected as Android_ExploitSign.HRX.) This is sufficient to ensure that our users are protected from this threat.

We strongly suggest disabling the ability to install apps from sources outside of Google Play. This setting can be found under Security in the system settings of Android devices.

Google has taken some steps to protect users. They’ve modified the backend of their online store so that apps that try to exploit this problem are blocked. Thus, users who do not download apps from third-party stores or sideload APK files should not be at risk from this threat. The company has also released a fix for the vulnerability and distributed it to OEMs. Hopefully, the importance of this update will prevent delays in its deployment.

Update as of July 11, 2013 3:43 AM PST

We have found a report describing a different approach to the same attack on Android signature checking, this time using a vulnerability in the Java ZipFile implementation. We are currently working on a solution; malicious apps found using this technique will be detected as AndroidOS_ExploitSign.HRXA.