Is iOS jailbreaking an enterprise security threat?

Jailbreaking a smartphone means fiddling with its OS so you can load the applications of your choice, bypassing the requirement to download digitally signed apps only from, say, Apple’s iTunes App Store. Opinions tend to be binary: Either jailbreaking is an unalloyed act of end user liberation and empowerment, or it’s the Digital Apocalypse.

Recently, Apple quietly and without explanation disabled a new API, introduced in iOS 4.0, that was intended to help determine whether an iOS device had been jailbroken. Vendors of mobile device management applications insist they can, and do, use other techniques to detect jailbreaks.

Apple’s decision sparked a new round of debate over jailbreaking, but without shifting the binary terms in which the debate has been framed. We went into more detail about jailbreaking and the enterprise with Jeremy Allen, a principal consultant with Intrepidus Group, a New York City consulting firm specializing in mobile security. Allen has a background in security and application development, and he focuses on iOS and the applications that run on it.

Some will argue that jailbreaking iOS is a right, not a risk. How do you see it?

My general thought on it is that, as shipped, iOS devices add a lot of security due to the code signing of everything on the device. When you live and play in the "Walled Garden of Steve," as I have seen it called, you get a lot of benefits for that. ... The problem I have is that, usually, big organizations don't let users have administrative privileges on corporate-owned devices [e.g., laptops], so why would we let users have them on a corporate-owned iPad?

What does code signing bring to the table for mobile security?

Code signing is a pretty giant roadblock to malware.

On a Windows PC, when you download a program from the Internet, you get a popup that tells you “publisher: unknown” or “publisher: Adobe” and so on. Windows figures that out through code signing: the code publisher gets a certificate from a certificate authority such as VeriSign and “signs” the code. That lets you, as the developer, prove you’re the author of the code and that it hasn’t been tampered with.

For iOS devices, you as a developer get a certificate signed by Apple. When the code is downloaded, Apple will look it up and make sure its signature chains properly back to that certificate. If the code signing is not from Apple, and Apple only, you can’t run it. It creates a secure playground. By forcing any code that you want to run on the mobile device to be [first] signed by Apple, you can eliminate a lot of problems.
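To make the chain idea concrete, here is a minimal sketch in Swift using Apple’s modern Security framework, evaluating a signer certificate against a pinned root. The inputs `leafDER` and `appleRootDER` are hypothetical DER-encoded certificates, and real iOS code signing checks far more than this (entitlements, hashes over every page of the binary); this only illustrates the “chains back to the certificate” step.

```swift
import Foundation
import Security

// A minimal sketch: does the signer's certificate chain back to a pinned
// root? `leafDER` and `appleRootDER` are hypothetical DER-encoded inputs.
func signerChainsToPinnedRoot(leafDER: Data, appleRootDER: Data) -> Bool {
    guard
        let leaf = SecCertificateCreateWithData(nil, leafDER as CFData),
        let root = SecCertificateCreateWithData(nil, appleRootDER as CFData)
    else { return false }

    // Build a trust object for the leaf under a basic X.509 policy.
    var trust: SecTrust?
    let policy = SecPolicyCreateBasicX509()
    guard SecTrustCreateWithCertificates(leaf, policy, &trust) == errSecSuccess,
          let trust = trust else { return false }

    // Anchor evaluation to the supplied root only, not the system store.
    guard SecTrustSetAnchorCertificates(trust, [root] as CFArray) == errSecSuccess
    else { return false }

    var error: CFError?
    return SecTrustEvaluateWithError(trust, &error)
}
```

On a real device, the kernel enforces signature checks every time a binary loads, not just at download time.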

So what does jailbreaking actually do?

It disables most of the code signing checks.

Apple offers [in iOS] public and private APIs. Apps in the App Store may use only the public APIs. Private APIs aren’t necessarily secret, but only Apple is allowed to use them, and Apple can change them at any time.

Jailbreaking lets you use the private APIs. Then, you can implement things like multitasking in iOS 3.0 [before Apple partly enabled it in 4.0]. You have more control over the apps you write. And you can put anything you want on your iPhone. At bottom, it’s a Unix device. [So] you can install SSH [Secure Shell] and tunnel into your phone and use it, for example, for tethering. You can change the graphical look and feel of the iPhone pretty significantly.
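A quick illustration of the mechanism involved: Objective-C looks methods up by name at runtime, which is how code on a jailbroken device can reach selectors Apple never published. A minimal Swift sketch, using the public selector `uppercaseString` as a stand-in for a private one:

```swift
import Foundation

// Objective-C dispatches by selector name at runtime; private-API callers
// use exactly this mechanism with names Apple hasn't documented. The
// public selector "uppercaseString" stands in for a private one here.
let target: NSString = "hello"
let selector = NSSelectorFromString("uppercaseString")
if target.responds(to: selector) {
    let result = target.perform(selector)?.takeUnretainedValue()
    print(result as? String ?? "<no result>")  // prints "HELLO"
}
```

App Store review rejects apps that call private selectors; on a jailbroken device, nothing stops it.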

What are the risks with jailbroken devices?

Any code can run on your phone: You could get malware that could steal all your emails or whatever.

Usually, jailbreak users install software from Cydia [an open source package manager and, now, online store], and who knows where that code came from? You could slip a backdoor into those programs a lot more easily than you could into apps on Apple’s servers.

Second, if you install and configure SSH and keep the weak default root password [“alpine,” at the time, on every iPhone], it’s easy for anyone to take over your phone. There are all kinds of bad and unexpected outcomes with jailbreaking.

Having said that, the chances of someone currently targeting jailbroken iPhones are low, because there are not that many of them. From the standpoint of a developer writing ‘malware that will run anywhere,’ it’s a very small user audience.

[Apple has a list of problems encountered by iOS users who have jailbroken their devices.]

Based on your work with enterprise IT in mobile deployments, how do they see jailbreaking?

They want a way to detect it. The iOS 4.0 release was focused on mobile device management: Jailbreaking sidesteps all that. Even when it’s a personal [iOS] device, IT is saying, “We know this is your personal device, but if you want access to corporate email on your phone, you need to have some security configured.”

A lot of end users may not realize all the other risks they take when they jailbreak.

Some say “jailbreaking is not a big deal.” And it’s not, from my perspective. But you don’t need a lot of the features you get with a jailbreak, and the phone is less secure. So why do it?

Why do you say jailbreaking is no big deal?

At Intrepidus Group, we’re always talking about this. We agree that jailbreaking isn’t an instant death sentence. If you’re a “consenting IT practitioner,” and you jailbreak your device, you probably know what you’re getting into. You know the risks.

But a lot of end users who do this don’t even change the root password on their device. That’s the problem: If you make an informed decision, it’s like being on your laptop as “administrator,” which you have to be in order to install programs. But on an iPhone, you can’t do that [legally], and the phone is intended to take care of itself. If you change this by jailbreaking, you take on the responsibility for doing what the phone was doing for you.

So, I would say it is a big deal for end users to jailbreak their phones. But the result is not technically different from most of the other devices out there on the Internet today.

But I think if users want to bring their mobile devices "into the corporate fold," they need to accept some things, like “not removing things that make your device more secure.”

Apple’s mobile device management APIs have been released to these [mobile device management software] vendors, but they’re under NDA. You and I can’t see them, or how they’re used. Apple doesn’t even give this information to the Department of Defense.

The classic mechanism is to write an application that tries to do things it shouldn’t be able to do. But the question is, will that still work in six months, with the hackers always one step ahead?
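A minimal sketch of that classic approach, assuming the common heuristics of the period: probe for files that ship with popular jailbreaks, then attempt a write outside the app sandbox that stock iOS should refuse. The path list is illustrative rather than exhaustive, and, as Allen’s question implies, a jailbreak that hides these artifacts defeats the check.

```swift
import Foundation

// A heuristic jailbreak check: look for jailbreak artifacts, then try an
// operation the sandbox should forbid. Paths are illustrative, not exhaustive.
func deviceLooksJailbroken() -> Bool {
    // 1. Files that commonly ship with jailbreaks (e.g., Cydia, SSH).
    let suspectPaths = [
        "/Applications/Cydia.app",
        "/usr/sbin/sshd",
        "/bin/bash",
        "/etc/apt",
    ]
    for path in suspectPaths where FileManager.default.fileExists(atPath: path) {
        return true
    }

    // 2. Try to write outside the sandbox; stock iOS should throw here.
    let probe = "/private/jailbreak_probe.txt"
    do {
        try "probe".write(toFile: probe, atomically: true, encoding: .utf8)
        try? FileManager.default.removeItem(atPath: probe)
        return true  // the write succeeded, so sandbox enforcement is gone
    } catch {
        return false
    }
}
```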
