Paul Squires on security and related topics

Tag Archives: telecoms

Some interesting analysis on why the iOS platform can be considered secure – largely as a result of the level of control Apple maintains over the hardware, OS and available applications for commercial purposes, and not because of any inherent design choice made for the sake of security.

To me, this opens up some interesting questions about the security design of the variety of programmable machines we now use, ranging from true “general purpose computers” to specific-function devices, and where a “phone” sits on that spectrum. We’ve moved a long way in the mobile device world in a very short amount of time – modern phones have more in common with our desktop computers than with first-generation mobiles.

One potential cause of security issues (particularly in embedded or specialist systems) is allowing the device to do too much (in a basic betrayal of the principle of least privilege). If I’m making, for example, a domestic refrigerator, I probably don’t need to include an HTTP server – unless I want to start adding “features” such as inventory checking over a network (because, y’know, it’s easier than opening the door). The issue then becomes that the HTTP server in question is configured by people who manufacture ’fridges and not by experts in Apache (or IIS!).
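That least-privilege point can be made concrete: a connected appliance should enable only the services its actual function requires, and anything beyond that set is pure attack surface. A minimal sketch (the service names here are hypothetical, not from any real firmware):

```python
# Toy least-privilege audit for a connected appliance.
# Service names are hypothetical illustrations, not a real device's config.

REQUIRED_SERVICES = {"temperature_control", "defrost_timer"}

def audit(enabled_services):
    """Return the services enabled beyond what the device actually needs."""
    return set(enabled_services) - REQUIRED_SERVICES

# A 'fridge that also ships a web server and a telnet daemon:
extras = audit({"temperature_control", "defrost_timer", "http_server", "telnet"})
print(sorted(extras))  # everything listed here widens the attack surface
```

In real firmware this check belongs at build or provisioning time, not at runtime – but the principle is the same: start from the required set and justify every addition.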

Phones (or indeed tablets) are hybrid devices – more than a ’fridge, but not as flexible as a laptop. That’s mostly a choice of the OS provider, though, and we see easy-to-use hacks (such as jailbreaking) to extend that flexibility. The problem is that, in almost all security systems, the weak link is the human element – by granting that greater flexibility we will see security issues, such as the default “root” password on iPhones, or the ability to run applications that have not been vetted. For those of us who are advocates of open systems this can be a dilemma – how can we give freedom, but ensure that the stupid edge of the user base is properly blunted?

This is worse when we consider what “security” means to the vendors rather than the owners of the device – preventing people from playing unauthorised media (DRM) or using functions that would “inhibit” revenue (smartphone data tethering).

This brings us back to a point about who controls the update process for a device and when those updates are released. The great success of Apple has been to remove control from the carriers – they deliver the update to your computer and the device is updated when you sync media. It’s elegant, and it means that more people have the latest versions. Other devices do not fare so well – over-the-air delivery is one thing, but it is potentially less reliable, uses precious mobile bandwidth and pushes your phone back to being at the mercy of whoever controls that channel.

One other point about patching: whilst it’s almost always better to patch, there have been plenty of examples where it has caused more problems – ranging from new security flaws and unexpected changes in functionality to rendering a device unusable. It’s taken Microsoft many years to establish a process that works, where Windows users can be kept up to date without too much worry – and even so, it’s always possible to roll back. Would that be possible with an over-the-air update that somehow renders the device unable to reconnect? Not a likely scenario, but something to consider.

As we get more and more connected devices, understanding the software used and its potential vulnerabilities will become more important. How we can quickly and easily update those devices, correcting the errors, is a vital part of the system, but it will never be the most important part – the ability to work around the security issues of the human element will be.

Ultimately, Apple may have the most “secure” OS, but that’s because it’s one of the most locked down. Security is easy to achieve on any system – switch it off, lock it away somewhere impenetrable and don’t allow any inputs or outputs. Making it usable and secure is slightly tougher.

What’s interesting to me about this ongoing story (how many years is this now?!) is the lack of detail and information from a security perspective and even the basics about what has been alleged.

From following the story I’m still not entirely sure what is meant by “phone”; does it refer to a handset itself, or a telecoms network? I’m also not sure what is meant by “hacking” in this case although I’m assuming it’s not someone jailbreaking an iPhone…

Either way this is less of an individual privacy story and more one related to criminal misuse of computer systems. Where are the network operators involved in all this? Shouldn’t they be the ones calling for an investigation, or at the very least demonstrating that the networks they run are not so easy to “hack”?

The media coverage of this whole “event” is pathetic. A sample line from the BBC Q&A (linked to from the above story) is –

Who do we know was hacked?

I’d go so far as to say that, with regard to this, nobody has been hacked – unless some battery and ABH charges come out of it.

What’s missing is clear and concise information about what has happened. This affects all of us – individuals and businesses – who use commercial telecoms networks, not just celebrities and politicians (although I’d include them in the former category nowadays). At the very least there’s a fantastic upsell opportunity for someone…

In these days when Google and Facebook are slammed for not providing satisfactory privacy controls (even though users willingly share information on those services), I find it disgusting that the people responsible for controlling these systems are not being questioned.

Update (27 Jan 2010 @18:47): Some more information from The Register. The comments on this story indicate that there’s no real “hacking” in any true sense – just taking advantage of the ability to access voicemail from other ’phones, along with easily guessable PINs. Perhaps there’s an easy lesson to be learnt here.
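The scale of that weakness is simple arithmetic: a four-digit voicemail PIN allows only 10,000 combinations, and common defaults shrink the search dramatically. A rough sketch (the common-PIN list is illustrative, not drawn from any real dataset):

```python
# Rough arithmetic on how weak a 4-digit voicemail PIN is.
# The common-PIN list below is illustrative, not from any real dataset.

total_pins = 10 ** 4  # 0000-9999
common_pins = ["0000", "1234", "1111", "2580", "0852"]  # typical defaults/guesses

print(f"Total 4-digit PINs: {total_pins}")
print(f"Guesses needed if the PIN is a common default: {len(common_pins)}")
# With no lockout, exhaustive search at one attempt per second takes:
print(f"Worst case: {total_pins / 3600:.1f} hours at 1 try/sec")
```

A lockout after a handful of failed attempts, or simply refusing remote voicemail access with the default PIN still set, closes most of this gap – which is presumably the easy lesson.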