Posted
by
Soulskill
on Tuesday February 02, 2010 @12:31PM
from the feel-free-to-listen-in dept.

viralMeme writes "According to the Register, 'Security researchers have turned their attention to femtocells, and have discovered that gaining root on the tiny mobile base stations isn't as hard as one might hope.' One of the researchers said, 'After hours of sniffing traffic, changing IP address ranges, guessing passwords and investigating hardware pinouts, we had obtained root access on these Linux-based cellular-based devices, which piqued our curiosity [about] the security implications.' Whoever designed these devices should be sent back to computer school. An authentication device that can be bypassed is a contradiction in terms. Or, as some pen-pusher would put it in a report: an unanticipated security excursion."

The very concept of femtocells is bass-ackwards. You pay a carrier for wireless access, then pay again for a device to actually provide you with that wireless access, along with monthly fees for the device, and you also pay for the internet access the device needs to connect to the carrier.

It's like "we couldn't be bothered to actually provide you with coverage at your home/office, so would you mind building out our network for us, and pay us extra for the privilege of doing so".

You pay for the hardware, and the 'minutes' at the normal rate, but no carrier I have seen charges you per month for owning the cell. It isn't nearly as sinister as you describe, since their network still has to haul the call where it's going, even if you do in fact bring it to them via the Internet.

You are right that it's 'their job' to provide you with coverage, but no carrier asserts that they will go to any length necessary to cover 100% of the earth with 100% usable signal. Verizon's ad campaign featuring an army of tower workers following customers around was hyperbolic. Sorry if you got confused.

You also pay for the power needed to operate the cell, which presumably their other customers benefit from. If they put a full cell site on your property, they'd typically pay you between $10,000 and $25,000 per year to lease the right to do so (even if it is just putting it on top of an existing structure). Why should they get to place a femtocell at your house for free merely because it runs at a lower power? At a minimum, they should give you a discount on your monthly charge and free service on that cell. Anything less is outright taking advantage of you.

so would you mind building out our network for us, and pay us extra for the privilege of doing so

Nonsense. I bought a unit to extend Verizon's coverage into the areas of my house that the local tower just can't handle. Like, down in the basement - a level of service that no carrier is going to say they'll promise. Verizon doesn't charge me anything for using it, other than the cost of the hardware - a one-time purchase that I gladly, gladly made. And I can sell the unit any time I want, and any other Verizon customer can use it - and there's no account-related paperwork involved. The devices just work. They look for a DHCP server on your LAN, and off you go. You do need to fire them up near a window until they get their GPS bearings, though. But they don't have to stay there.

You know what else is nice? The household mobile phones now only have to talk to a transceiver that's a stone's throw away, instead of a quarter of a mile or more away. That means much better battery life when they're not tethered to a charger.

I have the same device for Verizon. My house is basically a Faraday cage since I have a steel roof and chicken wire in the outside walls.

My only issue is that the location based services on my Droid get all screwed up and think I'm a couple hundred miles from where I'm at. I just got a callback from Verizon on this about 30 minutes ago. Apparently this is due to the fact that the network extender only does 1x and not EVDO, but he also said they're looking into enhancing the firmware so that it will support

My house also has a steel roof, and many (not all) walls are plaster with metal lath. My Droid usually locates me just fine indoors, usually within a hundred feet or so. Worst case I've seen is when it can't get GPS at all, and seems to fall back on the location of the current tower a few blocks away -- which is also "close enough" for most things that location data is useful for indoors (searching for local restaurants, for instance).

According to Verizon this is a known problem of the femtocell and the cell phone. It shows my location to be almost 400 miles north. Verizon claims it is due to the fact that the femtocell does not handle EVDO and that my Droid is also picking up an EVDO signal. The engineer said that they are working on adding EVDO support to the femtocell, possibly with a firmware upgrade, which would solve that problem (and give the advantage of EVDO as well). Weatherbug, for example, will often show that I'm in Cresc

I had a land line years ago, but besides almost never using it after I got my cell, it was notorious for going out. I actually called from my cell to cancel the service, and the person at the phone company tried to get me to keep it at a low level of service, since cell phones are unreliable. I reminded her I was calling from my cell because I had picked up the land line and had no dial tone, and the land line averaged that level of outage about once per 6 months that I was aware of, which, since as I said I hardly ev

Say the carrier provides 95% coverage. Getting that last 5% is prohibitively expensive, and only a small portion of possible customers will benefit. It just doesn't make sense for a carrier to saturate every place with cell towers to the point where they have 100% coverage.

If you are a consumer living in a dead spot, you can rant and rave, but if the amount they can earn from you and the others who want coverage there doesn't cover the cost of an additional tower, it makes little sense for them to build one.

If you're encrypting stuff with X's public key, then only whoever has X's private key can decrypt it. So, in essence, you're certain you're talking to X and not someone pretending to be X.

So, by displaying the hash of the public key of the device you're talking to, you're effectively showing the true identity of who you're talking to.

I think the OP's idea is that you can use this information to be sure you're connecting to your own femtocell (on which you have fixed the vulnerability) and not your neighbor's (possibly hacked) femtocell.
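The fingerprint idea being discussed here is simple to sketch: hash the serialized public key and display a short digest the user can compare against a known-good value. This is a minimal illustration, not the actual mechanism of any femtocell; the key bytes below are a placeholder.

```python
import hashlib

def fingerprint(pubkey_bytes: bytes) -> str:
    """SHA-256 fingerprint of a public key's serialized bytes,
    shown as colon-separated hex pairs (in the style of SSH key
    fingerprints). Comparing this short string is equivalent to
    comparing the full key, assuming the hash is collision-resistant."""
    digest = hashlib.sha256(pubkey_bytes).hexdigest()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

# Placeholder stand-in for a femtocell's DER-encoded public key.
my_femtocell_key = b"dummy DER bytes for illustration"
print(fingerprint(my_femtocell_key))
```

If the phone displayed this for the cell it is attached to, the owner could verify at a glance that it matches the fingerprint printed on (or computed from) their own unit.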

You surely can stick it into another femtocell, but that will do you no good. This new femtocell can't use this key to communicate, because it doesn't have the corresponding private key.

To give another example: I can get the public key from any bank site and stick it into my own web server. This doesn't mean I can trick people into thinking my web server is the bank's -- I won't be able to decrypt anything they send me!
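The point above can be demonstrated with textbook RSA using deliberately tiny primes (real keys are thousands of bits; this is purely illustrative): anyone can encrypt with the public exponent, but only the holder of the private exponent can reverse it.

```python
# Toy RSA with tiny primes, purely to illustrate the point:
# knowing the public key (n, e) does not let you decrypt; you need d.
p, q = 61, 53
n = p * q                     # public modulus (3233)
e = 17                        # public exponent
phi = (p - 1) * (q - 1)       # 3120
d = pow(e, -1, phi)           # private exponent (modular inverse, Py 3.8+)

m = 65                        # a "message"
c = pow(m, e, n)              # anyone can encrypt with the public key
assert pow(c, d, n) == m      # only the private-key holder recovers m
assert pow(c, e, n) != m      # re-applying the public key does NOT decrypt
```

So copying the bank's (or your femtocell's) public key into a rogue server gives the impostor nothing: it can receive ciphertexts, but it cannot read them.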

Well sure, but this would only be an acceptable solution for the most paranoid of technically minded people. Nobody wants to manually confirm a public key each time they make a phone call. It's also quite worthless if you are not controlling the femtocell in question, which would be the vast majority of the time.

But, if an attacker can get control, then so can the owner, which means the owner can fix the security hole.

Not really... you're assuming the flaw exists in software. Regardless, I'm interested to see a "fix" for a vulnerability get published which requires people to hack their phone and gives them a list of memory addresses and values that need to be changed. That would go over well.

I believe we usually call "fixes" requiring people to "hack" their phones "firmware upgrades" - The fact that many of us hack our phones with other firmware / software doesn't change what the company is going to call it. It would seem to me to be fairly easy to set up even cheap phones for such a firmware upgrade. Any old phone would need to be replaced at end of contract or it simply would stop functioning. While this won't immediately solve the privacy issues, it would provide for a workable solution. For

Better passwords would have made all the difference in the world. A 16-character password with mixed case and symbols would have been enough of a roadblock to prevent them from gaining access. Too many companies are still shipping products that have no intended user access to the command shell with passwords like "Admin", "12345", and the ever-popular "password". It's not like it costs more to have a longer, more complex password.

The problem is not what the default password is. It could be blank and still not significantly affect the security of the device. It's the admins who don't change the default password that are to blame. Let's face it: even if they ship the next device with a 16-char mixed-case, special-character, number-containing, sufficiently random password, it will still be the default password. A simple Google search for "Device model default password" will get you the default password pretty much as soon as it's rele

I can't say that I agree. Yes, having to guess both the username and the password does improve security, but no more than simply making the password that many characters longer. Essentially you're just making the username part of the password.

The only reason that your approach would add value is if the password length were somehow artificially limited, and the username were protected like a password and assigned using strong password conventions.

Why hasn't anyone suggested that devices like these need to have something like fail2ban built in? Lock people out who are trying to brute force the internal ssh server. This exploit probably doesn't represent anything terribly interesting from a server or desktop perspective. It's likely that the device just isn't put together terribly well and no one ever considered someone trying to hack it like a server.

That would allow for a DoS attack - unless the ban were only temporary. Just connect to a device and give it x wrong passwords, and now even the owner can't get into it.

The solution is what I think is already employed for things like ssh: connection throttling. If the ssh server does not allow a given IP to attempt more than one login per second or two, that won't impact legitimate users at all, but it pretty much eliminates brute force attacks against all but the weakest passwords.
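The per-IP throttling described above can be sketched in a few lines. This is a toy model, not how sshd actually implements it (sshd has `MaxStartups` and PAM-level mechanisms); the class name and clock-injection parameter are my own, for testability.

```python
import time
from collections import defaultdict

class LoginThrottle:
    """Allow at most one login attempt per `interval` seconds per IP.
    A sketch of the throttling idea, not a real SSH server hook."""

    def __init__(self, interval=2.0, clock=time.monotonic):
        self.interval = interval
        self.clock = clock  # injectable so tests can use a fake clock
        # First attempt from any IP is always allowed (last seen = -inf).
        self.last_attempt = defaultdict(lambda: float("-inf"))

    def allow(self, ip: str) -> bool:
        now = self.clock()
        if now - self.last_attempt[ip] < self.interval:
            return False          # too soon; drop this attempt
        self.last_attempt[ip] = now
        return True
```

Unlike a hard lockout, a legitimate owner typing at human speed never notices the limit, while a brute-forcer is capped at ~43,000 guesses a day per IP, which is hopeless against anything but the weakest passwords.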

Maybe they could give a custom password to each device, and then have their assembly line print out the default password on the bottom of the device. They already print a serial number. Why not print a password? Each device would have a different default password. You may want to keep a highly guarded list of passwords/serial numbers for customer support issues, but if it's printed on the bottom of the device, I would say even that is unnecessary.

Using the SN severely limits the possible space unless the SN itself varies widely from device to device. This is a good place to start, but make sure it's a significant portion of the SN and the SN itself is very long and nonsequential. Better yet, derive the password from the SN with a keyed hash, using a secret key so it can't be reversed. Going from a trillion possibilities to a million may sound like a trivial problem, but it really hurts the depth of security; all the attacker needs is a sufficient

Simple, some devices require no log-in to make use of them (such as the femtocell, or almost every other firewall-router) since the default settings are sufficient for 99% of users. In this case, you don't want to burden the user with setting (and then forgetting) the password to the device just to make use of it. Set it to something strong and unique, and give it to the user in a form that is secure (a sticker on the box which can be clipped and saved, or a sticker on the unit). The final effect is that

Even better might be having a cryptographic token, either something like a SecurID card except with a replaceable battery, or a USB smart card that stores a private key on board. This way, an authorized user just needs to dig out the keyfob, jam it in a port or type in the 6-8 digit number plus the password as mentioned above, and access is granted. A remote attacker most likely would not have physical access to the cryptographic token, so that slams the door on a lot of attacks right there, forcing the bl

I was thinking about the entire S/N, which should be 20-40 alphanumeric characters (and, of course, nonsequential; maybe an MD5 hash of the order the device was built in prefixed by model information). Provides a nice incentive for the user to change the password, as well. 36^20 is a nice, big search space and 36^40 should be enough to keep any naive attacker at bay until the device has been replaced.

Maybe, if you want to be particularly user-friendly, you could use shorter S/Ns, but I wouldn't go below 10.
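The arithmetic behind those search spaces is worth making concrete. A uniformly random string over a 36-character alphanumeric alphabet carries log2(36) ≈ 5.17 bits per character, so:

```python
import math

def keyspace_bits(alphabet_size: int, length: int) -> float:
    """Entropy in bits of a uniformly random string of the given length."""
    return length * math.log2(alphabet_size)

# The 36-character alphanumeric serials discussed above:
print(round(keyspace_bits(36, 20)))  # ~103 bits
print(round(keyspace_bits(36, 40)))  # ~207 bits
print(round(keyspace_bits(36, 10)))  # ~52 bits, the suggested floor
```

Even the 10-character floor (~52 bits) is far beyond online brute force through a throttled login, though it would be within reach of an offline attack on a stolen hash; the 20- and 40-character versions are safely out of reach either way.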

That's not a good idea, since the serial number is normally - eh - serial in nature. You can easily scan the range. The other option is to use a random "serial" number, but then you need to make changes to the organization. If you are going that way, just print a random number after the serial number (starting with "PW" to distinguish it from the serial number, please; we've got enough "anonymous" numbers as it is).

Easy way to relate serial numbers to passwords: append a secret value to the S/N, hash the result (SHA-512 comes to mind), and take the first x characters (preferably more than 20; 64 would be best). This way, the serial number doesn't really matter, because without the secret appended, the hash won't give meaningful information.

Of course, the machine that holds the secret value (and I hope this is something that changes with each model) is going to be heavily locked down.
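A sketch of that derivation, with one tweak: HMAC is the standard construction for "hash of secret plus message" and avoids the length-extension quirks of naive `hash(secret + serial)`. The secret here is obviously illustrative, as is the 20-character Base32 output (chosen so the password is typeable).

```python
import base64
import hashlib
import hmac

# Illustrative only; the real secret would live on the locked-down
# factory machine and rotate per model.
FACTORY_SECRET = b"per-model secret kept off the device"

def default_password(serial: str, length: int = 20) -> str:
    """Derive a per-device default password from its serial number.
    Without FACTORY_SECRET, knowing one device's serial (or even its
    password) tells an attacker nothing about any other device."""
    mac = hmac.new(FACTORY_SECRET, serial.encode(), hashlib.sha512).digest()
    return base64.b32encode(mac).decode()[:length]
```

The factory prints `default_password(sn)` on each unit's sticker at the same time it prints the serial; support can regenerate any unit's default on demand without storing a password list at all.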

The problem is not what the default password is. It could be blank and still not significantly affect the security of the device. It's the admins who don't change the default password that are to blame. Let's face it: even if they ship the next device with a 16-char mixed-case, special-character, number-containing, sufficiently random password, it will still be the default password.

It could have a randomly generated password printed on the same sticker as the serial number and MAC address.

Or you could use the serial number as the initial password and require the administrator to change it at first login, thus making it impossible to configure the device without first setting a password. Include a convenient physical reset button to reset it to factory configuration (including password) if you screw up, but make sure that this forces you to reconfigure everything before the device is usable.

Of course, this assumes that it is necessary to do at least some configuration in order to use the dev

Simple passwords have to be reset less often, which means less cost in customer/luser support calls. Not by a lot, but not entirely negligible.

Also, a complex password usually has to be written down or requested often, which leaves room for social engineering. So having an unnecessarily strong password can actually reduce overall security by increasing other attack vectors.

Any system that lets a user brute-force the password is inherently flawed. Hell, even Windo

Yes, there is a cost: a company installs plug-n-play device A. It works for a while (months, years). Then it stops working, or they want something changed, or it doesn't work with some new device B. So then they call me to figure out the integration. Now I need to log in and find out as much as I can about the device in as short a time as possible. I'm over 100 km from the device and have never used one before. The person who originally installed device A has retired and is now snorkeling in the Solomon i

I think what you meant to say is there is an inherent cost to being forgetful (forgetting the password before writing it down in a safe place) or lazy (not writing it down in a safe/perpetual place). Yes, if the alternative is leaving a password susceptible to casual attack, feel free to write the password down and lock it in your desk drawer with the IP of the device on it, and leave that post-it around for the next guy.

That's real nice if everyone cooperates, but it is all too easy for a disgruntled admin to change either the password or the password database, and lock the next guy out. Wasn't there such a psycho last year who was screwing with CA utilities or some ISP long after he'd been fired (for being a psycho)?

Too many companies are still shipping products that have no intended user access to the command shell with passwords like "Admin", "12345", and the ever-popular "password". It's not like it costs more to have a longer, more complex password.

You think longer, complex setup doesn't cost the company money? I gather that you haven't considered support costs?

The best solution I've seen so far is to have a strong password printed on a sticker on the outside of the box. That's a pretty good compromise because if the attacker has physical access to the box, he/she could hit the "Reset" button on the device anyway. Thus, putting the password on the bottom of the device on a sticker really isn't any less secure than other solutions, and this can be done fairly cheaply.

But it still costs - each router has to be given its own unique password, and a process has to be set up to match up the passwords given with the stickers, and there are still more support costs from the clueless dolts who have to be told to look on the bottom of the device for the default password.

If you assume any intelligence on the part of the end user, your support costs will quickly challenge that assumption!

The device ships with its user interface completely locked. There's no possibility to log in. Press a button on the device, and you can log on using default credentials; doing this will prompt you to change the username and password. After doing this, the button can be used to perform a full reset of the device.

Basically, the device is secure out of the box - when logging in for the first time, you need to provide physical authentication, and afterwards you

Better passwords would have made all the difference in the world. A 16-character password with mixed case and symbols would have been enough of a roadblock to prevent them from gaining access. Too many companies are still shipping products that have no intended user access to the command shell with passwords like "Admin", "12345", and the ever-popular "password". It's not like it costs more to have a longer, more complex password.

Neither is it any more secure. Having the same 16-char password on every unit of a product only makes it frustrating to use, not any more secure. What is needed is an individual password for every unit based on something unique like the serial number of the unit, and this WOULD cost more money for production AND support. Also, you would alienate a portion of the market, because this seemingly simple thing will be well beyond their ability. Stupid people will always exist; it is the burden of society to

"Security Excursion" gets 50 Google hits, most of which seem to be talking about boondoggles and outings. ("Excursion" about "security".)

One google hit [gcps-ocs.com] supports GFP's use of the phrase, though:

Security Vulnerability Threat Assessment Audit: The scope of Gulf Coast Project Services audit process goes beyond Public Safety. It encompasses Business Interruption and Corporate Survivability. The objective of this audit is to leverage existing work processes and standard guidelines in order to determine gaps in a

The Reg article kinda brushed off the risks of a cell-tower MITM attack, relegating it to a mere "loss of privacy" because the 3G cryptosystem is strong.

I assume it means that the cryptosystem is too strong for a realtime attack. It's a damn rare cryptosystem that can't be broken using enough stored ciphertext, so if the modified femtocell is storing and forwarding all traffic, traffic analysis + theoretical weaknesses in the algo + massive compute power == recovered clear material at some point in the future. Depending on the use case, there may be a lot of value in that.

"I assume it means that the cryptosystem is too strong for a realtime attack. It's a damn rare cryptosystem that can't be broken using enough stored ciphertext, so if the modified femtocell is storing and forwarding all traffic, traffic analysis + theoretical weaknesses in the algo + massive compute power == recovered clear material at some point in the future."

Cryptosystems that can't be broken given enough stored ciphertext aren't actually that rare. And they are definitely not hard to construct nowadays (especia

Don't use the regular 3G voice calls; use only encrypted VoIP. Preferably with a microSD card filled with one-time pad.

Of course it's not actually a bad thing that these are hacked; people just need to realise that their communications are not secure. Just like when I use my Nokia's SIP client: I know full well that it would be easy for the person whose WiFi I'm using to intercept my calls, but I take the chance anyway.

Femtocells rely on 'security against the user' much like DRM does, in fact a larg

use only encrypted VoIP. Preferably with a microSD card filled with one-time pad

Say what? Either you don't know what a one-time pad is and are just pulling crypto terms out your ass, or you have really weird telephone habits. OTPs never make sense unless you are a spy deep in enemy territory and you need to transmit a handful of words with perfect security to a single receiver. The logistical issues with a system like the one you are proposing are absurd.

One-time pads are truly secure, but the hard part is getting a copy of the OTP from Alice to Bob via a secured route, as anyone who intercepts it has full and unfettered access. Also, depending on the amount of data transferred, the bytes stored on the OTP might run out.
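For anyone unfamiliar with why a one-time pad is both trivially simple and logistically painful: the cipher itself is a plain XOR against a key as long as the message. All the difficulty is in generating, distributing, and never reusing that key material. A minimal sketch:

```python
import secrets

def otp_crypt(data: bytes, pad: bytes) -> bytes:
    """XOR each byte of `data` with the pad. The same function both
    encrypts and decrypts. Information-theoretically secure ONLY if the
    pad is truly random, at least as long as the message, and never
    reused -- which is exactly the logistics problem described above."""
    assert len(pad) >= len(data), "pad exhausted"
    return bytes(d ^ k for d, k in zip(data, pad))

msg = b"meet at noon"
pad = secrets.token_bytes(len(msg))   # consumes pad material 1:1 with data

ciphertext = otp_crypt(msg, pad)
assert otp_crypt(ciphertext, pad) == msg  # decryption is the same XOR
```

Note the 1:1 consumption: a microSD card full of pad holds a fixed number of call-minutes of key material, and both phones must hold identical copies delivered over a channel you already trust, which is why the parent calls the scheme absurd for telephony.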

Instead, if you are designing a cryptosystem where the two endpoints are "introduced" to each other, and essentially only talk to each other, so public key cryptography isn't needed, there is one method you can do:

The summary mentions "investigating hardware pinouts". This makes me think that the attack is, in part, on the hardware. If one has access to hardware, they've pwned the system. Period. So this is a non-issue.

Second: cell phones trusting the base station has always been a security issue. And "exploits" based upon this weakness are already in use by law enforcement as well as criminals. The whole problem of inmates sneaking cell phones into prisons has been made a non-issue by this very approach: prisons are beginning to cover their facilities with femtocells, which give them the ability to monitor all illicit cell traffic on their property. Any truly secure system will assume that the network carrying its traffic is insecure.

I'd presume (without having RTFA of course) that what is meant is that they bought a femtocell, looked at its hardware pinouts, and this helped them devise an attack that would work on any instance of that model of femtocell (without physical access).

Whoever designed these devices should be sent back to computer school. An authentication device that can be bypassed is a contradiction in terms.

First of all, this is not an authentication device, it's a cell network extender, which obviously requires some kind of authentication for any measure of security. What "Authentication device" (I think they mean "authentication mechanism") has never had a vulnerability exposed? Are all devices with a privilege escalation vulnerability designed by people who "should be sent back to computer school?" ("computer school?"...seriously?). How many privilege escalation vulnerabilities were found in the Linux kernel last year? I empathize with the fact that an escalation exploit this serious in a device that is designed to be used by the public is not a trivial matter, but the poster is being sensationalist here, and, honestly, comes across as undereducated in the subject matter. I wouldn't consider myself an expert, but this person doesn't seem to have a clear understanding of the issue. It's a security vulnerability in a device that runs Linux because the designers were lazy when picking a password.

The real issue here is the fact that security is sometimes not taken as seriously with hardware and firmware design in commodity devices as it is with software.

"The real issue here is the fact that security is sometimes not taken as seriously with hardware and firmware design in commodity devices as it is with software."

I love that last statement. It's not only not taken seriously, it is rarely programmed by someone educated on the subject. And the users of these systems are also to "blame". Even I, when browsing for a new ADSL modem, don't look at the state of the security in a device. I'll look at whether a router has WPA2, but that's about the extent of it. This is not

Just what is that supposed to mean exactly? Does this crack require physical access in order to be executed?

"We've sniffed for hours, and nothing.""Try a different BOOTP request!""Damn orinoco firmware...""This sucks, how are we gonna get a publication out of this?""Fine, gimme the bolt cutters" *snip* *clink*..."Hmm.. those are intersting pinouts.. they look like..""Yeah, dude that's SATA !!"... *knoppix cd spins up*

I've been working on hacking the Vodafone femtocells for fun. They have an internal serial port and the bootloader has no security, not to mention the Linux image uses short default passwords that are easy to crack given the shadow file. So far we don't know of a way to get root given only network control, but it might be possible depending on how their IPSEC tunnel is set up. Our goal would be to use these for our own network, via OpenBSC.

It's worth noting that it's early and we're not entirely sure about the security implications and just how much you can do with these things (e.g. I don't know yet if voice traffic is decrypted inside the femtocell or if it is passed on encrypted to the servers). Chances are there will be some interesting exploits, and chances are they will be presented at this year's Chaos Communication Congress if they're interesting enough. Unless we get bored and work on something else, which happens sometimes.
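To show why "short default passwords that are easy to crack given the shadow file" falls so quickly: once the hashes are offline, a dictionary run finishes instantly. This sketch uses plain SHA-256 for brevity; a real /etc/shadow uses crypt(3) formats (md5crypt, sha512crypt) and the researchers' actual method isn't described, but the principle is identical. All names and the "stolen" hash here are made up.

```python
import hashlib

def sha256_hex(pw: str) -> str:
    """Stand-in for the device's password hash function (illustrative;
    real shadow files use salted crypt(3) schemes)."""
    return hashlib.sha256(pw.encode()).hexdigest()

# Pretend this hash was pulled from the device's shadow file.
stolen_hash = sha256_hex("admin123")

# A tiny wordlist; real ones contain millions of entries and still
# run in seconds against unsalted or weakly chosen passwords.
wordlist = ["password", "12345", "root", "admin", "admin123", "letmein"]

def dictionary_attack(target, words):
    """Return the first word whose hash matches, or None."""
    for w in words:
        if sha256_hex(w) == target:
            return w
    return None

print(dictionary_attack(stolen_hash, wordlist))
```

This is why per-device random defaults (or at least forcing a password change on first login) matter: a shared short default turns one shadow-file leak into root on every unit shipped.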

There are two modes: 'anyone' or 'from a list'. 'Anyone' means that any Sprint customer in range can use the device, up to the preprogrammed maximum of 3 simultaneous calls. 'From a list' means that only the phone numbers on a preselected list are allowed to access the box. The problem is that if you are a Sprint customer and your # is not on the list, you can't have ANY service at all. You are in a 'private network' and therefore excluded from BOTH the Airave and connections to a local tower.

I spoke with Harald Welte (of OpenBSC etc. fame) at ELC Europe back in October. He told me that he successfully gained root access on one of those femtocells sold in the UK. As far as I remember, he said that it was not very difficult to get access, and also that he found some of the built-in features (e.g. the check that it is operated in the correct location) not working.

On the other hand: this was bound to happen. Most embedded Linux systems which have at least some remote hack value tend to get opened up some day.

We tested one of those at 26C3 using a simple VPN to the UK, so we had a Vodafone UK network in Germany (and successfully placed a call). This is Not Supposed To Work (and at this point we hadn't made any changes to the software yet). It seems that, beyond nonexistent physical security, the location-determination features and other measures in place to prevent use in the wrong place/country aren't working very well, or at all.