
Something that bothers me about this that I haven't seen mentioned anywhere is that "hacking" into anyone's voicemail is far too easy to do.

Every voicemail system I have had has used a 4-digit PIN to protect it. Many times it comes pre-set with the last four digits of the phone number as the PIN, or something simple and obvious like 0000 or 1234. As far as I know, no company implements throttling or lockout when too many wrong combinations are tried, meaning that with an automated dialer you could try every possible combination in under four days even if you could only make one attempt every 30 seconds.
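A quick back-of-envelope for that brute-force time:

```python
# Back-of-envelope: brute-forcing a 4-digit voicemail PIN with an
# automated dialer, at one attempt every 30 seconds.
SECONDS_PER_ATTEMPT = 30
combinations = 10 ** 4        # 0000 through 9999

worst_case_hours = combinations * SECONDS_PER_ATTEMPT / 3600
average_hours = worst_case_hours / 2

print(f"worst case: {worst_case_hours:.0f} hours")  # worst case: 83 hours
print(f"average:    {average_hours:.1f} hours")     # average:    41.7 hours
```

And that's the pessimistic case; a dialer with several lines could divide those numbers by the number of lines.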

I don't know what else is required to "hack" into someone's voicemail but I'm tempted to think that if a private investigator could do it, the phone companies are not trying hard enough.

Punishing the people who have committed this crime is appropriate, but it is not sufficient to prevent the crime from being committed again.

Thank you. Unfortunately, the paper is hidden behind the customary paywall but the abstract contains enough information to let me know that the summary here has missed the point entirely.

The universe being 1+1 and 2+1 dimensional in its past is not a new theory. You will find it in Stephen Hawking's "A Brief History of Time", amongst other places. The trouble with the theory has always been that we didn't know a way of testing it. These guys are proposing a way to test it. Well done to them.

Have another read of his comment. He sent an email To: one person and Bcc: to that person's boss. The boss receives an email that does not have his own address anywhere in it. When the boss hits reply-all, the email goes to two people: the person who sent the original and the person to whom it was addressed. Because the boss was the one in the Bcc field, his address appears nowhere in the headers, so even reply-all doesn't send an email to himself.

There is no adding "everyone in by hand" because there are only two people who receive the boss's email and neither of them were in the Bcc field.
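The scenario can be sketched with Python's standard email module; the addresses here are made up for illustration:

```python
from email.message import EmailMessage

# Hypothetical addresses, for illustration only.
msg = EmailMessage()
msg["From"] = "sender@example.com"
msg["To"] = "colleague@example.com"
# The Bcc header is stripped before delivery, so the copy the Bcc'd
# boss receives carries only the headers above; his own address
# appears nowhere in it.
boss_copy = msg

# Reply-all answers the From address plus everyone in To/Cc of the
# copy you hold -- exactly two people here, and not the boss himself.
reply_all = sorted({boss_copy["From"], boss_copy["To"]})
print(reply_all)  # ['colleague@example.com', 'sender@example.com']
```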

No. That horse has well and truly left the stable. They don't care about the key or the software any more. The key was only useful while it was a secret.

What they want to do now is punish GeoHot in a public way so that, in the future, anyone who starts down the path of determining signing keys to run homebrew software (or at least anyone about to publish those keys) will think of him first.

Sony are thinking about the future now, not the past. The best thing for them about this is that the publicity will be concentrated on the kinds of sites frequented by the kinds of people who can find signing keys, and will be largely unnoticed by the general purchasing public. If everyone who has ever heard of GeoHot boycotted Sony, Sony would barely notice. But if everyone who has the skill to find the new keys has heard of GeoHot, Sony will have less to worry about.

Sorry for the wall of text I posted. To help, I'll extract just the relevant bit that answers this question.

What makes you think an attacker who can get part-way into your system can't get the rest of the way ?

It's true that for a short while an attacker may have only got part of the way to everything he needs. He has the hashes in your scenario and does not have the source code or the salt. Are we imagining a CMS that allows raw database access and doesn't have a file browser or any upload capabilities ? I've never seen one like that. Either of those capabilities would likely allow for easy and quick privilege escalation.

Hashes are usually not exposed in a CMS and you need to have greater access such as a database connection or file system access to get to them. Although you can create a scenario in which an attacker has the hashes but nothing else, such a scenario won't happen in the real world. If anything, it would make more sense to store the password hash rather than the salt on the more secure internal box, but since they are always needed at the same time, they are always stored together.

That said, there is something to be said for separation of responsibility and least-privilege. In the real world, the extra couple of hours it takes him to get from the web admin to the shell and from the shell on to the authentication box might be just long enough for your sysadmin to notice that something is going on and respond. Your plan will not stop an attacker dead in their tracks by any means but it may slow him down enough that you don't have to tell your customers to reset their passwords.

You're absolutely right about the first part and wrong about the rest.

Any attacker that has access to your hashed passwords will have access to your source code and the salt.
How ? Unless you have exposed your database directly to the internet, he has attacked you through the web-facing part. So he knows how you created your salt, because he has the source code that created it, and he knows the value of the salt for any password, even if it's in a separate database, because your web application must have it in order to do authentication. If the web application can request it from the other database then so can he. If (as you suggested) the authentication is done on a separate box with a well-defined API, the attacker can happily use that API or simply continue his waltz through your system until he has access to that box as well. A well-protected box ? What makes you think an attacker who can get part-way into your system can't get the rest of the way ?

All of this effort creating a separate database, a separate authentication box, encrypting your salt or attempting to keep it secret, adding weird things like email addresses to your user's passwords and other techniques of that ilk is merely adding complexity that is entirely unnecessary.
Why ? Because you should be using a password-hashing scheme like bcrypt (there are other schemes with the same properties). bcrypt lets you artificially slow down the brute-force discovery of passwords by stretching the hashing algorithm itself. If you aren't happy with attackers being able to brute-force 10,000 potential passwords per second, increase bcrypt's cost factor so that hashing takes 10x as long. Now they can only do 1,000 per second. In five years' time, when computers are 10x faster, increase the cost factor again to make it 10x slower. Brute-forcing faster than that would require a breakthrough in the mathematics behind cryptography. There is nothing (apart from using salt) that you mentioned above that can't be adequately achieved by stretching bcrypt out a little further.
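The comment names bcrypt, which needs a third-party package, but the tunable-cost idea can be sketched with the standard library's PBKDF2 instead; the password and iteration counts below are illustrative:

```python
import hashlib
import os
import time

def hash_password(password: bytes, salt: bytes, iterations: int) -> bytes:
    # PBKDF2 stands in for bcrypt here: the iteration count is the
    # knob that makes every guess more expensive for an attacker.
    return hashlib.pbkdf2_hmac("sha256", password, salt, iterations)

salt = os.urandom(16)

for iterations in (10_000, 100_000):
    start = time.perf_counter()
    hash_password(b"hunter2", salt, iterations)
    elapsed = time.perf_counter() - start
    print(f"{iterations:>7} iterations: {elapsed:.4f}s per guess")

# Ten times the iterations means roughly ten times the cost per
# brute-force attempt; raise the number again as hardware gets faster.
```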

Salt is protecting you against exactly one threat and it protects against that threat perfectly when it's random, large and stored right next to the hashed password. Nothing more is required of salt.
How large ? Well, large enough that when combined with the shortest password your system allows, a rainbow table that included every possible salt+password combination would be unfeasibly large. If your salt is any smaller then the attacker can just use a rainbow table.
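A rough sizing sketch of that argument, with made-up figures for the table parameters (neither the password count nor the per-entry cost is from the original):

```python
import math

# Rainbow-table sizing, back of the envelope. Every bit of salt doubles
# the table an attacker would need to precompute. The figures below are
# illustrative assumptions, not measurements.
passwords_covered = 2 ** 40   # roughly a trillion candidate passwords
bytes_per_entry = 16          # very rough cost per table entry

for salt_bits in (0, 32, 128):
    entries = passwords_covered * 2 ** salt_bits
    size = entries * bytes_per_entry
    print(f"{salt_bits:>3}-bit salt: about 2^{int(math.log2(size))} bytes")
```

Even the 32-bit case is already beyond any plausible storage, which is why a random 16-byte salt stored right next to the hash is enough.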

You're right, and I knew it when I wrote that post, but I ignored my inkling that I shouldn't have included it in the list.

I posted further down the page that it is actually a good rule. It stops teenagers who think they are "good with computers" from pulling the drive apart in their bedrooms and destroying it while allowing people with clean rooms and the required expertise to still participate in a meaningful way.

Thanks for reading the post. It always cheers me up when someone does that.

Upon thinking further about the no disassembly rule (except for qualified data recovery company or government agency), I realised that it's actually a good rule. It stops teenagers who think they are "good with computers" from pulling the drive apart in their bedrooms and destroying it. The people mentioned would have clean rooms and the required expertise.

I disagree on the publicity. I'd never heard of this challenge so it's not doing too well there for a start. But money creates headlines. I can't see a newspaper running "Data recovery company recovers data for $40" on their front page. Hell, I can't even see Slashdot putting that in idle. A million dollars would definitely get some headlines. You might be able to get away with less money. You might be better off finding a different incentive than money but the money is creating the publicity here.
The publicity argument doesn't work for government agencies either. In fact, I suspect they'd rather people think that it wasn't possible.

The other problem with publicity is that most people already believe it's possible. A successful demonstration that it is possible is rather anticlimactic because it just confirms everyone's suspicions. Making a headline out of an anticlimax is difficult. Proving that it's not possible is impossible as we know you can't prove a negative so the easy, sensational headline just won't happen.

While I'm trying to improve the challenge, two more improvements occurred to me.

1. Have the money and promotional activities put up by someone with a vested interest in the outcome. Say, a company that provides secure deletion services. Once the data is recovered, they can say "See ? You can't just use zeroes. You must use our product to delete your data."

2. Delete some "diplomatic cables" from the drive and accidentally mail it to China. Make sure that one of the cables contains information that the Chinese government would act upon in a detectable manner. (Substitute whichever state you like instead of China.)

Good luck if you do go ahead with a new challenge. I've never seen "proof" of this sort of thing and I'd love to actually see some or at least a compelling challenge that hasn't been accepted.

There are four problems with the Great Zero Challenge that I could identify at a glance:

1. No incentive. The prize is $40. Data recovery companies charge tens of thousands to recover a drive, depending on how hard it is.
2. No disassembly. Any technique that "reads residual magnetism" is going to require custom read heads and access to the platters.
3. No longer running. The challenge ended in January 2009 and only ran for one year. That blog post is from September 2008.
4. Full disclosure. This is a show-stopper. Data recovery companies guard their secret methods very closely. Those secrets are their only competitive advantage. Telling everyone how they did it for $40 ? I don't think so.

In contrast, the James Randi Paranormal Challenge has a $1,000,000 prize, only has rules that disallow cheating, has been running since 1964 and is still running. The fact that no one has passed the preliminary stage of that challenge means something.

The money a bank has is worth pretty much the same amount to the banks and to the criminals. With a $10 note you can buy $10 worth of goods.

The pot is worth significantly more to the criminals than to the legal growers. The legal growers probably get 1/10th as much as the illegal sellers for selling the same pot. Hence the criminals will be willing to put a lot more time and effort into stealing the pot and the growers will not have as much money with which to enhance their security.

As an interesting side note: The pot stealing criminals have committed two crimes where the bank robbers have only committed one. Possessing large quantities of cash is not against the law. Possessing large quantities of pot is.

If you put a frog in a pot of water and don't even bother boiling it, the frog will jump out anyway.

If you were to find a frog in its natural habitat where it's happy to sit all day waiting for food to drift past and boil that environment slowly, you might actually have an experiment on your hands... and an ethics committee on your tail.

Seems like a nice easy way to make a bit of cash in your spare time without any particularly rare skills needed. Just find a vulnerability from CVE that doesn't have a corresponding Metasploit module, write a Metasploit module and put it up in Exploit Hub.

Since it's not a 0-day, there's nothing to be gained by getting an exclusive purchase so the prices will be reasonable. There's less risk of being sued too because it's not a 0-day; just a bit of code that you can use to test for an already disclosed vulnerability.

The company who wrote the vulnerable software will want to put it into their QA cycle to guard against regressions.

Anyone who writes penetration testing software will want it to integrate into their product... unless the price is higher than just having their own coders do it.

Penetration testers will want it in their arsenal to make sure they get the maximum coverage possible.

The "bad guys" probably won't want it. It's already known and getting patched and they'll have to rewrite it anyway because it will have an easily identifiable signature as it comes from Exploit Hub.

There will still be a market for 0-day exploits, but as the article mentions, it's a finicky market. Setting up a market for turning disclosed vulnerabilities into Metasploit modules is smart.

Yet another potential problem that no one seems to have mentioned yet is that of shared houses. If my flatmate has a virus (which he doesn't any more because I cleaned it off last night) then the whole house is going to be seen as "infected" and four innocent people will be cut off the internet due to the indiscretions of one person. This could be made all the worse if the person owning the infected computer is on holiday for a week.

ISPs are in a great position to significantly impact bot activity but the first adopters of this kind of policy will lose customers to more forgiving ISPs as customers get angry about being cut off, whether this anger is justified or not. ISPs will have to ease their way into this kind of policy, being very careful not to alienate their customers.