"Internet security has a puzzling fact at its core. If security is only as strong as the weakest link, then all who choose weak passwords, reuse credentials across accounts, fail to heed security warnings, or neglect patches and updates should be hacked — regularly and repeatedly.

Clearly this fails to happen."

Alrighty then. It's obvious: I have some catching up to do. Here's what Cormac had to say.

Kassner: Through our past collaborations on security research (my favorite: "Are users right in rejecting security advice?"), I've come to expect — cherish actually — your "outside the box" thinking. This paper appears to be more of the same, starting with you asking:

"Why isn't everyone hacked every day?"

Definitely "outside the box." What prompted you to look at this subject?

Herley: Thanks Michael, flattery will get you everywhere! I'm always interested when there's a mismatch between what we think should happen and what we observe. And the mismatch between conventional security wisdom and what is actually occurring is a perfect example.

We're told to plug every security hole if we want to protect our digital stuff. Yet, we don't. We are careless about software updates and running anti-virus. We ignore OS and browser warnings and click on links with abandon.

Let's not forget about password habits. We choose weak passwords, use common names, write them on Post-its for everyone to see, and re-use the same three or four passwords across multiple accounts.

Yet, of the two billion people using the Internet, only 5% suffer significant harm each year. So how do the other 95% of us escape scot-free?

Kassner: The paper suggests the reason for the disparity is that attackers must use a sum-of-effort approach instead of going after the weakest link. What does that mean?
Herley: Sum-of-effort means attackers need the exploit to be profitable on average across all attempts, not just in particular situations. Every attack has a cost, and no attack works 100% of the time.

To be profitable on average, you have to make enough when you succeed to cover the cost of all the times when you fail.

For example, suppose Alice uses her dog's name as the password for her bank portal. According to what we are told, her password is weak, making her an easy mark for an attacker. But an attacker only succeeds if:

The username is known.

He or she can figure out the dog's name.

The bank doesn't catch the transfer.

Another cyber-thief doesn't get there first.

So what percent of the time can the attacker expect to succeed? Let's say the attacker spends an hour per user, and:

5% of all users choose their dog's name as the password.

5% of the time, the password is determined.

5% of the time, the username is figured out.

Based on that, the attacker gets into one account for every 20 × 20 × 20 = 8,000 accounts attacked.

Let's say the attacker is willing to work for $7.25/hour. He needs the average compromised account to yield $7.25 × 8,000 = $58,000 to meet payroll.

What if the bank catches 75% of the attacker's attempts? Only one break-in in four now pays off, so the attacker needs $232,000 per compromised account to meet payroll. And we have not yet discussed competition from other attackers.

Even if an attacker is willing to work for 1/10th of the US minimum wage and spends 10 minutes (instead of an hour) per user, then with the bank still catching 75% of attempts, an average of $3,866 per compromised account will be needed to make payroll. So what started out looking like easy money has turned into a very difficult way to make a living.
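The arithmetic above can be sketched as a small break-even calculation. The function name and structure are mine, not from the paper; the probabilities and wages are the ones in the example:

```python
MIN_WAGE = 7.25  # US federal minimum wage, USD per hour

def required_yield(p_habit, p_guess_password, p_guess_username,
                   p_bank_misses=1.0, hourly_wage=MIN_WAGE,
                   hours_per_attempt=1.0):
    """Average revenue per compromised account needed to break even."""
    p_success = (p_habit * p_guess_password *
                 p_guess_username * p_bank_misses)
    attempts_per_success = 1.0 / p_success
    cost_per_attempt = hourly_wage * hours_per_attempt
    return attempts_per_success * cost_per_attempt

# 5% use the dog's name, 5% chance of guessing it, 5% chance of
# finding the username: one success per 8,000 attempts.
print(round(required_yield(0.05, 0.05, 0.05), 2))  # 58000.0

# The bank catches 75% of attempts, so only one break-in in four pays.
print(round(required_yield(0.05, 0.05, 0.05,
                           p_bank_misses=0.25), 2))  # 232000.0

# 1/10th minimum wage, 10 minutes per user, bank still catching 75%.
print(round(required_yield(0.05, 0.05, 0.05, p_bank_misses=0.25,
                           hourly_wage=MIN_WAGE / 10,
                           hours_per_attempt=1 / 6), 2))  # 3866.67
```

Note that the final figure only matches the $3,866 in the text if the bank's 75% catch rate is kept in place alongside the lower wage and faster attempts.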

Now consider this. If the attack is unprofitable, and therefore goes unused, the 5% with weak passwords escape unscathed. The axiom that they're only as safe as their system's weakest link turns out to be false, because it's hard to build a profitable attack that exploits weak passwords.

Kassner: The paper claims:

"Many attacks, while they succeed in particular scenarios, are not profitable when averaged over a large population. This is true even when many profitable targets exist and explains why so many attack types end up causing so little actually observed harm."

I get the logic, until the next sentence:

"Thus, how common a security strategy is, matters at least as much as how weak it is."

Now, I'm lost. Help.

Herley: Sure. Let's revisit the dog's name as a password example. Suppose 50% instead of 5% choose their dog's name as a password. Now (keeping the other assumptions unchanged) the attacker succeeds one time in 800 rather than one time in 8000. The return is 10 times better, just because more people use this strategy.
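That sensitivity to prevalence can be checked in a couple of lines. The probabilities are the illustrative ones from the dog's-name example, and the function is just a sketch:

```python
def attempts_per_success(p_habit, p_guess_password=0.05,
                         p_guess_username=0.05):
    """Expected number of accounts attacked per successful break-in."""
    return 1.0 / (p_habit * p_guess_password * p_guess_username)

print(round(attempts_per_success(0.05)))  # 8000: 5% use the dog's name
print(round(attempts_per_success(0.50)))  # 800: 50% use it, a 10x better return
```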

So if a strategy is common and predictable, it becomes very dangerous, because it becomes easier for an attacker to exploit. Leaving the house key under a flower pot is risky mostly because a lot of people leave the key under the flower pot. If you were the only person in the world to do so, it would be less risky, since checking under the flower pot would be a waste of time for a thief — it would almost never succeed.

Basically, attackers are playing a numbers game. The more people who use 'password' as their password, the better an attacker's return for trying that.

Kassner: I tried to get through the math — failed miserably. I'll let math geeks check out your work. Is it possible for you to briefly explain your conclusions?
Herley: Stealing is like any other economic activity. Things have to succeed on average, not just when circumstances are favorable. Meaning, the attacker pays a price for every attempt, but gets a return only when the attack succeeds. If each attempt costs $1 but succeeds only 0.1% of the time, then each success will have to bring in $1,000 just to break even. Attacks with low success rates can have challenging economics.
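The break-even arithmetic in that sentence is simply cost per attempt divided by success rate:

```python
# If each attempt costs $1 and succeeds 0.1% of the time, each success
# must cover the cost of the ~1,000 attempts behind it just to break even.
cost_per_attempt = 1.00   # dollars
success_rate = 0.001      # 0.1%
break_even_revenue = cost_per_attempt / success_rate
print(round(break_even_revenue, 2))  # 1000.0
```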
Kassner: Am I correct that you are concerned security pundits are "crying wolf" because they are using the wrong modeling approach?
Herley: I'm not sure I'd characterize it as "crying wolf", which makes it sound like there's a desire to deceive. Security people are trained to look for weaknesses, and see what can happen when things go wrong. So it's natural to want to warn people. At the same time, I think we have to acknowledge that two billion people are using the Internet, and in spite of poor security practices, most have positive Internet experiences.

"Think like an attacker" is an oft-repeated mantra among security pundits. But to my mind, it is seldom followed all the way through. Thinking like an attacker doesn't end when you find an attack or an exploit, unless you're only interested in it as an intellectual exercise.

If you're interested in the total effect an attack will have, you must continue, just as an attacker would continue, and figure out how the exploit can be used to turn a profit. That means not just looking for vulnerabilities and spotting when things can go wrong, but figuring out how much an attack costs and how often it can succeed. An attacker certainly has to think that through, and it's lazy of us to skip that part of the analysis.

Kassner: Now the tough question. Do you have advice for us users when it comes to security advice?
Herley: I want to be clear. Our goal in this paper is to explain the mismatch between prediction and observation. While I use it as an example, I definitely do not recommend people use their dog's name as a password!

You do not want to be part of a group having vulnerabilities that are easy to predict and exploit. What advice to give is a tough question. My answer isn't that different from what others have offered. One thing I might add — definitely avoid being predictable.

Final thoughts

I guess we should consider ourselves fortunate. Most Internet bad guys are all about turning a profit. If other motives were involved, I think we'd be in trouble. Or, to put it another way, we may be low-hanging fruit, but we are not worth the effort.

A big thanks to Dr. Cormac Herley for his insightful answers.

About Michael Kassner

Information is my field...Writing is my passion...Coupling the two is my mission.