Humans: Still The Weakest Link In The Security Chain

from the nothing-is-'infallible' dept

With CISPA still floating around in a heavily-revised state and tons of words being thrown at all things cyber-related, it certainly would seem as though the "defensive" half of the cyber-struggle is covered. But for all the time, effort and money being thrown at the problem, a very important aspect of "cyber-defense" is being forgotten.

It's an aspect that hackers are well aware of, but one that's often overlooked in the discussion. Many times, a successful hack has nothing to do with weaknesses in the hardware or software involved.

The human element is generally considered to be something that sits on the other side of the machine doing all the work. But even the most robust software and hardware can be rendered as useless as an eMachine loaded with bloat-, mal- and spyware by social engineering.

"Gary Darnell" from Walmart's home office in Bentonville, Ark., called a store in Western Canada. He lamented having to work the weekend.

He explained that NATO was shopping around for a private retailer who could serve as part of its supply chain in the event of a pandemic.

"Or at least that's what they say it's about," Darnell cracked with the store manager. "Who knows, maybe they're practicing for an alien invasion — don't know, don't care — all I know is that the company can make a ton of cash off it."

Darnell told the manager he'd be coming up to Canada to help plan the exercise, which would see NATO types coming in to survey products and later, to buy them in a hurry, as they would in an emergency.

He just needed a little information first.

A "little" information included such detailed store security information, the after-hours cleaning service, its garbage disposal contractor, who provides IT support and what computers, operating systems and anti-virus programs were in use.

Rather than put man-hours into attacking systems designed from the ground up to repel hackers, criminals are returning to the old standby: social engineering. Humans remain the weakest link in any security chain, due to an inherent desire to help people solve their problems.

At 4:33 p.m., according to Apple's tech support records, someone called AppleCare claiming to be me. Apple says the caller reported that he couldn't get into his Me.com e-mail — which, of course, was my Me.com e-mail.

In response, Apple issued a temporary password. It did this despite the caller's inability to answer security questions I had set up. And it did this after the hacker supplied only two pieces of information that anyone with an internet connection and a phone can discover.

Despite safeguards being set up to prevent this sort of thing from happening, a hacker was able to bypass most of the hurdles simply by talking to another human being. From that point on, Wired writer Mat Honan's life went into nightmare mode:

At 5:02 p.m., they reset my Twitter password. At 5:00 they used iCloud's "Find My" tool to remotely wipe my iPhone. At 5:01 they remotely wiped my iPad. At 5:05 they remotely wiped my MacBook. Around this same time, they deleted my Google account. At 5:10, I placed the call to AppleCare. At 5:12 the attackers posted a message to my account on Twitter taking credit for the hack.

By wiping my MacBook and deleting my Google account, they now not only had the ability to control my account, but were able to prevent me from regaining access. And crazily, in ways that I don't and never will understand, those deletions were just collateral damage. My MacBook data — including those irreplaceable pictures of my family, of my child's first year and relatives who have now passed from this life — weren't the target. Nor were the eight years of messages in my Gmail account. The target was always Twitter. My MacBook data was torched simply to prevent me from getting back in.

This desire to be "helpful" can lead to these situations. As in the Def Con experiment, this innate helpfulness is often combined with another human trait: the willingness to obey authority figures, even when doing so means going against your better judgement. MacDougall's impersonation went straight to the top of the ladder: Wal-Mart's home office. The following, very disturbing example, uses another form of authority. Over the course of a decade, a man claiming to be a police officer used a phone and social engineering to do what can only be described as "hacking" actual human beings:

The McDonald's strip search scam was a series of incidents occurring for roughly a decade before an arrest was made in 2004. These incidents involved a man calling a restaurant or grocery store, claiming to be a police detective, and convincing managers to conduct strip searches of female employees or perform other unusual acts on behalf of the police. The calls were usually placed to fast-food restaurants in small rural towns.

The details of one incident are particularly horrifying:

"Officer Scott", and gave a vague description of a slightly-built young white woman with dark hair suspected of theft. Summers believed this described Louise Ogborn, a female employee on duty. After the caller demanded that the employee be searched at the store because no officers were available at the moment to handle such a minor matter, the employee was brought into an office and ordered to remove her clothes, which Summers placed in a plastic bag and took to her car at the caller's instruction. Another assistant manager, Kim Dockery, was present during this time, believing she was there as a witness to the search. After an hour Dockery left and Summers told the caller that she was also required at the counter. The caller then told her to bring in someone she trusted to assist.

Summers called her fiancé, Walter Nix, who arrived and took over from Summers. Told that a policeman was on the phone, Nix followed the caller's directions for the next two hours. He removed the apron the employee had covered herself with and ordered her to dance and perform jumping jacks. Nix then ordered the employee to insert her fingers into her vagina and expose her genital cavity to him as part of the search. He also ordered her to sit on his lap and kiss him, and when she refused he spanked her until she promised to comply. The caller also spoke to the employee, demanding that she do as she was told or face worse punishment. Recalling this period of time, the employee said that "I was scared for my life".

After the employee had been in the office for two and a half hours, she was ordered to perform oral sex on Nix.

This involved two people in management positions, who ostensibly should have "known better," but who instead displayed irrational, yet completely "normal," behavior: a willingness to obey an authority figure, even one that was nothing more than a disembodied voice. Social engineers know this, and nearly every scheme will leverage this human trait to its advantage.

With as much as companies are spending on hardware, software and security experts, you'd think a little more care and attention would go into hiring, selecting and training the people who can render thousands of dollars of computing power completely useless. And it's more than just trying to drill security principles into their heads. Def Con had a few suggestions for businesses to keep them from becoming victims of something akin to MacDougall's thorough "hacking" display.

• Never be afraid to say no. If something feels wrong, something is wrong.

• An IT department should never be calling asking about operating systems, machines, passwords or email systems — they already know. If someone's asking, that should raise flags.

• If it seems suspicious, get a callback number. Hang up and take some time vetting the caller to see if they are who they say they are.

• Set up an internal company security word of the day and don't give any information to anyone who doesn't know it.

• Keep tabs on what's on the web. Companies inadvertently release tons of information online, including through employees' social media sites. "Deep-dive" to see what information is out there.

Of all these suggestions, two stand out. First, an internal "security word" would help trim down the number of successful hacks... at least at first. Once someone lets another person slide because they forgot or didn't get the memo or showed up late for work or "just need to get into the system for a second," it's all over. If this situation is not handled swiftly and dramatically, the "exceptions" to the rule will soon become the rule. That's also human nature.
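For illustration, one way to avoid the "didn't get the memo" problem is to derive the word of the day from a shared secret instead of distributing it. Everything below — the word list, the secret, the function name — is a hypothetical sketch, not anything Def Con suggested:

```python
import datetime
import hashlib
import hmac

# Hypothetical shared secret and word list -- illustrative only.
SHARED_SECRET = b"rotate-this-secret-regularly"
WORDLIST = ["falcon", "granite", "harbor", "juniper",
            "meadow", "onyx", "quartz", "tundra"]

def word_of_the_day(day: datetime.date) -> str:
    """Derive the day's word from the date, so everyone holding the
    secret computes the same word without it ever being emailed."""
    digest = hmac.new(SHARED_SECRET, day.isoformat().encode(),
                      hashlib.sha256).digest()
    return WORDLIST[digest[0] % len(WORDLIST)]
```

Of course, this only moves the problem to protecting the shared secret, and it does nothing about the "exceptions become the rule" failure mode described above.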

The second one, tracking what's out on the web, is more useful and should help rein in what's available to outside attackers. But if social media sites are included in this sweep, you'll need a ton of paperwork on the HR end to make it fly, usually earning you the resentment of your employees.

Because the weakest link in the security chain will always be human beings, these humans need to be selected and cultivated properly. Not solely "trained." No amount of role play or instructional videos will prepare them for determined social engineers. This won't be because companies fail to recognize the importance of the job they perform, but because these companies will fail to recognize the importance of the individual(s) entrusted with keeping them secure.

Without a doubt, you need the right people for the job. Because the job itself often devolves into little more than keeping logs and handling password/privilege requests, it's often mistaken as being something anyone could do with the proper training. If you're in charge of staffing security, it's not enough to simply trust them. They have to trust you.

It's a two-way street. First and foremost, the security personnel need to know that you can protect them from the eventual fallout that comes as a result of doing their job properly. It's not tough to imagine a situation where a higher-up is in need of a password change but can't meet any of the requirements needed to approve this change. Denying a request to the wrong person (i.e., someone powerful within the company) could put jobs on the line just as quickly as handing out user info to an outside attacker.

As the head of this staff, you need to prevent this fallout from settling on your team. Feeling heat or receiving retribution for doing a job the way it's supposed to be done damages the security of the company, turning well thought-out rules into mere guidelines and worse, turning good employees resentful. If the security staff feels they'll be the scapegoat for common situations like these, it impairs their ability to make solid decisions and opens the door for social engineers to appeal to their basic humanity (dignity, confidence, etc. -- anything that's been damaged by scenarios like this) in order to get the information they want.

Protecting the staff from this sort of retribution shows them that they're covered if things go wrong. This frees them up to make better decisions, rather than tangling them in the minutiae of day-to-day compliance. If they know, and have seen it proved, that they're trusted to make the right decisions, rather than micro-managed or forced to run checklists against a policy manual, they'll work with more confidence. More confidence in their own skills and intuition will make them less susceptible to being flattered (by appealing to the power they wield and/or resurrecting their sense of duty for a "higher cause") into allowing access to the system and information they're in place to protect.

TL;DR: The trust an entity has in its security staff is less important than the trust the security staff has in the entity it's protecting. Any policies and protocols put in place are only as good as the people behind them.

Compartmentalize

Those are all excellent ideas, but there's a big one missing. Compartmentalize whenever possible. Have separation of duties. If there's no good reason for someone to have access to a system or information, don't give it to them.

A programmer responsible for designing the next version of something doesn't need access to production systems. Neither does the CEO.

If someone doesn't have access to a system, or the information, a social engineer can't trick them into giving it away.
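A deny-by-default access check captures the idea; the roles and the grant table below are made up for illustration, not taken from any real system:

```python
# Sketch of least-privilege access control: each role gets an explicit
# grant set, and anything not granted is denied by default.
ROLE_GRANTS = {
    "developer": {"dev", "staging"},         # no production access
    "ops":       {"staging", "production"},
    "ceo":       set(),                      # no direct system access
}

def can_access(role: str, system: str) -> bool:
    """Deny unless the role has an explicit grant for the system."""
    return system in ROLE_GRANTS.get(role, set())
```

A social engineer who tricks the developer still gets nothing the developer doesn't have.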

Re: Compartmentalize

>>A programmer responsible for designing the next version of something doesn't need access to production systems. Neither does the CEO.

The CEO probably does not want access. Unfortunately there are a host of managers below the CEO who insist on full access to everything. These people make such yummy targets for social engineering:
1) They have access to all of the systems under them, so they can fulfill all of your social engineering objectives in one place.
2) If they are insecure enough to demand full access to everything, they are probably also eager to brag about their full access to everything. If they don't know something, they will find out for you, just to demonstrate that they can.
3) They don't understand most of the stuff they have access to, so they don't know when to be suspicious.

Re: Compartmentalize

A programmer responsible for designing the next version of something doesn't need access to production systems.

As a programmer, I'd have to disagree. You'd be right if that was the *only* thing the programmer was doing, but a lot of times he's also fixing bugs and performing other maintenance work.

In order to fix a bug, you need to be able to reproduce it. And I've come across plenty of times when reproducing the bug relies upon some specific detail in the client's database that they neglected to inform me about in the bug report because they didn't know about it or think it was important. In cases like this, I usually need to get a copy of their production database in order to fix the bug.

And yes, I'm aware that this isn't *technically* the same thing as "access to production systems", but the line can get pretty blurry sometimes, depending on what's being fixed...

Re: Re: Compartmentalize

Adequate test environments would solve most of that issue. In some cases, I agree it may not be practical to fully duplicate a production environment, but if you're dealing with that complex of an environment, it is even more important to have oversight and good controls in place.

Re: Re: Compartmentalize

Security is always implemented at the cost of functionality, always.

I'm sorry, but that's nonsense.

Bad or lazy security is implemented at the cost of functionality. Yes. And sure, there can be times when even well thought out security can impact functionality or usability. But saying it is "always" an impact is silly.

Humans: Still The Weakest Link In The Security Chain

That's not news to me. I work in IT support and you would be shocked how many people leave their workstation unlocked or give out passwords to all and sundry.

The biggest example of human stupidity when it comes to security? I once worked for an insurance company that had no formal security policy and did not even encrypt customer data. In this day and age, that is shocking.

It's really amazing how much information a company employee will give you. To make things worse, most of the time they will assume the information you provided to them is legitimate, so they have little concern and forget about the call almost instantly.

When they do catch it, most times it's already far too late and the damage has been done.

In some cases, social engineering skills can be more important than a hacker's skill on a computer.

If a company has very good security and protection, that's great!! However, if some hacker is able to call them up and get all the info they need to fuck shit up, all that security is a waste.

Re:

This isn't unusual. Way back in 2003, a survey was done that showed this weird need for people to give out their passwords, and since then the percentages have really stayed static.

found at http://passwordresearch.com/stats/statistic113.html
"% of office workers at Waterloo Station gave away their computer password for a cheap pen, compared with 65% last year. Men were slightly more likely to reveal their password with 95% of men and 85% of women giving away their password.

Workers were asked a series of questions which included what is your password, to which 75% immediately gave their password, if they initially refused they were asked which category their password fell into and then asked a further question to find out the password. A further 15% then revealed their passwords."

Though there is still one foolproof method of finding passwords 50% of the time in any office anywhere: look under the keyboard - normally it's on a post-it note or something ;)

Good security starts at the bottom

Keep your physical security in line. Don't tell anyone where the Data Center is. Don't let people in without an escort.

The simple things are always, well ... simpler for the bad guys to exploit, yet they are extraordinarily unsatisfying to fix.

The best book I have ever read on computer security is by McAfee (back when he knew what he was doing) and is called "Computer viruses, worms, data diddlers, killer programs, and other threats to your system: What they are, how they work, and how to defend your PC, Mac, or mainframe". Don't be put off by McAfee Corporation's current reputation. This book came out in 1989, and covers the ENTIRE security spectrum. It is of the quality that you would expect from Schneier today.

The bottom line: if someone wants to hack you, they will. Don't be a target. Strategies to reduce your profile work. And this was back before the Internet, when Bulletin Boards and floppies were the main vectors for infection. The principles remain the same, though. Take care of the basics.

Re: Good security starts at the bottom

The bottom line: if someone wants to hack you, they will.

This cannot be overstated. Just thinking that you're secure makes you more vulnerable.

There is no such thing as 100% security. All any security measure can do is to increase the amount of time and expense required to breach the security. Be damage-resistant, so that when it happens, you can recover with as little pain as possible.

A safety net

I currently work for a security software manufacturer, and we (as one would expect) are pretty uptight about security, mostly to mitigate the fact that humans are inherently insecure.

We have one measure in addition to the ones listed that I thought was a particularly good idea: the safety net.

If a situation arises where a security practice apparently needs to be bent, no matter how legitimate things may appear to be (or in fact be), we have a sort of "security helpdesk" to call 24/7 who will act as our proxy in the situation.

Is an employee out of the office but you need access to their workstation for some vital document? You don't call them for the password, you call the security desk and they take the steps needed to get you access. Someone is asking for sensitive information on the phone? You give them the security desk's #, or have the desk call them. And so forth.

The genius of this is that it completely removes the need to withstand social pressure or coercion. You don't have to be a stick-in-the-mud or bad guy: the security desk gets to do that for you, or finds a way to do what you need done in a secure fashion.

Re: A safety net

I just realized that I failed to make my point clear. My point is that the security desk is not set up in the mode of some kind of cop shop. It's set up as a helpdesk. Referring a customer or coworker to them is painless, because they will be treated as valued customers & coworkers, not as potential threats. Unless they're told, they would not even know they're talking to security personnel.

Well, duh!

IT people are MEAN!

And this might explain why: they are left to deal with the aftermath of Becki in Accounting helping out someone from the office in Petersburg by clicking some popups.

It is human nature to be helpful.
It is in some humans' nature to appear to be strong.

Make like they are the only one that can help you, and the doors open.
Be in awe of their power, or "disbelieve" them enough to make them ignore every smart thing they ever were told to prove how powerful they are.

We all openly mock people who fall for advance fee fraud, and laugh at the letters we get. If it didn't work they would stop... notice they aren't stopping. Even with the media covering these scams, people still want to help (and get rich). You can not con an honest man.

Yeah, if you needed any more convincing that the people who handle money in this country are stupid... see above.

Greed makes people stupid too.

How many times can you be expected to explain to a coworker that surfing to those sites is bad... mmmmkay? Eventually you realize that pain is a very useful tool to train them to stop, and then people tell stories about the BOFH...

working in IT for banking, you go onsite, you then phone a specific phone number, the person gives you a 'code', you enter that code into a little machine, and that machine then displays a password. that is your password, good only for that session. when you are finished your work, you phone the person again and tell them the job is complete and that you have logged out.

the person who issues you your code does not know, nor can find out, what password you are given..

the other security feature of these systems is that they are simply not connected to the internet, therefore making them impossible to 'hack' via the internet..

no one can steal your password, no one can social engineer it from anyone, as no one knows it until it is used, and then only the user knows it..

security is simple, and if you don't want to get hacked you can easily ensure that is the case.
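The scheme described above could be sketched roughly like this; the secret, the function names and the derivation are assumptions for illustration, not how any real bank implements it:

```python
import hashlib
import hmac
import secrets

# The dispatcher hands out a random challenge code; only the on-site
# device, holding its own provisioned secret, can turn that code into
# the session password -- so the dispatcher never learns the password.
DEVICE_SECRET = b"provisioned-into-the-device"  # hypothetical

def issue_code() -> str:
    """Dispatcher side: generate and read out a random challenge."""
    return secrets.token_hex(4)

def session_password(code: str) -> str:
    """Device side: derive the one-session password from the code."""
    mac = hmac.new(DEVICE_SECRET, code.encode(), hashlib.sha256)
    return mac.hexdigest()[:12]
```

The key property is the split: knowing the code alone (the only thing that travels over the phone) is useless without the device's secret.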

Holy crap. That's scary. And yet I'm glad to know I have many more security layers, and my mail accounts are only partially linked (my two main accounts only, and they are both protected by multi-factor security steps which include Google 2-step). I also avoid using the same password on any account, so the weakest part of the chain would be the security questions and other recovery means that might be open to exploitation.

I've recently started using LastPass to keep track of my passwords and manage to use different ones for every service (and I use one-time passwords to access my account in less safe environments), but it poses yet another issue: you have everything linked in a single place. Sure, I'm using grid authentication and Google 2-step as added security, but linking everything can be a problem.

The human factor is indeed a heavy security burden.

After reading this story I'll be taking a few steps to fix gaps I noticed in my security practices, for sure. But I can only control my end of humanity.