Fear, Trust, and Desire: Fertile ground for social engineers

According to the recently released Microsoft Security Intelligence Report (2H2008), social engineering is taking the lead as the preferred method of network and end-user device malware infection. Since operating system vulnerabilities are slowly disappearing and more organizations are implementing basic network controls, the easiest way to a target system is via the end-user.

Fear, Trust, and Desire (FTD)

According to the Microsoft SIR, users fall prey to social engineering attacks because of three common modes of human behavior: fear, trust, and desire. As depicted in Figure 1, each of these behaviors is targeted by specially crafted attacks.

Fear is always a strong emotion, one often fed by media spin on Internet security, causing the average user to believe the end of civilization as we know it is at hand. So it’s easy to understand why some panic when malware-induced messages appear on their desktops, telling them their machines are infected. And the best—or only—way to remove the virus or worm, asserts the alert message, is by immediate purchase and download of the recommended AV software. An example is shown in Figure 2. Note the malware listed does not necessarily reside on the target system. It’s just for effect.

Another method to lure a user is the promise of a trial version of a for-fee product. These pages look authentic, as shown in Figure 3. However, the result is the same. The trial version displays a long list of bad stuff and prompts the user to purchase the full version to remove them. So not only is a Trojan or some other malware loaded, but the user actually pays for the honor of hosting it.

Playing on the fears of users seems to be working very well. Microsoft reports rogue security software is quickly becoming the Internet users’ biggest threat. Table 1 lists the Top 25 malware loaded in this way.

In the midst of fear and uncertainty, users are continuously looking for a safe haven. Thus the element of trust. Without the belief that there is safety on the Web, no one except hard-core techies would ever use it. However, users often take trust too far, relying simply on the appearance of authenticity. So when a page comes up that looks like their bank’s site, assumptions are made, passwords are entered, and transactions are executed. In many cases, however, an attacker is also stealing passwords, hijacking sessions, or running other revenue-generating activity behind the scenes.

Microsoft breaks trust down into different types. I’m focusing on two: trust in institutions and trust in employer networks. Again, bank sites seem to catalyze immediate trust. This behavioral characteristic is actively exploited by cybercriminals. Figure 3 is an example of a bogus bank site. Routing victims to sites like this is typically done via phishing or DNS redirection.
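Phishing lures often depend on lookalike domains a character or two away from the real thing. As a rough illustration of one defensive check (the domain names and the edit-distance threshold below are my own assumptions, not from the report), a filter might flag near-miss domains like so:

```python
# Hypothetical sketch: flag domains that closely resemble a trusted
# domain, a common phishing tactic. "mybank.com" and the threshold of 2
# are illustrative assumptions.

def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def looks_like(candidate: str, trusted: str, max_distance: int = 2) -> bool:
    """Flag a candidate domain that is a near-miss of a trusted one."""
    return candidate != trusted and edit_distance(candidate, trusted) <= max_distance
```

For example, `looks_like("mybanc.com", "mybank.com")` would be flagged, while the trusted domain itself and clearly unrelated domains would not. Real anti-phishing filters combine this kind of check with homograph detection and reputation data; this is only the simplest piece of the idea.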

In some cases, redirection isn’t necessary. A Trojan installed on a user’s system can pop up a data entry window when an actual bank or other financial site is visited. As shown in Figure 4, the pop up is used to collect information of value to revenue-centric attackers.

In addition to trusting external sites, users tend to believe their employer’s networks are secure. After all, most organizations are at least talking the talk when it comes to protecting information assets. This often lulls employees into that proverbial false sense of security. So why not click on that link? My company will protect me…

Finally, there is what I call the I-want-what-I-want syndrome. Microsoft calls it Desire, but it means the same thing. When a user wants something bad enough, whether business related or not, he or she is going to download and install it. It doesn’t matter how much awareness training your organization has conducted. Attackers understand this, using various media to lure users and hook them, especially “free” music, videos, and software with something extra added to make it worth the attacker’s time and effort. Like a little rootkit with your free copy of X-Men?

What to do

The most important control has nothing to do with technology. Rather, it is understanding that these issues exist when you design your control framework; understanding that human behavior is what it is; and taking steps to reduce its impact by implementing technical controls that help mitigate or eliminate breach opportunities caused by user “mistakes.”

For more information about building user behavior mitigation controls, take a look at the following:


It seems as if it’s a foregone conclusion (i.e., we’ve given up): you simply can’t trust the internet, so you’d better not.

I can’t help but wonder how/why this has become the de facto rule for the internet. Is it because the powers that be don’t care enough about the issue and hence resources don’t get allocated to fix this “problem” (perhaps they aren’t internet users themselves)? Or is it simply too hard to keep the bad guys out because there are just too many of them out there?

Another example is all the pr0n out there. This content was always regulated while I was growing up, yet we’ve just come to accept its increasing availability and seem not to have too much of an issue if kids “stumble” upon it nowadays (I mean, how can they not?).

It just seems odd how in the past we took care to “control” some of the risks in our pre-internet world: business licenses, the FCC, ID checks, etc. (obviously we felt there was a need), yet we simply throw our hands up when it comes to taking the same care with the internet (a medium which is no longer a novelty and is practically in every home in America). Seems odd we regulate, say, television and radio, but not the internet? Playing Devil’s advocate: couldn’t you argue we’re being unfair to television and radio then?

Don’t get me wrong… I’m not advocating censorship, but it’s baffling that we’re so willing to accept an “exposure” that comes with going online that we would never accept in our non-virtual world.