The Perfect InfoSec Mindset: Paranoia + Skepticism

A little skeptical paranoia will ensure that you have the impulse to react quickly to new threats while retaining the logic to separate fact from fiction.

My latest, admittedly off-the-cuff, supposition is that a dash of paranoia paired with a side of skepticism makes for the perfect security pro mindset.

Chances are, if you’ve worked in the security field for any significant period of time, you’ve developed at least a modicum of paranoia. This is a pretty natural occurrence in our field. When you immerse yourself in the newest threat research, follow every new vulnerability, zero day, and breach disclosure, and see first-hand what the latest malware or exploits can do, it’s not that startling when you increasingly suspect random computer “glitches” are due to some sort of malicious ghost in the machine.

This increased level of paranoia -- perhaps better described as a general state of suspicion -- has certainly happened to me over my security career. Today, whenever a program crashes, my display flashes, or there’s any sort of “hitch” with my system, my immediate, knee-jerk impression is, “I’ve been hacked!”

Of course, if my thoughts ended there, my paranoia would be a very bad thing. While many of us informally consider paranoia as excessive suspicion or fear, it has a more clinical definition. Specifically, real paranoia is baseless or irrational fear. In other words, anxiety based on delusion.

Obviously, true delusional paranoia has no place in infosec. Panicked reactions to fictional threats are a recipe for disaster. However, I believe the proper dose of paranoia can be a good thing for security professionals. After all, it does increase your vigilance and quicken your response to threats. So how do you get the right dose? Temper your paranoia with a dash of skepticism.

When I say “skepticism,” I don’t just mean indiscriminate doubt. Rather, I’m talking about scientific skepticism. This is where you question new concepts or beliefs, not accepting them as truth until you have empirical evidence backing them up. Keep in mind, scientific skeptics don’t just dismiss unconventional ideas either. Even if something seems inconceivable today, that doesn’t necessarily make it untrue. Skeptical inquiry simply means you stay doubtful until you can prove something repeatedly with evidence.

This is why I believe a dash of paranoia, complemented with a healthy portion of scientific skepticism, can keep security professionals on their toes while solidly grounding them in reality.

Let’s put this little theory to the test. Remember BadBIOS, the incident where a well-respected security researcher warned of super-sophisticated, sci-fi-sounding malware? He claimed BadBIOS could infect any computer platform, could inject itself into your computer’s BIOS (making it nearly impossible to remove), and could even spread via high-frequency sound waves. If it weren’t for the researcher’s reputation, most would’ve dismissed his claims as paranoid delusions (which they may be).

How would a paranoid skeptic respond to news of BadBIOS? First, paranoia would kick in, putting you in a heightened state of alert. You’d remember BIOS infecting malware exists, and you’d recall variants of cross-platform Java-based attacks. You may have even read the latest research proving attacks can spread via audio transmissions. These thoughts would entice you into wanting to learn more and protect yourself.

However, before your paranoia turns into unjustified panic, your skepticism takes hold. You ask yourself, is there any evidence for this extraordinary claim? Is there a malware sample others have validated, or that I can analyze? Are there logs, transmissions, or any other evidence to corroborate the story? In the end, if evidence supports the claim, you’re not suffering paranoid delusions, and you should do something to protect yourself. However, if there is no evidence (which is the case with BadBIOS), you remain skeptical, and take no action.
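The evidence-gathering step above can be made concrete. As one small illustration (not something from the BadBIOS story itself), a skeptical analyst might hash a suspect file and compare it against published indicators of compromise before sounding any alarm; the IOC list here is a hypothetical placeholder:

```python
import hashlib

def sha256_of(path):
    """Compute the SHA-256 digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical list of published IOC hashes. This placeholder value is
# actually the SHA-256 of an empty file, used only so the example is checkable.
KNOWN_BAD = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def matches_known_ioc(path):
    """Evidence check: does this file match any published indicator?"""
    return sha256_of(path) in KNOWN_BAD
```

If the hash matches corroborated indicators, you act; if not, you keep digging for evidence instead of panicking.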

This skeptical paranoia ensures you have the impulse to react quickly to new threats, while retaining the logic to separate fact from fiction. In the end, I think a quote from Joseph Heller’s Catch-22 summarizes this idea more succinctly: Just because you’re paranoid doesn’t mean they aren’t after you.

What do you think? Is a smidgeon of paranoia an asset to security professionals, or does it just turn us into doomsayers? Let me know in the comments.

Corey Nachreiner regularly contributes to security publications and speaks internationally at leading industry trade shows like RSA. He has written thousands of security alerts and educational articles and is the primary contributor to the WatchGuard Security Center blog.

Good perspective... Getting distracted or misdirected by delusional issues is definitely something we want to avoid... and I like your comment... "less about being paranoid and more about being prepared." I do, however, think the rote nature of day-in, day-out IT and IT security does sometimes create some level of apathy among security folk... For instance, if you are so used to seeing false positives, you may not hear the next time someone cries wolf for real... Perhaps paranoia isn't the right word, since it's so tied to fear of something false, but I do think that infosec pros need something to keep them on their toes enough to not ignore the real incident when it happens...

I believe in preparation, but I also know the right bad guy can slice through all our carefully crafted defenses, so we need some attribute that makes sure that we are never too confident in our own defenses...

Thanks for the counterpoint... I hoped others would share whether or not they thought paranoia could be valuable.

That was a well-done post, John, and I agree. I guess when I was writing it I didn't really think much about my use of "perfect," as containing all aspects of what an infosec pro mindset is... In all honesty, it just started with the thought of whether our tendency toward paranoia (or lack of trust) has any benefit, or is a detriment...

That said, I totally agree that "enablement" should definitely be in every security pro's arsenal. As I mentioned, I often see security pros live in their "white tower" and impose seemingly draconian rules on their users, with no real explanation why, and without trying to figure out what the users' needs are. I agree with you that this is the WRONG mentality, and it's why infosec is often perceived as a roadblock to innovation.

Rather, as you suggest, we need to learn what our users are trying to accomplish to get their job done (and help the business run). Then we can make a risk-based decision about how to let users do this easily, without exposing too much risk (furthermore, sometimes the rewards of doing things a certain way may be big enough to accept the risk). This sort of communication helps users understand your perspective (trying to keep everything safe), while also providing what they've asked for.

I think you said this very well... And I like that you brought up risk management. When I started in infosec, I began as a more technical personality who thought mostly about the threat and the prevention. I lived in a security "white tower," where I believed security meant beating back every possible threat, no matter the cost.

This is not the way to do professional information security...

Infosec is all about risk management. We know that we can't beat every threat, and we should also know that our goal isn't perfect security; it is to make sure that our business can run with minimal risk... I like risk-based security since it tries to quantify threats more scientifically (though that can be hard), and only focuses on prevention and security practices that keep the business running while minimizing risk...
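The quantification this comment alludes to is often done with a simple likelihood-times-impact model. A minimal sketch, with illustrative 1-5 scales and thresholds that are assumptions rather than any standard methodology:

```python
def risk_score(likelihood, impact):
    """Classic qualitative risk model: risk = likelihood x impact.

    Both inputs are on an illustrative 1-5 scale; the rating
    thresholds below are arbitrary examples, not a standard.
    """
    score = likelihood * impact
    if score >= 15:
        rating = "high"
    elif score >= 8:
        rating = "medium"
    else:
        rating = "low"
    return score, rating

# A rare-but-catastrophic threat vs. a likely, high-impact one:
print(risk_score(1, 5))  # (5, 'low')
print(risk_score(4, 4))  # (16, 'high')
```

The point of a model like this is triage: spend prevention effort where the score is high, and accept (or merely monitor) the low-scoring tail instead of trying to beat every threat.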

While it is definitely true that a healthy dose of skepticism is necessary to effectively do your job in InfoSec, I don't believe that paranoia is the most effective way to achieve this (even when served up with a side of skepticism). If you're paranoid, you're getting distracted by conspiracy theories and misdirected assumptions. In my opinion, the best way to approach today's cybersecurity landscape is with the understanding that there is always someone out there who is actively trying to steal your personal information, your secrets, your money – because they are. InfoSec should be less about being paranoid and more about being prepared. If your system is designed only to be used a certain way - what would it do if it were given an unfamiliar command, or put in an unnatural setting? Would it spew out information? Shut down? Or deny access? Hackers are successful because they find ways to make a system accidentally do something it was never intended to do. Testing your system for unlikely and unthinkable - even obvious - faults isn't being paranoid, it's being realistic.
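The "unfamiliar command" test this commenter describes is essentially negative testing, or crude fuzzing. A minimal sketch, where `parse_command` is a hypothetical stand-in for any real input handler:

```python
import random
import string

def parse_command(cmd):
    """Hypothetical input handler: accepts only a small known vocabulary."""
    allowed = {"start", "stop", "status"}
    if cmd not in allowed:
        raise ValueError(f"unknown command: {cmd!r}")
    return f"ok: {cmd}"

def fuzz(handler, trials=1000):
    """Throw random junk at the handler.

    The desired behavior is a controlled rejection (ValueError) for
    malformed input; any other exception escapes and flags a real bug.
    """
    random.seed(0)  # deterministic junk, for reproducibility
    for _ in range(trials):
        junk = "".join(
            random.choices(string.printable, k=random.randint(0, 32))
        )
        try:
            handler(junk)
        except ValueError:
            pass  # graceful denial is exactly what we want

fuzz(parse_command)  # a silent run means every malformed input was rejected cleanly
```

A real fuzzer (e.g., coverage-guided tools) is far more sophisticated, but even this level of "what if the input is garbage?" testing catches systems that spew information or crash instead of denying access.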

It has been said that there is a fine line between sanity and insanity. The same goes for paranoia in terms of information security. An antonym for paranoia is trusting, so the question is: how much trust can an InfoSec professional place in an unknown entity? I've been in IT for a very long time, and in that context, I begin with the assumption that I cannot trust anyone. That isn't to say that I go overboard, because without some sort of balance, the paranoia becomes insanity. Risk management serves to provide that balance in the assessment phase, where actions (or inactions) are based on the analyses. Healthy doses of paranoia, skepticism, and sanity motivate InfoSec pros to research any perceived threat, and if there is no evidence to corroborate it, then theoretically, they give it the least regard. Now if you skip the research phase and straightaway go full bore into protection mode against that perceived but unqualified threat, then you are no longer teetering on the proverbial brink of insanity, but instead have gone over the edge. I have to admit that as I look down from that brink, the bottom looks awfully dark, deep, and full of unknowns. That is why my passwords are a bit more complex than 12345, and I'm sure Dark Helmet approves of that practice.
