Information security tips and tricks for both home and business users

The Internet has grown just a bit since former VP Al Gore created it way back in 1983. In that time, chances are that we’ve all left one or two virtual breadcrumbs online. While the right to be forgotten has been (and will continue to be) debated for years, that doesn’t mean you have to wait.

You can perform your own Internet Reset whenever you want.

If you’re a fan of South Park or of The IT Crowd, the phrase Internet Reset might conjure images of a giant Linksys wireless router nestled in a remote cave in California, or of a tiny black box with a glowing red light on top. That’s not the kind of reset I’m talking about.

I’m talking about you. The virtual you. The online you.

If you’ve ever used the Internet (and if you’re reading this article, then I’m guessing you have), then it’s on you to make sure the amount of info you’ve exposed online is reasonable. No one else is going to do it for you.

Even if you’re only online occasionally, you might be creeped out by just how much information sites like FamilyTreeNow have about you.

Either way, I STRONGLY recommend that you follow the steps below to tidy up your Internet presence.

Step 1: Delete Unused Social Media Accounts

If you’re still using Myspace, that’s cool. This is a judgment-free zone.

If you know where you’ve created social media accounts over the years, then head over to Just Delete Me and start clicking.

If you don’t remember all of the social media accounts you’ve created over the years, then you can visit Namechk.com, type in your social media handle(s), and let them do the heavy lifting.

Before deleting these accounts, though, there’s no shame in backing up all the things. The list below includes shortcuts to account backup instructions for some of the most popular social media services, but you could also use a quick Google query to find backup instructions for social media services not in the list.

Alternatively, you could sign up for an account at Forget.me to complete this step, but creating another online account in order to reduce your presence online seems a bit counterintuitive. Still, there’s no reason you can’t cancel your account after you’re done using it.

Step 5: Opt Out of Data Mining Lists

The good folks over at StopDataMining.me have collected an extensive list of data brokers. If you want these companies to stop collecting/sharing/selling your data, it’s up to you to let them know.

While StopDataMining.me will hit most of the services you’ll want to address, their list isn’t complete. Once you finish up with their list, check out this Reddit post to see if there are any other services you want to contact (*cough* MyLife *cough*).

(If that FamilyTreeNow article I shared earlier still has you a little nervous, you can opt out of their service here.)

Step 6: Clear Your Browser History

Don’t forget to clean up Internet data cached on any device you’ve used to connect to the Internet.

If you’ve used ANY of these web browsers on your workstation, laptop, tablet, or smartphone, clear out everything (all of it, from the beginning of time).

Google allows you to go one step further and delete all of the activity they’ve captured related to your Google accounts. It’s crazy simple, and it would be nice to see other service providers follow suit.

Step 7: Minimize Your Internet Footprint Going Forward

You’ve already put so much effort into reducing your Internet footprint that it would be a shame to see you backslide. The good news is that you can keep that footprint small by making a few minor adjustments to your browsing habits.

Take a deep, cleansing breath, and say it with me: “Compliance is not security.”

Good. One more time. “Compliance is not security.”

It’s okay. We’re all friends here. No need for false pretenses. We all know how much truth is contained in those four simple words.

Information Security is a tricky business, due in large part to the fact that both the good guys and the bad guys are an innovative, creative, and (sometimes) devious bunch.

A compliant organization can demonstrate that they have implemented the bare minimum in information security controls. That, or they can sweet talk an auditor into believing that the “compensating controls” are strong enough to meet the intent of the compliance requirement.

Not that it matters, though, since compliance is not security.

You protect your servers with a locked door. I bypass the lock with a lock pick (or a modified hotel keycard, or a coat hanger, or… you get the point). So what do you do? You replace the pin and tumbler lock with something stronger.

Well, shucks. I guess I’m out of luck. Time to throw in the towel (said no determined criminal EVER).

You install a stronger lock, I adapt with a stronger (more devious) attack. Motion and heat sensors? No problem. I’ll just use a Mylar balloon, a warm washcloth, and a little bit of helium (props to Chris Nickerson).

Let’s shift gears and talk about more technical controls. You patch your operating systems? Fine. I’ll target the desktop applications. You keep those patched, too? Well, smell you! Sounds like your web apps might be my best bet, except that your security-minded developers test both their code and their deployed apps for vulnerabilities.

Fine. It looks like you’ve decided to give me a run for my money. I guess I’ll have to resort to (gasp) SOCIAL ENGINEERING.

If I want in… if I really want to deface the website / steal the data / encrypt all the things and then extort payment… then compliance isn’t going to stop me.

Security, though… that’s another matter.

It is absolutely possible (even probable) that a security-minded organization, one that chooses to go above and beyond compliance, will know when they’re being attacked and be able to prevent, detect, and respond to the attack in a manner that minimizes the damage.

So how do we get from here to there? The answer has been right in front of us the entire time: assessments.

Notice I said assessments (plural).

Attackers are going to look at your business, your customers, your employees, your locations, and your infrastructure from multiple perspectives. They’ll keep at it until they find the chink in your armor.

In order to effectively (and proactively) defend against those attacks, you should be doing the following:

Compliance Assessment(s). Wait a minute. I JUST said that compliance is not security. I also said that compliance is the bare minimum. Love ’em or hate ’em, compliance requirements like PCI, HIPAA, and NERC/CIP are based on leading information security practices. If you want a good baseline for how prepared you are to defend against attackers who want your data, start with a compliance assessment.

Security Controls Assessment. The next assessment you should perform is a security controls assessment based on the security framework that your organization aligns with. NIST (FISMA) is popular among companies who do business with the U.S. Federal Government, while the ISO 27000 series works well for organizations with an international footprint. The CIS Critical Security Controls are another fan favorite, although smaller organizations may find the Common Sense Security Framework a little easier to tackle.

Risk Assessment. These assessments are a little trickier. The goal of a risk assessment is to identify potential threats to your organization, to determine how likely it is that those threats could do damage, and to gauge how bad it would be if an attack succeeded. Risk assessments can cover everything from the physical safety of your employees to the mobile apps you have in iTunes and Google Play.

Vulnerability Assessments / Penetration Tests. This is where the rubber meets the road. By the time you get to these assessments, you should have a decent understanding of where you’re most exposed (and where an attacker could do the most damage). Vulnerability Assessments help you validate that all your technical controls are working as intended (e.g., your patch management solution is really patching your Internet-facing servers). Penetration Tests allow authorized (ethical) attackers to test your defenses and identify gaps that have gone unnoticed by your security team (and, hopefully, by your attackers).

If you’re doing all of these assessments on a regular basis, then the bad guys are going to have a HELLUVA time getting what they’re after. If you’re skipping any of these assessments, then you have a blind spot, one that criminals won’t hesitate to exploit.

Depending on your organization’s business model, a Privacy Assessment might also be on the table, but that’s an article for another day.
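To make the risk assessment piece a little more concrete: many teams boil each identified threat down to a simple likelihood-times-impact score. Here’s a minimal sketch; the 1-to-5 scales and the band thresholds are illustrative assumptions, not pulled from any particular framework.

```python
def risk_rating(likelihood: int, impact: int) -> str:
    """Qualitative risk rating from a classic likelihood x impact matrix.

    Both inputs are on an illustrative 1 (lowest) to 5 (highest) scale;
    the band thresholds below are assumptions, not an official standard.
    """
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be between 1 and 5")
    score = likelihood * impact
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"
```

A rating like this is only as good as the inputs feeding it, but it gives you a defensible way to rank which threats deserve attention first.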

Once you find an assessment process and schedule that works for your organization, turn your attention to automation. There’s no reason to exhaust your people (and your budget) with manual processes that can be replaced by a very small shell script.
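As a tiny example of the kind of drudgery worth automating, here’s a sketch that diffs two scan runs so your analysts only review what changed between assessments. The finding-identifier format is a made-up example, not any scanner’s native output.

```python
def diff_scan_results(previous: set, current: set) -> dict:
    """Compare two vulnerability-scan runs and report only the deltas.

    Each set holds finding identifiers (e.g. "CVE-2017-0144@10.0.0.5" --
    a hypothetical format for illustration).
    """
    return {
        "new": current - previous,         # showed up since the last run
        "resolved": previous - current,    # fixed (or no longer detected)
        "persisting": previous & current,  # still open; nag someone
    }
```

Feed it last quarter’s findings and this quarter’s, and the humans only have to look at the “new” and “persisting” buckets.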

Don’t automate everything, though, ESPECIALLY the pen test. If you think automated pen tests are sufficient, then it’s only a matter of time before your organization ends up on a list of publicly disclosed data breaches.

The short version: assess all the things! Your employees, your customers, and your shareholders will thank you for it.

Time magazine recently published an article summarizing CareerCast’s research on the most/least stressful jobs.

At the top of the Most Stressful list: Enlisted Military Personnel. That makes PERFECT sense. High physical and travel demands, ridiculously low salary, and life-threatening situations that leave many physically and mentally scarred for the rest of their lives.

I did a little digging into CareerCast’s methodology, and in that context, it actually makes sense. InfoSec pros don’t put their lives on the line day in and day out. We’re paid well, and there’s such a RIDICULOUS shortage of qualified information security professionals that the job market is, well, pretty damned spectacular.

There’s one important factor that I wish CareerCast had included in their methodology, though: Appreciation.

Had CareerCast found a way to measure that variable, I think the end results of their survey would have been a little different.

Let me offer a bit of perspective.

I went to school to be a music teacher. I’ve studied multiple instruments over the course of my life, including piano, trumpet, guitar, bass guitar, and voice, and I love both teaching and making music. When a musician delivers a performance, that musician leaves something with the audience: a memory, an emotion, a connection.

Other artists produce more tangible artifacts. Our society has preserved sculptures, statues, and paintings for literally thousands of years. Filmmakers and recording artists have produced visual and audio creations that we repeatedly enjoy, whether in a movie theater surrounded by hundreds of other moviegoers or in our favorite solo spot with nothing but our headphones for company.

Artists produce artifacts.

But what about folks who work in other industries? What do they produce?

Quite a bit, actually.

If you work in manufacturing, that’s a gimme. Medical? You produce life-altering, often life-saving, medications and procedures. Utilities? The power that keeps the zombie apocalypse at bay is kind of important.

Even if you work in a back office or shared services role, it’s likely that you produce something.

HR? I’d argue that you produce jobs. You help people get hired. Finance? You produce budgets that pay for all the things. Payroll? You produce paychecks. ‘nuff said. IT? As unappreciated as you are, the fact remains that you produce systems and applications that end users rely on.

On a good day, the bad guys don’t exploit application vulnerabilities or system misconfigurations and steal the keys to the kingdom. Websites don’t go down due to denial of service attacks or hardware failures. Malicious employees don’t abuse their access to change data, and overly-trusting employees don’t click on malicious links in unsolicited emails, no matter how desperately they want that $100 Amazon gift card.

Nothing. Bad. Happens.

In other words, information security professionals come in early, stay late, work through lunch, work crazy on-call hours, attend professional meetings, attend conferences, attend training classes, chase certifications, read blogs, and practice hacking virtual machines in their home labs (Yeah, we have home labs. Big whoop. Wanna fight about it?), all with one goal in mind:

To make sure that nothing bad happens.

And at the end of another day when nothing bad happened, when we don’t have anything tangible to show for our efforts, that desire for appreciation (both from others and from ourselves) is often left wanting.

That, folks, is the curse of the information security professional. The fortunate few get decent paychecks and recognition from the powers that be, but all of us… ALL OF US… put in the blood, sweat, and tears necessary to keep the lights on, to keep the websites up, to keep the personal data safe, regardless of whether or not that recognition ever materializes.

We put in the extra hours, driven by a passion to do the right thing, and we both acknowledge and embrace the stress and burnout that come with the gig. We support each other both online and in person (no easy task for a bunch of socially awkward introverts), and we keep at it day in and day out to ensure that… You guessed it:

Nothing. Bad. Happens.

Personally, I think a career in information security is time well-spent. It’s a stressful gig in an important industry, and I’m grateful to be a part of it. Even more importantly, I encourage folks who want to help out to learn more about working in InfoSec and then apply for one of the hundreds of thousands of open jobs that we’re trying to fill.

And to all my fellow InfoSec pros out there, know this: I appreciate what you do. So do the folks who depend on you, even if they can’t always find the words to express that appreciation.

That said, I hope you can find some small comfort in reciting the successful InfoSec pro’s mantra.

“Do you remember that awful, horrible, expensive incident that NEVER happened? You’re welcome.”

The reason we have passwords is to make it harder for attackers to get to our stuff. Ideally, strong passwords ensure that we’re the only ones who can access our email inboxes, our social media profiles, our bank accounts, and our Amazon shopping carts.

Unfortunately, passwords by themselves aren’t always strong enough to accomplish that goal. Don’t believe me? Just head on over to Pastebin and spend some time searching for pastes that contain user account + password combos. It won’t take long for you to find them. Trust me.

The worst part? Users often find out about these breaches after it’s too late, after the damage has been done. It would really be swell if we had a way to make it even harder for attackers to gain access to our online accounts, wouldn’t it?

The short version: Some of the most popular websites have added another layer of security that makes it a lot harder for attackers to get to your stuff. The cool part is that these same websites have worked really hard to make sure this extra layer of security isn’t a huge hassle for legitimate users.

If you turn on two factor authentication, you’ll be asked to plug in your username, your password, and another factor to prove you really are who you say you are. In many cases, that other factor is a short numeric code texted to your smartphone or a random number generated by an app like Authy or Google Authenticator.
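Those app-generated numbers aren’t magic, by the way. Authy and Google Authenticator implement TOTP (RFC 6238), which is just an HMAC over the current 30-second time window. Here’s a minimal sketch using nothing but the Python standard library:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password (SHA-1 flavor)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # How many 30-second steps have elapsed since the Unix epoch.
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation, per RFC 4226
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238's published test secret ("12345678901234567890", base32-encoded)
# at time t=59 yields 94287082 in the 8-digit flavor.
```

The website and your phone share the secret at enrollment (that’s what the QR code is), so both sides can compute the same six digits independently — no network round trip required.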

To make it even more convenient, some of the websites will remember your computer as a trusted device, meaning that you don’t have to plug in that second authentication factor every time you log in from your home machine.

I HIGHLY recommend that you turn this on wherever possible. Attackers are getting more and more sophisticated, and people who start using two factor authentication now are less likely to be impacted by an account compromise.

If this sounds like something you want to check out, here are links that will help you enable two factor authentication on a number of sites that you’re probably using today.

It’s amazing what you can learn about a mobile app using a zip utility and a text editor.

As someone who has spent years working in the mobile app security space, my two favorite Windows tools are 7-zip and Notepad++. Why? Because every .ipa file you download from iTunes and every .apk file you download from Google Play is just a zip file by another name.

When you unzip one of these apps and start examining the contents with your text editor, you can learn a lot about how the app was put together, including some of the security tricks used by the developers.
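You don’t even need 7-zip for the first pass. Since .apk and .ipa files are just zip archives, Python’s standard library can list (and extract) their contents directly; the entry names below are only examples.

```python
import zipfile

def list_app_contents(app, suffix=""):
    """List the entries inside an .apk/.ipa, optionally filtered by suffix.

    `app` can be a file path or a file-like object; an Android or iOS app
    package is a plain zip archive, so zipfile handles it directly.
    """
    with zipfile.ZipFile(app) as archive:
        return [name for name in archive.namelist() if name.endswith(suffix)]
```

Filtering by suffix (say, `".html"` or `".js"`) lets you zero in on interesting files without extracting the whole archive.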

In the Dropbox app’s zip file, you’ll find a folder named assets. In the assets folder, you’ll find a subfolder named js (JavaScript?), and in that folder you’ll find a single file named pw.html.

If you’ve worked in infosec for more than 3 minutes, those two letters (pw) should instantly trigger one word in your mind: password.

If you open that HTML file, you’ll find an elegant bit of JavaScript that’s all of 52 lines long. The purpose of that script? To make sure that Dropbox users who are registering their accounts from within the mobile app choose a strong password.

Yay, security!

(Seriously, I want to give Dropbox props for enforcing this control. I’ve used mobile and web apps that allow for single-character passwords, which is a blatant disregard for the security of the users and of any data they might store in the app.)

I have a hunch that Dropbox may have started paying a little more attention to enforcing password security after their 2014 security incident. Whatever the reason, I’m glad to see them doing it.

UPDATE 2015-06-08: Luigi Rosa pointed out that the JavaScript is a compiled version of zxcvbn, a Dropbox project on GitHub meant to serve as a “realistic password strength estimator.” Not only has Dropbox implemented a script to enforce strong passwords for their users, but THEY’VE PUBLISHED THE CODE ON GITHUB SO OTHER MOBILE DEVELOPERS CAN USE IT. (Thanks, Luigi, for the info!)

The bit of this little HTML file that fascinates me most is that ONE LINE of the script contains 85,100 words that their mobile app users are forbidden from selecting as a password, even if those words meet Dropbox’s password complexity requirements.
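To show how a deny list like that typically gets used, here’s a toy version of the check. The three banned words and the length rule are placeholders for illustration, not Dropbox’s actual logic.

```python
# A tiny stand-in for Dropbox's 85,100-word deny list.
BANNED_WORDS = {"password", "letmein", "dropbox"}

def password_is_acceptable(candidate, min_length=8):
    """Reject passwords that are too short or appear on the deny list.

    Real strength estimators (like Dropbox's zxcvbn) go much further,
    scoring dictionary matches, keyboard walks, and common substitutions.
    """
    return len(candidate) >= min_length and candidate.lower() not in BANNED_WORDS
```

Note the `.lower()` call: a deny list is useless if “PASSWORD” sails through while “password” is blocked.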

If you work in the infosec industry, especially if you’re a pen tester, you might want to consider adding this word list to your toolkit. If it’s good enough for Dropbox’s 300+ million users, it ought to be good enough for you, too.

(I know it goes without saying, but I’m going to say it anyway. This word list contains a handful… well, more than a handful of NSFW passwords. Don’t be stupid and end up in an HR disciplinary meeting because you decided to send this word list to all your users as part of your security awareness training program.)

A friend and fellow geek recently reached out for some career advice. He’s currently working as an app developer, and he was wondering what steps he could take to steer his career more toward application security.

The next thing I told him was that he should start attending the local OWASP chapter meetings. If you want a career in appsec, you need to talk to other security-minded developers and find out what they’re doing in their day-to-day work. Side note: if your city doesn’t have a local OWASP chapter, start one.

I also told him to download some free appsec tools like Burp Suite or Samurai WTF and just start playing around. There are a TON of hackable practice apps available for you to practice on, including:

Running tools is one thing, but developers who are familiar with the OWASP Testing Guide can dive so much deeper than those who react to only the vulnerabilities that an automated scanner identifies.

I also sent him a copy of a presentation I’ve been working on for integrating application security into the SDLC. As of this writing, I haven’t posted the presentation to my SlideShare account, but feel free to drop me a line if you want a copy.

Finally, I told him he should ultimately apply that book and lab knowledge toward some real world work. Growing security companies (like the one I work for) are always on the lookout for security talent, and the sooner he (and you) can join in the fight to help these companies secure their web apps, the better.

I’m a dad, which means I am more familiar with the Elf on the Shelf than I ever dreamed I might be. For the uninitiated, this cute little creature comes to life each night while the kids are fast asleep, usually to get into some sort of mischief before the kids wake up in the morning.

Our elves (that’s right: plural) have climbed into stockings, dangled from high places with pipe cleaners and candy canes, and even started giving each other piano lessons (Up on the Housetop seems to be their favorite). What continues to amaze me is that these damn little elves do this EVERY NIGHT, no matter how late everyone stays up, or how tired dad is, or how much he just wishes he could go to bed and get a good night’s sleep.

I was so impressed, in fact, that I wanted to share it on Facebook and Twitter. Sharing a Facebook pic on Facebook is easy, especially when the owner of the page has lifted the privacy settings so that anybody can find their page. Sharing it on Twitter, though… could I do that?

I do a lot of application security work at Jacadis. One of my responsibilities is to hack into client applications, then show the developers how I did what I did and (more importantly) what they can do to fix those security holes. With that experience under my belt, I have a behind-the-curtain understanding of how web apps work: GETs and POSTs, calls to other domains, identifying parameters you can tamper with… fun stuff.

When it comes to sharing pictures of mischievous elves via social media, this kind of knowledge comes in handy.

Here’s a link to the Facebook album page that contains our carbonite-encased Elf. If you click on that link while you’re logged into Facebook, you’ll see the photo album page (and all the ads on the right). If you’re not logged into Facebook, you’ll still see the picture (sans ads).

The Facebook web app runs on one set of servers, while static content (like the elf pic) is stored on another server. It doesn’t matter which link you click, though, or even whether or not you’re logged into Facebook. Either way, you still see the elf.

But what about pics from users who protect their profiles, like my wife? Could someone see her pictures without her permission?

But if you aren’t her friend (or if you don’t have a Facebook account), that doesn’t mean you can’t see the picture. All you have to do is click here instead, and boom: picture. Screw Facebook’s ineffective privacy settings.

In web application security speak, this exposure is the result of an insecure direct object reference. When you try to get to the picture through the first link, you’re going through the Facebook web application (where they’ve built in some decent privacy controls). The web app checks to see who you are, checks whether or not you’re allowed to see the file (based on her profile’s privacy settings), and then makes a decision to either show you the file or display the error message. When you try to get to the picture through the second link, you’re skipping all of that program logic and going straight to picture.
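Here’s the difference between the two paths in sketch form. The data and the checks are invented for illustration; the point is that the second function never consults the privacy settings at all.

```python
# Invented data for illustration -- not Facebook's actual model.
PHOTOS = {"10203040506070809": {"owner": "alice", "private": True, "data": b"elf.jpg"}}
FRIENDS = {"alice": {"bob"}}

def photo_via_web_app(viewer, photo_id):
    """The web-app path: privacy settings are enforced before serving the file."""
    photo = PHOTOS[photo_id]
    allowed = (not photo["private"]
               or viewer == photo["owner"]
               or viewer in FRIENDS[photo["owner"]])
    return photo["data"] if allowed else None  # None ~ "content unavailable"

def photo_via_cdn(photo_id):
    """The direct path: an insecure direct object reference.

    Anyone holding the long numeric URL gets the bytes -- no identity
    check, no privacy check, nothing.
    """
    return PHOTOS[photo_id]["data"]
```

The fix for an insecure direct object reference is to make every path to the object run through the same authorization check — or to make the direct URLs short-lived and unguessable.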

This is security through obscurity. Facebook is counting on the fact that the URL is a long (seemingly random) series of numbers to avoid any more negative privacy-related publicity. If there’s one thing I’ve learned in infosec, though, it’s that security by obscurity is ultimately doomed to fail.

I mean, how could someone figure out direct links to my pictures? It’s not like they could:

Do a little bit of Google hacking to get the search giant to find them; or

Analyze the patterns of multiple file names and then use a tool like PeachFuzzer to try a large number of likely URL combinations until it starts finding valid URLs.
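That second technique is simpler than it sounds. Once you’ve noticed that the file names follow a numeric pattern, generating candidates is just counting; the host and path below are made up for illustration.

```python
def candidate_urls(host, width, count):
    """Generate candidate direct-object URLs from an observed numeric pattern.

    A real fuzzer would seed this from harvested file names and then probe
    each guess; this just shows how cheap the guessing step is.
    """
    return [f"https://{host}/photos/{n:0{width}d}.jpg" for n in range(count)]
```

Stretch `count` into the millions and let a script check which guesses return a picture instead of an error, and “seemingly random” stops being much of a defense.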

Of course, if they really wanted to see my Facebook pictures, they’re much more likely to use social engineering tactics in an attempt to get my password. But I digress…

The worst part about this is that there’s NOTHING you can do to prevent people from accessing your Facebook pictures via these direct links. It’s up to Facebook whether or not they can/will make any changes to how their app stores and controls access to the pictures you upload to their site.

With that in mind, maybe you should think twice about what you upload to The Face.

It is said that if you know your enemies and know yourself, you will not be imperiled in a hundred battles. – Sun Tzu

With every company expected to have an app (or two, or three) in both iTunes and Google Play, the pressure is on both security teams and development teams to make sure those apps are secure. If you’ve ever read viaForensics’ 42+ Secure Mobile Development Best Practices paper, you know that Android app developers are at a disadvantage. Simply put, it’s harder for a developer to secure an Android app than an iOS app.

If you have an app available for download from Google Play, an attacker (or security researcher) can download that app and take it apart with relative ease. Maybe the attacker is looking for a tidbit they can use to gain a foothold in your organization, analyzing the app as a form of passive reconnaissance (i.e., never touching the systems you’re monitoring with your SIEM). Or maybe the attacker wants to develop a competitive app, and s/he has no problem with stealing your source code to get a “head start.”

Whatever the reason, the simple truth is this: if you have an app in Google Play, you need to make sure you’ve taken the appropriate steps to protect that app.

How easy is it to decompile an Android app? If you have an Android tablet and a Windows laptop with the right tools installed, it takes all of 10 minutes (if that). Here’s how:

In jd-gui, click File > Open, and point to the .jar file created in your Eclipse/ADT workspace directory

Boom. Source code.

If the process is so simple that a script kiddie can do it, then that should be an indicator that you need to take the necessary steps to secure your code. One technique you can (and should) apply is code complexity and obfuscation. This includes:

Anti-debug techniques

Restricting debuggers

Trace checking

Optimizations

Stripping binaries

You should also consider using tools like ProGuard and DexGuard. Otherwise, some security researcher is likely to download your app, tear it apart, and publish an article about how weak your app’s security really is.

If you want a little more info on how to integrate appsec into your mobile appdev process, check this out.

It’s one thing to embrace social media, but it’s another thing entirely to embrace it securely. This presentation helps organizations understand what steps they should take to ensure that their social media properties are not abused or exploited to attack the organization.