Month: February 2015

So if the title made you immediately think of those scruffy, hard-working characters who set up and take down touring concerts, welcome to the club. But that’s not what this post is about, as you may notice by the business and security tags I have given it.

Roadie is a start-up whose tagline is, “Discover the invisible shipping network.” The idea is, there are 250 million private vehicle trips per day, with a billion square feet of otherwise-unused cargo space. Some of those trips could be matched up peer-to-peer and make everyone happier. I read this Buzzfeed article and started to think like a hacker… how would I break this, if I were evil?

Disclaimer: Some of these things are illegal. Some of them are immoral. Roadie may well have already thought to include countermeasures against some that they would not, for good reason, publicize.

An obvious first one is, I tell Roadie I want to send, let’s say, a stand mixer to my buddy in Harrisburg who’s taking up baking. The driver and I meet, I give her a neatly taped-up Kitchen-Aid box that weighs about 35 pounds. She drives it from Rochester to Harrisburg and delivers it uneventfully to Bob. It’s a good thing she’s a mild-mannered driver in an inconspicuous Chevy, because she just delivered 15 kg of high-quality weed across state lines. Since Bob and I both used burner phones to set up the endpoints of the transaction, Roadie will not be of much help identifying anyone but the innocent driver.

Never mind legal trouble, some cargoes can be just plain trouble. Roadie has a list of restricted items and materials similar to the one you see at the post office, but it’s not clear how this can be enforced. Sealed boxes may be opened by postal inspectors at random but Roadie drivers should not be similarly empowered. Otherwise, the prospect of a Roadie driver pawing through the stuff being delivered might be seriously off-putting to prospective shippers.

For an even more obvious ploy, shipping an item with a substantial declared value opens up Roadie to all kinds of insurance issues, especially given the informality of the hand-offs at either end of the trip.

Receivers and senders are going to be strangers to the drivers, and strangers are terrifying, in our cable-news-fear-mongered society. To this end, Roadie has wisely teamed up with Waffle House to create a ready-made network of public meetup spots for exchanges. More safety measures to protect Roadie and its drivers are needed, and as I mentioned above, some may already exist.

I expect Roadie to attract the same kind of opposition to its business model as the hotel and taxi industries are already lavishing on Airbnb, Uber and Lyft. To some extent, I like seeing old crufty business models being disrupted. However, a certain amount of what looks like fluff in those models really does protect the participants and the public. We have a baby and a tub of bathwater here; some care is advisable.

In a summary report by a researcher from GFI Software, a security products company, we learned yesterday that the count of vulnerabilities discovered in 2014 was up over the previous year.

We got a lot of graphs.

Who wants pie?

We got tables, too.

OSX and Linux make disturbingly large ripples in the pool, for once.

But all this rather misses the point.

The counts of the vulnerabilities researchers have discovered in your software are only one factor in your overall security picture and, I would argue, a relatively minor one. Most attacks succeed because of misconfigurations and human factors: malicious insiders and social engineering.

The vast majority of technologically vulnerable software is on machines that should not be accessible from the Internet, and perhaps not even from the majority of the company’s intranet. And yet audit after audit will find default-allow access rules, especially on internal firewalls. These, plus lousy defaults for on-the-box controls create many times more opportunities for attackers than should exist.
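The default-allow problem is easy to check for mechanically. Here is a minimal sketch of the idea; the rule-export format and chain names are invented for illustration, since a real audit would parse something like `iptables-save` output or a vendor config dump:

```python
# Sketch: flag chains whose default policy is ACCEPT (default-allow).
# The {chain: default_policy} export format here is hypothetical.

def audit_default_policies(rules):
    """Return the names of chains that fall through to ACCEPT."""
    findings = []
    for chain, policy in rules.items():
        if policy.upper() == "ACCEPT":
            findings.append(chain)
    return findings

# Example: an internal firewall where nothing was ever locked down.
export = {"INPUT": "ACCEPT", "FORWARD": "ACCEPT", "OUTPUT": "ACCEPT"}
print(audit_default_policies(export))  # → ['INPUT', 'FORWARD', 'OUTPUT']
```

An internal firewall should fail this check loudly; a default-deny posture would return an empty list.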

And for the most part, human failures are really design failures. IT architects design systems with an unspoken and largely unexamined assumption that the operators of those systems will do things correctly. This assumption is one that we security practitioners must challenge at every turn.

Two things that security uber-consultant Bruce Schneier has said stick with me. The first is that good security people are people who break stuff by breaking the assumptions under which it was designed. For example, here he wrote about a hilarious product called SmartWater, which is water with microscopic particles in it that provide a unique coding, to mark property as yours. Schneier said, “The idea is for me to paint this stuff on my valuables as proof of ownership. I think a better idea would be for me to paint it on your valuables, and then call the police.” That should have given the architects of the whole SmartWater idea what we like to call an “Oh, $#!+” moment.

And the second one might be his most-quoted one-liner: “If you think technology can solve your security problems, then you don’t understand the problems and you don’t understand the technology.”

Ultimately, vulnerability counts are about nitpicking the technology. Good technology is important, and we should be pushing the manufacturers to make it better for security all the time. But getting the numbers on all those charts and graphs to zero won’t be the final answer.

A way to create strongly encrypted zip files, as we can now do using a secret key. (For which, 7Zip rocks. In case you didn’t know.) The difference is, these zips would be encrypted using a public-key algorithm. This would remove the need to include the secret keys in the scripts that handle the zip files. (Yes, I know that you can awkwardly bolt together 7Zip with GPG… but see the next item.)
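For the curious, the awkward bolt-together looks roughly like this; the filenames and recipient address are placeholders, not a real workflow:

```shell
# Sketch of the 7Zip + GPG combination: zip without encryption,
# then wrap the archive with the recipient's public key.
7z a -tzip backup.zip /path/to/files

# No secret key appears anywhere in the script.
gpg --encrypt --recipient bob@example.com --output backup.zip.gpg backup.zip

# Receiver side: decrypt with the private key, then extract.
# gpg --decrypt --output backup.zip backup.zip.gpg && 7z x backup.zip
```

The wish is for this hybrid pattern to be a single, native operation of the archiver.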

Public-key infrastructure made smoothly-functioning enough for home users, with interfaces that include the top web-mail providers. People have insisted to me that this is fundamentally impossible, that PKI is for some reason theoretically required to be difficult to use. But I remember the blinking 12:00 VCRs, and I see TiVo now, so I call BS on that. If Facebook, Microsoft and Google decided to roll this out together, the matter would be settled in a month.

Truly universal two-factor authentication based on smartphone apps or grid-cards for people who don’t have smartphones — or who just don’t want the privacy complications of using a smartphone for 2FA. Again, if Facebook, Microsoft and Google decided to roll this out together, the matter would be settled in a month.
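The building block for the smartphone-app half already exists as an open standard: RFC 6238 TOTP is just an HMAC over a time counter. A minimal stdlib-only sketch (the shared secret below is the RFC test key, not anything you should deploy):

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over a 64-bit counter, dynamically truncated."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HOTP where the counter is the current 30-second window."""
    t = int(time.time() if for_time is None else for_time) // step
    return hotp(secret, t, digits)

# RFC 6238 test vector: secret "1234...0", time 59 s, 8 digits.
print(totp(b"12345678901234567890", for_time=59, digits=8))  # → 94287082
```

The grid-card variant is even simpler, which is part of why I think "universal" is a deployment problem, not a technology problem.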

A tool that would audit the root certificates and CA signatures on a given set of systems and cross-check them against the content of news feeds. This sounds like a relatively simple plugin for Nessus.
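The core of that tool is a fingerprint comparison. A sketch of the cross-check step, where the certificate bytes and blocklist are fabricated for illustration (a real version would read the system root store and parse advisory feeds):

```python
import hashlib

def fingerprint(cert_der: bytes) -> str:
    """SHA-256 fingerprint of a certificate's DER bytes."""
    return hashlib.sha256(cert_der).hexdigest()

def audit_roots(installed_certs, bad_fingerprints):
    """Return fingerprints of installed roots that appear on the blocklist."""
    return [fp for fp in map(fingerprint, installed_certs)
            if fp in bad_fingerprints]

# Pretend these DER blobs came from a machine's root store...
store = [b"legitimate-root-cert", b"superfish-like-root"]
# ...and this fingerprint was harvested from a parsed advisory feed.
blocklist = {fingerprint(b"superfish-like-root")}
print(audit_roots(store, blocklist))  # flags the bad root, and only it
```

Wiring that loop to Nessus and a feed parser is the part that needs doing.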

These make me almost (almost!) feel like knocking the rust off of my developer skills and getting to it. Which one would get you motivated?

PC manufacturers have been installing crapware on their machines for years, perhaps decades. I bought a Packard-Bell computer in 1996 that needed to have quite a few “sponsored utilities” cleaned off to make it usable. This week, Lenovo got caught red-handed installing actual malware: the Superfish utility added a bogus certificate to the root certificate store, enabling it to intercept and examine all HTTPS traffic via a simple-to-implement, hard-to-detect man-in-the-middle attack. Superfish created a deliberate tap on all your encrypted traffic.

So yesterday Lenovo issued this press release, as companies do in this situation. For the most part it was pretty standard eyes-glazing-over corporate doubletalk, most of which translates as “oh, s*, we got caught, how shall we walk it back?”

Still, a couple of key points stood out for me.

We thought the product would enhance the shopping experience, as intended by Superfish.

That’s what Superfish intended, is it? Enhancing my shopping experience? Well, I’ll tell you what would enhance my shopping experience: someone who follows me around and carries all the bags. This is not really accomplished with fake root certificates stealthed into my Windows certificate store. Also, notice how the “intent” is now ascribed to Superfish, not Lenovo. A kettle of lawyers is circling….

It did not meet our expectations or those of our customers.

Oh those pesky customers. Always expecting not to have their banking credentials stolen.

I was in a meeting and someone from another company (but with a good reason to want to know) asked me, Does your organization respect the need for security or do they view the requirements you bring to them as an annoyance and a burden? In other words, he asked if we have a good security culture.

I told him that I am indeed fortunate: when I add security requirements to a project, or alert admins to a newly-uncovered flaw that makes their systems less secure, it is always welcomed. I know there are plenty of organizations where this is not true: where “Security” shows up and eyes begin to roll even before s/he speaks. So I know I am lucky this way.

But he went on to ask, How can you show me that? And that stopped me cold. I realized that, even though I am in a good security culture I don’t really have artifacts to demonstrate that fact. I can show that awareness training takes place… but not that people are happier for having been trained. I can show that risk mitigation is done (and on time!)… but not that anyone welcomed the tasks or was glad to do them.

We security practitioners always talk about wanting to have this kind of security culture in our organizations. How do we know when we get it? It’s like Justice Stewart’s famous non-definition of obscenity: “I know it when I see it.” But if it has business value — and I believe we’d insist to our last breath that it does — then it should be measurable. So how is it measured?

I don’t think a survey can truly measure something like this. I am fairly sure that responses to surveys of employees are skewed in the direction of “good news.” Employees know what answers their employer wants, and protestations of the survey manager that all responses are confidential and anonymous might be a tad more credible if the survey link didn’t arrive in the company email inbox sporting a 56-character random-looking string after the ‘?’ in the URL.

In any case, I am now on the hook to produce artifacts of the good security culture in which I work, and I am not sure what those might look like.

Have you ever been asked for such things? Or perhaps you know of a way to measure “security culture?”