Month: May 2015

Or does GM? I’m not referring here to leasing vs. buying. I am referring to the fact that GM has recently declared that only mechanics they license are allowed to work on “your” car. And if you take it to another mechanic, or use less-expensive after-market parts, or connect the car’s diagnostic port to a home-brew or third-party device, the issue is not merely the possibility of voiding the warranty. The issue is, GM can more or less unilaterally declare you to be in violation of the Anti-Circumvention provisions of the Digital Millennium Copyright Act (DMCA). You can be charged with a crime at the Federal level.

Here is where the evils of DRM (that I started to write about here) intersect with the entertainment industry lobbyists’ power to get stuff enacted into laws, and affect how we can use technology we think we own. These laws have effects on our lives that are not at all well-understood, not even by the content-industry monopolists who paid to have them enacted.

Do farmers own their tractors? According to comments filed by John Deere with the Copyright Office, they do not. They are not allowed to modify any aspect of “their” tractor that is mediated by software, which is pretty much anything useful. This article in Wired brings up a case of a farmer — a neighbor of the author — who cannot get his transplanter fixed because he is not given access to the correct diagnostic software. And so he has a six-figure barn ornament.

In their comments in support of this policy, Deere points out that if farmers were allowed to tinker with their tractors’ software, they might change the engine tuning to violate EPA pollution regulations. Well, OK, but then they would owe the EPA a fine, not John Deere. They might even use the in-cab entertainment system to pirate music. (Roll that around in your brain for a minute.) Yes, that’s why a farmer spends half a million bucks on a harvester — to evade paying $9.99 for a Taylor Swift CD.

We website developers put up with a lot from those security folks. We’re constantly hearing them nag us to do boring things like scrub inputs to prevent SQL injection flaws. Enforce up-to-date encryption standards. Quit putting auth tokens into URLs. All of these things would make our web applications more genuinely secure. None of them, however, is visible to the user as evidence that we Take Security Very Seriously™. What shall we do?
Well, nothing says “Security!” to our users who know nothing about security like passwords. Long, inconvenient, hard-to-remember passwords. Let’s make our password authentication as difficult as possible! Then they will know that we Take Security Very Seriously™!

We’ll require a diverse character set. Their passwords will have to have two capital letters, three lowercase letters, two numerals and a special character. Donald Duck, perhaps? Brad wanted it also to have to include the tears of a virgin, but HR sent us a really nasty email about the test we were going to implement for that.
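Part of what makes a policy like this so seductive is that enforcing it is trivial. Here is a minimal Python sketch of the rules being mocked above (the function name and the examples are mine):

```python
import re

def validate_password(pw: str) -> bool:
    """Enforce the (deliberately absurd) policy above: at least two
    capital letters, three lowercase letters, two numerals, and one
    special character. Illustrative only -- not a recommendation."""
    return (
        len(re.findall(r"[A-Z]", pw)) >= 2
        and len(re.findall(r"[a-z]", pw)) >= 3
        and len(re.findall(r"[0-9]", pw)) >= 2
        and len(re.findall(r"[^A-Za-z0-9]", pw)) >= 1
    )

print(validate_password("DonaldDuck99!"))                 # True -- trivially gamed
print(validate_password("correct horse battery staple"))  # False -- long passphrase rejected
```

Note what the checker rewards: a password any cracking dictionary will try early passes, while a long random passphrase fails for want of a capital letter.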

We’ll not allow passwords shorter than 8 characters — but also none longer than 14, because the DBAs are worried about the storage space they would require. Why aren’t we hashing the passwords? Well, yes, that would make the storage a non-issue, since all we’d ever store for each password is a constant-length hash. But then how would we be able to send users those friendly reminder emails when they forget their passwords, with the password in clear text?

Of course, they won’t be able to use that clear text password to log in, because we have not yet finished demonstrating that we Take Security Very Seriously™! See, now that we’ve made the passwords inhumane, we’re going to fix the front end to be sure that the ONLY way they can enter those inhumane passwords is to type them, one agonizing character at a time. Never mind the users who want to use really random passwords, so they get password managers that load the clipboard or fill in passwords for them. That black magic looks like a hacking tool to us, so we won’t allow it. No sir, only human fingers on a keyboard will be permitted here!

Today in Stupid Extensions of Biometric Authentication: this item from Sophos. Brainprints will apparently be the new fingerprints.

Here is what the press (and from the looks of it, half the security industry) seems unable or unwilling to get: you cannot change your biometrics. You cannot ever change your fingerprints. Nor can you ever change your iris, your retina, your “brainprint,” or any of the other too-clever-by-half schemes researchers may yet dream up for biometric authentication.

In fact, the whole idea of two-factor authentication has traditionally been based on “Something you know, something you have, something you are… pick two.” We need to drop the last, and go with “Something you know and something you have” – period.

Fingerprints are already easier to steal than a password ever was. Digital photography is probably good enough by now that iris patterns are equally easy, and retinal scans from afar cannot be that far behind. What was that twinkle? Oops, too late. Once the “brainprint” technology is usable, its targets will be equally pilferable.

Just because it looked cool in 1970s sci-fi does not mean it’s truly going to be valuable in this century.

When the FBI or some other government agency comes a-calling at any custodian of your private information, from Google or Yahoo! to the local public library, they bring something called a National Security Letter (NSL). This not only serves as a warrant for the information they seek, but it also includes a gag order — the institution is not permitted to disclose that they have been served, or what information they handed over.

But companies are fighting back, in a passive-aggressive way (don’t worry, this time it’s a good thing). As detailed in this article on ZDNet, companies have realized that post-Snowden, customer trust in the protection of their data is quite important. And so many of them are implementing what is called a “warrant canary.” The name derives from the old practice of taking a canary down into the mine with the coal miners: if gases started to accumulate, the more-sensitive canary would die first, hopefully giving the miners enough warning to escape the local buildup of carbon monoxide or the like.

Low-tech warrant canary

A warrant canary is a statement a company makes proactively that it has not received a demand for data — and the accompanying silence — bundled into an NSL. Then we in the public watch for the statement to go away. It can be a line in the text of a webpage, or a periodic statement, perhaps in a quarterly report for a public corporation. It can also be a sign on a bulletin board, as in the picture to the left.

Legal scholars wonder whether the NSL’s gag order can also be interpreted to require the subject organization to actively lie to the public, and continue to say, “no, they have not been here.” Moxie Marlinspike has stated his opinion that removing a warrant canary would “likely have the same legal consequences as simply posting something that explicitly says you’ve received something.”

But the Electronic Frontier Foundation (EFF) believes that a law specifically outlawing this practice would be required, and there is no such thing on the books as of now. So they have established a website, Canary Watch, that maintains a list of existing canaries and monitors them for changes.

ZDNet quotes EFF staff attorney Mark Rumold as saying, “No court has ever publicly addressed the issue,” and that it would be “unprecedented” for the government to force a company to keep that warrant canary in place. “I’m skeptical it would ever happen….”

Once a company has been served with a gag order, though, it’s too late. Verizon was forced to comply with a Section 215 order for the phone records of every one of its customers. And Twitter is suing the Justice Department, aiming to settle whether warrant canaries are protected under the First Amendment right to free speech.

Visit Canary Watch for more on this. I check it a couple times a week.

Wireless Car Locks are designed for convenience. Yours, and also car thieves’.

In this NYT story, the author describes why he now keeps his car keys in the freezer:

He explained it like this: In a normal scenario, when you walk up to a car with a keyless entry and try the door handle, the car wirelessly calls out for your key so you don’t have to press any buttons to get inside. If the key calls back, the door unlocks. But the keyless system is capable of searching for a key only within a couple of feet.

Mr. Danev said that when the teenage girl turned on her device, it amplified the distance that the car can search, which then allowed my car to talk to my key, which happened to be sitting about 50 feet away, on the kitchen counter. And just like that, open sesame.

He’s now using the freezer as a Faraday cage to prevent this – his Prius had been broken into three times as of this writing. The method is less useful for stealing the car than for entering it, since once the car is driven away there will be obvious difficulties without the key.

I think my plan will involve two things, neither of them below room temperature. One, we will no longer keep ANYTHING of value in the car. And two, we will get Faraday bags similar to those that protect your new “secure” passports, and keep our key fobs in there when not driving.

If you’re a student and you’re reading this, I just made you clench a little with that title, didn’t I? Well, here’s some news you can use: it never really goes away.
Ten years ago next month, I sat for the CISSP exam. Being a bit underemployed at the time, I had done little the preceding six weeks but study for it. I had to travel to NYC for the exam, which was a non-trivial financial risk, but lack of confidence has never been my issue. Even the night before in the hotel, though, I sat doing flash cards of the Legal & Regulatory elements, which was the one area I felt needed boosting. I could never get the hang of that domain, given its utter lack of internal logic or consistency. This is what keeps the courts in business, I suppose.

I went into the exam with a strategy of sorts. I was planning to give my brain “breaks” by doing 25 questions at a time, then reviewing those before moving on. I was never worried about the time limits. Right or wrong, I do these things quickly. I have yet to hear the words “pencils down” in a test, and that goes all the way back to the PSATs in 1972.

So there I was doing this answer 25, check 25 routine… and I started to notice something. The text of questions in the second half of the test started giving me clues to some answers I had not been so sure about in the first half. I know for a fact that there are at least three questions I would have had dead wrong on my test that I was able to fix, thanks to clues in the “givens” of later questions.

The only time-related distress I’ve experienced in a test was on the CISM exam. At that one, there’s one other CISM candidate among a gaggle of would-be CISAs. For no discernible reason, the proctor seats us next to each other. We start the test at 9:00. At about 10:10, I’m on question maybe 110 of 200… and doesn’t she close her book, go up front, hand in her paper and leave?! This freaks me out in no small measure. But to this day, I have no idea if she scored 100% or “no better than random”. I just figure it has to be one of those two extremes.

This comes to mind because I have now started to hear the siren song of yet another certification exam, the CCSP. It takes the same body of knowledge from the Cloud Security Alliance that went into the CCSK exam and adds continuing CPE requirements and renewal. I have a feeling it will be better-recognized. And hey, one thing I appear to be able to do well is take multiple-choice tests, so… why not?

“Digital Rights Management” is one of those things that sounds so benign. Like “Patriot Act”. In fact, DRM is a willful effort to make sure that your computer is not really your property, and that legitimate uses of it are under control of the corporations you bought media from. Oh, sorry, “bought media” is a misstatement. Under DRM, you cannot actually buy media. You can give corporations money, yes, but they retain the ownership of everything. You have only bought a license to use the media until… well… until they decide you can’t use it anymore. When this day arrives, you will have no recourse.

Security? Broken software is not secure. Proprietary encryption algorithms make me pull my hair out. DRM requires that you hold all the information in your hands and yet be subject to arbitrary restrictions on how it may be used. The theme of all DRM is, or should be, “Defective by Design.” Because the only way to make DRM even start to work is to break your software or device in some way, and then arbitrarily forbid you from fixing it.

Why the sudden DRM screed today? May 6th is the International Day against DRM and this has been welling up for some time.

I set out this weekend to figure out how to get PRTG Network Monitor to tell me the Internet bandwidth being used by our various machines, and where on the Internet all that data is coming from or going to. In order to get that level of detail, I have to enable SNMP and then attach a bandwidth monitor sensor to each device.

SNMP gets a pretty bad rap in the security world. It has hosted its share of vulnerabilities, and the default credential (“community string,” in SNMP parlance) of “public” gives up too much information far too easily. Every best-practices benchmark or manual will tell you to turn it off or reconfigure it so that none of the defaults are taken. More to the point, most modern OS distributions no longer enable it by default at all; you have to enable it explicitly.

Enabling SNMP with all non-default settings turns out to be a very finicky process, and unless an IT shop operates at a scale where everything is built from “golden images,” it has to be repeated by hand on every device. That makes it easier to understand why security inspections so often find the defaults taken. Best practices notwithstanding, the defaults on SNMP agents match the defaults on SNMP sensors out of the box. How incredibly tempting for an IT manager with a thinly stretched staff to do zero work, rather than the double work of setting up sensors and agents with non-default values and then testing to make sure the exact same non-defaults are set on both sides?
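To make the “double work” concrete, here is a sketch assuming net-snmp on Linux with a PRTG sensor on the other end; the community string and IP addresses are made-up examples:

```shell
# 1) Agent side: replace the default "public" community string with a
#    unique one, readable only from the monitoring server.
cat > /etc/snmp/snmpd.conf <<'EOF'
agentAddress udp:161
rocommunity N0t-Public-X7 192.0.2.10   # read-only, only from the PRTG host
EOF
systemctl restart snmpd

# 2) Sensor side: enter the *same* community string in the PRTG device's
#    credential settings, then verify end-to-end from the monitoring server:
snmpwalk -v2c -c 'N0t-Public-X7' 192.0.2.50 system   # 192.0.2.50 = monitored device
```

Every device needs both halves done, and the strings must match exactly on each side — one typo and the sensor silently shows the device as unreachable.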

This doesn’t make it right, but it sure makes it understandable. Any security manager needs to show some empathy when finding things like this in the environment.