I claim my First Amendment rights include the right to speak in unbreakable code. And as far as anyone else is concerned, it may just be pure gibberish. It's up to me to decide what it means. I could even do this verbally, by reciting words that are translated to other words using a one-time pad. And if spending money is considered protected speech, speaking in unbreakable code must be as well. There's nothing in the First Amendment that says the speech has to be intelligible to the Government or anyone else.
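For what it's worth, the "unbreakable" claim for a one-time pad is real, but only under strict conditions: the pad must be truly random, at least as long as the message, kept secret, and never reused. A minimal sketch (names here are illustrative, not from any particular library):

```python
import secrets

def otp_apply(data: bytes, pad: bytes) -> bytes:
    # XOR each byte with the pad. This is information-theoretically
    # secure ONLY if the pad is truly random, as long as the message,
    # and used exactly once. XOR is its own inverse, so the same
    # function both encrypts and decrypts.
    assert len(pad) >= len(data), "pad must be at least as long as the message"
    return bytes(d ^ k for d, k in zip(data, pad))

message = b"pure gibberish to everyone else"
pad = secrets.token_bytes(len(message))   # single-use random key
ciphertext = otp_apply(message, pad)      # looks like random noise
recovered = otp_apply(ciphertext, pad)    # applying the pad again restores it
```

Reusing a pad for two messages breaks the scheme immediately (XORing the two ciphertexts cancels the pad), which is why pads are impractical for most real traffic.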

So, the neutrality debate is about network providers adjusting performance for different types of content. It seems to me a far more important question than whether they are legally allowed to do it is WHY ARE THEY ABLE TO DO IT? A much bigger problem is the fact that they can tell anything at all about the content, because that means it's not secure, and that should be a violation of privacy. I'd prefer net neutrality failed, because we shouldn't be depending on network providers to monitor our traffic for content type; we should fix that by making it impossible, not by legal means, which clearly no longer apply to the government itself and, by extension, the companies that facilitate communications. The FCC net neutrality ruling doesn't fix the problem, it just lets the government PRETEND they've fixed the problem. Plus, the government wants net neutrality, else how are they going to "fast lane" their surveillance program traffic without exposing it to dweebs at the ISPs?

If they don't start calling you an engineer you'll likely hit the salary ceiling before long. I'm not sure who decided that, but it seems to be the case. And of course, even if they do call you an engineer, you'll hit the ceiling soon enough, just not quite as soon...

They are the intelligence community, not our national cybersecurity consulting firm, and they only ought to be notifying the public if the risk to national security involved in leaving the vulnerability open is greater than the risk to national security involved in losing the intelligence that could be gained from it.

What you're saying is we HAVE NO national cybersecurity entity whose purpose is to protect our infrastructure from bad actors using exactly the kinds of methods and exploits we're seeing here. And given that, we have to rely on Kaspersky to do it for us. Not only is it then a good thing, it's long overdue.

So everyone should just leave their doors wide open so the cops never have to break a door down to nab a crook? Yeah, right. If the NSA can hack into our computers, the bad guys can too. The best way to improve cybersecurity is to fix all the exploitable holes they've been using. But instead of helping us secure our systems, they've left them vulnerable because they're too lazy to pound the pavement, get individual warrants, and plant bugs. Having every computer system in the world remain vulnerable made their job easier, so they chose that route, which made the bad guys' efforts easier too. But hey, it's job security, eh?

So, your manager asks for it, tells you we need it. You provide him the above explanation and his eyes glaze over. He clearly either doesn't understand your explanation or doesn't care. He repeats his original statement-- I want it, we need it. You go around that way a few times and get nowhere. What do you do?

I suppose you could look for another job.

So when that happened to me, pretty much just like that, what I did was use a hash on the passwords (SHA-256 IIRC, it was a long time ago), then asymmetrically encrypt/decrypt the resultant hash with hardcoded keys just so they could say they secured their passwords with asymmetric encryption. And customers are very unlikely to know the difference (or at least, ours were), so there was no real risk if the sales force blabbed about it like that as if it were a useful feature. When management gets a security buzzword stuck in their heads, thinks they want it, and can't or won't be convinced it's not the solution they think it is, you give it to them if you want to keep your job, regardless of whether it makes any sense. Some developers won't even bother to find out what the right solution is, or won't have the luxury to actually implement it. I gave them what they needed, then bolted what they wanted on top as window dressing. And management will never read my comments on that code, which explain exactly what happened.

Why would you use it? Because someone in management read some unrelated article about security somewhere that said it was necessary, and if you don't use it in your implementation you're not doing it right. Or someone in sales had a customer ask if our product uses it for security, so now, whether or not it makes any sense, you have to figure out how to make use of it, because management won't take "that's complete nonsense and it's useless in that context" for an answer.

I've been doing it for 35 years and only once was I asked to do anything with encryption. The funny thing is, what I was asked to do made no sense whatsoever and would be completely ineffective towards their security goal: they demanded I use an encryption standard meant for a completely different job, essentially using a screwdriver to hammer nails. I was unable to convince them otherwise, so I decided to use the right tool to do the job, then bolted the bogus screwdriver on top so they also got what they thought they wanted. I didn't need to do that, but I just couldn't see going through some useless security ritual without actually providing any security. They got what they needed only because I cared enough to spend the extra time to give it to them. The thing to realize is management is often incompetent as well, especially when they think they know something about a technical solution merely because an ignorant customer asked them, "does it do ?" I hoped they wouldn't advertise what they thought they were doing, because any customer who knew the subject would recognize it as bogus voodoo.

Good security is hard. VERY HARD. The government is often bad at it. Sony is bad at it. Banks are bad at it. In fact, I can't point to anyone who's known to be good at it except maybe Zimmerman, and I don't even know that for sure. And users don't like it and will often bypass or otherwise subvert it themselves. But it's not because engineers are incompetent. Often they're not even asked to provide security and it isn't even on their radar. And sometimes when they are asked to provide security, they are saddled with bogus requirements for how it should be done. Good security affects the user interface and the users' behavior, and that's an area companies prefer to stay out of because it's unpopular, at odds with productivity, and isn't readily seen to contribute to their bottom line.

Except the Democrats shifted RIGHT. Remember when unions actually made some difference? I do. Did the Democrats do anything to fight the "giant sucking sound" of jobs being outsourced? Remember who signed NAFTA?

Are you kidding? In the '70s we still had unions. Both the Republicans and Democrats moved to outsource labor which has now mostly neutered the unions. The "giant sucking sound" Ross Perot talked about came to pass. And since then wages have stagnated except for those at the top.

I don't see them automatically sending NSLs to every company in the United States, even just once, much less on a regular basis. That in itself would create quite a stir. And if they try to do it just to companies who've published canaries, they'll be playing whack-a-mole with them. And they can't pass a law pre-empting canaries in general without running into freedom of speech problems. No, I don't see they can stomp on canaries and still continue to fly under the radar.

I would suggest there is a much cleaner way for the TLAs to make warrant canaries ineffective. Send a warrant to every company that publishes a canary. In a short space of time, no company of any note will have a canary, and the whole point of issuing a canary is defeated.

Too risky-- it would show up in Canary Watch when they all disappear, and you'd start seeing a lot of new canaries being published by companies who hadn't done it before, which would then all get their own NSLs, and the whole thing would continue to snowball until someone refused to comply with an NSL and the resulting stink would probably kill off NSLs altogether.

What probable cause? You obviously haven't been paying attention. The Snowden releases have proven without a doubt that probable cause restrictions are ancient history. The Constitution is supposed to set limits on government, but the government has been treating Section 215 as a one-size-fits-all loophole that permits anything.