Posted
by
samzenpus
on Wednesday June 26, 2013 @09:55PM
from the protect-ya-neck dept.

wiredmikey writes "Norwegian browser maker Opera Software has confirmed that a targeted internal network infrastructure attack led to the theft of a code signing certificate that was used to sign malware. 'The current evidence suggests a limited impact. The attackers were able to obtain at least one old and expired Opera code signing certificate, which they have used to sign some malware. This has allowed them to distribute malicious software which incorrectly appears to have been published by Opera Software, or appears to be the Opera browser,' Opera warned in a brief advisory. The Opera breach signals a growing shift by organized hacking groups to target the internal infrastructure network at big companies that provide client side software to millions of end users."

My guess is that you probably nailed it with the "to induce widespread panic" part. Nothing new here, hackers will use any method possible to trick people and conceal their true intentions, move along.

The whole idea of SSL is built on trust in the certificate and signing infrastructure. What's actually shifting is the assumption that SSL = safe + secure, which erodes every time shit like this happens over and over.

The original poster claimed that Opera laid off most of its developers, which is not true at all. They laid off maybe 20-25 developers out of several hundred. In other words, the AC you are responding to is at least 100% more believable than the original poster who has been caught making demonstrably false claims.

For a company that just laid off most of its developers and resigned itself to being a rebranded Google Chrome, this cannot be coincidental.

Laid off most of its developers? Opera had nearly a thousand employees, and hundreds of people working on the browser. 90 people left or were fired, and only about half were engineers (meaning programmers or testers). So if we assume that around half of the engineers who left were developers, something like 20-25 out of several hundred developers are now gone.

Does this really signal a growing shift? Or are we just saying that whatever happens in a news story must signal a "growing shift" toward that thing to induce widespread panic?

Criminal gangs and individual crackers have been growing more sophisticated in their computer crime activity for some time. If you're going to move up the food chain of commercially valuable exploits, this is exactly the sort of thing that you would expect. It makes it much easier to get malware accepted on a system, which means it makes it easier to extract some sort of value from the system. (Stolen data, botnet, spam host, etc.)

The real tragedy is the non-user-controllable code signing features being baked into some popular operating systems. They do not make us safer, but they do create a barrier to entry in the marketplace for legitimate software developers.

Some people, such as a PlayStation fan on Slashdot who will remain nameless, would argue that a barrier to entry is a good thing. It ensures that anybody who wants to distribute software to the public is serious about creating quality software. It's a fallacy, but like other fallacies, appeal to accomplishment [wikipedia.org] springs from a heuristic: companies that have successfully published quality works in the past are more likely to publish quality works in the future. The example he likes to trot out is the North Ame

Whenever the topic of security comes up, there are always a bunch of people who go on and on and on about how certificates are always the answer to security problems.

How do we fix security problems with email? "Certificates!", they say.

How do we fix security problems with HTTP? "Certificates!", they blurt out.

How do we fix security problems with DNS? "Certificates!", they scream.

How do we fix security problems with passwords? "Certificates!", they yell.

How do we fix security problems with application executables? "Certificates!", they exclaim.

Yet we see so many stories about certificates getting compromised in one way or another. And then the infrastructure surrounding them is always so goddamn awful. They cause just as many, if not more, problems than they actually manage to partially solve.

It's time for the certificate advocates to stop and think. They need to look at the big picture. They need to realize that while certificates may have their place in some very specialized situations, they are not the ultimate solution that we so desperately need.

Are you saying "certificate" when you mean "PKI"?

This might be taken as evidence that you know very little about security...

In this instance it is critical to differentiate: certificates have not been broken or compromised at all; the underlying implementations of the infrastructure, and the people handling that infrastructure, have been compromised or broken. Certificates in general are an excellent solution to many security issues, but they require good PKI and management, otherwise they are pointless. For many of their uses you don't even need to trust or rely on any external authority; you can run your own, which

SSH will currently do a key exchange using a trust-on-first-use approach, without a certification authority, and we should use the same system for end-to-end email encryption.

When connecting for the first time, SSH shows the public key fingerprint of the host you're connecting to. If you don't bother to check it, you're leaving yourself wide open to a MITM attack (and in this case, the attacker doesn't even need access to any certificate authorities).
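The trust-on-first-use model SSH uses can be sketched in a few lines. This is only a toy model of the logic, not real SSH: the `known_hosts` store here is just a dict standing in for `~/.ssh/known_hosts`, and the key bytes are made up.

```python
import hashlib

known_hosts = {}  # hostname -> fingerprint; stands in for ~/.ssh/known_hosts

def fingerprint(public_key_bytes):
    # SSH shows the user a hash of the host key, not the raw key itself
    return hashlib.sha256(public_key_bytes).hexdigest()[:16]

def check_host(host, public_key_bytes):
    fp = fingerprint(public_key_bytes)
    if host not in known_hosts:
        # First connection: nothing to compare against. This is the
        # window where a MITM goes undetected unless the user verifies
        # the fingerprint out of band.
        known_hosts[host] = fp
        return "first-use: fingerprint stored, verify it yourself"
    if known_hosts[host] != fp:
        # Key changed: either the server was reinstalled or someone is
        # in the middle. SSH refuses to connect at this point.
        return "MISMATCH: possible man-in-the-middle"
    return "ok: key matches previous connections"

print(check_host("example.org", b"server-key-1"))
print(check_host("example.org", b"server-key-1"))
print(check_host("example.org", b"attacker-key"))
```

Note that the model itself never authenticates the first connection; all later safety depends on that initial fingerprint having been checked, which is exactly the weakness the parent describes.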

Your proposed email system that blindly accepts every public key upon first connection is even worse than using CAs -- with certificates, you can at least choose which authorities you want to trust.

There's nothing wrong with tracking prior public keys. That's a good option for knowledgeable users, but it's a non-starter for people who know nothing about cryptography.

See for example what would happen when a key is compromised or just lost. In this case you have to warn everyone that your key will change. Now think of how often people will receive the message "hey, my email key has changed, so the warning you'll get is not a MITM attack", and how soon people will start clicking "accept" without bothering to check.

People ignore messages about certificates anyway. I managed to use a man-in-the-middle attack to steal an old IT teacher's password sent over HTTPS. I just used a self-signed certificate and he accepted it as if the warning were nothing out of the ordinary.

Your proposal would also need to be supplemented with SRP or another secure password system that allows two users to easily exchange relatively insecure passwords out of band to verify the exchanged verifier. This also applies to SSH, especially when remotely connecting to a box under your direct control.

You'd use this to supplement the baseline protection of using a PKI system to verify the verifiers.

Once the public key has been reliably transferred, it can then safely be used to securely r

You could communicate a key ID over another channel (in person, via phone, mail, etc.)
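
One way to make that out-of-band check practical is to derive a short, human-comparable fingerprint from the public key. The sketch below is a hypothetical scheme (the word list is made up, not from any standard); both parties run it on the key they received and read the result aloud over the phone.

```python
import hashlib

# A short word-encoded fingerprint is easier to read over the phone
# than a long hex string. This word list is an invented example.
WORDS = ["acid", "blue", "coal", "dusk", "echo", "fern", "gold", "hush",
         "iris", "jade", "kelp", "lava", "mint", "nova", "opal", "pine"]

def spoken_fingerprint(public_key_bytes, words=4):
    digest = hashlib.sha256(public_key_bytes).digest()
    # Use the low nibble (4 bits) of one digest byte per word
    return "-".join(WORDS[digest[i] & 0x0F] for i in range(words))

# If what the two parties read aloud matches, no MITM swapped the key
# in transit (up to the collision resistance of the short fingerprint).
print(spoken_fingerprint(b"some public key material"))
```

The obvious trade-off is that a 4-word fingerprint here carries only 16 bits, so a real scheme would use more words or combine it with a full fingerprint check.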

But what providers of shared hosting or a virtual private server are willing to do this for a customer? I've asked the tech support departments of a few such hosts, and the answer was "Just say yes to whatever key fingerprint your SSH client shows."

However, we can thin their ranks a bit. Support the death penalty for cyberthieves (at least in Texas).

I support a cyber death penalty for cyber thieves. But outright kill them? Seriously? I can think of a lot better types of people to put to death in Texas, starting with the lawyers and judges, then moving on to the politicians.

Did you recently...
- copy any HTML code from someone else's website?
- save any pictures or files from the web?
- cut and paste an article or link it to a friend?
- take any screenshots of any interesting pages you found?
- download any movies, music or porn?

Congrats, you may be a cyberthief. This way please, for your appointment with Mr. Noose.

That's right, cyber criminals must be made to eschew all technology post-1800 and be consigned to an Amish paradise for life and have sex with real women. No more computers, microwave ovens and clothes with buttons and zippers. Oh, and they have to go to Church too.

Opera is not the first nor the last victim of certificate theft. There is evidence that the use of digitally signed malware is increasing [techworld.com] since the Stuxnet incident gave this attack vector worldwide exposure.

Both Kaspersky Lab and BitDefender have confirmed seeing a steady increase in the number of malware threats with digitally signed components during the last 24 months. Many use digital certificates bought with fake identities, but the use of stolen certificates is also common, Craiu and Botezatu said.

I'm wondering about "The only effect of the revoke process is that the bad guys will not be able to sign any further malware with it" in the cited article. How would revocation prevent further signing? Using a CRL would (should?) prevent the signed software from being trusted, but signing with a key already in somebody's possession wouldn't be impacted.
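
The commenter's point can be shown with a toy model. The "certificate" below is just a named HMAC key (real code signing uses X.509 with RSA/ECDSA, and the cert name is hypothetical); the model only captures the trust logic: revocation is consulted by verifiers, never by the signer.

```python
import hashlib
import hmac

# Toy signing scheme: a "certificate" is just a named HMAC key.
certs = {"opera-old": b"stolen-secret"}
revoked = set()  # stands in for a CRL

def sign(cert_id, code):
    # Nothing here consults the revocation list: whoever holds the
    # key can always produce signatures, revoked or not.
    return hmac.new(certs[cert_id], code, hashlib.sha256).hexdigest()

def verify(cert_id, code, sig):
    # Revocation only bites here, on the verifier's side.
    if cert_id in revoked:
        return False
    return hmac.compare_digest(sig, sign(cert_id, code))

malware = b"evil.exe"
sig = sign("opera-old", malware)
print(verify("opera-old", malware, sig))   # accepted before revocation

revoked.add("opera-old")
sig2 = sign("opera-old", malware)          # signing still succeeds...
print(verify("opera-old", malware, sig2))  # ...but verifiers now reject it
```

So the article's wording is loose: revoking the certificate stops new signatures from being *trusted*, not from being *made*, and only on clients that actually check the CRL.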

I am the same AC that asked the original question, but I meant: what about other attacks that have happened in the past, like personal user data being stolen? So what you're saying is that the only way to know if any data has been stolen is if you see it posted online somewhere?

Some systems have access logs built in, so that even if you manage to get the data, you might not be able to remove your log entries for doing so. Varies case by case, of course.

(I guess, given the scale of it, this means all the spooks at the NSA are judges. Maybe they'll soon make all the street cops judges too; that would work out well, I'm sure. There's probably a cadet at the academy now who can't wait to have 'Judge' in front of his name. Cadet Dredd.)

And what I mean by "in" is that they do it while sitting in the USA and argue that it is then not a crime for them to perform something that is a crime in Norway (try it the other way around and they'll argue it's a crime that happened on US soil. Fuckers.).

Reading the advisory from Opera, the only information on the possible consequences of the breach is that:

It is possible that a few thousand Windows users, who were using Opera between 01.00 and 01.36 UTC on June 19th, may automatically have received and installed the malicious software. To be on the safe side, we will roll out a new version of Opera which will use a new code signing certificate.

Are users of other OSes similarly exposed to malicious software, such as those using Mac, Linux, Android, or iOS?

Apart from platforms that use OpenPGP, such as .deb-based GNU/Linux platforms, each platform has a separate signing certificate. OS X has its own, Android has its own, iOS has its own, and Windows has two: Authenticode for desktop applications and the Windows Store developer license for immersive applications. For small developers, it's a hassle to keep all of them renewed, but for companies big enough to draw targeted attacks like this, it's a benefit.

So if they removed that option from Opera 12, they would no longer be a browser company? That setting is what defines a browser company? Come on... you are making a fool of yourself

Admit it, you messed up. You claimed that all they do is to recompile Chromium, which is wrong since they've made their own UI. You then admitted that you were wrong but now insisted that they were just a UI company. I then pointed out that they are contributing to Webkit/Blink, and now you're just trying to change the subject.

But they didn't remove anything. They STOPPED MAKING browsers. Now they take the Chromium codebase, add their skin and call it a day. As a user I don't care about them contributing to some rendering engine if the end product is no longer the browser I was using.

You are extremely confused. That it's not the same browser you were using still doesn't mean they stopped making browsers. Are you trolling?

Again: You claimed that all they do is to recompile Chromium, which is wrong since they've made their own UI. You then admitted that you were wrong but now insisted that they were just a UI company. I then pointed out that they are contributing to Webkit/Blink, and now you're just trying to change the subject.

You first claimed that all they do is to recompile Chromium, which is wrong since they've made their own UI. You then admitted that you were wrong but now insisted that they were just a UI company. I then pointed out that they are contributing to Webkit/Blink, and you changed your claim to Opera only making a skin, which is obviously wrong again since they coded their own UI.