
YouTube hasn't been tied to Flash for a couple of months; HTML5 has been the default video playback mechanism since late January. Not every browser picks that up, apparently, since I've recently had Flash crash in Firefox during YouTube playback.

There are plenty of sites still tied to Flash, and that includes internal corporate sites. Those will be even harder to dig out, and Chrome is about the only means for Linux users to access Flash these days (at least in a vaguely secure fashion, since Flash for Linux hasn't been supported by Adobe in some time).

The Chrome dev team has been trying to eliminate cruft from the code base for a while now, as is visible if you spend some time in the bug tracker. This may be a case where they got overzealous about dropping legacy code paths when they implemented a new feature. But given the number of distros not yet on kernel 3.17 or later, it should have been obvious that backports would be required for many (most?) of them, and backporting is extra work that distro devs would rather avoid so they can concentrate on their standard code bases.

I see both sides of this: Google wants the most secure environment possible, and Debian has a development freeze for good reason. It's easy to overlook a flag like TSYNC if it's not being mandated by something major when the review is done, which may be the case here. But Debian may have to fold on this because they're not a big enough slice of the user pie to force Google to back down.

Presumably, you're running RHEL/CentOS 6. If so, that's cool if it works for you--the stability is probably greater than just about any other major distro's--but I think the expectation is that most who run Linux on their notebooks/workstations will run something newer and more flexible, and run something like that in a VM. But there's always the reality that RHEL/CentOS 6 won't run the latest software in many cases (unless you go with non-standard repos), and here a browser has become one of them.

It's also a bit surprising that you run a six-year-old notebook in a corporate environment. Even fiscally conservative companies tend to upgrade notebooks at least every four years, Fortune 100 or not.

No, those who want perfect solutions want the impossible. I want a framework that can be improved over time.

What's the goal? With maybe a handful of exceptions, everyone does something that can compromise their security. HTTPS relies on a trust architecture that, as we've recently been reminded (Superfish, PrivDog), is actually extremely fragile. And yet it's being encouraged to make the job of the average surveillance tool more difficult. It's very much letting The Other Guy(TM) (remember, three caps minimum on the TM'ed stuff) handle security. It has flaws, but it raises the bar.

That's what we need for end-to-end crypto. It can have flaws, but it needs to raise the bar, and be able to keep raising the bar.

As for understanding how it happens, how many people can describe how an RSA key is generated, much less how a proper PRNG produces a suitably random number and then how AES/Blowfish/whatever encrypts the data? Does the average person need to know that? Not really. And even if they did, they don't care, which is why they don't use it now.

Right now, we have options where you can let a CA provide your TLS certificate (usually 2048-bit and SHA1). If you know what you're doing, you can roll your own with better security. We need something with that flexibility (though I recognize the flaws of that exact model) for end-to-end crypto, too. We need clients that auto-update, that add or deprecate algorithms as they arrive or are broken without the user having to worry about it, and that can provide safe (and revocable) storage for the keys, so they survive a catastrophic loss or can be deleted with near-absolute certainty if the user wishes. We need common libraries or protocols that allow new or existing clients to safely implement connections to these services without having to build them from scratch, thereby preserving and encouraging competition.

These don't lead to a perfect system. They lead to a good enough system with room to grow and improve. But I would argue (as I think Moxie does) that what we have now is far from a perfect system because it's too difficult to use.

Not remotely. He's encouraging good encryption, but calling for some updates (GPG hasn't significantly changed since the mid-'90s) and a better wrapper. GPG is still largely by geeks, for geeks. I couldn't get my parents to use GPG because they'd dismiss it as too hard, even though one of them is happy to stick it to the man. The suggested minimum settings vary based on where you look and when they were posted.

Example: An RSA key size of 2048 bits is widely considered secure, but NIST recommends 3072 bits for anything that should stay secure into the 2030s. People still often see their e-mail as their private papers and may be concerned over who can read them well past the 2030s. But does that mean they use 3072, or go with the random crypto weblog guy who says to always go with 4096? And why can't I create 8192- or 16384-bit keys, like that software over there claims to support?
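As a rough guide, NIST SP 800-57 maps RSA modulus sizes to symmetric-equivalent security strengths, which helps explain why going past 3072 bits buys less than people expect. A sketch in Python (the step-function lookup is my own simplification of those tables):

```python
# Approximate security-strength equivalences from NIST SP 800-57 Part 1:
# an RSA modulus of a given size resists attack about as well as a
# symmetric key of the listed bit strength.
RSA_STRENGTH = {
    1024: 80,     # long since deprecated
    2048: 112,    # the common default
    3072: 128,    # NIST's recommendation for protection into the 2030s
    7680: 192,
    15360: 256,   # even 16384-bit keys gain little past this point
}

def strength_bits(modulus_bits: int) -> int:
    """Return the approximate symmetric-equivalent strength (step function)."""
    eligible = [m for m in RSA_STRENGTH if m <= modulus_bits]
    return RSA_STRENGTH[max(eligible)] if eligible else 0

print(strength_bits(2048))   # 112
print(strength_bits(3072))   # 128
print(strength_bits(4096))   # still 128 by this coarse table
```

So the jump from 2048 to 3072 is a meaningful step (112 to 128 bits), while a 4096-bit key mostly buys slower operations.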

And what hash to use? Plenty of sites still say MD5, but they were written years ago. Some have updated to SHA1, but others point out weaknesses there too. OK, SHA2, then. But then there's SHA256, which must be better, right? (I know SHA256 is a member of the SHA2 family, but those unfamiliar with crypto will not.)
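For the curious, Python's hashlib makes the generational differences easy to see; the message below is just a placeholder:

```python
import hashlib

msg = b"my signed e-mail"  # placeholder input, any bytes will do

# The same input run through each generation of hash. MD5 and SHA1 are
# broken for collision resistance; SHA256 (a member of the SHA2 family)
# is the current conservative default.
for name in ("md5", "sha1", "sha256"):
    digest = hashlib.new(name, msg).hexdigest()
    print(f"{name:>7}: {len(digest) * 4:3d}-bit digest  {digest}")
```

The digest sizes (128, 160, and 256 bits) are the visible difference, but the real distinction is in the collision attacks known against the older two.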

Until GPG-style crypto becomes relatively automated, it won't be embraced by more than a handful of people. HTTPS is widely used because people don't have to think much about it. This has some downsides for poorly-configured servers and Superfish/Comodo-style backdoors, but browsers and other software help take up the slack by rejecting poor configurations. PGP/GPG were designed to reach near-perfect levels of encryption, but that bar is clearly too high for significant uptake. We should instead be looking for something that encourages end-to-end encryption that is good enough. We can build on it if the underlying structure is properly designed, and as people get more accustomed to crypto in their lives, they'll be able to adjust to improvements.

When the majority of communications are relatively well-secured, it makes it far more difficult for a surveillance state to conduct its operations. Perfect security can still be a long-term goal, but we need more realistic goals to encourage uptake in the meantime.

The law generally states that when two vehicles are traveling in the same lane and neither makes an immediate change before a collision, the trailing driver is at fault. However, it's a valid defense if the leading driver performed an unsafe maneuver just before the collision, such as changing lanes with insufficient spacing.

RMS hasn't been an active developer in years by his own admission. His role is largely advocacy and philosophy, and that appears to be the sole issue here. However, he doesn't seem, based on a reading of the thread, to have any formal ability to block the patch.

You're not factoring in the number of workers who would not have gone in anyway, the lost productivity from being late due to weather for at least some of those who did go in, potential losses to businesses that didn't shut down completely for paying employees to show up but who had little to no business that day, and the costs associated with personal and property damage due to accidents. It gets complex quickly.

Without government intervention, a lot of people would have simply gone in to work because they were afraid that if they didn't show up, they could be in trouble with their employers. When the city makes the call, it's easier to point to that as a justification, and it's more likely to be accepted by the employer.

Based on mentions that they will tow it into place, that's a billion dollars for something that would be used for a few weeks and then left to sit for the next 25 years. Better to spend a few million dollars towing it into place. Less cost, and less machinery to go wrong over time.

You would think that with that volume of gas you would be up there with a nuclear-sized detonation.

It has a capacity of some 430 million liters of LNG. At an average density of 0.463 kg/L at -160°C, that's about 199 million kilos of liquefied methane. At 22.2 MJ per liter, that's roughly 9.5 billion MJ, or a bit over two megatons of TNT if it were all to go off at once.
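A quick back-of-the-envelope check of the figures (note that 22.2 MJ/L is a volumetric energy density, so it multiplies the volume, not the mass; all inputs are approximate):

```python
# Rough sanity check of the LNG numbers above (all figures approximate).
volume_l = 430e6          # liters of LNG capacity
density_kg_per_l = 0.463  # LNG density at about -160 C
energy_mj_per_l = 22.2    # volumetric energy density of LNG
MEGATON_MJ = 4.184e9      # one megaton of TNT, in megajoules

mass_kg = volume_l * density_kg_per_l
energy_mj = volume_l * energy_mj_per_l

print(f"mass:   {mass_kg / 1e6:.0f} million kg")        # ~199 million kg
print(f"energy: {energy_mj / 1e9:.2f} billion MJ")      # ~9.55 billion MJ
print(f"yield:  {energy_mj / MEGATON_MJ:.1f} megatons") # ~2.3 megatons
```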

Though I doubt that's possible. The storage tanks will have separation between them, so at worst there would be a chain of smaller explosions, which would dampen the overall impact somewhat.

It's not quite nothing--he did retweet it to give it some attention--but I thought it was iffy myself, and I am certainly no fan of Assange. I keep him on one of my Twitter lists just because his delusions amuse me (and because he sometimes posts something interesting). When something this unusual pops up, it's best to look into it a bit further.

I use sentences of my own creation. In the case of mandatory password changes, I will sometimes use some piece of trivia. For example, I might use the counties of a state. It reduces the entropy somewhat, especially if someone finds out what the reference is, but it allows me some room to work and embeds a new bit of trivia into my head.

I do use password managers (a couple of them, actually), and I know there are some enterprise password managers out there. There's a danger to stand-alone managers, but a well-managed enterprise should have all of the core passwords securely stored somewhere.

More launches mean more cost, especially if you're scattering them across launch pads located around the world. There aren't many sites that can handle significant launch masses: Cape Canaveral, Baikonur, Plesetsk, French Guiana, Jiuquan (China), Satish Dhawan (India), and Tanegashima (Japan). So you need enormous coordination between nations that have widely varying launch experience with their heavy lifters, that use different technologies and procedures, and that have different goals for their space programs. This doesn't even get into the politics of "What do you do for me if I agree to lift this 15T payload into orbit?"

It also would cost more fuel, since launching from different locations means having to match inclinations. This has already led to one major limitation with the ISS, since its inclination is a compromise between the ideal inclinations for Cape Canaveral and Baikonur.

On top of that, you add complexity in having to dock so many more times, increasing the risk of an incident. While the potential loss from a single large launch is significantly more than that of a single small launch, the cumulative risk of any loss is greater with multiple launches. Putting a thousand tons into orbit would take eight SLS launches, but at least 35 launches of the Delta IV Heavy or 44 of the Proton, currently the heaviest launchers available.
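Using approximate published LEO payload capacities (the exact tonnage varies by source and configuration), the launch counts work out as:

```python
import math

# Rough launch counts for 1000 t to LEO, using approximate published
# payload capacities in tonnes; figures vary by source and configuration.
capacities_t = {
    "SLS Block 2": 130.0,
    "Delta IV Heavy": 28.8,
    "Proton-M": 23.0,
}

payload_t = 1000
for name, cap in capacities_t.items():
    launches = math.ceil(payload_t / cap)
    print(f"{name}: {launches} launches")
```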

I would rather see projects like the Falcon XX or MCT encouraged, and I expect they'll be showing up on the test schedule around the same time as the SLS. But NASA is going to have their own path despite the costs, and so they may as well work on an SLS-class launcher. If nothing else, it will give SpaceX (and maybe others) something to aim for and probably provide some valuable lessons along the way.

For one thing, that is likely the storage size, not the transfer size, which is likely to be far smaller due to compression.

The transfer size probably is somewhat smaller. But to reach that uncompressed storage volume, there has to be a lot of data with poor compression ratios. I expect a lot of pristine, high-resolution digital video is in there, and that certainly won't compress well.

But as you point out, those can be terabytes in size. Even with the potential value of that, most people aren't going to download the raw files, and fewer still will go through the work of converting them to lower-res files more amenable to download. I'm not saying it won't happen, just that I think it's unlikely. Sony has more to worry about from the financial and personal information that was obtained than the revenue loss from any movies that were downloaded.

Why use random words? Use a sentence. I do that for many of my passwords. You get upper- and lowercase letters, symbols, and maybe even numbers, and it's not hard to go past 20 characters. It's highly customizable for each user and much easier to remember.
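A rough comparison of the two approaches, using Shannon's oft-quoted estimate of around 1.5 bits of entropy per character for English text (an illustrative back-of-the-envelope, not a security tool):

```python
import math

# Five words drawn uniformly from a 7776-word Diceware-style list
# have a precisely known entropy:
diceware_bits = 5 * math.log2(7776)
print(f"5 random words:   {diceware_bits:.1f} bits")  # ~64.6 bits

# A sentence is NOT uniformly random; ~1.5 bits/char is a common
# estimate for English prose, so a 25-character sentence gives very
# roughly this much (assumption, real values vary widely):
sentence_bits = 25 * 1.5
print(f"25-char sentence: ~{sentence_bits:.1f} bits")
```

The sentence trades some theoretical entropy per character for length and memorability, which is the whole point: a passphrase you actually remember beats a stronger one on a sticky note.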

The problem with this is that there are still too many systems with length caps that are too short. There aren't many workarounds for limits of 16, 10, or even 8 characters.