It is easy to argue that a perfect driverless car should act according to strict utilitarian principles, maximizing the number of lives saved. But a perfect driverless car, bug-free and unassailable, is still decades away, if it is even achievable. Imperfect driverless cars are close, but the rules are different. They must be. Until it can be proven that a driverless car is bug-free and utterly immune to outside attack, there must be no code path that allows it to deprioritize the lives of its own occupants. The three reasons for this are simple: bugs, attacks, and buggy attacks.

The issues with bugs and attacks are clear: bugs can cause random deaths, and attacks designed to kill the passengers create a new tool for those who would murder. But buggy attacks - that is to say, attacks that are not designed to harm anyone, but do so anyway because of faults in the attack code - may be the biggest threat of all. More than one piece of malware, particularly among the early viruses and worms, has proved far more destructive than its creators ever intended, all due to bugs, not in the code of the system being attacked, but in the attack code itself. Even if the code in a driverless car's system could be guaranteed bug-free, we cannot assume the same of attack code, which is what makes immunity to attack so important.

We are not yet at a point where we can guarantee such security. Until we are, we must not allow driverless cars to deprioritize their own occupants' safety, even in cases where doing so holds great philosophical appeal. Allowing it would almost certainly take far more lives than it would save.

In an age where methods of creating bug-free software have yet to be perfected and attackers of all kinds are pervasive and motivated, there must not be any code path in a self-driving car's software that allows it to deprioritize the lives of its own occupants. Otherwise, that path will be taken, far more often than anyone intended, and in this case people would die for it. More people, probably, than would ever be saved by kill-the-occupants code in the first place.

Truth be told, I am less worried about deliberate "murder by self-driving car" attacks - though those do merit concern - than I am about buggy code in attacks of other kinds. I look to the early viruses and worms as an example: more than one turned out to be far more destructive than its creator intended, due to bugs in the virus code itself.

Would it be a good idea to stick a small piece of aluminum foil under the tape? Not a huge amount, just enough to cover the lens. It might not even have to stick out from under the tape.

I ask this because not all Webcams have good infrared filtering, and tape by itself often lets IR through. Aluminum foil should theoretically take care of that. But do modern built-in Webcams still have IR filtering bad enough to even make this necessary?

Python compiles to bytecode, much like Java does. In fact, Python was doing it several years before Java was ever introduced.

Yes, one can argue that JVM bytecode then sometimes gets JIT-compiled to machine code, but by that point, the Java programming language is gone. What actually gets JIT-compiled is JVM bytecode, and this could have been compiled from any language. Like, say, Python.
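For the skeptical, CPython's compilation to bytecode is easy to observe with the standard-library `dis` module. (This shows CPython's own bytecode, not JVM bytecode, and the exact opcodes vary between Python versions, but it demonstrates the point: there is a compiled code object behind every Python function.)

```python
# Disassemble a trivial function to show the bytecode CPython compiles it to.
import dis

def add(a, b):
    return a + b

ops = [ins.opname for ins in dis.get_instructions(add)]
print(ops)

# Every function carries its compiled bytecode as raw bytes,
# and some form of RETURN opcode always appears in it.
assert isinstance(add.__code__.co_code, bytes)
assert any("RETURN" in op for op in ops)
```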

Again, though, that misses the point. You offer a prize to hack an insecure browser as a means of shaming the browser's developer. That's how it worked, and more to the point, that's why it worked. Have the Pwn2Own folks perhaps lost sight of that original purpose?

I thought Pwn2Own was supposed to be all about shaming vendors into cleaning up their act. If Firefox's security is really so poor, then shouldn't these guys be directing more resources toward it, rather than less?

Is this not a large part of how Microsoft was pressured into finally making certain decisions which, while clearly necessary, were very inconvenient from its own perspective? Why are we to believe that it would not work again?

You could almost say this of Daesh even a few years ago, when they kept their antics mostly confined to their own territory. You'd still be wrong, because that argument requires you to ignore both the sheer enormity of what they do to the people inside that territory and the blatant expansionism they have practiced and continue to practice, but the argument could at least be considered semi-reasonable.

But that was before they started going after their own refugees: people who were outright running away. Quarrel or no, they posed no threat; indeed, posing no threat was, for many, a driving factor in their decision to leave. And if Daesh are willing to do this to refugees, then the only even half-baked argument left for calling them anything but pure evil is gone now. It is time to accept that we are dealing with the Nazis of our time, and treat them accordingly.

Yes; of course it does. When you log into a system, you expect to use the system. All the data in the system becomes human-readable and, of course, unencrypted.

That's what happens when you encrypt a full disk at once, yes. Full-disk encryption is a useful tool for protecting against stolen drives, and it might even be what the author was thinking of when they mentioned "encryption". And, just as the author said, it would have been inadequate to prevent this kind of attack.

But that's not the only way to implement encryption, and it's not the way that people are calling for here. Whether or not the disk is encrypted, individual files can be encrypted as well. That way, even when the disk-level encryption is undone (so that the user can access the system), the file-level encryption remains in place. This is, at a bare minimum, what should have been in place here; there are ways to do even better, but this alone would have stopped the attack in question.
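A toy sketch of that layering, to make the distinction concrete. This is not real cryptography - a production system would use a vetted library with an authenticated cipher, and every name here is invented for the example - but it illustrates the property that matters: a file encrypted under its own password stays opaque even while the disk is mounted and the user is logged in.

```python
# Toy XOR stream "cipher" keyed from a password via SHA-256.
# Illustrative only: do NOT use this construction to protect real data.
import hashlib

def keystream(password: bytes, salt: bytes):
    """Yield an unbounded stream of key bytes derived from password+salt."""
    counter = 0
    while True:
        block = hashlib.sha256(salt + counter.to_bytes(8, "big") + password).digest()
        yield from block
        counter += 1

def toy_encrypt(data: bytes, password: bytes, salt: bytes = b"demo-salt") -> bytes:
    # XOR each data byte with the next keystream byte.
    return bytes(b ^ k for b, k in zip(data, keystream(password, salt)))

# XOR stream ciphers are symmetric: decryption is the same operation.
toy_decrypt = toy_encrypt

secret = b"patient records"
blob = toy_encrypt(secret, b"hunter2")
assert blob != secret                            # opaque on disk, even when mounted
assert toy_decrypt(blob, b"hunter2") == secret   # readable only with the password
```

The point of the sketch: decrypting the disk (logging in) never touches `blob`; only supplying the file's own password does.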

"Efficient downloading" is a nonissue. Existing compression, concatenation, and minification techniques yield file sizes that a binary format will have a great deal of trouble beating at all, and even when it does, the savings will be no more than a few bytes at best.

"Efficient parsing" is a nonissue. This has been true for decades. Browsers simply do not spend enough time parsing JavaScript for it to ever become an issue. This is sorely misguided premature optimization at best.

"Language choice" is a nonissue. Emscripten and its kin have already solved the problem of compiling other languages to JavaScript. These languages will still have to be compiled to the bytecode, and gain no benefit from doing so.

"High performance" is a nonissue. This is what asm.js is for, and indeed, the existing polyfill uses asm.js to achieve its performance gains. This is a newer solution than those for the previous problems, but either way, the problem is solved.

"A standard runtime specification" is a nonissue. We already have one of those. It's called ECMAScript.

There is no point to this. All it does is comply with buzzwords and kowtow to JavaScript-haters. And make closed-source Web applications that much closer to feasible, I guess, but no one would consider that a benefit, right?

The article's author makes it sound like logging into the system would have automatically unlocked the encrypted files, or at least have allowed a logged-in user to get at the keys without authenticating further.

I suppose an encryption scheme could be implemented that way, and, just as the article suggests, such a scheme would have been useless here. But encryption doesn't need to be implemented that way, shouldn't be implemented that way, and is in fact harder to implement that way. That design would provide protection against stolen hard drives, but drive theft isn't the main threat model for data like this, and a proper scheme would protect against it equally well while handling additional threats.

It's a simple policy: some things do not go in your freaking keychain. Important data like this, if it must be encrypted with a password, should require that password to be entered manually, every time. Yes, it is less convenient, but some things are too important to afford shortcuts.

Is one molecule truly the limit? Certainly it is as long as we view the various components of electronics as discrete objects: you can split a molecule, but this results in smaller molecules (of different types, but molecules all the same), so miniaturization becomes a race to see who can make the smallest molecules act as the different kinds of components.

But the integrated circuit allowed for many components to be combined into a single discrete object. Does physics allow for the possibility of doing this on a molecular scale: a "molecular integrated circuit", where individual atoms within a molecule act as components that affect how charge flows through the molecule's chemical bonds?

Obviously, our technology is not at the point where such a thing could be created. It may very well require molecules to be assembled atom by atom. What I'm asking is whether physics, as we currently understand it, allows for the possibility of such a molecule.

Two equal candidates, but one who overcame greater adversity to reach that point, suggesting they have greater inherent potential.

Part of the point of an egalitarian system is the idea that inherent potential is not a thing. Not to any significant degree, at any rate. This argument runs directly counter to the underlying philosophy on which your basic thesis depends.