They happen when they happen. Friday is often much earlier than Monday or Wednesday, but need not be. We must flow with the currents and bend in the breeze.

I predict that this thread will either be deleted shortly or folded into a suitable generic-chatter subforum thread (though I don't think there is one, nor maybe should there be - that'd be up to TPTB to arrange or allow). Especially because otherwise one could see people "First post!"ing each comic before it's even published. That would be a horrible precedent.

(My apologies to mods for making assumptions on their behalf, and perhaps improperly presuming to do their job for them. Just thought a quick word in the open, even if later deleted, might help more than a PM to the OP alone. Please remove this and replace it with your own guidance, especially if yours is better. Like "read the stickies!", probably… )

I'd love to see how someone uses a hammer to exploit a server vulnerability. You can destroy the server, but that doesn't get you any access at any level. Hammers might be effective in exploiting the human element though.

Besides, there's the old adage "physical access is total access". If someone is able to bring a hammer to a server, they will most likely be able to do whatever the hell they want with that server, regardless of the hammer.

I think this, like rowhammer, shows mostly that we are getting sufficiently good at software security that attackers have to exploit the hardware. The problem, of course, is that hardware isn't as easy to fix as software. Sometimes you can, as with Meltdown, and sometimes you can't, as with rowhammer (except by replacing the hardware).

What surprises me is how old these vulnerabilities are:
- Rowhammer can be applied to DDR3 chips from 2012 onwards, so it only took two years for the paper to surface.
- Heartbleed was also introduced in 2012, with discovery also two years later.
- But every CPU made by Intel with out-of-order speculative execution has been reported as potentially vulnerable to Meltdown - that goes back to the Pentium Pro from 1995.
- Presumably the Spectre vulnerability has a similar vintage, since it depends on branch prediction (the first Pentium was 1993?).
(For reference, the Grub2 backspace bug went undiscovered for 6 years.)

Pfhorrest wrote:I fail to see what moral dilemmas like the Trolley Problem have to do with speculative execution.

Where's my vitssagen when I need it??

I'm losing faith in the "many eyes make all bugs shallow" adage, and replacing it with the on-boot mantra: "these are not the bugs you're looking for".

Jose

Order of the Sillies, Honoris Causam - bestowed by charlie_grumbles on NP 859 * OTTscar winner: Wordsmith - bestowed by yappobiscuts and the OTT on NP 1832 * Ecclesiastical Calendar of the Order of the Holy Contradiction * Please help addams if you can. She needs all of us.

Rombobjörn wrote:Humans just suck at computers. That sums up the entire industry rather well – both hardware and software.

Once upon a time there were some really smart humans who figured out how to get rocks to think by trapping lightning inside them, and then some other humans, who were really smart but not the same kind of smart, started making the rocks smaller and smaller. To be efficient, though, the humans who managed the smart humans made them work in parallel, so the rocks could think in parallel and have memories too. But the problem was that the manager humans didn't want the smart humans to talk to each other (smart-human uprising potential - got to keep them isolated), so when it all came together, issues arose that no one saw coming.

Plus side is that third-party smart humans have figured out that there is an issue... It comes down to some parts of the whole being smarter than the rest, but they (the smart humans) are kept isolated for safety reasons and typically don't get to see the whole. That is why Google/Alphabet is dangerous: they are not working on AI, they are just exposing the smart humans to external stimuli and larger data sets.

Unless stated otherwise, I do not care whether a statement, by itself, constitutes a persuasive political argument. I care whether it's true.---If this post has math that doesn't work for you, use TeX the World for Firefox or Chrome

I'm going to have to stick up for humanity here. Humans are freaking awesome, but we've created things, by working together, that none of us can fully understand. We come up with incredibly clever things, but there are unforeseen and unintended consequences from time to time. These lead us to come up with even cleverer things in response. We should remember that it was humans, not AIs, gods or aliens, who identified these vulnerabilities.

I do have sympathy for the view that the software we create has already become too complex for our puny brains to understand, such that I have to put up with bugs on my windows PC that probably nobody could explain. But software engineering is a relatively young field and this might be just another problem for us to solve. After all, cathedrals used to fall down all the time until we figured out how to build them safely. We're already building software a little better than those medieval architects, but 60-70 years isn't a long time in human history.

orthogon wrote:Humans are freaking awesome, but we've created things, by working together, that none of us can fully understand.

I'd say this is true for any social animal, because it's a prerequisite of an evolutionarily stable society.

...well, it might not be true for societies that develop so fast that they never come close enough to their current Pareto optimality, even when their members understand where that optimality lies. But that's not really biological evolution anymore.

fluffysheap wrote:I think this, like rowhammer, shows mostly that we are getting sufficiently good at software security that attackers have to exploit the hardware.

Actually, Heartbleed is only about 2 months older than rowhammer.

Who needs hardware exploits when you only have to press backspace 28 times?

Show me a computer secured against attacks from a hacker with physical access and I'll show you a computer that has been blasted by putting mains voltage across the power input. [Also, that shows just how nasty BadUSB can be, although I think BadUSB has to be custom-crafted (out of, say, a Raspberry Pi, rather than just an infected USB drive).]

Humans can make computers that can avoid all of this. They just won't be very fast, and they will be extremely expensive. And there will be some annoying cases that they just can't do (like survive direct contact with a malicious hacker. Or come up with a good replacement for passwords).

There was a book written long ago, The Mythical Man-Month, which described all the problems with building highly complex systems (in that case, building computers and writing operating systems [it might not have even touched on the hardware]). In the end (and later, when a new afterword was written), the claim was that there is "no silver bullet" to make humans suck less at this type of thing. I would argue that this is wrong, that we have a plethora of silver bullets; to prove it, compare the complexity of any modern OS to OS/360 [the subject of the book]. The problem is that writing software/building systems that can be constructed with those "silver bullets" is what mathematicians call "a previously solved problem", and economically it puts you in competition with the used market. The market always demands more, in the realms where no silver bullets exist yet.

PS: I think IBM POWER6 and sufficiently decrepit [in-order/32-bit] ARM chips are safe. Maybe some of the earliest Intel Atoms. AMD chips are supposed to be "mostly safe": there are weaknesses, but nobody has managed to create an exploit (and Intel has a lot of money and smart people, and would love to have one).

A recurring problem in computer security (and computing in general) is that most hardware and software is designed to work when used correctly, with the tacit assumption that anyone trying to use it is going to do so in good faith and with at least a minimal degree of competence and understanding. Then it gets into the hands of users, and they do all kinds of crazy stuff that the original designers never thought anyone might do, so unexpected things happen.

A system designed from the start to not do things it shouldn't rather than to do things it should is going to be less efficient at doing things generally (though by the time the other system's had patches stuck all over it to plug the security holes and remove the bugs, it'll probably be even less efficient), but will also be less prone to surprises...

orthogon wrote:Humans are freaking awesome, but we've created things, by working together, that none of us can fully understand.

I'd say this is true for any social animal, because it's a prerequisite of an evolutionarily stable society.

...well, it might not be true for societies that develop so fast that they never come close enough to their current Pareto optimality, even when their members understand where that optimality lies. But that's not really biological evolution anymore.

Indeed; I hadn't considered how it's a result of being a social animal per se, and your knowledge of Economics is much better than mine, but I was definitely thinking that it's probably inevitable that an intelligent species will eventually build things that are beyond their own comprehension. That's why we shouldn't beat ourselves up about it. Superintelligent aliens would probably get themselves into the same pickle.

Current CPUs do not actually speculate down both sides of a branch. Wikipedia calls that "eager execution", and if CPUs did it, they would not need branch prediction. Eager execution costs more than branch prediction, so the latter is preferred.

Instead, a phantom trolley runs ahead down the predicted side of each branch, hammering everything, and if the real trolley takes the same side of the branch, the phantom trolley becomes the real trolley. The exploits involve deliberately sending the phantom trolley down the wrong side of a branch.

rmsgrey wrote:A recurring problem in computer security (and computing in general) is that most hardware and software is designed to work when used correctly, with the tacit assumption that anyone trying to use it is going to do so in good faith and with at least a minimal degree of competence and understanding. Then it gets into the hands of users, and they do all kinds of crazy stuff that the original designers never thought anyone might do, so unexpected things happen.

Also a problem outside of computing. A few people playing defense, versus the rest of humanity playing offense--or just milling around incompetently, not even realizing they're in the game--will never favor the defense.

You can have the best information-age technology defending your home, but anyone with Neanderthal technology can get in: a medium-sized rock to chuck through a window, possibly conveniently provided by your own landscaping. By the time you're safe from everything like that, you've basically decided to live in a prison. (See also: the TSA literally searching my arthritic 70-year-old mother's carry-on bag last year... to keep the interstate transport of a two-week supply of her prescription drugs from causing the next 9/11, or something...)

This is true for everything from street crime to epidemic disease to tax law to nuclear war. ("Mutually Assured Destruction" was a case of neither player ever doing the unexpected; one of those rule-proving exceptions. So far, anyway.)

A system designed from the start to not do things it shouldn't rather than to do things it should is going to be less efficient at doing things generally

True. Consider a botnet-infected computer with no virus protection, versus a clean computer running very strong anti-virus software. The botnet member will be a better performer, in general. If it wasn't--if the botnet program was a noticeable performance hit--people would notice it and find a way to remove it. (I suppose part of the "botnet problem" is the number of people who don't need the performance a modern computer brings; web-and-email on a 5-year-old computer is the equivalent of taking the Veyron to the grocery store. My old Apple ][+ didn't have any cycles to spare. Nor a connection to any other computer faster than SneakerNet...)

And this is true of things that aren't very complex "systems", also. A truck or an airplane that cannot be used as a weapon... probably can't be used as a truck or an airplane. (Also wheelbarrows.)

(though by the time the other system's had patches stuck all over it to plug the security holes and remove the bugs, it'll probably be even less efficient)

And that's also true outside of computing. But for both types of systems; the "designed to not do other things" and the "designed to do things" systems both need patches, because the people who wrote the design weren't omniscient. Original US Constitution: "here are the things the federal government can do, and nothing else". Current body of US law: "[regulations] stuck all over it to plug the security holes and remove the bugs".

Democracy looks like a good system, but it takes time. Dictatorship gets shit done fast. Or, just within the USA, getting {51,60,67} senators (plus representatives, plus President) to agree on something takes time, while one guy writing executive orders (or un-elected bureaucrats in executive agencies writing "regulations" that have the force of law) is much more "efficient".

Back to your first sentence, I'm not sure there's a computing analog to Friedman's quote that "I do not believe that the solution to our problem is simply to elect the right people. The important thing is to establish a political climate of opinion which will make it politically profitable for the wrong people to do the right thing."

Maybe we should try Einstein's apocryphal "make it as simple as possible, but no simpler"...

Eternal Density wrote:Translation for those who don't speak OTTer:Speculative execution is likened to the trolley problem via a terrible pun.

Better translation: "I see what you did just there." But yeah, that's what he did just there.

Jose

Order of the Sillies, Honoris Causam - bestowed by charlie_grumbles on NP 859 * OTTscar winner: Wordsmith - bestowed by yappobiscuts and the OTT on NP 1832 * Ecclesiastical Calendar of the Order of the Holy Contradiction * Please help addams if you can. She needs all of us.
