And yet people still wonder why so many consumers are hesitant to allow any
sort of software update to install. Philips isn't just turning their
product into a walled garden. They're teaching more people that "software
update" = "things stop working like they did".

Below the fold, some commentary.

One major problem with the IoT is that in most cases software updates aren't available. But even when they are, consumers are very slow to install them. Incidents like this are an important reason why, and I can sympathise. An iOS upgrade effectively bricked my second iPhone; after the interminable download, it took the iPhone 20-30 seconds to echo a keypress! I'm now very slow to upgrade iOS, waiting until there are numerous reports of successful upgrades on my particular hardware before trying it.

Selling a product that is clearly labelled as being intentionally defective because it is DRM-ed is one thing, but subsequently and without notice rendering a product defective that wasn't when purchased is quite another. Philips got so much flak about this boffo idea that they restored interoperability after a couple of days.

But I'm sure others will not learn from this, and we'll see more attempts to cripple paid-for functionality. TKnarr at Techdirt proposed an interesting way to fight back:

Their controllers say Zigbee Light Link protocol 1.0 certified. If the firmware update renders the controllers incompatible with Zigbee Light Link protocol 1.0 (i.e. will not interoperate with bulbs using that protocol), that's a manufacturing defect. I'd simply return the defective controllers to where you bought them and request a refund (a replacement isn't acceptable since Philips has made it clear all of their controllers are or will be rendered defective). Sorting out the defective merchandise with the manufacturer is the store's problem.

The store will probably balk at refunding your money. Your state Attorney General's office would probably appreciate reports of stores refusing to accept returns of defective merchandise, seeing as various warranty and consumer-protection laws require them to.

What would have happened if I'd tried to return my iPhone as defective?

"This is interesting because although the first affected version was released in 2012, the authentication backdoor did not seem to get added until a release in late 2013 (either 6.3.0r15, 6.3.0r16, or 6.3.0r17)."

Ryan Gallagher and Glenn Greenwald quote Matt Blaze as skeptical that either of the Juniper vulnerabilities is related to FEEDTROUGH.

"Matt Blaze, a cryptographic researcher and director of the Distributed Systems Lab at the University of Pennsylvania, said the document contains clues that indicate the 2011 capabilities against Juniper are not connected to the recently discovered vulnerabilities. The 2011 assessment notes that “some reverse engineering may be required depending on firmware revisions” affecting targeted NetScreen firewall models. Blaze said this points away from the sort of ScreenOS compromise behind the more recent Juniper vulnerabilities."

Back to the Internet of Things with Wheels That Kill People. Cory Doctorow has a fine piece at The Guardian arguing that the Trolley Problem is a distraction from the real problem with self-driving cars, which is that their code will be DRM-ed:

"Whatever problems we will have with self-driving cars, they will be worsened by designing them to treat their passengers as adversaries."

Nick Reed of the UK's Transport Research Laboratory similarly dismisses the Trolley Problem in his lecture for The Register.

Matthew Green weighs in on Juniper's Dual EC vulnerability. He and others have asked whether Juniper's initial value of Q, which suffered an unauthorized change, would have allowed Juniper to decrypt traffic:

"First, ScreenOS doesn't use the NSA's default Q. Instead, they use an alternative Q value that was generated by Juniper and/or NetScreen. If Juniper had really wanted to rule out the possibility of surveillance, this would have been a good opportunity to do so. They could have generated the new constant in a "provably" safe way -- e.g., by hashing some English-language string. Since we have no evidence that they did so, a conservative view holds that the Juniper/NetScreen constant may also have been generated maliciously, rendering it useful for eavesdropping."

As I see it, this possibility can be narrowed. Suppose Juniper, or some agency they authorized, generated the original Q with a known key and was using it to decrypt traffic. When Q was changed, they would have lost access to the traffic, and so would have been alerted to the unauthorized change. Thus it seems that the original value of Q was not being used by Juniper, or an agency they authorized, to decrypt traffic. This does not rule out the possibility that the original Q was surreptitiously generated with a key that was later leaked without authorization.
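Green's "provably safe" option is the standard nothing-up-my-sleeve construction: derive the point from a public string, so that nobody can plausibly know a key for it. A minimal sketch on P-256 using try-and-increment (the seed string is hypothetical, and there is no evidence ScreenOS's Q was generated this way; the curve constants are the public NIST parameters):

```python
import hashlib

# NIST P-256 domain parameters (public constants)
p = 0xffffffff00000001000000000000000000000000ffffffffffffffffffffffff
a = p - 3
b = 0x5ac635d8aa3a93e7b3ebbd55769886bc651d06b0cc53b0f63bce3c3e27d2604b

def hash_to_point(seed: bytes):
    """Try-and-increment: hash the seed (plus a counter) to an x-coordinate
    until x^3 + ax + b is a square mod p, then take the square root.
    Nobody can know the discrete log of a point derived this way."""
    counter = 0
    while True:
        digest = hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        x = int.from_bytes(digest, "big") % p
        rhs = (x * x * x + a * x + b) % p
        # For P-256, p % 4 == 3, so a square root (if one exists) is
        # rhs^((p+1)/4) mod p; verify by squaring.
        y = pow(rhs, (p + 1) // 4, p)
        if (y * y) % p == rhs:
            return x, y
        counter += 1

# Hypothetical seed string -- the whole point is that it is public and boring.
Qx, Qy = hash_to_point(b"Juniper ScreenOS Dual EC Q value")
```

Had Juniper published such a derivation, anyone could re-run it and confirm the constant was not chosen with a trapdoor; the absence of one is what motivates Green's conservative view.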

Some good comments:

Rui Sebra: Just pointing out something nobody seems to say: their internal repositories must have been attacked successfully. They're p0wned. Who's to say other products from them don't have other or similar vulnerabilities?

Guy Gordon: Wait. They reset Q, but left the bug? How does that make sense?

Anonymous: The real bug is the scoping of prng_output_index. It could have been global, as the comment says, or just static but with both prng_generate() and prng_reseed() in the same file. The coder should have made prng_output_index local on the stack frame of prng_generate() and all would have been fine. Safety critical code (avionics, medical devices, etc.) usually uses MISRA static code analysis to catch stuff like this. This code would not have passed a MISRA scan. Why is it that crypto code is not considered as safety critical as those other areas? Many times, it secures the communications link with such devices.
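The scoping hazard the commenter describes can be sketched in a few lines. This is a simplified illustration whose function names and shapes are assumed from public analyses, not Juniper's actual C source: a global output index shared by the reseed and generate routines lets the callee silently exhaust the caller's loop, so the raw generator output is never post-processed.

```python
# The hazard: an output index at module/global scope, shared by both routines.
prng_output_index = 0

def prng_reseed():
    global prng_output_index
    # Reuses the shared index as its own loop counter...
    for prng_output_index in range(32):
        pass                          # mix fresh entropy ...
    prng_output_index += 1            # ...and exits with it at 32.

def prng_generate():
    """Intends to post-process 32 bytes of raw generator output, but the
    shared index is already 32 after the reseed, so the loop never runs
    and raw output would escape unwhitened."""
    global prng_output_index
    prng_reseed()
    processed = 0
    while prng_output_index < 32:
        processed += 1                # whiten one byte ...
        prng_output_index += 1
    return processed                  # 0, not 32

def prng_generate_fixed():
    """The fix the commenter describes: a local index the callee can't touch."""
    prng_reseed()
    processed = 0
    for _i in range(32):              # stack-local loop counter
        processed += 1
    return processed                  # 32
```

A static-analysis rule like MISRA's restriction on object scope flags exactly this pattern: the index is visible to more code than needs it.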

"Further inquiry into these routers showed that all were still using their default credentials. This made them very easy to take over by attackers, since both the default admin username and passwords were non-existent (blank spaces).

Additional investigation also revealed that some of the routers were also susceptible to various reflected XSS and CSRF attacks that would also allow attackers to take control of the device, even if using different login credentials.

Using Shodan, a search engine for locating Internet-connected devices, researchers found over 12,000 of Aethra routers around the world, 10,866 in Italy alone, and over 8,000 of these devices were of the model detected in the initial brute-force attack (Aethra Telecommunications PBX series). At that time, 70% of these Aethra routers were still using their default login credentials."

Today's vulnerable Things in the Internet are the ubiquitous payment terminals. What could possibly go wrong?

"Credit card users could have their PINs stolen, and merchants could have their bank accounts pillaged, in a set of attacks demonstrated by researchers Karsten Nohl and Fabian Bräunlein at the Chaos Computing Club security conference."

But not to worry, the companies behind the terminals are all over the problem:

"Nohl reported his findings to payment processors, but they have so far done little in response. Reuters writes that the German Association of Savings Banks, issued a statement on behalf of all German banks, saying the attack scenarios presented by Nohl were only theoretically possible. Nohl demonstrated this "theoretical" attack on stage at CCC, and says that he has made dozens of test transfers proving that the flaw is real."

I'm not the only one annoyed at Apple for bricking my iPhone. An anonymous Slashdot commenter reports:

A $5 million lawsuit filed in New York federal court alleges that Apple's iOS 9 mobile operating software significantly slows down the iPhone 4S. According to the complaint: "The update significantly slowed down their iPhones and interfered with the normal usage of the device, leaving Plaintiff with a difficult choice: use a slow and buggy device that disrupts everyday life or spend hundreds of dollars to buy a new phone. Apple explicitly represented to the public that iOS 9 is compatible with and supports the iPhone 4S. And Apple failed to warn iPhone 4S owners that the update may or will interfere with the device's performance."

Among the Things in the Internet are power stations and electricity distribution systems, upon which the functioning of the Internet and the Things depends. So it's really bad news that a large part of Ukraine was blacked out by sophisticated malware:

"It's a milestone because we've definitely seen targeted destructive events against energy before—oil firms, for instance—but never the event which causes the blackout," John Hultquist, head of iSIGHT's cyber espionage intelligence practice, told Ars. "It's the major scenario we've all been concerned about for so long."

"how many things are showing up at the office this week that are an always-on conduit to your network from some external third party you really shouldn’t be trusting? Watches, streaming media widgets, phones, tablets and a whole host of other things are likely making their way into the office right now. You probably have a BYOD policy, but do you have an IoT policy? BYOD policies are meant to address your mobile handsets, tablets and personal laptops, but who’s addressing all the other gadgetry? ... What is your policy for things like the Amazon Echo, on your corporate network? Would your network even notice if one of these devices showed up, plugged in and pulled an IP address? Then what?"

“From a criminal hacker's perspective, the prospect of subverting cheap and ubiquitous [Internet of Things] technologies such as WebCams (which are widely deployed in both residential and commercial capacities) is a highly desirable target – and high up on the target list,” Ollmann said. “More to the point, devices that can be hijacked and serve as backdoors, yet be popular second-hand items or items that can be easily concealed and physically deployed or swapped with existing installations, are vital tools in organized crime and espionage.”

"This is where the Internet of Things makes the problem worse. As computers get embedded into more of the objects we live with and use, and permeate more aspects of our lives, more companies want to use them to spy on us without our knowledge or consent.

Technically, of course, we did consent. The license agreement we didn't read but legally agreed to when we unthinkingly clicked "I agree" on a screen, or opened a package we purchased, gives all of those companies the legal right to conduct all of this surveillance. And the way US privacy law is currently written, they own all of that data and don't need to allow us to see it."

"the idea that such a response is okay represents a state of affairs that's far too widespread in IT: 'companies are building grossly negligent software ... and then simply not being held accountable when it all goes wrong.'"

"The significance of the initiative is that it's been agreed to by a collective of major carriers – the organisation's announcement lists AT&T, China Telecom, Etisalat, KDDI, NTT DOCOMO, Orange, Telefónica, Telenor and Verizon but there are plenty of others.

With a common set of security recommendations, carriers will also have a stick they can wave at vendors that don't care: do it right, or we won't connect your stuff."

I spent last Friday night at a hotel at Gatwick where the lights, etc. were controlled by an Android tablet embedded in the wall by the bed. Matthew Garrett had a similar experience and had time to find that the appalling insecurity of the system allowed him to query, and presumably control, all the rooms:

"the outcome of sending a constant stream of "Set room lights to full" and "Open curtain" commands at 3AM seems fairly predictable".