Eliot Lear's Ramblings
https://www.ofcourseimright.com

Where a bad review really makes for poor security
Mon, 04 Dec 2017
https://www.ofcourseimright.com/?p=2267

Most consumers do not take the time to upgrade their devices simply because vendors want them to: there has to be something in it for them. Apple, on the other hand, has been an exception. Studies have repeatedly shown that Apple users do regularly upgrade their phones. Just one month after release, the latest version of iOS was installed on 52% of Apple devices. By comparison, summing all Android releases from 2015 to the present gets you that same number, with the latest releases coming in at around 20% of the total.

This becomes a Big Deal when we start talking about vulnerabilities and zero-day exploits. If your device is running an older version of the code with a known bug, and you do not update, then that device can be used to attack you or someone else. This is something Microsoft learned the hard way in the last decade when it snuck extra software into a security update, losing the trust, confidence, and goodwill of its users.

In his review, Gordon Kelly told his Forbes readers not to upgrade to the latest Apple iOS release, arguing that the release was rushed and may therefore be too risky. When considering release timing, any vendor has to balance stability and testing against feature availability and security. Apple may well have gotten the balance wrong this time. The review in and of itself harms cybersecurity, not because the reviewer is wrong, but because the result will be that fewer people will have corrected whatever vulnerabilities exist in the release (as of this writing, information about what is fixed hasn't been disclosed). Moreover, such reviews reinforce a bad behavior: delaying upgrades. I call it a bad behavior because it puts others at risk.

This isn't something that can be fixed with a magic wand. We certainly cannot fault Mr. Kelly for publishing his analysis and recommendations. If we wait for perfect security, we will never see another feature release; on the other hand, if things get too rushed, we see reviews like this one. Perhaps this argues that OS vendors like Apple and Google should continue to provide security-only releases that overlap their major releases, at least until the latter are stable, which is what other vendors such as Microsoft and Cisco do. It takes money and people to support multiple releases, but it might be the right thing to do for the billions of devices that are each and every one a point of attack.

Ain't No Perfect. That's why we need network protection.
Fri, 01 Dec 2017
https://www.ofcourseimright.com/?p=2262

When we talk about secure platforms, there is one name that has always risen to the top: Apple. Apple's business model for iOS has repeatedly been demonstrated to provide superior security results over its competitors. In fact, Apple's security model is so good that governments feel threatened by it, and we have seen repeated calls for some form of back door into their phones and tablets. CEO Tim Cook has repeatedly taken the stage to argue for such strong protection, and indeed I have friends who take this stuff so seriously that they lose sleep over some of the design choices that are made.

And yet this last week, we learned of a vulnerability that was as easy to exploit as typing "root" twice in order to gain privileged access.

Wait. What?

Ain’t no perfect.

If the best and the brightest of the industry can occasionally have a flub like this, what about the rest of us? I recently installed a single sign-on package from Ping Identity, a company whose job it is to provide secure access. This simple application, which generates cryptographic sequences of numbers to be used as one-time passwords, weighs in at over 70 megabytes and includes a complex Java runtime environment (JRE). How many bugs remain hidden in those hundreds of thousands of lines of code?

Now enter the Internet of Things, where the manufacturers of devices that have not traditionally been connected to the network have no long tradition of security expertise. What sort of problems lurk in each and every one of those devices?

It is simply not possible to assure perfect security, and because computers are designed by imperfect humans, all these devices are imperfect. Even devices that we believe are secure today will have vulnerabilities exposed in the future. This is one of the reasons why the network needs to play a role.

The network stands between you and attackers, even when devices have vulnerabilities. The network is best positioned to protect your devices when it knows what sort of access a device needs in order to operate properly; your washing machine is a good example. But even for your laptop, where you might want to access whatever you want, whenever you want, through whatever system you wish, informing the network makes it possible to stop all the communications that you don't want. To be sure, endpoint manufacturers should not rely solely on network protection. Devices should be built with as much protection as is practicable and affordable. The network provides an additional layer of protection.
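The idea of the network enforcing a device's declared needs can be sketched in a few lines. This is only an illustration: the device names, the destination, and the default-deny choice for unknown devices are all invented here, not taken from any real product.

```python
UNRESTRICTED = object()  # sentinel: device may talk to anything

# Per-device policy: a restricted device lists the only destinations it
# needs; a general-purpose device is marked unrestricted.
ALLOWED = {
    "washing-machine": {"updates.example-manufacturer.com"},
    "laptop": UNRESTRICTED,
}

def permit(device, destination):
    """Return True if the network should allow device -> destination."""
    policy = ALLOWED.get(device)
    if policy is None:           # unknown device: default-deny
        return False
    if policy is UNRESTRICTED:   # general-purpose device: allow anything
        return True
    return destination in policy
```

The point of the sketch is the asymmetry: a single-purpose appliance gets a tiny allowlist, while the laptop keeps its freedom, and the network can still drop traffic neither policy sanctions.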

Endpoint manufacturers have thus far not done a good job of making use of the network for protection. That requires a serious rethink, and Apple is the poster child for why: they are the best and the brightest, and they got it wrong this time.

The Road(s) To Singapore
Sat, 18 Nov 2017
https://www.ofcourseimright.com/?p=2237

Recently a number of us trundled off to Singapore to attend the 100th Internet Engineering Task Force meeting, during which we shared our ideas on how to improve the Internet. But precisely how did we all get there? Why, by plane of course! In the case of yours truly, I went from Switzerland by way of Bangalore, India. These are long flights: the short haul from Bangalore was four hours and twenty minutes. The non-stop return flight was just over twelve hours, thanks to favorable winds.

But what if you wanted to drive? After all, instead of flying from San Francisco to Las Vegas, I drove; and I very much enjoyed the scenery. What would it take to get all the way to Singapore by car? Is it even possible? A little check on the map shows that it should theoretically be possible to travel the distance by land, with the occasional bridge crossing here and there. How would one even begin to plan such a trip? Well, for me it would be with everyone’s favorite navigation tool: Google Maps. We start there.

There’s that inviting “Directions” button. If I just click it, I’m hoping that it will show me a few alternative routes, and a driving time. Of course it will indicate the tolls and the fact that we are crossing borders.

Unfortunately, the invitation was quickly rescinded.

What's the problem? Well, like a good computer scientist, I began to bisect the route to see where Google thought there was no way through. I figured, OK, let's see if I can get to India from Switzerland. I got the same answer: no route.
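For the curious, the bisection here is the same one a programmer would apply to any monotone predicate. In this toy version, `has_route` is a hypothetical stand-in for a Google Maps query, and reachability is assumed monotone along the waypoint list: once routing fails, it fails for every later waypoint too (true enough for this trip).

```python
def furthest_reachable(waypoints, has_route):
    """Binary-search for the last waypoint routable from waypoints[0]."""
    lo, hi = 0, len(waypoints) - 1   # waypoints[0] is trivially reachable
    while lo < hi:
        mid = (lo + hi + 1) // 2     # round up so the search always advances
        if has_route(waypoints[0], waypoints[mid]):
            lo = mid                 # routable: the failure lies further on
        else:
            hi = mid - 1             # not routable: the failure is at or before mid
    return waypoints[lo]
```

Each query halves the remaining span, which is exactly why a couple of hand-picked intermediate destinations were enough to localize the break in the route.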

But when I asked if I could get to Lahore, things began to improve. That would be an eighty-six-hour route, covering 7,734 kilometers. There's only one problem: it would take me straight through the heart of Iran, and I very much doubt I could get a transit visa for this purpose. But now at least we have a route to Lahore. A little dragging and dropping in Google Maps shows that with a mere six-hour detour, one can go over the Black and Caspian seas instead of under them.

Well, very good! We've gotten ourselves halfway there. To do so, we travel through Germany, Czechia, Poland, Ukraine, Russia, Kazakhstan, Uzbekistan, and Afghanistan, and finally into Pakistan. Right about now, Iran is beginning to sound pretty good, by the way; an airplane, more so. Consider this little factoid: the route takes us through eastern Ukraine, which right now is not exactly on friendly terms with the rest of the country.

It turns out that one can in fact cross the Pakistan/India border by car at Lahore if one has all the right paperwork. One enters India at the city of Amritsar. Now let's see if we can get from Amritsar to Singapore. Sure enough, one can!

That's another 105 hours or 6,404 kilometers. One travels across India, avoiding both Bangladesh and Bhutan. While it is probably possible to drive into Bangladesh, Bhutan is virtually impossible to enter without serious amounts of paperwork. Of course, this whole trip would require serious amounts of paperwork, but Bhutan would require its own stack. One can make this crossing because the Indian states of Assam and Manipur jut quite far to the east.

For those keeping score, this route is just under 14,000 kilometers, and would take, if driven straight through, ignoring traffic (hah!), about 200 hours. That would be about 25 days, if one limits one's driving to eight hours per day. The route changes based on which citizenship one holds, to be sure. Many countries would require visas and car permits. One challenge to consider is that this is the most direct route, according to Google Maps. That doesn't mean it's the easiest route. For one thing, many of the directions themselves are written out by Google in the local script. For Ukraine, that means Cyrillic; for Myanmar, Burmese. Of course, this says nothing of the languages themselves, nor of whether anyone would accept Mastercard. Hotels? There may be inns along the route. Google is pretty good at spotting these and (perhaps more importantly) gas stations.

Having performed the exercise, I think it would be fun to do parts of this route. In particular, traveling in north-western India and into Myanmar seems interesting. I wonder what Hertz would say. Apart from the collision damage waiver and all the other insurance, I'm pretty sure I'd want a very simple vehicle that could be easily repaired and could handle varying qualities of gasoline. An old Range Rover with an extra tank might be a good bet. Probably not the trip for a Tesla.

To play around with this route, have a look at the Google map. Be sure to expand out the directions. Note the occasional U-Turn one is required to make.

Some final geographical points: this trip, while long, roughly follows the great circle route, and so it's fairly optimal from a distance standpoint. It is also probably the farthest south one can travel from Europe or Asia without taking a ferry. Assuming one can travel it at all. With ferries, it may be possible to get as far as Timor, but I haven't checked that.

The role of the CISO and the Equifax Breach
Mon, 18 Sep 2017
https://www.ofcourseimright.com/?p=2231

I do not know Susan Mauldin, the now-former Chief Security Officer of Equifax, nor can I even tell you what her job was. That is because the role of Chief Information Security Officer (CISO) remains ill-defined: each company implements the role in different ways and has different expectations. It may well be that this person did not have the authority to implement policies that would have prevented the breach that revealed records of over 143 million US consumers.

What I can say is this:

The only way you can entirely secure a computer is to destroy it and melt down its components beyond the point that any recovery tool can glean information. Otherwise, there is always some security risk. You might be able to sufficiently secure a system such that the risk is so low as to be almost negligible, but to do that usually requires more resources than it will cost to mitigate a breach.

The goal of a CISO is to reduce the expected loss from security breaches to a level acceptable to management. Expected loss has many components. It can include direct financial losses, lost sales, reputational damage (and thereby future lost sales), stolen intellectual property that erodes product differentiation, and liability associated with stolen customer and partner information. In a world where information is worth its weight in gold, holding any information secret means there is a risk it will be revealed. The losses that a CISO and her management weigh are not tied to a single event; they recur, either as ongoing expenses to mitigate risk or as losses from breaches.
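The expected-loss calculus above can be made concrete. Every event category, probability, and dollar figure below is invented purely for illustration; a real risk register would be far larger and far more carefully estimated.

```python
# Hypothetical annual risk register: event -> (probability per year,
# estimated loss if the event occurs). All numbers are made up.
annual_risks = {
    "customer-data breach": (0.05, 4_000_000),
    "stolen IPR":           (0.01, 20_000_000),
    "website defacement":   (0.20, 50_000),
}

def expected_annual_loss(risks):
    """Sum of probability x impact over all modeled events."""
    return sum(p * impact for p, impact in risks.values())

# A mitigation is worth funding when it costs less per year than the
# expected loss it removes.
loss = expected_annual_loss(annual_risks)   # roughly 410,000 per year here
```

The recurring nature of the loss is visible in the units: these are per-year figures, so a CISO is buying down an annuity of risk, not paying off a one-time event.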

Equifax’s business is information about consumers. That means that they must retain the information necessary to report their findings to their customers, such as banks or employers who are assessing the trustworthiness of an individual. That can be a lot of information, such as credit card, mortgage, and utility payment histories. Equifax is a big fat target for information thieves, much the same way the US Office of Personnel Management is (they were breached in 2014).

It has been reported that the information thieves in this case made use of a vulnerability in Apache Struts that had been announced in March. Equifax stated that it detected anomalous behavior on the 29th of July. That left a period of roughly four months of exposure. In the grand scheme of things, that is not a long time for an exposure. However, because the value of the information at risk was quite high, and because the vulnerability in question was exploitable on the open Internet, there should have been a process in place to rapidly close the bug. There exist any number of patch management tools that spot open-source software updates and alert the customer.
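The exposure window is simple date arithmetic. The post says only "March," so the 7th is assumed here as the disclosure date for the sake of the calculation; the detection date is the one Equifax reported.

```python
from datetime import date

disclosed = date(2017, 3, 7)    # assumed day within the reported month of March
detected = date(2017, 7, 29)    # Equifax's stated detection date

# The window an Internet-facing exploit had against an unpatched system.
exposure_days = (detected - disclosed).days   # 144 days
```

Whether one calls 144 days "roughly four months" or nearly five, the operative point stands: for an openly exploitable bug on an Internet-facing system, any window measured in months is far too long.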

Should Susan Mauldin have known all of this? Yes. Did she? I don't know. Did she have the authority to effect change? I don't know; but to be sure, she was ineffective, because the necessary processes were not in place. Will this sort of failure happen again? You can bet on it. Predicting when, and how large the loss will be, is where CISOs make their money.

Secret sauce and sentencing? Say it isn't so!
Tue, 02 May 2017
https://www.ofcourseimright.com/?p=2218

One of the things that we in technology understand is that we make mistakes, a truth we don't like to admit to customers. What happens, however, when a mistake can lead to tragic consequences?

Yesterday's New York Times reports on a case that the U.S. Supreme Court may soon hear, involving a man who received a six-year jail sentence in part due to a computer program. The software, known as Compas, was supposedly developed by Northpointe Inc. (although a search now seems to redirect to Equivant) to provide a risk assessment of a person's reentry into society. Such data-driven analysis is vaguely reminiscent of the movie Minority Report. In this case, the defendant, Eric L. Loomis, was not allowed to examine the software that assessed him to be a significant risk to the community, even though at least one analysis showed that the software may be programmed with some form of racial bias. The company argues that the algorithm used to make the sentencing recommendation is proprietary and so should not be subject to review, and that releasing it to scrutiny would essentially give away their business model; they may have a point. Patents on such technology may be flimsy, and in any event they eventually expire. To protect themselves, the company makes use of another legal tool, the trade secret, which has no fixed term of protection.

One can't say that a mistake is being made in the case of Mr. Loomis, but neither can one authoritatively state that the program is formally correct. The Wisconsin Supreme Court creatively argued that, much as with college admissions, the software may be used so long as it is one input combined with others. Is it, therefore, any different from a potentially flawed witness giving evidence? The question here is whether those who wrote the software can be cross-examined, to what extent they may be questioned, and whether the software itself can be examined. Mr. Loomis argues that denying his legal team access to the source is a violation of his 14th Amendment right to due process.

We know from recent experience that blind trust in technology, and more precisely in those who create and maintain it, can lead to bad outcomes. Take, for instance, the over 20,000 people whose convictions were overturned because a chemist falsified hair-analysis results, or the other examples where the FBI Crime Lab flat-out got it wrong. Even Brad D. Schimel, the Wisconsin attorney general, conceded before the appeals court that "The use of risk assessments by sentencing courts is a novel issue, which needs time for further percolation." But what about Mr. Loomis, and those who may suffer tainted results if there is a software problem?

While the Supreme Court could rule on the matter soon, it has only limited avenues available, such as permitting or prohibiting the software's use. Congress may need to get involved in order to provide other alternatives. One possibility would be to offer the company some new form of intellectual property protection, such as an extended patent with additional means of enforcement (e.g., higher penalties for infringement or lower thresholds for discovery), in exchange for releasing the source. Even then, one question would be whether defendants could game the system so as to score better at sentencing. How great that risk is we can't know without knowing what the inputs to the algorithm are.

It is probably not sufficient for the defendant and his legal teams to have access to the source, precisely because more research is needed in this field to validate the models that software like Compas uses. That can’t happen unless researchers have that access.

Addressing the Department Gap in IoT Security
Tue, 18 Apr 2017
https://www.ofcourseimright.com/?p=2214

So, Mr. IT professional, you suffer from your colleagues at work connecting all sorts of crap to your network that you've never heard of? You're not alone. As more and more devices hit the network, maintaining control can prove challenging. Here are your choices for dealing with miscreant devices:

1. Prohibit them, and enforce the prohibition by firing anyone who attaches an unauthorized device.

2. Allow them and suffer.

3. Prohibit them, but do not enforce the prohibition.

4. Provide an onboarding and approval process.

Many of the companies I work with aim for option 1 and end up with option 3. Some administrators recognize the situation and settle into option 2. Everyone I talk to wants to find a way to scale option 4, but nobody has as of yet. What does option 4 involve? Today, it means an IT person researching a given device, determining what networking requirements it has, creating firewall rules and associated policies, and establishing an approval mechanism for the device to connect.

This problem is exacerbated by the fact that many different enterprise departments have wide and varied needs, and the network is critical to many of them. Furthermore, very few of those departments report through the chief information officer, and the concerns of chief information security officers often do not receive the attention they deserve.

I would claim that the root problem is misaligned incentives, and that is assuming people in other departments are even aware of the IT department's concerns in the first place; often they are not. The person responsible for vending machines just wants to get the vending machines hooked up, while the person in charge of facilities just wants the lights to come on and the temperature to be correct.

What we know from hard experience is that the best way to address this sort of misalignment is to make it easy for everyone to do the right thing. What, then, is the right thing?

Prerequisites

It has been important pretty much forever for enterprises to be able to maintain an inventory of devices that connect to their networks. This can be tied into the DHCP infrastructure or to the device authentication infrastructure. Many such systems exist, the simplest of which is Active Directory. Some are passive and snoop the network. The key point is simply this: you can’t authorize a system if you can’t remember it. In order to remember it, the device itself needs to have some sort of unique identifier. In the simplest case, this is a MAC address.
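The "you can't authorize a system if you can't remember it" rule can be sketched as a toy inventory keyed by MAC address. A real deployment would hang this off DHCP, 802.1X, or a commercial asset system rather than an in-memory dictionary; the functions and fields here are invented for illustration.

```python
# Hypothetical device inventory, keyed by the device's unique identifier.
inventory = {}

def observe(mac: str, description: str) -> None:
    """Record a newly seen device; devices start out unapproved."""
    inventory.setdefault(mac, {"description": description, "approved": False})

def approve(mac: str) -> None:
    """An administrator signs off on a remembered device."""
    inventory[mac]["approved"] = True

def is_authorized(mac: str) -> bool:
    """Authorization requires the device to be both remembered and approved."""
    entry = inventory.get(mac)
    return entry is not None and entry["approved"]
```

The important design point is the two-step life cycle: observation (which can be passive, as with snooping tools) is separate from approval, so nothing gets onto the network merely by showing up.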

Ask device manufacturers to help

Manufacturers need to make your life easier by providing a description of what the device's communication requirements are. The best way to do this is with Manufacturer Usage Descriptions (MUD). When MUD is used, your network management system can retrieve a recommendation from the manufacturer, and then you can approve, modify, or refuse the resulting policy. This way, you don't have to go searching all over random web sites.
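The flow can be sketched as follows. Note that the JSON shape below is invented purely for illustration; the actual MUD file format is defined by the IETF specification, not by this sketch, and the device and destination names are hypothetical.

```python
import json

def parse_recommendation(doc: str):
    """Turn a (hypothetical) manufacturer policy document into ACL entries.

    The manufacturer recommends destinations; everything else is denied,
    pending an administrator's approval or modification.
    """
    policy = json.loads(doc)
    acl = [("allow", dest) for dest in policy["allowed-destinations"]]
    acl.append(("deny", "*"))   # default-deny whatever the maker didn't list
    return acl

# A made-up recommendation a lightbulb maker might publish.
example = json.dumps({
    "device": "smart-lightbulb",
    "allowed-destinations": ["updates.lightbulb-maker.example"],
})
acl = parse_recommendation(example)
```

The human stays in the loop: the output of this step is a proposed policy for someone to approve, modify, or refuse, not a rule that is silently installed.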

Have a simple and accessible user interface for people to use

Once this is in place, you have a system that encourages the right thing to happen, without other departments having to do anything more than identify the devices they want to connect. That could be as simple as snapping a picture of a QR code or entering a serial number. The easier we can make it for people who know nothing about networking, the better all our lives will be.

Pew should evolve its cybersecurity survey
Mon, 03 Apr 2017
https://www.ofcourseimright.com/?p=2209

Last year, Pew Research surveyed just over 1,000 people to get a feel for how informed they are about cybersecurity. That's a great idea, because it tells us as a society how well consumers are able to defend themselves against common attacks. Let's consider some ways this survey could be evolved, and how consumers can mitigate certain common risks. Keep in mind that Pew conducted the survey in June of last year, in a fast-changing world.

Several of the questions related to phishing, Wi-Fi access points, and VPNs. VPNs have been in the news recently because of the Trump administration's and Congress's backtracking on privacy protections. While privacy invasion by service providers is a serious problem, accessing one's bank at an open access point is probably considerably less so. There are two reasons for this. First, banks almost all make use of TLS to protect communications; attempts to fake bank sites by intercepting communications will, at the very least, produce a warning that browser manufacturers have made increasingly difficult to bypass. Second, many financial institutions provide mobile apps that take some care to validate that the user is actually talking to their service. In this way, these apps mark a significant reduction in phishing risk. Yes, the implication is that using a laptop with a web browser is a slightly riskier way to access your bank than the app your bank likely provides, and yes, there's a question hiding there for Pew in its survey.

Another question on the survey refers to password quality. While that is something of a problem, there are two bigger problems lurking that consumers should understand:

Reuse of passwords. Consumers often reuse passwords simply because it's hard to remember many of them. Worse, many password managers have themselves had vulnerabilities. Why would attackers target them? It's like the apocryphal Willie Sutton quote about robbing banks because that's where the money is. Still, with numerous break-ins, such as those that occurred at Yahoo! last year*, and the others that have surely gone unreported or unnoticed, reuse of passwords is a very dangerous practice.

Aggregation of trust in smartphones. As recent articles about U.S. Customs and Border Protection demanding access to smartphones demonstrate, access to many services, such as Facebook, Twitter, and email, can be gained just by gaining access to the phone. Worse, because SMS and email are often used to reset user passwords, access to the phone itself typically means easy access to most consumer services.

One final area that requires coverage: as the two followers of my blog are keenly aware, IoT presents a whole new class of risk that Pew has yet to address in its survey.

The risks I mention were not well understood as early as five years ago. But now they are, and they have been for at least the last several years. Pew should keep surveying, and keep informing everyone, but they should also evolve the questions they are asking and the advice they are giving.

* Those who show disdain toward Yahoo! may find they themselves live in an enormous glass house.

Removal of privacy protections harms service providers
Thu, 30 Mar 2017
https://www.ofcourseimright.com/?p=2206

As the media is reporting, the administration has removed privacy protections for American consumers, the idea being that service providers can now sell a consumer's browsing history to interested parties. Over time, service providers have looked for new and novel (if not ethical) ways to make money, including such annoyances as so-called "supercookies."

Why, then, would I claim that removing consumer privacy protections will harm not only consumers, but telecommunications companies as well?

In the new world that is coming at us, our laptops, cell phones, and tablets will be a minority of the devices that make use of our home Internet connections. The Internet of Things is coming, and it will include garage door openers, security systems, baby monitors, stereos, refrigerators, water heaters, washing machines, dishwashers, light bulbs, and lots of other devices. Many of these systems have been shown to have vulnerabilities, and consumers do not have the expertise to protect them. The natural organization to protect the consumer is the telco. Telcos have the know-how and the ability to scale to vast numbers of consumers, and they sit in the path of most of this communication, meaning they are in a position to block unwanted traffic and malware.

The consumer, on the other hand, has to be willing to allow the service provider to protect them. Why would consumers do that if they view the service provider as constantly wanting to invade their privacy? Rather, it is important that these companies enjoy the confidence of consumers. To degrade confidence in service providers, therefore, is to degrade security.

Some people say to me that consumers should have the choice of using service providers that afford privacy protections. Unfortunately, such contractual choices have thus far not materialized, because of all the small print that such contracts always entail.

What is needed is a common understanding of how consumer information will be used, when it will be exposed, and what is protected. The protections that were in place went a long way in that direction. The latest moves reverse that direction and harm security.

Yet another IoT bug
Mon, 27 Mar 2017
https://www.ofcourseimright.com/?p=2203

The Register is reporting a new IoT bug involving Miele PG 8528 professional dishwashers, used in hospitals and elsewhere. In this case, it is a directory traversal bug in an HTTP server that listens on port 80. In all likelihood, the most harm this vulnerability will directly cause is that the dishwasher runs when it shouldn't. The indirect risk, however, is that the device could be used to exfiltrate private information about patients and staff. The vulnerability has been publicly reported.

Manufacturers assume that it will be very simple to provide Internet services on their devices: slap a transceiver and a simple protocol stack on the device, and they're finished. They're not. They need to correct vulnerabilities such as this one, and they apparently have no mechanism for doing so. Manufacturers such as Miele are experts in their own domains, such as building dishwashers. They are not experts in Internet security. It is a new world when these two domains intersect.
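For the curious, the vulnerability class here is easy both to show and to avoid. This is a minimal sketch of the check an embedded web server should perform, not Miele's actual code; the web root path is invented. URL paths are slash-separated, so posixpath is used regardless of platform.

```python
import posixpath

WEB_ROOT = "/srv/www"   # hypothetical directory the server intends to expose

def resolve(url_path: str):
    """Map a URL path to a filesystem path, refusing escapes from WEB_ROOT.

    Without the check below, a request for "../etc/passwd" walks right out
    of the web root -- the directory traversal bug in miniature.
    """
    candidate = posixpath.normpath(posixpath.join(WEB_ROOT, url_path.lstrip("/")))
    if candidate != WEB_ROOT and not candidate.startswith(WEB_ROOT + "/"):
        return None   # traversal attempt: refuse to serve it
    return candidate
```

The subtlety is that the comparison must happen after normalization, since "docs/../../etc/passwd" looks harmless until the ".." components are collapsed.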

We need MUD

And yes, Manufacturer Usage Descriptions would have helped here, by restricting communication either to all local devices or to specifically authorized devices.

Taxing Bitcoin? IRS gets involved
Mon, 27 Mar 2017
https://www.ofcourseimright.com/?p=2199

The Wall Street Journal is reporting that Coinbase, a large Bitcoin exchange, has been served with a so-called "John Doe" summons in search of people attempting to evade taxes. A number of privacy advocates are upset at the breadth of the demand, because it covers an entire broad class of people rather than specific individuals.

Bitcoin is used for all sorts of nefarious purposes, including online ransoming; tax evasion may be the least of its problems. Were Coinbase a bank, it would be required to inform the federal government of transactions greater than $10,000, and of individuals believed to be structuring transactions to avoid that $10,000 filing requirement. These are anti-money-laundering provisions that go hand in hand with tax enforcement.
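Structuring detection of the kind banks must perform can be sketched simply. The window length and the flagging rule below are illustrative inventions, not the actual regulatory criteria, which are considerably more involved.

```python
THRESHOLD = 10_000   # the reporting threshold discussed above
WINDOW_DAYS = 7      # illustrative look-back window

def flagged_days(transactions):
    """transactions: list of (day_number, amount) pairs.

    Flags any day on which the trailing window of individually
    sub-threshold deposits sums past the threshold -- the signature of
    someone splitting a large sum to dodge the filing requirement.
    (Single transactions over the threshold are reported directly,
    so they are not flagged here.)
    """
    flagged = []
    for day, _ in transactions:
        window = [amt for d, amt in transactions
                  if day - WINDOW_DAYS < d <= day and amt < THRESHOLD]
        if sum(window) > THRESHOLD:
            flagged.append(day)
    return flagged
```

For example, two deposits of $9,500 three days apart trip the flag even though neither would be reported on its own, which is exactly the pattern the provisions target.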

And so my question: if it is wrong for the federal government to make such a demand of Coinbase, is it also wrong of them to make the same demand of banks? If it is not, then why should Coinbase be treated differently? And if Coinbase is not treated as a bank, is Bitcoin then not a currency? If it’s not a currency, should it be treated as a capital asset for taxing purposes? If that is the case, how would the IRS be able to enforce the reporting requirements associated with assets?

The alternative seems to be to trust people to not launder through Bitcoin. If history, including recent history, is any measure, that’s a bad idea. Either way, Bitcoin has already shown that privacy has its downsides.