Surveillance for a fee

In a Canadian Press story in today’s Globe and Mail, Mounties refuse to pay Rogers fees for tracking suspects’ cellphones, we find that, “The RCMP and many other police forces are refusing to pay new fees imposed by Rogers Communications for helping track suspects through their mobile phones”, and further that, “Police say the telecommunications firm is legally obligated to provide such court-ordered services and to cover the cost as part of its duty to society”. Really?

Based on the most recent report we know that Rogers received a total of 174,917 requests, broken down thus:

Customer Name/Address Checks: 87,856

Court order/warrant: 74,415

Government requirement letter (compelled to provide under fed/prov law): 2,556

Emergency requests from police in life threatening situations: 9,339

Child sexual exploitation emergency requests: 711

Court order to comply with an international request: 40
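
These figures are easy to sanity-check. The sketch below totals the reported categories and applies the $10–$100 per-request cost range used later in this post; the per-request costs are my assumption, not anything Rogers has published:

```python
# Request counts from Rogers' 2013 transparency report, as listed above
requests = {
    "customer name/address checks": 87_856,
    "court orders/warrants": 74_415,
    "government requirement letters": 2_556,
    "emergency requests (life-threatening)": 9_339,
    "child sexual exploitation emergencies": 711,
    "international court orders": 40,
}

total = sum(requests.values())
print(total)  # 174917, matching the reported total

# Assumed per-request handling cost range (my guess, not Rogers' figure)
low, high = 10, 100
print(f"${total * low:,} to ${total * high:,}")  # $1,749,170 to $17,491,700
```

The categories do sum to the reported 174,917, and even the conservative end of the assumed cost range puts Rogers' annual compliance bill well into the millions.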

Rogers said in the 2013 report that it assumes all costs for most court-ordered responses, but in some cases charges a minimal fee to recover costs. If we assume that costs run somewhere between $10 and $100 per request, that means Rogers spent between $1.7 million and $17 million in 2013 meeting these requests, more if I’ve underestimated the costs (add a zero if lawyers were involved). Last May, according to the CP report, Rogers wrote to the RCMP and other divisions to introduce the new fees:

The fees applied to help in executing warrants for tracking customers’ movements through cellphone data, and for production of affidavits certifying records in cases where testimony is required to explain the records in court.

Further, based on prior court decisions, we also know that the courts think the reasonable costs of complying with production orders should be borne by the company.

No oversight needed if we don’t pay?

When the courts say that reasonable costs should be borne by the company, what they are saying is that those costs will be passed on to the customers. So all of us, at least those of us who pay for Internet or telephone services, are sharing the cost of government surveillance in the same way that we all share the costs when banks absorb some forms of credit card fraud. But is it the same thing? Credit card fraud will happen, initiated by criminals, and is a normal if regrettable cost of doing business - like shoplifting in a retail store. Does responding to government requests for access to private customer records qualify as a normal cost of doing business? Maybe in a police state.

We should all recognize that accessing confidential information about suspected criminals is part of the criminal investigative procedure. We also recognize that this is an infringement on individual rights - which is why we generally require warrants, exigent circumstances, or some demonstration that the police have done their due diligence in a way that balances their investigative wants against individual rights.

By offloading the costs of doing the information searches to the company, the burden on the police is reduced. And if the bar for requesting is lower than a warrant (which seems to be the case given the numbers cited above), it becomes a trivial administrative exercise to obtain personal information about individuals who may or may not be reasonably suspected of a crime. Even if the police are asking for records that already exist, there needs to be some authorization, aside from an investigating officer’s unsupported desire for more information, to justify the request. And if the request is justified, then it’s a social cost that should be borne by the government and subject to the normal oversight procedures for expenditures from the public purse.

Summary

Rogers’ primary duty, as a publicly held corporation, is its fiduciary duty to its shareholders. That includes reducing costs and increasing revenues. Bearing the cost of policing its customers meets neither of these requirements. It would be better if the costs, and accountabilities, of surveillance were borne by the organizations that claim this is in the public interest. If so, then it’s a legitimate cost of government and should be subject to scrutiny by Parliament as our representatives. To do otherwise would be to evade the transparency and duck the accountability that are the hallmarks of a healthy democracy. The police have a duty to protect public safety, which includes protecting our privacy unless there is a specific and overriding reason to gather particular information about us.

I’m reminded of an exchange in Orson Welles’s great noir classic, "Touch of Evil" (1958):

Quinlan: Our friend Vargas has some very special ideas about police procedure. He seems to think it don't matter whether killers hang or not, so long as we obey the fine print.

Vargas: Captain, I don't think a policeman should work like a dog catcher in putting criminals behind bars. No! In any free country, a policeman is supposed to enforce the law, and the law protects the guilty as well as the innocent.

Quinlan: Our job is tough enough.

Vargas: It's supposed to be. It has to be tough. A policeman's job is only easy in a police state. That's the whole point - who's the boss, the cop or the law?

Unless the police pay for, and have to justify on a case-by-case basis, customer identification, cell phone tracking, wiretapping, and other invasions of privacy, it becomes too easy for those invasions to become routine and for companies to become willing or unwilling partners in surveillance.

Privacy by Design: UX & UI

With the announcement of iOS 7, elements of the blogosphere have become awash in commentary back and forth about the new design. Does the fact that Apple has chosen Helvetica Ultra Light as the default font have implications for privacy? Not so much. But privacy and design are connected, and all the commentary that I’m seeing about Apple’s new mobile operating system is focussed on the immediate and the transient. This makes me think about Privacy by Design (PbD).

The focus around iOS 7 is on the immediate user experience (IUX, if you will). Focussing on the IUX is, I would argue, what gets organizations in trouble and does not meet PbD principle #1 - Proactive & Preventative. This is because the user experience of privacy is not immediate, except in the obvious egregious cases such as where web sites demand personal information for registration. A user’s privacy experience with an organization is cumulative and evolves transaction by transaction.

This is not to say that the IUX is not important. Of course it is, and it is the result of well thought through user interface choices, one of which is Privacy by Design principle #2 - Privacy as the default setting. But have designers fulfilled their PbD goals by making privacy options both available and the default? Again, not so much. On the face of it, by doing this designers will have met most of the PbD requirements:

Designers have proactively included privacy interface features

The system has privacy protective default settings

The system has embedded privacy protective options

Designers ensure that there is full functionality

Architects ensure the site is designed with end-to-end security

Privacy officers ensure that privacy is visible and transparent

By focussing on UI and UX, designers assume that they are user-centric

So what’s the problem? It’s a variant of the old saw in computer programming: the programmer asks for a set of requirements and builds a prototype for the customer. When shown the prototype, the customer shakes their head and says, “You’ve given me everything I asked for, but that’s not what I wanted." Privacy, it seems to me, works the same way. If designers focus on the immediate experience, they are likely to encounter unintended consequences down the road. Data that is accumulated over time is called longitudinal data. This is the kind of data used for epidemiological studies, or for tracking changes in a population over time. So I propose to borrow the term and suggest that Privacy by Design requires an understanding of the Longitudinal User Experience (LUX).

Only when system designers study the long term impacts on user privacy will they be proactively addressing and preventing privacy issues. This includes checking back with users on a regular basis for privacy status checks and validation, proactively notifying users of changes impacting their privacy and not implementing changes that could reasonably be construed to be less privacy protective than existing design choices. Above all, it means recognizing that privacy is embodied in the relationship and transactions with the users, not in a series of policy statements.

Bear with me, but this reminds me about a joke about a couple. She says, “You haven’t told me you love me in a long time." He replies, “I told you once, and I’ll let you know if the situation changes". That attitude doesn’t work in relationships and saying, “We told you that we would protect your privacy when you signed on to the service, and will let you know if that changes" doesn’t work that well either.

Meeting the PbD Proactivity principle means regularly engaging with your users about privacy, without beating them over the head with policy statements. Their user experience, in every transaction, needs to reflect your ongoing commitment to giving them control over the information you collect about them. Sometimes that means sacrificing immediate gratification for long term satisfaction. That’s how adults behave, and that’s how you prevent the need for remedial action.

Definitions

User Experience: According to the Wikipedia entry on User Experience, ISO 9241-210 defines user experience as “a person’s perceptions and responses that result from the use or anticipated use of a product, system or service". According to the ISO definition, user experience includes all the users’ emotions, beliefs, preferences, perceptions, physical and psychological responses, behaviors and accomplishments that occur before, during and after use. The ISO also lists three factors that influence user experience: system, user and the context of use.

User Interface: According to the Wikipedia entry on User Interface: The user interface, in the industrial design field of human–machine interaction, is the space where interaction between humans and machines occurs. The goal of this interaction is effective operation and control of the machine on the user’s end, and feedback from the machine, which aids the operator in making operational decisions. Examples of this broad concept of user interfaces include the interactive aspects of computer operating systems, hand tools, heavy machinery operator controls, and process controls. The design considerations applicable when creating user interfaces are related to or involve such disciplines as ergonomics and psychology.

In the security practice, we have our own version of no-man’s land, and that’s midsize companies. Wendy Nather refers to these folks as being below the "Security Poverty Line." These folks have a couple hundred to a couple thousand employees. That’s big enough to have real data interesting to attackers, but not big enough to have a dedicated security staff and the resources they need to really protect anything. These folks are caught between the baseline and the service box. They default to compliance mandates like PCI-DSS because they don’t know any better. And the attackers seem to sneak those passing shots by them on a seemingly regular basis.

[…]

Back when I was on the vendor side, I’d joke about how 800 security companies chased 1,000 customers — meaning most of the effort was focused on the 1,000 largest customers in the world. But I wasn’t joking. Every VP of sales talks about how it takes the same amount of work to sell to a Fortune-class enterprise as it does to sell into the midmarket. They aren’t wrong, and it leaves a huge gap in the applicable solutions for the midmarket.

[…]

To be clear, folks in security no-man’s land don’t go to the RSA Conference, probably don’t read security pubs, or follow the security echo chamber on Twitter. They are too busy fighting fires and trying to keep things operational. And that’s fine. But all of the industry gatherings just remind me that the industry’s machinery is geared toward the large enterprise, not the unfortunate 5 million other companies in the world that really need the help.

I’ve seen this trend, and I think it’s a result of the increasing sophistication of the IT industry. Today, it’s increasingly rare for organizations to have bespoke security, just as it’s increasingly rare for them to have bespoke IT. It’s only the larger organizations that can afford it. Everyone else is increasingly outsourcing their IT to cloud providers. These providers are taking care of security — although we can certainly argue about how good a job they’re doing — so that the organizations themselves don’t have to. A company whose email consists entirely of Gmail accounts, whose payroll is entirely outsourced to Paychex, whose customer tracking system is entirely on Salesforce.com, and so on — and who increasingly accesses those systems using specialized devices like iPads and Android tablets — simply doesn’t have any IT infrastructure to secure anymore.

To be sure, I think we’re a long way off from this future being a secure one, but it’s the one the industry is headed toward. Yes, vendors at the RSA Conference are only selling to the largest organizations. And, as I wrote back in 2008, soon they will only be selling to IT outsourcing companies (the term “cloud provider" hadn’t been invented yet):

For a while now I have predicted the death of the security industry. Not the death of information security as a vital requirement, of course, but the death of the end-user security industry that gathers at the RSA Conference. When something becomes infrastructure — power, water, cleaning service, tax preparation — customers care less about details and more about results. Technological innovations become something the infrastructure providers pay attention to, and they package it for their customers.

[…]

The RSA Conference won’t die, of course. Security is too important for that. There will still be new technologies, new products and new startups. But it will become inward-facing, slowly turning into an industry conference. It’ll be security companies selling to the companies who sell to corporate and home users — and will no longer be a 17,000-person user conference.