There is a very real need for the various parties who advocate against Deep Packet Inspection (DPI) to work through what packet inspection appliances have done historically, so that their arguments against DPI are as precise as possible. Packet inspection isn’t new, and it’s not likely to be going away any time soon – perimeter defences for networks are essential for mitigating spam and viruses (and rely on Medium Packet Inspection).

DPI, as I read it, is problematic on the basis that it can potentially be used for widespread, and is currently being used for specific, alteration of communications flows. I’m not referring just to the throttling of P2P traffic, but also to the alteration of webpages (e.g. Rogers’ insertion of messages on webpages) and the tracking of individual behaviours in order to inject particular, very relevant, ads to individuals. If we operate on the assumption that communicative privacy is required for democracies and individuals alike to thrive, then the capacity to (almost) invisibly manipulate communications in real time has a debilitating effect on generating authentic discourse. Privacy, in this sense, acts as an umbrella concept, one that is used to shelter other ‘core’ principles and values, such as autonomy, liberty, and freedom. Without the umbrella, other central values are at risk, threatening both the individual and society by compromising the digital communications networks we are so reliant on for discourse and deliberation.

DPI vendors are routinely involved in trying to sell their product – it’s what they do – but I think what is most telling isn’t what vendors say, but what the ISPs’ representatives say. When I talked to a Bell representative recently, and asked whether it mattered to Bell that throttling BitTorrent might affect the dissemination of information, the rep’s response was “they choose that business model, and now they get to live with the consequences of choosing it” (paraphrased). Is the technology itself inherently ‘bad’? I’m not comfortable with that. Are particular uses of the technology ‘bad’? Undoubtedly.

The question becomes (as I read it): ‘how do we, as a society, mediate bad uses of technologies?’ Unfortunately, I haven’t figured out a real answer to that yet…


I don’t know how to engage with something like ‘truth’ in a theoretically satisfying way. What’s more, I think that deploying ‘privacy’ offers the benefit of thinking of communications infrastructures in interesting and accessible ways; ‘truth’ claims are notoriously hard to maintain without abandoning most typical understandings of what constitutes ‘truth’.

Specifically, with truth you get into questions of authenticity, discursive versus material realities (and associated truths), etc etc. It’s not a game that I’m intellectually equipped to play and have a prayer of winning, though it would be stellar to see someone else take up the topic from that line, with a focus on the complexities of contemporary digital communications.

There is a natural right to truth (against impairment), and thus sanction to prohibit impairment.

Of course, it is not possible to perfectly apprehend the truth, but it is possible to detect when there is an attempt to impair its apprehension.

That means it should be prohibited for an intermediary in a communication to modify that communication (if such modification significantly affects its integrity/veracity) without either the sender or receiver being informed of that modification.

Hence such modification by an intermediary, even though they are privy to the communication, is a natural rights violation (not a privacy violation).

I think that we agree in some ways, and likely differ in others. I would agree that it is wrong to unnecessarily intrude on a conversation/discourse in a manner that coerces a party to stay/alter their discourse. At the same time, I don’t think that this is necessarily a moral right – morals are a space that I avoid talking about most of the time – though it is possible for a communicative right to be simultaneously ethical *and* moral.

While we might quibble over what it means for a data flow to be modified in such a way that upsets its veracity, that’s a line-drawing discussion (and thus not really all that interesting for us to delve into). I would *definitely* agree that some kind of real notice needs to be given should a communications provider decide to seriously interfere with consumers’/citizens’ data flows; in fact, this is one of the issues I have with how DPI has been deployed in Canada. There has been relatively little transparency in how and why the technology has been deployed, and that’s an issue given that the public depends on digital networks to communicate.

I think that, at the core, what worries me is that I have absolutely no clue how to assert a natural rights violation without stepping into a field of philosophical landmines. (My philosophical background is heavily dependent on Habermasian understandings of right, justice, and communication.) Natural rights are tricky to work through, and are susceptible to contemporary post-colonial, post-modern critiques that I tend to find convincing a lot of the time.

This said, I think that we are in agreement insofar as:

1) we see there being an issue with interference of data flows when individuals aren’t given real opportunities to consent;
2) there is a violation of some ilk likely going on when a non-consensual interference is occurring.

We disagree about the privacy/natural right position on the basis of our own theoretical stances; I’m coming from a Habermasian point of view, whereas you are (if I’m not mistaken) approaching things more from the direction of people like Thomas Paine. (I might be wrong on that last point, and certainly don’t mean to attribute things to you incorrectly – sorry if I’m just totally wrong *grin*)

Questions that might subsequently arise include: what, exactly, constitutes a violation when you’re dealing with the issue at the level of the technology? What, exactly, constitutes ‘consent’? How might we think through these issues with deference to contemporary policy and legal structures?

Yes, from the Thomas Paine direction. Primarily with respect to the premise that individuals have natural rights that can be deduced from nature, by inspection. The purpose and limit of a government is then to protect such natural rights for all as equals (rather than granting inegalitarian privileges or conferring personhood upon unnatural entities).

An impairment of truth is not ameliorated by consent, but by rectification.

I can lie to you, not by obtaining your consent, but by immediately correcting my lies (if non-deliberate), or informing you that I am lying (if deliberate). You remain free to select an author of fiction or non-fiction.

In other words, the person who impairs the truth is not absolved through the permission or tolerance of another, but by neutralising what would otherwise be an impairment.

In the case of the Internet and ISPs, an ISP can modify the communication as much as it wants, but only if it informs the receiver of any modification that would otherwise impair their apprehension of the truth (“You are not receiving the sender’s published work verbatim”), or it has been authorised by the sender to make the modifications (“Please redact expletives” or “Feel free to substitute alternative adverts”).

Natural rights provide an ethical framework. That framework isn’t the law. It informs those interested in using technology in an ethical way, especially in areas where the law hasn’t yet caught up, or in areas where the law is wrong (anachronistic and unethical).

Philosophically, I don’t think that I’m in agreement with the ‘natural right’ camp, but that’s largely because of my own theoretical (discourse-ethics oriented) approach.

The notion of consent is intended to identify whether, and how, people are willing to accept data flows being modified. Where they aren’t willing, some kind of political/regulatory system should exist or, failing that, a genuinely competitive market. I can hope for the former, but the latter is in deep trouble in Canada. A case in point where modifying data traffic is ‘good’: we want to have Skype traffic put ahead of email, given that Skype is a low-latency application; maintaining communications using the technology demands some kind of QoS, or else jitter can cause problems with delivery.
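The intuition that latency-sensitive traffic should jump the queue can be sketched as a toy strict-priority scheduler. This is an illustrative model only – the traffic classes and priorities here are invented, not any ISP’s actual policy:

```python
import heapq

# Toy strict-priority scheduler: lower priority number dequeues first.
# Class-to-priority assignments are illustrative, not a real ISP policy.
PRIORITY = {"voip": 0, "web": 1, "email": 2, "bulk": 3}

class Scheduler:
    def __init__(self):
        self._queue = []
        self._seq = 0  # tie-breaker preserves FIFO order within a class

    def enqueue(self, traffic_class, packet):
        heapq.heappush(self._queue, (PRIORITY[traffic_class], self._seq, packet))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._queue)[2]

s = Scheduler()
s.enqueue("email", "msg-1")
s.enqueue("voip", "rtp-1")
s.enqueue("web", "http-1")
print([s.dequeue() for _ in range(3)])  # ['rtp-1', 'http-1', 'msg-1']
```

Even though the email packet arrived first, the VoIP packet is serviced first, which is exactly the kind of reordering that keeps jitter down for low-latency applications.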

I think that part of the issue with DPI is that, while it can be used for what I’ve somewhat blandly called ‘bad’ things (e.g. non-consensual behavioural tracking, unfair throttling of traffic based on particular applications as opposed to application-types/network load), it can also be used for ‘good’ things (e.g. QoS). From a privacy perspective (which is where I sit) what is worrying is the capacity (not necessarily the actualization) of DPI to capture and analyze data traffic on the fly, apply heuristics, and sort/categorize from a necessary gateway that subscribers must pass through. This can have very real effects, but I don’t think that there is a need to use a moral or state of nature argument to identify the problems/issues arising with such applications of the technology. Current regulatory systems and ethical codes are sufficient to identify that these are ‘bad’ or potentially unjust applications of the technology.

In short, I guess I see theoretical problems with the SoN position, but agree that, at the least, what is occurring in telecommunications infrastructures has to be relatively transparent. Failing at transparency runs the risk of breeding panic/excessive worry that can have very real chilling effects on speech – without clearly stating that the technology is not being used for coercive purposes, it can affect speech as though it were being used coercively. Hopefully over the next while I’ll get some clear statements from ISPs’ legal counsel about this kind of question, so that at least some headway and transparency can be made on how and why ISPs are using the tech.

Unfortunately, legal perspectives can be unethical, e.g. the suspension or interference with communication that infringes privileges of copyright.

QoS is fine, but requirements should be indicated by the packets. It is not the responsibility of the ISP to deduce the application to apply its own QoS policy.

DPI may be a tool that might be used to detect compromised PCs (botnets, etc.) that misrepresent their QoS (and authorisation), but this detection is a pursuit of communications efficiency in the interest of communicants, not legally or commercially directed policy.

There is a Heisenberg effect when it comes to examining communications channels. You can look and not take action (without affecting communication). However, the moment you act is the moment you cannot help but affect the communication. There will thus be a continuous Sisyphean battle between those who wish to inspect communication and exploit that ability, and those who require efficient communication. Exploitation either directly or indirectly impairs efficiency. All that will remain observable is that communication which suffers no loss in efficiency as a consequence – which is that communication which no-one currently has any means of, or interest in, exploiting.

NB ‘exploit’ includes direct repercussions for the communication (censorship, throttling, advertising, etc.) as well as indirect repercussions for the communicants (prosecution).

So, irrespective of the ethics of exploiting or interfering with communication, ultimately there is a natural law that renders such activity futile. We simply end up with a less than optimally efficient communications system – one where thermodynamic equilibrium has been reached between the energy expended by those in pursuit of exploitation (extraction of value) vs that expended by those in pursuit of efficiency (maximal communication with minimal cost).

We can thus conclude that the more the law butts out in having any interest in private communication, the more efficient such communication will be (the law only needs to concern itself with public speech – unless it has warrant to invade the privacy of specific communicants). Similarly, for those who would commercially exploit or constrain it. However, as long as there is a free market in ISPs such that people can migrate to ISPs who sell efficient communications rather than ISPs who offer discounted prices at the expense of exploitation, then there is hope…

“QoS is fine, but requirements should be indicated by the packets. It is not the responsibility of the ISP to deduce the application to apply its own QoS policy.”

Nah – can’t happen this way. In an ideal world, sure. But what you’ll get is every application saying their packets deserve ‘high priority’. Can’t trust applications themselves; that boat sailed a long time ago. The issue is how to identify and prioritize packets without infringing on normative expectations of privacy (that are simultaneously adjusted for a digital communications network).
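The worry is easy to demonstrate: the DSCP/ToS byte that would carry a packet’s self-described QoS requirements is an ordinary socket option that any unprivileged application can set for itself. A minimal sketch, assuming a Linux-style sockets API:

```python
import socket

# Any application can mark its own packets "Expedited Forwarding" (DSCP 46)
# simply by setting the ToS byte -- which is why self-declared QoS
# requirements cannot be trusted at the network edge.
EF_DSCP = 46
tos = EF_DSCP << 2  # DSCP occupies the top six bits of the ToS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))  # 184
sock.close()
```

Nothing in the stack stops a bulk file-transfer client from running the same two lines, which is the “everything claims high priority” failure mode in miniature.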

Re: your discussion of ‘exploitation’. I think that this is where clear regulatory policies that are in plain English (as opposed to legalese) are needed. It would be *stellar* if ISPs communicated using YouTube, and generally adopted KISS principles for informing users similar to what Google has adopted. I would honestly love it if, before you could connect to the ’net at large, you got a 2-4 minute video stream that outlined the service, what was done using DPI, etc. I think that it would quickly alleviate some of the concerns and issues surrounding the hiddenness of these kinds of technologies. Put them in the sun, and I think that a lot of the worries will evaporate.

I’m admittedly not quite following you when you talk about the dynamics of efficiency. Efficiency is always something that is estimated, and is always in flux depending on variables. Something like DPI, which can be used as a prioritization system as opposed to an exploitation system, is meant to improve, rather than upset, efficiency. An efficient communications network is a win for consumers, and as a result a win for ISPs. The questions are how this dynamic is established, what counts as ‘efficient’, and who makes ‘efficiency’ decisions, and how.

I think that the legal system needs to have clearly defined limits for how it can examine and survey communications – we have them for wiretaps (e.g. judicial warrants) and similar expectations should be required for a digital ‘tap’. This can still have chilling effects, but I think they’re pretty minimal.

Whether a competitive market would mitigate the issues of what you’re terming ‘exploitation’ is interesting – I doubt it – but the cost of entry suggests that we’re unlikely to actually realize such a market anytime soon…

I’m talking about efficiency from a holistic perspective. If my bandwidth is reduced because of the ISP’s biased QoS policy against the application I’m using, this reduces communications efficiency. My remedy is to disguise my application. The loss in efficiency in doing this is outweighed by the loss of efficiency introduced by the ISP. We end up back where we started, as if the ISP wasn’t reducing my bandwidth, but now we are both expending energy needlessly. The same thing applies if a legal cost is introduced (file-sharing).

Incidentally, I’m not yet convinced that self-described QoS requirements would degenerate into everything giving itself ‘high priority’. I don’t think QoS is quite so crude. An ISP can still balance between low latency + low volume vs high latency + high volume, i.e. quality is not the same as quantity.

PS Typing black text onto a dark green background isn’t the most accessible means of entering comments (you are introducing friction into the comment process – which I doubt you intend).

I think there’s a difference between ‘efficiency’ and ‘bandwidth usage’ – the two are not necessarily the same. The former relates to delivering the best possible overall service, the latter refers to maximizing the theoretical speeds that ISPs sell.

What is perceived as a problem is that ISPs oversubscribe. This is especially a ‘problem’ when customers try to actually use what they are paying for. It’d be great if you could get something like a ‘heavy use’ package that rented ACTUAL speeds, as opposed to maximum speeds (maybe slower speeds + higher caps?). Within that max speed you could do whatever you wanted (which, incidentally, is often possible with current business lines). That’s a very real issue with marketing, as I read it.

Re: everything trying to receive high priority. I likely overstated the case, but applications like Skype and BT are well known for masking what they are. Botnets/viruses/etc. do similar things. One can expect that, as Flash content is increasingly identified as a ‘problem’, there would be similar efforts to flag those packets for high QoS.

While an ISP should be able to balance between which packets should be identified as low latency + low volume versus high latency + high volume, how exactly would an ISP identify this in an environment where packets are known to mask themselves? MPI devices are limited in their actual effectiveness, to say nothing of SPI devices. This leaves app developers to be on their best behaviour – maybe this is where you get into a discussion along the lines of Lessig, whereby regulators empower some kind of law enforcement to punish app developers who improperly flag packets? No idea how that works in an international/global environment…
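When ports and payloads are masked, middleboxes fall back on statistical features of a flow – packet sizes and timing – rather than its contents. A very rough sketch of that kind of payload-agnostic heuristic (the class names and thresholds are invented for illustration, not taken from any real device):

```python
# Rough sketch of payload-agnostic flow classification: when applications
# mask ports and payloads, classification falls back on statistical
# features. The thresholds below are invented for illustration only.

def classify_flow(packet_sizes, interarrival_ms):
    """Guess a traffic class from per-flow statistics."""
    avg_size = sum(packet_sizes) / len(packet_sizes)
    avg_gap = sum(interarrival_ms) / len(interarrival_ms)
    if avg_size < 300 and avg_gap < 50:
        return "voip-like"       # small packets at a steady, rapid clip
    if avg_size > 1000 and avg_gap < 20:
        return "bulk-transfer"   # full-size packets arriving back to back
    return "interactive"

print(classify_flow([160, 172, 158, 165], [20, 20, 21, 19]))   # voip-like
print(classify_flow([1400, 1400, 1400, 1380], [2, 3, 2, 2]))   # bulk-transfer
```

The point of the sketch is that this inference works regardless of what the application claims to be – but it is also exactly the kind of behavioural analysis that raises the privacy concerns discussed above.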

As a note – I appreciate the discussion. I’m taking a somewhat stronger stance than I normally do for the side of ISPs, just to ‘test out’ this side of the argument in order to understand it to subsequently critique elements of it *grin*
