We need a new formulation of end-to-end analysis

End-to-end analysis is the major theorization of the Internet, proposed by Jerome Saltzer, David Reed and David Clark starting in 1981. In their seminal paper and later ones, they formulated what became known as the end-to-end principle, often interpreted as “application-specific functions ought to reside in the end hosts of a network rather than in intermediary nodes – provided they can be implemented ‘completely and correctly’ in the end hosts”. This principle is much quoted by proponents of strong network neutrality requirements, including myself. In reality, Saltzer, Reed and Clark derive this “networks had better be dumb, or at least not too smart” approach from an underlying analysis of what happens when bits travel from one end (a computer connected to a network) to another end of the network.

However, both network neutrality and the end-to-end principle capture only part of what we try to make them say. What we have in mind is that the analysis of what happens in a network should be conducted by considering what happens between a human using one device and another human using another device, or between such a human and a remote device, such as a distant storage device, server or peer computer. We need an end-to-end analysis understood as human-to-human or human-to-remote-computer. What will this change? One must first acknowledge that with this extended approach, one cannot hope to extend the probabilistic model that makes the original formulation of Saltzer, Reed and Clark so compelling. The new formulation cannot replace the old one; it can only provide a qualitative extension to it.1

In the early 1980s, the reference model of a computer connected to the Internet was a general-purpose computer (small mainframe, workstation or personal computer) controlled by the user, by a trusted person acting on the user’s behalf, or by a user organization (such as the MIT Laboratory for Computer Science). This is unfortunately not a realistic assumption today, at least until we succeed in recreating this situation. Smartphone and tablet manufacturers or OS providers severely restrict what software users can run and what parameters they can control on their devices. Multifunction ADSL or optical fiber boxes are considered by many ISPs to be part of their infrastructure rather than the user’s property under her control. EBook reader vendors treat not only the device but the entire collection of eBooks on it as their own. Many of the real-life impediments to a non-discriminatory, human-controlled, decentralized Internet arise from the non-openness/non-freedom (to run the software of one’s choice) of either terminal devices or the “spaces”, “slices” or “machines” used in “cloud” storage and other forms of centralized servers.

If we want a much greater share of citizens to understand what is at stake when we speak of network neutrality, we must make clear that it is human-to-human and human-to-personal-data activities that we want to be under decentralized human control.