The Internet is a great success and an abject failure. We need a new and better one. Let me explain why.

We are entering an era in which online services will be embedded into pretty much every activity in life. We will become extremely dependent on the safe and secure functioning of the underlying infrastructure. Whole new industries are waiting to be born as intelligent machines, widespread robotics, and miniaturised sensors become ubiquitous.

There is the usual plethora of buzzwords to describe the enabling mechanisms, like IoT, 5G and SDN/NFV. These are the "trees", and focusing on them in isolation misses the "forest" picture.

The Liverpool to Manchester railway (opened 1830) crossing a canal.

* * *

What we really have today is a Prototype Internet. It has shown us what is possible when we have a cheap and ubiquitous digital infrastructure. Everyone who uses it has had joyous moments when they have spoken to family far away, found a hot new lover, discovered their perfect house, or booked a wonderful holiday somewhere exotic.

For this, we should be grateful and have no regrets. Yet we have not only learned about the possibilities, but also about the problems. The Prototype Internet is not fit for purpose for the safety-critical and socially sensitive types of uses we foresee in the future.

It simply wasn't designed with healthcare, transport or energy grids in mind, to the extent it was 'designed' at all. Every buffering "circle of death" while watching a video, and every DDoS attack that takes a major website offline, is a reminder of this. What we have is an endless series of patches with ever-growing unmanaged complexity, and that is not a stable foundation for the future.

The fundamental architecture of the Prototype Internet is broken, and cannot be repaired. It does one thing well: virtualise connectivity. Everything else is an afterthought and is (by and large) a total mess. Performance, security, maintainability, deployability, privacy, mobility, resilience, fault management, quality measurement, regulatory compliance, and so on…

We have spent three decades throwing bandwidth at all quality and performance problems, and it has failed. There is no security model in the present Internet: security is a pure afterthought, patched onto an essentially unfixable global addressing system. When your broadband breaks, it is nearly impossible to understand why, as I have personally found (and I am supposed to be an expert!).

It isn't just the practical protocols that are broken. The theoretical foundations are missing, and the justification for its architecture is plain wrong. First steps are fateful: when you misconceive networking as a "computer to computer" activity, when it is really "computation to computation", there is no way back. The choice to reason about distributed computing in terms of layers rather than scopes [PDF] is an error that cannot be undone.

The problem is not just a technical issue. It is a cultural and institutional one too. Engineering is about taking responsibility for failure, and the IETF does not do this. As such, it is claiming the legitimacy benefits of the "engineer" title without accepting the consequent costs. This is, I regret to say, unethical. Real and professional engineering organizations need to call them out on this.

We see many examples of failed, abandoned or unsatisfactory efforts to fix the original design. Perhaps the most egregious is the IPv4 to IPv6 transition, which incurs a high transition cost for minimal benefit, and has thus dragged on for nearly 20 years. It compounds the original architectural errors rather than fixing them. For instance, the security attack surface grows enormously with IPv6, and the size and cost of routing tables are unbounded.

The economic model of the Prototype Internet is absolutely crazy. We now have a system of quality rationing that incentivises edge providers to generate the most aggressive, inefficient and least collaborative application protocols. No other industry seeks to punish its most enthusiastic customers with data caps and "fair usage" policies. This problem is down to a persistent disconnect between pricing and inherent resource costs.

The regulatory system is also caught up in the incompetent insanity of 'net neutrality'. Such a monumental failure to grasp basic technical facts is an embarrassment to a supposedly advanced scientific civilisation. It delegitimises the role of the regulator in protecting the public.

What we are dealing with is an immature broadband industry grappling with unique problems. We operate at the speed of light. As such, it is difficult to adapt and adopt management and pricing methods developed in other industries that work at the speed of sound or less. However, it is possible to make the jump from skilled numerical craft to hard engineering science, if we accept our present shortcomings.

This is a complex system with many feedback loops and incentives that keep it 'stuck' in an unhappy place. Nobody is to blame, but everybody has a contribution to the madness.

The early 1970s ideas of how packet networks should be built have now reached their use-by date. The rotten smell from the back of the architecture cupboard is seeping out everywhere. It's time to face facts: more of the same beliefs and behaviors just leads to more of the same systemic failures.

The Prototype Internet is a canal system when we need an Industrial Internet railroad. There are no means to transform the former into the latter. The best we can hope for is to use canal transport to build the railroad.

Is this the best we can do in articulating value?

The unsatisfactory nature of the present Prototype Internet is unspeakable, as it generates such intense anxiety, shame and fear. We have bet the development of our modern civilisation on a digital infrastructure that is extremely fragile. Its quality is out of control. When you cannot measure and manage quality, you can only differentiate on quantity.

The scaling properties of the Prototype Internet are unknown and unknowable. Assumptions based on "it scaled this far so it must scale more" are extremely foolish. This fundamentally fails to grasp that there are hard limits imposed by physics and mathematics to the protocols we have adopted. This is not a hypothesis: there is hard evidence of new (and nasty) scaling problems emerging.

The problem I see is that we keep pumping resources into a dead-end model. As the erudite blogger Chris Dillow writes in another context: when confronted with evidence against their prior views, people don't change their minds, but instead double down and become more entrenched in their error.

We collectively face a difficult dilemma: at what point do we accept that the present Prototype Internet is indeed just a prototype? And how do we begin to envision and architect its Industrial Internet successor? Do we have to wait for a costly disaster to happen before we make a move?

The good news is that the ingredients for an Industrial Internet are now becoming clear. The essential problems of science, mathematics, and protocols are largely solved, at least in theory. The practical reality of a new and better Industrial Internet is within our reach. It can be achieved in a relatively short timeframe.

The Industrial Internet is one for which security is a first-class design objective. Different users and uses can be isolated from one another. Our approach to performance would be the exact opposite of the Prototype Internet's. Rather than build networks and then reason about the (emergent) performance, we would reason about the performance and then build (engineered) outcomes.

With the Industrial Internet, we would from the very beginning design-in the features and capabilities to make it cheap to deploy, predictable in operation, and automated to support. We would work backwards from the essential business processes to ensure the right enablers were there from inception.

The management methodologies for 'lean quality' and efficient digital supply chains would be incorporated into the Industrial Internet from the get-go. The ideas of quality systems thinkers like Deming, Goldratt and Hammer would inform our choices and vision. The Industrial Internet is about fitness-for-purpose and low waste; the antithesis of the Prototype Internet's purpose-for-fitness and overprovision everything.

What it now takes is for that vision to be crystallized into a plan of action. The first step is to tell the story of an alternative future where we upgrade from broadband canals to distributed cloud computing railroads. This story then needs to be "made real" with examples of the new model being deployed in the real world to prove the benefits.

Who wants to join me in this mission? Hands up!

* * *

Yes: I am a dreamer. "For a dreamer is one who can only find his way by moonlight, and his punishment is that he sees the dawn before the rest of the world." (Oscar Wilde, The Critic as Artist)


The two key regulatory failures are BEREC and the FCC. See http://www.slideshare.net/mgeddes/fcc-open-internet-transparency-a-review-by-martin-geddes and http://www.martingeddes.com/1323-2/. But Ofcom got their house in order, did the science, and found that "neutrality" is not an objectively measurable phenomenon, and hence cannot be regulated. See http://www.slideshare.net/mgeddes/essential-science-for-broadband-regulation.

The work of Barbara van Schewick, on which the Open Internet regulatory approach is based, absolutely fails to understand the emergent nature of performance and the lack of intentional semantics of the service.

I read this post because a mail on the IETF list (by Stephane Bortzmeyer) stated:

"[Only if you are bored and have nothing useful to do.]
A guy solved all the problems of the Internet, thanks to a new
mathematical theory he developed, "∆Q":
http://www.circleid.com/posts/20170214_lets_face_facts_we_need_a_new_industrial_internet/
He also calls us "unethical" but, among all its claims, this is the
least crazy :-)
Let's congratulate Circle ID (which, most of the time, publishes
interesting things) for its openness of mind: any random troll can
publish here.

So, since I generally trust Stephane ... But I only found a serious point of view.

I am certainly interested in joining the effort (where do I click?), to see where it could go. Some of the points are certainly true. Others are to be discussed, as they seem to come from an acknowledged pro who is post-1986 and initially telco-oriented. The internet was entirely defined in the 20-line "Objectives" section of IEN 48 (1978).

The last five lines were strangled by the NTIA/military-industrial complex in 1986, replacing its global effective operations and CCITT exploration with a group of public contractors' engineers called the IETF.

That IETF tried to do a good job with the first 8.5 lines and is still lost with the implications of the middle 6.5 lines. But TCP/IP is BUGged. There is no layer-six presentation, so for it to work globally some of its job has to be carried politically, legally, etc., in other ways; some governance having to be unilaterally global (hence the need for the NTIA and now ICANN).

IMHO the line and processing bandwidth permit us to address the need, though not as they are used today (in accordance with RFC 1958). Not end to end: fringe to fringe. Yet, as John Maynard Keynes identified: "The difficulty lies, not in the new ideas, but in escaping from the old ones, which ramify, for those brought up as most of us have been, into every corner of our minds." Today's engineers (including youngsters) are too old for us "pre-human" veterans.

