Posted by timothy on Saturday February 08, 2014 @08:11PM from the often-dumb-is-at-the-top-instead dept.

CowboyRobot writes "Writing for ACM's Queue magazine, Paul Vixie argues, "The edge of the Internet is an unruly place." By design, the Internet core is stupid, and the edge is smart. This design decision has enabled the Internet's wildcat growth, since without complexity the core can grow at the speed of demand. On the downside, the decision to put all smartness at the edge means we're at the mercy of scale when it comes to the quality of the Internet's aggregate traffic load. Not all device and software builders have the skills and budgets that something the size of the Internet deserves. Furthermore, the resiliency of the Internet means that a device or program that gets something importantly wrong about Internet communication stands a pretty good chance of working "well enough" in spite of this. Witness the endless stream of patches and vulnerability announcements from the vendors of literally every smartphone, laptop, or desktop operating system and application. Bad guys have the time, skills, and motivation to study edge devices for weaknesses, and they are finding as many weaknesses as they need to inject malicious code into our precious devices where they can then copy our data, modify our installed software, spy on us, and steal our identities."

It's just the way TCP/IP was designed, back in the ARPANET days, you know. Putting all the intelligence in the hosts allows for more resiliency, since it takes a lot to bring the whole infrastructure down this way. Mobile networks are quite the opposite, though (smarter infrastructure, somewhat dumber terminals). Software defined networks are definitely a way to bring some intelligence back into the infrastructure of IP networks. We'll see if it will enable a smarter Internet or not.

Probably more than resilience, moving the intelligence to the edges of the network allowed for innovation. It's not as though POTS is a quagmire of reliability issues (indeed, it stacks up pretty well compared to any internet connection not expensive enough to have a proper SLA); but it's an ossified wasteland because essentially any change had to run the gauntlet of "Is it worth making the necessary modifications and upgrades to the intelligence at the center of the network, and will doing it make AT&T more money?" If something new couldn't be squeezed through the network as though it were a voice call, or officially blessed by Ma Bell (as with 1-900 numbers and billing for them), it just didn't happen. Even with the introduction of mobile phones, and the opportunity to hammer out huge swaths of new spec, they added what, SMS? Virtually all the features of today's "phones", with the exception of voice calls and maximum-compatibility SMS snippets, have gone IP because that is where the versatility is.

With intelligence at the edges, if you want something done, all you need is two or more endpoints with the right software and there you are. This goes for malice as well, of course, which is part of why the internet is kind of a rough neighborhood; but it's also why IP-based capabilities have changed so radically, while systems with more centralized intelligence have largely stagnated (even more impressive 'dumb endpoint' arrangements, like Minitel, have been eclipsed).

Putting all the intelligence in the hosts allows for more resiliency, since it takes a lot to bring the whole infrastructure down this way.

It's the way to go. Any intelligence added to the core should merely be simple tweaks to enable more intelligence at the edges. For example, one might plausibly argue that having core routers select the second- or third-most-preferred destination route for a packet based on its TTL modulo some small number would allow end systems to experimentally find the fastest-performing route through the internet by trying different values in their TTLs/option fields. One could not reasonably argue for expecting core devices to maintain per-connection or even per-client/netblock state in an attempt to find alternate routes for each client connection.
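The TTL-modulo idea above is hypothetical, but it can be sketched in a few lines. Everything here (the route names, the three-route eligibility cap) is invented for illustration; no real router behaves this way today:

```python
# Hypothetical sketch of TTL-modulo route selection: the core stays
# stateless, and an end host probes alternate paths simply by varying
# its initial TTL. Route names and the cap of 3 are made up.

def select_route(candidate_routes, ttl):
    """Pick among the most-preferred routes based on the packet's TTL.

    candidate_routes: next hops ordered best-first (e.g. by routing
    preference). Only the top few are eligible, so a bogus TTL can't
    send traffic down an arbitrarily bad path.
    """
    if not candidate_routes:
        raise ValueError("no route to destination")
    eligible = candidate_routes[:3]
    return eligible[ttl % len(eligible)]

routes = ["via-peer-a", "via-peer-b", "via-transit-c"]
print(select_route(routes, ttl=64))   # 64 % 3 == 1 -> "via-peer-b"
print(select_route(routes, ttl=63))   # 63 % 3 == 0 -> "via-peer-a"
```

Note that the router keeps no per-flow state at all; the "experiment" lives entirely in the sending host, which is exactly the smart-edge/dumb-core division of labor the parent is describing.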

Software defined networks are definitely a way to bring some intelligence back in the infrastructure of IP networks. We'll see if it will enable a smarter Internet or not.

From what I've seen of SDN, it's a bunch of people who think they can abstract network services in a simple model, but who have no comprehension of the intrinsic differences in the heterogeneous mixture of devices employed. They haven't even scratched the surface of building a taxonomy/capabilities enumeration for things like, for example, how many CAM entries are available for edge-switch filters on a given switch model. Without that information, SDN applications have no way of doing any serious budgeting before launching a request into the network gear. And since a device might happily take the commands and provision a halfway-functional service that drops 5% of packets rather than reject the request, and SDN has no real provisions for testing services before putting them in production, SDN is doomed to be confined to data centers where the equipment has been carefully kept homogeneous.
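The budgeting step described above can be sketched simply. The model names, entry counts, and function names below are all invented for illustration; the point is only that a controller with a capabilities table can reject a request outright instead of letting a switch silently provision a half-working service:

```python
# Sketch of the capability budgeting the parent says SDN lacks: check the
# device's advertised CAM budget before pushing filter rules. All model
# names and capacities here are hypothetical.

# The per-model "taxonomy/capabilities enumeration" the parent wants.
CAM_CAPACITY = {
    "edge-switch-a": 512,
    "edge-switch-b": 2048,
}

class CapacityError(Exception):
    pass

def provision_filters(model, in_use, requested_rules):
    """Admit the request only if the whole rule set fits; never partial."""
    budget = CAM_CAPACITY.get(model)
    if budget is None:
        raise CapacityError(f"unknown model {model!r}: capabilities not enumerated")
    if in_use + len(requested_rules) > budget:
        raise CapacityError(
            f"{model}: {len(requested_rules)} rules won't fit "
            f"({in_use}/{budget} CAM entries already in use)")
    return in_use + len(requested_rules)   # new usage after provisioning

# 500 entries used, 10 more requested: fits within the 512-entry budget.
print(provision_filters("edge-switch-a", in_use=500, requested_rules=range(10)))
```

The all-or-nothing admission check is the whole point: fail loudly at request time rather than drop 5% of packets in production.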

Most people using SDN that I've seen are doing so for enterprise (including server farm) LANs, not the core internet.

Paul Vixie can pontificate on the Unevenly Distributed Intelligence at Dice that has resulted in this abomination known as Beta Slashdot...

I don't think so. Beta Slashdot is a consequence of the idiot staff that Dice has hired to run Slashdot, considering that the headline and summary have nothing to do with Paul Vixie's argument. The quotes are taken from the article, but in a stupid way, like CowboyRobot is some sort of robot...

The article is actually about the need for the addition of minimal state to stateless protocols in order to thwart DDOS amplification techniques.

The internet consists of hardware and software and things worth stealing. The first has long development cycles, and is more difficult to modify than the second. The second is extremely varied and full of vulnerabilities that are often easy to patch one instance at a time, but hard to patch simultaneously and comprehensively across the network. The third are things that shouldn't be accessible from the Internet in the first place, like our real names just so we can have a Google account, our credit card numbers just so merchants don't have to ask us when they want to charge us, our activity records just so we can be manipulated through ads, etc.

We can't change the first two without destroying the Internet, but there's no reason why computers should contain so much valuable information to steal.

that are the cause of breaches and insecurities of the Internet. Long ago that was not the case, because simply connecting a computer to the Internet would get it infected with malware. Computer and browser makers have learned how to largely avoid this, but no one has yet figured out a way to prevent trusting or stupid human beings from giving permission to install programs that subsequently are able to do severe damage. This is part of human nature and will never change.

Some aspects of software security have improved; but the decline in 'just put a computer on the internet and it gets rooted in about 15 seconds' attacks, at a population level, probably owes more to the prolific spread of nasty little plastic NAT boxes.

Those things are hardly real security (and more than a few have shipped with nasty flaws of their own); but they do tend to eat unsolicited inbound traffic pretty enthusiastically, which has really cut down on the number of totally helpless computers that end up being given a brutal taste of the open internet before they've even had time to patch.
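The "eats unsolicited inbound traffic" behavior is just connection tracking. A minimal sketch, with the flow state simplified to a bare 4-tuple and all addresses invented:

```python
# Why a cheap NAT box drops unsolicited inbound traffic: it only forwards
# inbound packets that match state created by an earlier outbound packet.
# An internet-wide scan hitting the box finds no matching state and is
# simply eaten.

class NatBox:
    def __init__(self):
        # (inside_host, inside_port, remote_host, remote_port)
        self.flows = set()

    def outbound(self, inside_host, inside_port, remote, remote_port):
        # Outbound traffic creates translation state.
        self.flows.add((inside_host, inside_port, remote, remote_port))

    def inbound(self, remote, remote_port, inside_host, inside_port):
        # Forward only if this matches an existing outbound flow.
        return (inside_host, inside_port, remote, remote_port) in self.flows

nat = NatBox()
nat.outbound("192.168.1.10", 50000, "203.0.113.5", 80)
print(nat.inbound("203.0.113.5", 80, "192.168.1.10", 50000))   # True: reply to our request
print(nat.inbound("198.51.100.9", 445, "192.168.1.10", 445))   # False: unsolicited scan, dropped
```

Which is exactly why this is a side effect rather than real security: anything the inside host initiates, including malware phoning home, creates state and sails right through.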

If something is just too simple to be modified or hacked or manipulated by anyone, including the rightful owners, then it's too simple to be perverted by a hostile agent. Simplicity is frequently a virtue.

Wrong. It isn't impossible to hack it. And therefore it will be hacked.

Systems too simple to be hacked can't be hacked. They are secure. Everything else is second class.

People need to stop cutting security corners. This chicken shit security is no longer an option.

Perfect security is possible. It requires sacrifice. You need to limit complexity. You need to limit what can and cannot be done. Do that and you leave little wiggle room for hackers to exploit. Anything short of that and you're better that you are s

Wrong. Hackers hack by exploiting flexibility in a system to be multiple things. If a given system is so simple it can LITERALLY only work one way then it cannot be hacked.

Effectively you have to make things that are non-programmable. Or that have their programming hardwired/hardcoded. No flexibility.

You set them up once to do a job and then leave them alone. Core systems can be set up this way and should be set up this way. They cannot get viruses. They cannot get taken over. They are what they are... end

I have to agree with PP in that perfect security is possible. Provably so. You can try to hedge around this fact with sophomoric arguments showing that it is possible to use a perfectly secure system in an insecure manner. That is an exercise in semantics, since exhibiting the insecurity requires abusing the system. In order to define security you have to define what it is you are attempting to be secure against. A door with a deadbolt on the inside, when locked, is perfectly secure against lockpicking.

"Bad guys have the time, skills, and motivation to study edge devices for weaknesses..."

But you know, it's funny... I would have thought the giant corporations that are behind manufacturing these devices (and in many cases the software for them) have just as much skill to look at these things from the other end.

Apparently what they have lacked is the motivation to do so. That should change.

DNS can use UDP, yes, but it can also use TCP, so as an example of "a UDP", it is quite poor.

He was talking about DNS reflection attacks, which are done via the primary DNS transport, which is UDP-based. The attacker puts the victim's IP address in the source field of the packet and requests a large quantity of information, so that the DNS server sends it to the victim. Scale this up for a DDoS on the victim. Since the attack is UDP-based, there's no requirement for the packet's source address to match the actual sender.
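The reason the reflection is worth the attacker's trouble is amplification: the spoofed query is tiny and the reflected response is large. A back-of-the-envelope sketch, with the byte sizes illustrative rather than measured (large EDNS0 "ANY" responses are commonly cited in this ballpark):

```python
# Why DNS reflection amplifies: the attacker pays bandwidth for a tiny
# spoofed query, and the victim receives a much larger response from the
# open resolver. Sizes below are illustrative, not measured.

def amplification_factor(query_bytes, response_bytes):
    """Bandwidth multiplier the reflector gives the attacker."""
    return response_bytes / query_bytes

query = 60       # a small "ANY" query with EDNS0
response = 3000  # a multi-kilobyte response
print(f"{amplification_factor(query, response):.0f}x amplification")  # 50x
```

Multiply that by thousands of open resolvers answering spoofed queries in parallel and the victim's link fills up while the attacker's barely registers.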

I spent a lot of time last summer fending off that stuff, since my older machines

That actually could be solved with proper router configuration. For example, don't route traffic sourced from a router that has no route back to the source address. Case by case exceptions if well justified by the source.
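The filter described above is essentially ingress filtering / strict unicast reverse-path forwarding (BCP 38): accept a packet only if the route back to its source address points out the interface it arrived on. A sketch against a toy routing table (the prefixes and interface names are invented):

```python
# Strict uRPF sketch: a packet is accepted only if the reverse route to
# its source address exits the interface it arrived on. The FIB here is
# a toy prefix -> interface mapping.

import ipaddress

FIB = {
    ipaddress.ip_network("192.0.2.0/24"): "eth0",   # customer LAN
    ipaddress.ip_network("0.0.0.0/0"): "eth1",      # default route, upstream
}

def route_lookup(src):
    """Longest-prefix match against the toy FIB."""
    addr = ipaddress.ip_address(src)
    matches = [net for net in FIB if addr in net]
    if not matches:
        return None
    return FIB[max(matches, key=lambda n: n.prefixlen)]

def urpf_accept(src, arrival_iface):
    """Drop unless the reverse route exits the arrival interface."""
    return route_lookup(src) == arrival_iface

print(urpf_accept("192.0.2.7", "eth0"))     # True: legitimate customer source
print(urpf_accept("203.0.113.9", "eth0"))   # False: spoofed source on the customer port
```

The "case by case exceptions" mentioned above correspond to the looser uRPF modes real routers offer for multihomed customers, where strict mode would wrongly drop legitimate asymmetric traffic.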

That actually could be solved with proper router configuration. For example, don't route traffic sourced from a router that has no route back to the source address. Case by case exceptions if well justified by the source.

Who says there's no route back? The route back is merely bogus.

If you mean that the response address doesn't match the source address, well, it wouldn't match the minute it made its first hop. Which means that every router in the world would have to be 100% trustworthy.

Getting a route announced is more difficult than spoofing a source address. Also, if you manage to convince the routers between the multiplying DNS server and you that there IS a route back, you will get the flood, not your victim.

Note that MOST providers already discard spoofed source packets from their customers.

Getting a route announced is more difficult than spoofing a source address. Also, if you manage to convince the routers between the multiplying DNS server and you that there IS a route back, you will get the flood, not your victim.

Note that MOST providers already discard spoofed source packets from their customers.

Unfortunately, as my logs amply demonstrated, on a network the size of the Internet, "most" isn't nearly enough. And if the "provider" was a military or rogue ISP installation, they would likely be part of the attack.

Not on my networks, which comprise about 1 million people at the moment.

All of our infrastructure is open source and we don't have those issues. We've been operating a standard 3.x kernel on 25 routers with millions of people accessing them, along with the server software, also Linux-based, running Apache, Tomcat servlets, and Postgres... OpenLDAP and TLS for the internal key management infrastructure.

So I don't see a problem with the internet as designed; it works very well. It doesn't need change.

You are trying to change the internet for your own malicious purposes, in my opinion, rather than actually addressing the problem:

1) Internet security, as far as functionality is concerned, works extremely well. I travel to many places, and only once in the past two years have I been unable to access my VPN server due to a real internet outage. I say outage because the local admin at your so-called "smart edge" made a few bad investment decisions: proprietary gear bankrolled with back doors.

2) Most of the problems you do see with sites and internet infrastructure are not related to the internet as designed per se, but to frustration from governments who don't like what the internet is doing. That is, it's an obstruction to their spheres of power and to the political and industrial espionage they require to gain an edge and stay in power.

The internet has a nasty habit of revealing the connection between two sets of laws that normally can't be seen by the plebs: the ones that say you have to spend 5 years in prison for an ounce of pot, complete with a criminal record so you will never be hired again, vs. the ones where, if you're say a banker and rob whole countries, you get a pay raise and a pat on the back while sending the plebs to their doom. For example, when the French found they couldn't get any of their gold back from the Fed, they invaded Mali to stabilize their banks.

So I don't see any problems with the internet.

I do see a problem with governments and the internet coexisting together though, but that is not a technology problem.

As I see it, one or the other has to go and so far the internet is fighting a losing battle.