Posted by timothy on Monday August 22, 2011 @08:49PM
from the soft-creamy-underbelly dept.

An anonymous reader writes "Evolution has ossified the middle layers of the Internet, leaving it vulnerable but security breaches could be countered by diversification of protocols, according to Georgia Tech, which recommends new middle layer protocols whose functionality does not overlap, thus preventing 'unnatural selection.' Extinction sucks, especially when it's my favorite protocols like FTP."

I understand that IP protocols predate the 7 layer ISO/OSI model, but that's what everything is mapped to in modern terms.

The article seems even more confused when it reverses the layers, claiming that "at layers five and six, where Ethernet and other data-link protocols such as PPP (Point-to-Point Protocol) communicate..."

It's pretty freshman-ish stuff. FTP hasn't been used in a long time. Glass-screen protocols went the way of the 386 long ago. I'm surprised these guys don't understand the various secure protocols, key-exchange methods, and so forth. Nice fluffy stuff, but very dated as a reality check. Show me someone using FTP and I'll show you a password theft followed by a crack. Ye gawds.

Variants of FTP are used widely in business to business transfers - sometimes secured with SSL, but often just by plaintext passwords, obscurity and/or IP whitelists. FTP is consistent between a large variety of platforms and lots of sysadmins like the simplicity of scripting, for example, a nightly FTP file transfer.
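The nightly-FTP-job pattern is easy to see in a few lines. A minimal sketch using Python's stdlib ftplib - the host, credentials, and paths here are hypothetical placeholders, and note that the password goes over the wire in plaintext, which is exactly the weakness being discussed:

```python
import ftplib
from datetime import date

def remote_name(day: date) -> str:
    # Date-stamped archive name, e.g. backup-2011-08-22.tar.gz
    return "backup-%s.tar.gz" % day.isoformat()

def nightly_upload(host: str, user: str, password: str, local_path: str) -> None:
    # Classic nightly job: log in (plaintext password on the wire!)
    # and push one date-stamped archive.
    with ftplib.FTP(host) as ftp:
        ftp.login(user, password)
        with open(local_path, "rb") as f:
            ftp.storbinary("STOR " + remote_name(date.today()), f)
```

The appeal is obvious: this script runs unchanged against practically any FTP server on any platform.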

Are there better solutions? Of course. But FTP is still very common - and lots of businesses still employ much more arcane tech than it. For a lot of businesses, terminal servers were a real

Our customers demand FTP; no matter how much we educate about SFTP and show how easy it is, they still insist on using FTP. If FTP goes down, that's likely to get complaints faster than HTTP being down. Loss of SSH access they barely even notice oO;

You'll notice the patches are not being funded any more and that none of the SSH enthusiasts are, well, enthused enough to have volunteered to help the maintainer. Don't get me wrong, I like SSH, but it doesn't write itself, and I have very little sympathy for those who complain about under-utilization while doing nothing to help address the issues that

First question: Dunno. Probably neither. Not hard to get, though. Second question: I switched to dreamhost.com because I can use rsync over ssh. Not posting any referral link to discourage thoughts about having a financial reason to say anything. Also: I don't work for them. I merely use them. I understand they have a less-than-stellar reputation, but for my purposes, it's been nearly nothing but positives.

And which budget shared web host supports file uploads using such protocols?

Dreamhost [dreamhost.com]. Being able to SSH in and pull down something with their pipes using wget has come in handy a number of times as well.

The client thing, meh. If people are mucking around in command line FTP programs they're savvy enough to download one; if they're using a GUI an awful lot of them have SFTP support these days, including FileZilla (free/Free). I guess I could see an argument if they're just entering an FTP URL into their Explorer window.

Depending on the reason for FTP, they could just as well use Internet Explorer for FTP (or Firefox or whatever), because it comes standard. Or map the FTP to a drive on their PC (Windows supp

ummm.... which ones don't? Even godaddy supports sftp and scp now. As for windows, who cares if it comes with it or not? You can get filezilla, putty, or a number of other free alternatives. Heck, you can even install some of them using software deployment group policies.

FTP (and FTPS) uses two ports: one fixed port number and the other random. You also have passive mode and "active" mode for FTP (but everyone these days uses passive, except one particularly backward vendor I had to deal with).

This causes firewall headaches because now the packet filter must understand FTP and selectively punch holes in the firewall for the data connection, and close them when the data connection finishes. Either the packet filter in the OS kernel must understand FTP, or you must use an FTP proxy that can dynamically modify your packet filter rules.
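The hole-punching problem comes down to parsing the control channel. A rough sketch of what an FTP-aware packet filter (or application-level gateway) has to do with an active-mode PORT command - the address in the example is made up:

```python
def parse_port_command(line: str) -> tuple:
    # "PORT h1,h2,h3,h4,p1,p2" -> (ip, port) of the client's data socket.
    # The port is encoded as two decimal bytes: p1*256 + p2.
    fields = line.split(None, 1)[1].split(",")
    ip = ".".join(f.strip() for f in fields[:4])
    port = int(fields[4]) * 256 + int(fields[5])
    return ip, port
```

Every firewall in the path has to do this parse, open the ephemeral port, and remember to close it again when the data connection ends - all state that single-port protocols simply don't require.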

SFTP requires none of this. It works on a single port and this port doesn't change with each file you want to transfer or directory listing you want to see. You can also use the scp command which is much cleaner for scripting than writing FTP scripts. SFTP is a *lot* easier and cleaner to support, and the encryption is built right into the protocol, not added ad-hoc some time later.
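For comparison, the scp side of a scripted transfer really is just one command. A hedged sketch that builds (but does not run) the argv for a batch-mode push - the user, host, and paths are placeholders:

```python
def scp_argv(local_path: str, user: str, host: str, remote_dir: str) -> list:
    # One connection, one well-known port (22), encryption built in.
    return [
        "scp",
        "-B",                        # batch mode: never prompt for a password
        "-o", "ConnectTimeout=10",
        local_path,
        "%s@%s:%s" % (user, host, remote_dir),
    ]

# To actually run it (with key-based auth set up):
#   import subprocess
#   subprocess.run(scp_argv("logs.tar.gz", "backup", "example.net", "/incoming/"), check=True)
```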

You can wrap almost any TCP/IP traffic inside of SSH. You can rsync, ftp and even web browse inside of a SSH tunnel.

In fact, that is exactly how I posted this message. I am at work, with an SSH tunnel to my home network which acts as a SOCKS proxy to the internet for my work PC. Even my DNS queries go to my internal DNS server on my home LAN.

All my corporate overlords see is a fuck ton of SSH traffic to my home IP on some very unusual ports. All Slashdot sees is a normal web connection from my home

If you know what SSH is then why did you ask about SFTP? SFTP is just an FTP-esque environment to copy files over SSH, for when you don't want to do so one by one over SCP or you don't know all the remote paths or whatever. So you say you know what SSH is, but have you actually ever used it?

Haha. One system I had to build and maintain at a previous employer, not that long ago (1999):
- PC.BAT job runs a Qualcomm application that dials up Qualcomm periodically to connect to their satellite truck monitoring system, and captures the session into a file in a special directory
- PC.BAT job looks periodically to see if a new file has come in; uses TFTP to transfer it to a Sun workstation - call it Sun-1.
- Sun-1 shell script mails the file to a special email account on another workstation
- Sun-2 uses fetchmail and p

And something I noticed: files I transfer with SCP either fail outright or actually get done right. With FTP and others, I've lost count of the times files got corrupted in transfer without any kind of warning.

That adding to security concerns should be enough to force the switch in an enterprise environment.
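One way to catch the silent corruption described above, regardless of transfer protocol, is an end-to-end checksum comparison. A simple sketch with Python's hashlib - the two paths stand in for the sent and received copies of a file:

```python
import hashlib

def sha256_of(path: str) -> str:
    # Hash in chunks so large transfers don't need to fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_transfer(sent: str, received: str) -> bool:
    # Silent corruption in transit shows up as a checksum mismatch.
    return sha256_of(sent) == sha256_of(received)
```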

Technically speaking, yes, SCP and SFTP need a shell to call the subsystem that provides the functions needed. You can install a package called "rssh" which will restrict a user to the SCP and SFTP subsystems, and prevent access to any other commands.

5x faster does not shave off 5 bits, it shaves off log2(5) ≈ 2.32 bits. So, 256-bit AES is still ~253.68 bits (and only if you have 2^88 bytes of very low latency storage, which is many orders of magnitude more storage than humans have produced in recorded history).
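The arithmetic, spelled out:

```python
import math

# A 5x speedup removes log2(5) bits of effective key strength,
# not 5 bits, because each bit of key length doubles the work.
speedup = 5
bits_lost = math.log2(speedup)   # ~2.32 bits, since 2**2.32 ~= 5
effective = 256 - bits_lost      # ~253.68 effective bits for AES-256
```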

I wasn't going to pay any attention to that silliness, but I feel like saying that I use FTP all the time as well.

Not for server work (SSH protocols for that), but I use FTP between computers here. It's a fast and reliable way to transfer data. If it's a lot of small files I tar it up first though. (I would always want to archive that kind of stuff for any method of data transfer, though)

I still use FTP clients to download stuff where I can too. (e.g. kernel and other source tarballs, distro mirrors for ISO

There might be one practical use where you won't violate security ideals - anonymous FTP. Otherwise FTP is swiss cheese. I'm not joking. So it sounded like you were finding the smart-ass narrow exception. But you may in fact not really know the dangers otherwise.

The problem isn't really the downloader. It's the fact that the host is vulnerable to iterative attacks until it cracks. Then it's hijacked. Ftp can be cracked like an egg in its Unix and GNU form.... and that's not the only problem.

Correction. FTP should not be used anymore. It is used. Widely. Why? Because it works, and because the person who could change it left the company years ago. But slowly.

Turn back the time a decade. We're at the downturn after the dot.com bubble blew up, a lot of more or less sane IT people are out of a job (along with all the duds that got their job by spelling TCP/IP halfway correct and knowing that it ain't the Chinese secret service), and all of them are looking for work, any kind of work will do. So the

Back then, people cared even less about security than they do today, what they wanted was an IT infrastructure that works.

Of course, I've seen ISP environments that used FTP heavily (as well as TFTP for a bunch of automated stuff). Why? Because when you're running an encrypted tunnel through another encrypted tunnel that runs between two trusted hosts on a segment of the network that does not allow incoming traffic from anywhere but the NOC it just seems silly to add another layer of encryption and the potential issues that could come with that for daily log transfers...

Something tells me you'll have a rude wakeup call if you get out of school and start working for some big business. FTP is still an extremely common way of transferring files in batch scripts and such.

Oh. Right. I was cleaning up DEC tape spews when you were a zygote. FTP may be common, but it's intensely insecure. Do your research. If you use it, you're irresponsible and endanger your organization.

You can only hope that a cogent argument, repeated until PHBs take it seriously, then think it's theirs, will do some good. Too many systems get p0wn3d because of stupid stuff, and ftp is old, and is just plainly irresponsible-- save places where a secure channel exists. Mostly, they don't; secure channels are another problem for a different day.

That's nothing, I spoke with a colleague and they have an intern from a large state college with a computer engineering school that is considered pretty decent. The intern didn't even know what FTP was, and it wasn't because they knew about more secure protocols like sftp. I was shocked to say the least. What are they teaching in school these days? I'm really at a loss...

I would beg to differ. I spent a couple of hours last week setting up a regression test environment to run a patched version of our FTP connection layer through its paces (the errors were actually in SFTP error handling, but we re-test everything). Some of the equipment our customers must collect data from supports no other method of retrieving it. Generally, the network itself is *very* secure, and our box is sitting inside of it. I guess the customers don't see it to be much of an issue, and will

Unencrypted FTP with Kerberos? Anonymous FTP? Plenty of ways you can use FTP without putting an account at risk.

As for your claim that "FTP hasn't been used in a long time" - it's clearly bogus. FTP is widely used. More web browsers support vanilla FTP than support FTP over SSH. If you want the Linux kernel sources, or a distro ISO image, the overheads of encryption aren't gaining you enough to make it worth the effort - the higher throughput and lower server loads win every time.

I've been really surprised by all of the purported ftp use cited in this thread... tftp as well. Web hosting sites are in need of some updates. Using https would at least prevent part of the problem. Yet it's up to people that understand infrastructure to help educate those that don't understand the nature of hacks and cracks. Organizations get banged with hammers that most people aren't willing to understand. Yesterday, my primary web facing server was under attack from two different places trying to beat

Exactly what those updates are, that's more debatable. tftp is excellent for bootstrapping a machine with an OS and is independent of machine architecture (ix86, MIPS, UltraSPARC) and BIOS (Corelis, Phoenix, UEFI, etc) - I really, really, really do NOT want to try implementing SCP in Forth for bootstrap purposes. I couldn't afford the psychiatric treatment afterwards.
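TFTP's wire format really is simple enough to build by hand, which is much of why it survives in boot firmware. A sketch of a read request (RRQ) packet as defined in RFC 1350 - the filename is just an example:

```python
import struct

def tftp_rrq(filename: str, mode: str = "octet") -> bytes:
    # RFC 1350 read request: 2-byte opcode (1 = RRQ),
    # then filename and transfer mode, each NUL-terminated.
    return (struct.pack("!H", 1)
            + filename.encode("ascii") + b"\x00"
            + mode.encode("ascii") + b"\x00")
```

That whole request fits in one UDP datagram with no handshake, no options, and no crypto - trivially implementable in a boot ROM, and a good illustration of why replacing it with SCP there is so unappealing.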

Likewise, I would not consider using any other authentication mechanism in environments already using SASLv2

ARPANET predates the OSI model, and the current Internet Protocols came after the definition of the OSI stuff. (That's a little hard to see in the current wikipedia articles, but it's there.) The IETF in fact deliberately chose to combine two of the OSI layers.

The article does have some issues. I'm not sure if the author actually doesn't understand the paper he or she is trying to summarize. Maybe the intent was to make it easier for the lay person to understand. But there is some creativity going on, and parts of the summary don't really reflect the paper.

The paper itself is offering a framework of analysis of the evolution of the Internet Protocols. It might have been interesting to see a bit more analysis of ARPANET and some of the other protocols the IP protocols eventually replaced. It might have been interesting to see them address the OSI model a bit more, but the OSI model never was really implemented fully, and might be considered not part of the evolution.

I see that they take IPv6 up as a competitor of IPv4 instead of the heir apparent, which is probably a useful thing to do, if we want to understand why so many IT managers are still failing to move in a timely manner.

I'm not sure I understand their work well enough to either agree or disagree, but I think it offers food for thought, including the idea that IPv4/6 doesn't actually have to be the only protocol existing at that layer.

You're absolutely right that it doesn't have to be the only protocol at that layer. The X protocols from Europe cover the full spectrum of the OSI model, including layers 3 and 4. The TUBA protocol (one of the candidates for IPv6) could perfectly well be implemented, again sitting at that layer. Infiniband has its own layers 2, 3 and 4. Other IP protocols exist - albeit in experimental form for the most part. (IPv0 could be said to still exist.)

You're missing the point. A good example would be fast food restaurants. There used to be a Mexican based fast food chain called Taco Bell. It used to be the only place to get burritos, but then McDonalds introduced their breakfast burrito and drove Taco Bells nearly extinct like FTP. Please ignore the fact that you drove by 3 of them this morning or that it's impossible to update your website without using FTP.

I've never really been a fan of the OSI model. The idea of the hierarchy is great; sandwiching it into discrete layers seems problematic.

Wikipedia's definition of the OSI model [wikipedia.org] states that "there are seven layers, each generically known as an N layer. An N+1 entity requests services from the layer N entity." Makes sense. So, why are both ICMP and IP considered to be in layer 3? ICMP is built on top of IP, so it should be in the layer above IP, but it doesn't actually provide transport (or at least, isn't meant to). HTTP is in layer 7, but it can be sent directly on top of TCP, which is in layer 4, skipping over two layers. (Or it can be tunnelled over SSL, but still skipping layer 5.)

I prefer to think of the IP stack being a directed acyclic graph of technologies, each depending on another, rather than an explicit linear division into layers.

Well, you can imagine a "null" layer that does nothing, just passes the data unmodified to the next layer.

For example, HTTPS would be HTTP over SSL; SSL would be level 6 (presentation). If you use HTTP without SSL, then level 6 is empty or uses the "null" protocol.

ICMP is part of IP, while you could say that the ICMP packet is inside an IP packet it is easier to imagine ICMP as just a part of IP, because it is used that way (for example, to signal that some other packet could not be delivered).

Just because I can send the HTTP packet inside an Ethernet frame (without IP or TCP), does not mean that the model is broken, it's just that "null" is a valid protocol.

Good point about the null. I see that it works that way for non-SSL traffic, but I still don't see how the "session layer" sits in between HTTP and TCP (even if you consider it to be "null"). It seems like session layer protocols are an entirely different sort of connection.

As for ICMP, I see what you mean that it's sort of part of the IP protocol (IP wouldn't work without ICMP), but it is syntactically formed inside an IP packet, and I do believe it is constructive to think of ICMP as being "on top of" IP

Well, in that case ICMP is a transport layer protocol, I mean you can stuff arbitrary data inside an echo request packet, so you can use it as a way to send HTTP requests (and the recipient replies with the same data, so you can check whether it arrived correctly).
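The arbitrary-data-in-echo-request trick is easy to see at the packet level. A sketch that builds (but does not send - that would need a raw socket and root) an ICMP echo request per RFC 792, with the standard internet checksum:

```python
import struct

def inet_checksum(data: bytes) -> int:
    # RFC 1071 ones'-complement sum over 16-bit words.
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def icmp_echo(ident: int, seq: int, payload: bytes) -> bytes:
    # RFC 792 echo request: type 8, code 0, checksum, identifier, sequence.
    # The payload can be anything -- an HTTP request, tunnelled data, etc.
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)
    csum = inet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload
```

A well-formed packet checksums to zero when you re-run the checksum over the whole thing, which is exactly how receivers validate it.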

Well, another example - I take an HTTP packet and send it straight over the wire (let's say a serial or parallel port of a PC), now it only has two layers - physical and application, all others are null. Or if you want a network, try an I2C bus, i

Well that makes my point, though: you can arbitrarily nest protocols inside one another, so it doesn't make sense to talk about them strictly in layers. Rather than saying "HTTP can drop to a lower layer", why not throw away the concept of layers, and just have a more vague concept of "application level" versus "transport level" and so on, like the 4-level IP stack.

The OSI model is still useful to know in which order you want to do stuff.

For example, take the application data, if you need to, convert it to something that the recipient can read (XML, some encoding), then encrypt it and/or use whatever session management protocols you want, after that put it in a transport protocol, then a network protocol and pass it down to data link which will send it over a physical connection.

The fact that you can arbitrarily nest protocols inside one another is the result of the f

Because the Internet protocols are not in fact part of the OSI model, despite lots of teaching materials claiming this. The neat little OSI layer diagrams you see with all the layers filled in are mostly retcons invented long after OSI was dead.

The actual Internet protocol suite is not part of the OSI model but the 4-layer Internet model [wikipedia.org] (Link, Internet, Transport, Application). Link is like OSI layers 1 and 2, Internet is like OSI Layer 3, Transport is like OSI Layer 4, Application is like OSI Layer 7, but there is no actual Internet equivalent of OSI's layers 5 and 6. Pretty much everything above 4 runs at Layer 7.

In the Internet model, it makes perfect sense for DHCP, IP, ICMP and routing protocols like RIP and OSPF to be at the Internetworking level, because they are all protocols dealing with datagram transmission between interconnected, disparate packet-switched services, while TCP and UDP are in the Transport layer because they make dealing with raw datagrams somewhat more pleasant.

It would perhaps be sensible to invent a whole new layer model now that we have a lot more protocols. HTTP, for instance, should be a layer of its own, since so many things are now tunnelled over it. That would be sensible, though, so good luck.

Thank you. Yes, the four-layer Internet Protocol Suite thing makes a lot more sense. Rather than trying to say "there are seven layers stacked on top of each other," it seems like here, the protocols are arranged into four logical "protocol groups" with clearly-defined roles, and no sense of "protocols in layer N run on top of those in layer N-1". In the IP suite, it seems valid for protocols in the same group to run on top of each other (e.g., HTTP runs over SSL; ICMP runs over IP).

While the 4 layer model may make sense from the upper layers POV, I do prefer separating the Link layers, and not mixing the media used w/ the switching layers.

I think the key with TCP/IP is that you have two layers that are actually part of TCP/IP. Above those layers you have an application, and below them you have a "link". The application and the link may themselves be divided into multiple layers, but that is outside the scope of TCP/IP. You may even have some layers occurring more than once in the stack.

TCP/IP was not designed to fit the OSI model therefore any attempt at mapping TCP/IP onto the OSI model will be imperfect

TCP sits above IP and is conventionally considered to be at OSI level four (though according to wikipedia it implements some functionality that is in OSI level 5). UDP also sits above IP and therefore is also conventionally considered to be at OSI level four (though it implements hardly any of the functionality OSI associates with that layer).

It would perhaps be sensible to invent a whole new layer model now that we have a lot more protocols. HTTP, for instance, should be a layer of its own, since so many things are now tunnelled over it. That would be sensible, though, so good luck.

Thinking of a fixed set of layers stops being useful as soon as you get moderately complex network setups because these days encapsulations tend to happen at all sorts of layers. Modern networks can probably be thought of more as a stack of protocols with the link layer at the bottom, application at the top and chopped up repetitive bits of the stack in the middle.

e.g. take a modern connection to a website and we probably see this kind of stack:
HTTP
SSL
TCP
IP
PPP
PPPoE
Ethernet
ATM VC-Mux
ATM
G.992.5 data link layer
Physical ADSL

And that's just for a plain home ADSL connection. In more complex networks it is common to encapsulate stuff further, for example using GRE tunnels or IPSEC tunnels, and it isn't uncommon to see something more like:

So, why are both ICMP and IP considered to be in layer 3? ICMP is built on top of IP.

The real answer to that is that it's a Berkeley UNIXism. Some early TCP/IP implementations, including the one I worked on, had ICMP at a layer above IP, in the same layer with TCP and UDP. The Berkeley UNIX kernel, like other UNIX versions of the period, had real trouble communicating upward within the kernel, because this was before threads, let alone kernel threads.

To get around that kernel limitation, ICMP was crammed in with IP. This had some downsides, including the demise of ICMP Source Quench for

Surely this article should be modded "massive ignorance"! It's the simplicity of the middle layers that enables the development of the upper and lower levels. It also makes the middle layer much more immune to security issues.

Well, I know for myself a good swift "attack" on my "middle layer" does cause me to fall to the ground and writhe around for a while, so I guess the internet and I do have a lot in common, really vulnerable mid-sections.

Not only did they combine the presentation and application layers from the OSI model, they completely misunderstand WHY the transport layer is less diverse in number of protocols.

They propose that we should create new transport protocols that do not overlap with existing ones.... The reason we only have a handful of them is because of the fact that there are not many ways to differentiate a transport protocol.

There seems to be the unstated (but vital to the conclusion asserted) assumption that competition actually makes protocols more secure, and that competition must occur at the protocol level rather than the implementation level. Without those assumptions holding, all this article really says is that people use TCP and UDP a lot. Yup. That they do.

This seems like it might be true in the (not necessarily all that common) case of a protocol whose security is fucked-by-design competing with a protocol that isn't fundamentally flawed, in a marketplace with buyers who place a premium on security, rather than price, features, time-to-market, etc.

Outside of that, though, much of the competition and security polishing seems to be at the level of competing implementations of the same protocols (and, particularly in the case of very complex ones, the de-facto modification of the protocol by abandonment of its weirder historical features). It also often seems to be the case that (unless you are in the very small formally-proven-systems-written-in-Ada market, or something of that sort) v1.0 of snazzynewprotocol is a bit of a clusterfuck, and is available in only a single implementation, also highly dubious, while the old standbys have been polished considerably and have a number of implementations available...

(unless you are in the very small formally-proven-systems-written-in-Ada market, or something of that sort) v1.0 of snazzynewprotocol is a bit of a clusterfuck, and is available in only a single implementation, also highly dubious, while the old standbys have been polished considerably and have a number of implementations available...

Careful that we do not open Pandora's box here... (You know exactly what I am talking about, heh)

But on another note, you're exactly right. This article seems to talk about how protocols "evolved", but this is just as useful as painting a picture of the internet: time and time again I will see models looking at a picture of the internet "all at once", but without knowing the what and why of each individual link, protocol, implementation, etc., this is a complete waste of time.

As best I can tell, after going back and reading the paper, TFA is a miserable hatchetjob that has almost nothing to do with the paper.

The paper dealt with modeling the survival or culling of protocols at various layers, under various selection criteria, from a sort of evolutionary-biology standpoint. This did entail examining what conditions resulted in monoculture end states, and what conditions might result in stable multiple-protocols-at-each-layer end states; but all at the level of a fairly abstract

... there is human error there will be weakness. Before innovation, there is caution and upkeep. Careless server admins just leave their gates open, a la Sony. A simple misconfiguration and the East goes dark, a la Amazon.

But like all things founded on good democratic freedoms, we are free to be idiots. And unless we add socialized security, the internet will always be full of gaping weaknesses. And all of us, including those that serve responsibly, will suffer their consequences. A la the United States of

Evolution always seemed too much like MS Outlook to me; this article just seems to confirm that, judging by the odd intelligible snippet I can make out from the overuse of metaphors and confused language of the summary. But fear not, mutt does not suffer these problems, and nor does Thunderbird if you need your middle-layers-of-the-internet client to have pretty icons.

security breaches could be countered by diversification of protocols, according to Georgia Tech, which recommends new middle layer protocols whose functionality does not overlap, thus preventing 'unnatural selection.'

Let's have a lot of protocols right, but to prevent too much diversity (that is, stuff that doesn't work) we'll need to make sure these comply with one or two protocols that everyone will use...

Hmmm, "Middle layer protocols whose functionality does not overlap"... does that mean that we prune the vast abundance of current protocols with sometimes overlapping functionality? I guess we could call that "diversification" though at this level of semantic mismatch, we could call it "Frank" with equal justification.

Evolution at the middle layers is also hampered by the proliferation of middleboxes [wikipedia.org]: monkeying with packet headers for policy-enforcement and profit. It's also pretty well de rigueur for IT departments to configure both middleboxes and "smart" switches to drop any unrecognized middle layer packets.

They don't on anonymous ftp, but ftp fundamentally sucks: it needs two ports, a fixed port and a random data port that gets opened and closed for each transfer or directory listing, meaning added firewall complexity (the packet filter now must understand and parse the FTP protocol to be able to punch the holes to allow the random port traffic to pass, then close them again afterwards).

HTTP is far better for doing what anonymous FTP does. It requires only one port. For anything authenticated, sftp beats ftp.

No - the figurative sense of ossified is correct and common. Petrified is usually used figuratively to mean something like "scared stiff". Ossified, in common figurative use, means that something has become stiff and inflexible (often through disuse or rot) - like tissue that has become bone.

Having skimmed the article, I am concerned that they seem to ignore the well-known network effect: the value of a network to those attached to it increases at a rate faster than linear as a function of the number of others attached. This property has generally meant that once a network-layer protocol is sufficiently well established, it is hard to displace; a winner-take-all situation. Telegraph network. Telephone network. In the data world, IP, ATM, and a handful of others slugged it out, and eventuall

More for integrity, but the service layer architecture is purely based on trust. It turns out that you can more readily do the most when you have trust, which partly explains the rapid growth of the Internet. However, a bunch of trusting souls make an irresistible target for those who are willing to exploit their trust. I believe the only way to deal with them is to move faster than they can. FTP should have been enhanced to the point that few would use the older version, hence a smaller target. I don

There are plenty of those already. NetBIOS is an example of a non-TCP/IP peer-to-peer filesharing protocol (I'm talking LANMAN style NetBIOS, not NetBIOS over TCP/IP). It doesn't route outside your local network though. There's the good ol' IPX/SPX, which can actually be routed if your router supports them - while not filesharing protocols in themselves, they do support some very well-established filesharing protocols. You could probably adapt bittorrent to work on IPX/SPX.

The problem is we can't even get IPv6 routed on the internet, much less some obscure non-IP protocol. Hell, we never even really got all of IPv4 - multicast would have been great for streaming video if anyone had bothered to set up their routers for it.

That being said, you don't need to use TCP and UDP. You can create new protocols to run over IP, and the internet will generally pass them (your local firewall might be a different story). They'll stick out like a sore thumb to anyone searching for them, though.