Posted by ScuttleMonkey on Monday May 12, 2008 @12:53PM from the comcast-on-backorder-for-months dept.

An anonymous reader writes to tell us that Procera Networks is launching a new weapon on the deep packet inspection (DPI) front. At $800,000 these 80 Gbps tanks aren't going to be sitting in everyone's closet, but it could mean that more traffic shaping is on the way. "The PL10000 can handle up to 5 million subscribers and can track 48 million real-time data flows. That's certainly a potent piece of hardware, but larger ISPs will need more. That's why Procera designed the new machines with full support for synchronizing traffic flows where return traffic might be routed to a different PacketLogic machine. The machine receiving the return traffic can make the machine monitoring the outbound traffic aware that it sees the other half of a TCP/IP conversation, for example, giving the devices more accuracy than those which might only have access to one side."

It's not that funny. I live in China; now we'll have even slower traffic.
As it stands, forget watching YouTube. All I can get is about 30 KB/s download/upload on a single connection, which is barely enough to listen to internet radio. The good news is that I can have more than one connection open with other countries, but from what I understand no media players or streaming servers can open multiple parallel 30 KB/s connections to add up to the ~4 Mbps needed for watching internet video. That's why China's "Golden Shield" works so well: to circumvent it, you need tools that open multiple connections for a single purpose, e.g. a media player, or a web server delivering one large page over several data connections.
Oddly enough, if I connect to websites inside China I can get 4 Mb/s connections.
In my perspective and experience, the world's internet is already crippled by equipment like this.
I'm grateful I can actually express my opinion about this here.
BTW, for the last four or five months Slashdot has had this in-your-face Quantserve job ad when accessing the site. From China it often slows down page access, and it sometimes takes 5 to 10 minutes before I can read the main page. Is this normal?

Yes, 2 kB/s would be the available average bandwidth. So, assuming that nobody is running p2p software, downloading pornos, or retrieving Linux ISOs, the available peak bandwidth would be much higher. But that would mean you'd have to advertise speeds you can't provide during high-demand times and hide a "we'll provide whatever we feel like providing and you'll have to keep paying for it whether you're satisfied or not" clause in the contract. Would any ISP ever stoop so low as to try something like that?

2 kB/s works out to 172.8 MB per day. If everybody spread their usage out evenly over the course of the day it might work, but since usage is not uniform it probably would not. This box would be in the ballpark, though, especially if it was effective at shutting down all of the P2P users on your network. Install three or four of them and it might actually work.
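For the curious, those figures are just the box's capacity divided across subscribers; a quick back-of-the-envelope check in Python, using the numbers from the article and the comments above:

    # 80 Gbps shared across 5 million subscribers
    per_sub_kbps = 80e9 / 5e6 / 1000        # 16 kbps, i.e. 2 kB/s
    per_day_mb = 2 * 1000 * 86400 / 1e6     # 2 kB/s sustained for a day
    print(per_sub_kbps, "kbps;", per_day_mb, "MB/day")  # 16.0 kbps; 172.8 MB/day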

If you don't route all of the packets through this thing, what device will do the cursory inspection and decide which packets warrant "deep" inspection? (I'm really asking; if somebody has a good answer, I'd be interested.)

This is also assuming every single packet that an ISP manages goes through a single physical location. So unless Comcast routes every packet to their headquarters at the top of Mt. Doom for inspection before delivery, they're going to need a lot more of these.

Assuming everyone using my level of connection (10 megabit) maxes out their connection (unlikely), they could handle about 8,200 users, making their cost about $100/user… which is still potentially reasonable. $50/user if people average 5 megabits (far more likely), and $25 if they top out at 250 KB/s on average.
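The same arithmetic as a sketch, with the average speeds taken from the comment above (250 KB/s is roughly 2 Mbps):

    BOX_COST_USD = 800_000
    BOX_CAPACITY_MBPS = 80_000              # 80 Gbps
    for avg_mbps in (10, 5, 2):
        users = BOX_CAPACITY_MBPS / avg_mbps
        print(f"{avg_mbps} Mbps average -> {users:,.0f} users, "
              f"${BOX_COST_USD / users:.0f}/user")
        # 10 Mbps -> 8,000 users, $100/user; 5 -> $50; 2 -> $20
        # (the comment rounds the last case up to $25)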

Better yet, force the telcos to put up the fiber networks they were awarded huge tax cuts to build! They don't have bandwidth problems; they have accountability problems created by the RIAA et al., backed by people desperately trying to find a way to censor the net.

At almost a million dollars a pop, is it really saving money for ISPs to use these? How many would a major ISP need to shape all of their traffic?

Not only that, but it seems like a dumb technical solution for P2P traffic shaping.

Most ISPs would be geographically distributed. I can't think of too many places where you would actually see this much traffic. You'd need, what, 10 OC-192s to see 80 Gb/s? Maybe they add all the GigE ports together and cheat to advertise a big number, but still.

Second, this is the kind of device you want closest to your customers, not down the line where your traffic aggregates. If you want to stem upstream traffic, do it as early in the network as possible.

Third, it's better in almost every aspect of IT to scale out, not up. Every node would be different: you could have business customers in one CIDR block or another, with different configurations for each. I'm sure this thing is configurable per port, but I'd think it would be easier and more cost-effective to have smaller, distributed, individually configurable devices only where you need them.

No, I don't think this thing is best suited to do traffic shaping for the typical ISP. If you can do DPI on that much traffic, there are bigger, less benign applications I can think of.

Yep, and how much were computers, originally? The price on these will drop when enough of them are bought.

No, it won't. There is realistically only a market for a handful of these worldwide, not several million of them like PCs. It's exactly like Cisco hardware: it has remained astronomically expensive simply because only a very small, select group of people (network admins) actually buys it.

When presented with encrypted traffic, however, DPI has only a couple of options (at least AFAIK): give the packet a low priority, or pass it through normally (it could also drop it entirely, but doing that as a rule would be problematic, to say the least).
So it would be possible to force a bet: can the ISPs afford to give encrypted traffic a very low priority?

No, but if they wanted to be pricks they could identify p2p users and give THEIR encrypted traffic a very low priority.

Even if you ran with full encryption and encrypted the communication with the tracker it's still trivial to identify you as a p2p user -- not many VPNs make connections with dozens (or hundreds) of remote hosts.

The only way around that would be to VPN somewhere and use that VPN link to pass all your p2p traffic -- but if you have the means at your disposal to set that up then you likely have the means to find an ISP that doesn't throttle your p2p traffic.

https://www.relakks.com/?lang=en does exactly what you've described. I believe the cost is $10/month US.

Their netblock is known. Connections to the VPN service are a red flag. The system is designed to monitor both directions of a connection and associate them. How many ways can a VPN connection be intercepted by a man-in-the-middle attack when all the initial handshakes are known to the man in the middle?

It should be trivial to limit any end node to a maximum of, say, 8 encrypted connections to unique destination netblocks. Any new sessions negotiated after that would automatically be given very low priority.
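A minimal sketch of what such a rule could look like, assuming a flow table keyed by source IP and destination /24 (all names hypothetical, not anything Procera has documented):

    from collections import defaultdict
    from ipaddress import ip_network

    MAX_ENCRYPTED_NETBLOCKS = 8   # the threshold suggested above

    # source IP -> set of destination /24 netblocks seen with encrypted flows
    encrypted_peers = defaultdict(set)

    def priority_for_new_flow(src_ip, dst_ip, looks_encrypted):
        """Return 'normal' or 'low' priority for a newly negotiated session."""
        if not looks_encrypted:
            return "normal"
        block = ip_network(f"{dst_ip}/24", strict=False)
        encrypted_peers[src_ip].add(block)
        # Too many distinct destination netblocks -> deprioritize new sessions.
        if len(encrypted_peers[src_ip]) > MAX_ENCRYPTED_NETBLOCKS:
            return "low"
        return "normal"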

Also, a TCP packet contains a lot more than just an encrypted payload: you can tell a lot about a packet from the other parts: source and destination ports, sequence and acknowledgement numbers, header length, reserved bits, the URG/ACK/PSH/RST/SYN/FIN flags, window size, checksum, urgent pointer, and even the options field. I'm sure it wouldn't be very difficult to set up a Bayesian detection ruleset using this data to identify which protocol is being used. The checksum and flags wouldn't be all that useful, but the port numbers, header length, window size, urgent pointer, and seq/ack number progressions can be quite telling.
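As a toy illustration of that idea (my own sketch, not anything this product is documented to do), a naive Bayes classifier over discretized header features might look like this:

    import math
    from collections import Counter, defaultdict

    class HeaderBayes:
        """Naive Bayes over discrete header features, e.g.
        {"dst_port": "ephemeral", "win": "small", "opts": "none"}."""
        def __init__(self):
            self.class_counts = Counter()
            self.value_counts = defaultdict(Counter)  # (cls, feature) -> values

        def train(self, features, cls):
            self.class_counts[cls] += 1
            for feat, val in features.items():
                self.value_counts[(cls, feat)][val] += 1

        def classify(self, features):
            total = sum(self.class_counts.values())
            best_cls, best_lp = None, float("-inf")
            for cls, n in self.class_counts.items():
                lp = math.log(n / total)
                for feat, val in features.items():
                    seen = self.value_counts[(cls, feat)]
                    # Laplace smoothing so unseen values don't zero a class out
                    lp += math.log((seen[val] + 1) / (n + len(seen) + 1))
                if lp > best_lp:
                    best_cls, best_lp = cls, lp
            return best_cls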

I agree that info is revealing, but if it is done as a tunnel connection, the revealing info will look like a point-to-point tunnel; all the good stuff is going to be in the header info inside the encrypted payload.

Freenet runs over UDP with fully randomized ports. It acknowledges messages, but even the ACKs are encrypted. Window sizes are hidden behind the crypto as well. Except for the initial connection, handshaking is done by routing through previously established connections.

I'd like to see them DPI that. The best they can do is traffic analysis and decide it looks like P2P and throttle on that.

You seem to have missed the point... everything you state is obvious; the trick is that different OSes use different "random" port ranges, use different random number generators, have different systime drift, etc. If you examine all the extra packet information over time when someone is running, say, a torrent client (or even a Tor router or Freenet node), those applications leave their mark on your sequence-number patterns. An ISP knows what standard traffic looks like on their network; if they see any signs of shift (including encrypted traffic that doesn't fit the usual patterns), they can flag it.

Definitely not. If people find that their online web purchases fail to complete because some marketing executron has decided to put HTTPS in the slow lane, word will soon get round on the consumer newsgroups.

The problem with this whole "it's encrypted, so they'd have to throttle SSL too" idea is that BitTorrent doesn't use SSL. Encrypted BT traffic looks nothing like any other traffic, so it can still be picked out of the traffic flows and thrown into another QoS bracket. Using SSL for BT would also be stupid, because SSL (the key exchange in particular) is computationally expensive. You'd peg your CPU at 100% the whole time you were grabbing your porn.

That's right, each time the connection is established (and at renegotiations after X amount of data or X amount of time). BT opens sockets constantly, and the key exchange is the expensive part, not the AES that comes after. Pop open top/taskmgr, and then pop open an SSH connection. Watch the CPU spike. Now consider that same spike happening constantly with multiple connections at once, over and over again after each chunk. Worse, you don't have control over the rate at which this happens, because other peers open connections to you on their own schedule.
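You can see the asymmetry for yourself; a rough benchmark sketch, assuming the pyca/cryptography package is installed (absolute numbers will vary by machine):

    import os, time
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    session_key = os.urandom(32)

    # 100 "handshakes": wrap and unwrap a session key with RSA
    t0 = time.perf_counter()
    for _ in range(100):
        key.decrypt(key.public_key().encrypt(session_key, oaep), oaep)
    print("100 RSA key exchanges:", time.perf_counter() - t0, "s")

    # Bulk AES with the agreed session key is far cheaper per byte
    enc = Cipher(algorithms.AES(session_key), modes.CTR(os.urandom(16))).encryptor()
    t0 = time.perf_counter()
    enc.update(os.urandom(10 * 1024 * 1024))   # 10 MB
    print("10 MB of AES-CTR:", time.perf_counter() - t0, "s")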

Look at the priority the user requested for that packet, check to make sure the router you received the packet from hasn't filled their quota for that priority, and if not, give the packet that priority.
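In sketch form (dict-based, all names hypothetical), that per-router quota check is only a few lines:

    def assign_priority(packet, router_usage, quotas):
        """Honor the requested priority unless the upstream router's quota is spent."""
        wanted = packet["requested_priority"]      # e.g. a DSCP-style marking
        used = router_usage.get(wanted, 0)
        if used + packet["size"] <= quotas.get(wanted, 0):
            router_usage[wanted] = used + packet["size"]
            return wanted
        return "best_effort"                       # over quota: demote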

Remember when the internet was supposed to be a "dumb" network that could therefore be easily and seamlessly improved by just improving the software at the endpoints? Those were good times.

Actually, the whole idea of DPI is *not* to detect things based on port. There are definitely legitimate uses for encrypted traffic, heck, even encrypted P2P, but it'd be a bit premature to say that you can't separate protocols from each other even if they're encrypted. It's a bit beside the point, though. A sane approach to DPI is just to give some traffic a lower priority than other traffic. If the pipe goes full, you don't want to RED-drop some WoW traffic (unhappy user) over some BT traffic (decidedly non-time-critical).

Surely that money could be better spent improving their capacity by purchasing new equipment with better signaling methods or even extra lines rather than on equipment to inspect and shape (i.e. selectively throttle) traffic?

Even if improving capacity costs a fair bit extra, the room for more customers at higher speeds, plus more consistent service for existing customers, will surely increase their profits by offering more than the competition, right?

Investing in more capacity means a linear increase in customers and profits. Investing in network anti-neutrality, OTOH, means new and lucrative pricing structures for various services. They're just putting money where it stands to return the greater profit.

Installing more capacity doesn't help with congestion when all of the P2P apps on the network automatically increase their bandwidth consumption in response to the increase in available bandwidth.

It does if you invest in more capacity without increasing the speeds available to your end-users. Put another way, my torrent seeding at 768k might be consuming 1% of a backhaul link -- if they triple the speed of that link without increasing my upstream bandwidth then I'm only using 0.33% of it.

If you can't supply 10 Mbit speeds to your customers, then stop offering them.

How much of this advertised speed is advertising hype more than anything else? We all know what it takes to do packet inspection and rules-table lookups, so to me this number seems a bit on the hyped-up side.

Only 80 Gbps with 5 million subscribers? If my math isn't way off, that's about 16 kbps each, which is pretty pitiful. You'd have to throttle a lot just to be able to use one of these machines at the maximum subscriber count per machine.

Those 5 million subscribers are not all using their connections concurrently. Think about what just happened when I loaded this webpage: it downloaded a text file full of HTML/CSS/JavaScript/whatever else Slashdot uses, and now it sits here while I type this comment. I'm not using my connection right now, and won't be using it again until I hit the submit button.
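That's classic statistical multiplexing. A toy simulation, where the active fraction and per-user rate are pure assumptions chosen for illustration:

    import random

    SUBSCRIBERS = 5_000_000
    ACTIVE_FRACTION = 0.01      # assumption: ~1% mid-transfer at any instant
    RATE_MBPS = 1.5             # assumption: average rate while active

    active = sum(random.random() < ACTIVE_FRACTION for _ in range(SUBSCRIBERS))
    demand_gbps = active * RATE_MBPS / 1000
    print(f"{active:,} users active at once -> ~{demand_gbps:.0f} Gbps "
          f"against the box's 80 Gbps")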

Sometimes traffic shaping can be a good thing. For example, on a VoIP call you really do want to give priority to the packets associated with the call, so that the codecs can reconstruct a reasonable facsimile of a voice.

QoS issues, and those that depend on connection latency, need to be addressed, but deep-diving packets is unnecessary for this. You need only look at the header to find that it's TCP and which service is requested, and prioritize on latency accordingly. The remaining issues are handled by various protocols. This is like swatting a fly with a freight train... an eight-hundred-thousand-dollar, monopoly-building freight train.
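Header-only classification really is cheap; a sketch that pulls the protocol and ports out of a raw IPv4 packet without ever touching the payload:

    import struct

    def classify(pkt: bytes) -> str:
        """Classify from the IP/TCP headers alone -- no payload inspection."""
        ihl = (pkt[0] & 0x0F) * 4            # IPv4 header length in bytes
        if pkt[9] != 6:                      # protocol field: 6 = TCP
            return "non-tcp"
        sport, dport = struct.unpack_from("!HH", pkt, ihl)
        well_known = {80: "http", 443: "https", 5060: "sip", 22: "ssh"}
        return well_known.get(dport) or well_known.get(sport) or "other"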

Deep packet inspection is necessary to identify and provide QoS for many modern internet applications. For example, it is quite common for services to tunnel video over HTTP (example: YouTube). Skype cannot be identified without DPI. Of course it can be used for good or evil. But the fact of the matter is that DPI is in the mix as one approach to providing QoS for real-time internet applications like streaming video and audio that don't play well with the 'best effort' delivery paradigm that packet-switched networks provide.

At huge speeds, I'm not sure that the 20 KB/s needed for full-duplex VoIP is even going to get noticed. Voice is a full-duplex application, and isochronous media needs a little headroom; add some more for multiple streams. YouTube or other video over whatever protocol is entertainment, and can get in line with all the other entertainment. The bandwidth needed to protect single VoIP streams from latency issues is trivial, and routing problems inject more latency than packet squishes and misfires do.

"Sometime traffic shaping can be a good thing. For example, on a VOIP call you really do want to give priority to the packets associated with the call"Yes. And it's a good thing for your ISP to know you are, for example, on VoIP to really *slowdown* the packets associated with the call so they can push through your throat their "premium service for VoIP" which is just de-capping again your VoIP calls.

Oh! And *they*, not you, are the owners of the device, so which of those two "good things" do you think you'll get?

Yes. And it's a good thing for your ISP to know you are, for example, on VoIP, so they can really *slow down* the packets associated with the call and shove their "premium service for VoIP" down your throat, which is just de-capping your VoIP calls again.

Yeah, and the bad fairies might come in the night, steal your firstborn, and replace it with a gollum.

Think about the original design of Ethernet, and of IP in general.

In general, it was set up to pass packets, ideally keeping them in the same order and not dropping them. Beyond that, the upper layers (TCP and UDP) handled any higher-level functions.

This worked! For the longest (damned) time, it worked.

And now ISPs (and large networks) are starting to try to break the 'cable is a bunch of bits' out into discrete 'services,' and then to re-order things, drop things, queue them differently, or somehow treat traffic non-uniformly.

I think this is Evil(tm).

I've been in the networking field for a few decades (really), and I've seen traffic shaping (what a euphemism, btw!) try to argue its case over and over again. But I keep coming back to the basic design principles of Ethernet (CSMA/CD) and TCP/UDP over IP: when you have large enough pipes, you don't NEED a 'fast lane' or diamond lane, so to speak. It just mucks up the works, makes things harder to design and manage, and really isn't helpful, since you still need large pipes and all the shaping in the world won't CURE that; it only DEFERS things. That's not a cure.

ISPs who employ shaping are simply RIPPING OFF customers, denying them their rightful bandwidth, and also passing along the COST of the packet-snooping hardware to us, the users. (Don't think they'll just spring for the hardware on their own; they'll pass the costs of this stuff on to us, to be sure.)

I think it's evil. Once you look at it from enough angles, you see that it's not at all a good thing.

You are absolutely correct. For the longest (damn) time this did work. The problem is now the traffic doesn't burst like it used to. It's more sustained and oversubscription rules are breaking. Most ISPs are honestly trying to play a game of self-preservation so they can keep their service alive without being cost prohibitive.

DPI is not evil so long as it is used to make the network better as a whole. As with anything, it can be bent to evil ends, but I disagree that it is inherently evil.

When you simply pass traffic as you get it, you can avoid paying (in real dollars) for equipment that looks inside. You can avoid the network-management complexity if you simply let networks 'work' as they always have.

Are you running into a lot of dropped packets? Simple: you are over-selling. There is an EASY way to fix that.

Oh, and an evil way. Guess which one most ISPs and large public networks pick?

By the time you factor in the cost of the snooper silicon, all its overhead, and the training/support overhead, simply buying more capacity starts to look like the cheaper option.

Again, though, it's not feasible to have a 1:1 bandwidth ratio at the WAN.

You don't have to have a 1:1 ratio. You just have to have a decent enough ratio that on the typical day your customers aren't competing for bandwidth with one another. Obviously there will be times that they do (a WAN link goes down, some event/disaster happens that causes a spike in traffic, etc, etc) but if that's happening more than occasionally then you need to consider investing in some network upgrades.

People are not going to just regulate themselves, nor should they have to.

Maybe the ISPs should invest in backhaul upgrades without raising the speed level delivered.

The problem is now the traffic doesn't burst like it used to. It's more sustained and oversubscription rules are breaking

Cry me a river. Even ignoring the rise of p2p, did anyone seriously believe that the same oversubscription ratios that worked in the early 90s were still going to be valid in the 21st century? It's not like people didn't foresee the rise of streaming video and online content distribution.

Most ISPs are honestly trying to play a game of self-preservation so they can keep their service alive without being cost prohibitive.

"so they can keep their service alive without reducing dividends to the shareholders", there, fixed that for you.

Nothing, but don't pretend they have to throttle p2p to 'survive'. Lots of ISPs (both here in the States and elsewhere) have managed to survive without throttling p2p. Verizon doesn't throttle, and they seemed to be doing just fine the last time I checked.

But I keep coming back to the basic design principles of Ethernet (CSMA/CD) and TCP/UDP over IP: when you have large enough pipes, you don't NEED a 'fast lane' or diamond lane, so to speak. It just mucks up the works, makes things harder to design and manage, and really isn't helpful, since you still need large pipes and all the shaping in the world won't CURE that; it only DEFERS things.

ISPs will never actually purchase enough bandwidth to come close to meeting their customers' needs in a 1:1 fashion. More importantly, the problem is that P2P will expand to fill up whatever bandwidth is available.

So the ISPs have two choices:
1. Buy ABC more bandwidth, multiplied by forever, to serve a fixed number of users, thus raising their fixed costs without adding new customers.
2. Buy XYZ worth of traffic-shaping equipment, multiplied by once.

I think if you buy your connection from a decent ISP (if that is possible in your area), then the ISP will specify how much capacity you're supposed to use. The question then is: what happens when you exceed that?

I think if you exceed it, then traffic shaping is reasonable. (The alternative is to pay per byte; that's normally simply begging to be massively overcharged. Don't do that.)

The question then is, what type of traffic shaping?

I think you should be given budgets: X high priority, Y low priority.

If my ISP is going to inspect my packets to the point of identifying their content as p2p, then they should be 100% responsible for any and all illegal activities I may or may not conduct on their connections.

The entire concept of the DMCA safe-harbor clause was founded on the understanding that it would be virtually impossible for providers to monitor and filter illegal or unlawful activities and data. However, it has now become perfectly reasonable for them to identify and reroute or slow this traffic. This clearly nullifies the safe-harbor provisions.

> The entire concept of the DMCA safe harbor clause was founded on the
> understanding that it would be virtually impossible for providers to monitor
> and filter illegal or unlawful activities and data.

No. The "safe harbor" provision of the DMCA is founded on the understanding that it would be virtually impossible for providers to reliably identify material that infringes copyrights. It has no relevance to any other activity.

This is quite the impressive machine they're talking about. But what they don't seem to cover very well are the legitimate uses for such a device. Just because they call "monitoring your communications" deep packet inspection doesn't make it right.

It looks like a disaster in a box to me: not only does it allow anyone with the price of the machine to monitor and inspect each and every packet you exchange, it also is capable of destroying the legal protections that ISPs currently enjoy.

The ISPs are treated like common carriers and are exempt from many liabilities because they carry all traffic equally and don't know or control the content of that traffic. Now that they're insisting that they need to "prioritize" some traffic at the expense of others, monitor and drop traffic because of its content, and are installing machines like these that further refine their ability to monitor and control what traffic you'll be allowed to transmit - well, their "safe harbor" exemptions are based on them not doing any of this.

To everyone saying "well, I'll just encrypt everything": that's great, but this thing falls back on service fingerprints to identify traffic if it can't inspect packet contents. This is a similar concept to nmap's service and OS fingerprinting tech. Idiosyncrasies of timings, handshake protocols, header flags, and traffic patterns can give away that a packet carries p2p content.

I'll bet in the war against p2p, making p2p data look like normal "priority" data is going to be far easier, and far cheaper than the ISPs trying to identify and block/slow the data they don't like. Consider that hiding p2p data takes one person with a keyboard and some smarts. In a month this guy will work around any solution the $800K machine guys have put together, and the next machine will be 8 million dollars to do the same job.

Encryption? Just the first salvo. Others have pointed out that p2p makes a lot of connections. That's fine, just create a secure queuing system where people wait their turns (and don't have multiple data streams). Or, a repeater system where you get one or two data feeds in, and feed to one or two other people. There's no reason why a p2p system has to have 50 different connections to different people. Start looking at the data itself and see if it's http-like? Okee-doke, just create an http wrapper around your data so it looks like http. These are just the dumb ideas I came up with on the fly. Real solutions would be a lot better.
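A crude sketch of that last idea, dressing an opaque chunk up as an HTTP POST (purely illustrative; the hostname and path are made up, and a real obfuscation layer would be far more careful):

    import os

    def http_wrap(payload: bytes, host: str = "cdn.example.com") -> bytes:
        """Frame an opaque payload so it superficially resembles an HTTP POST."""
        headers = (
            f"POST /upload/{os.urandom(4).hex()} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Content-Type: application/octet-stream\r\n"
            f"Content-Length: {len(payload)}\r\n"
            f"\r\n"
        ).encode()
        return headers + payload

    def http_unwrap(wire: bytes) -> bytes:
        """Strip the fake framing back off at the other end."""
        return wire.split(b"\r\n\r\n", 1)[1]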

This kind of asymmetric "war" has been fought before, namely with copy protection in the '80s. The result? Cracked programs became more valuable than non-cracked programs (oh, and every copy-protection scheme was cracked).

In a system with untrusted intelligent nodes, you can't really create a priority system without some people making their non-priority data look like priority data. The internet was designed for the end nodes to be smart, and the network to be dumb. (The exact opposite of the phone system). It seems to me this is just a basic design principle of the internet.

OK then lets play a game. I can still spot your traffic and classify it as p2p with near 100% accuracy. I'm not going to tell you how, you have to guess and experiment. If you reply here with another attempt then I'll tell you if you pass or fail, but not why.

Still want to play? If this sounds unfair then consider how this machine will be deployed...

PS: You still haven't defeated the encryption fingerprints that the DPI uses, but there is something much more obvious that identifies your traffic as p2p.

You obviously have underestimated people's tenacity at solving puzzles. This is FUN to a lot of people, and all it takes is one guy to figure out your secret.

but there is something much more obvious that identifies your traffic as p2p

I'm sure there is, and the P2P guys will work around that problem too. Are you (the ISP) willing to continue shoveling money into the companies that develop this, or would you rather just buy more bandwidth, or establish sensible usage policies?

With IPsec, they won't even be able to see what protocol is being used. The more we use IPsec for everything, the less these things will look like an attractive way to spend money that would otherwise go to expanding capacity.

Companies have been selling this capability for years. This is not new. They have products (marketed initially towards the intelligence side of the world) that have been able to do DPI in near real time on OC circuits. They can even take it a step further and do near-real-time packet replacement and data insertion... This is what you should be afraid of: not the fact that they can read your traffic in real time, but that they can manipulate it in near real time. It goes much deeper as well when you tie these types of products into other systems.

Encryption is a good idea, but ISPs can still detect undesirable content by the handshaking and unencrypted header info. Maxwell Smart's communications might be ultra-secure, but nearby KAOS agents still hear whenever his shoe rings, y'know?

Heck, to defeat this you could just use AES with a default key. Everyone can use the same key and have it be publicly known. It's fine, because this thing doesn't have the compute power to decrypt in real time, even if it knows what it needs to decrypt and what the key is. Screw handshaking, key management, etc.; just make the CPU cost nonzero and you're done.
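A minimal sketch of the fixed-key idea, assuming the pyca/cryptography package (the key is deliberately public, so this hides nothing; it only forces a middlebox to spend cycles):

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    # Deliberately well-known key: secrecy is NOT the goal here.
    DEFAULT_KEY = bytes.fromhex("00112233445566778899aabbccddeeff" * 2)  # AES-256

    def scramble(payload: bytes) -> bytes:
        nonce = os.urandom(16)
        enc = Cipher(algorithms.AES(DEFAULT_KEY), modes.CTR(nonce)).encryptor()
        return nonce + enc.update(payload)

    def unscramble(wire: bytes) -> bytes:
        dec = Cipher(algorithms.AES(DEFAULT_KEY), modes.CTR(wire[:16])).decryptor()
        return dec.update(wire[16:])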

I doubt my bank will use that, so does it really matter? Anybody using this kind of encryption to circumvent filtering gets deprioritized. For that matter, as pointed out elsewhere, there's more to track than L7 content. If your IP has more than N encrypted connections, or has sent more than N bytes, you get deprioritized. I can't think of any legit real-world use for sending >500 MB a day of HTTPS traffic. Even >100 MB, really. Or more than 50 new encrypted peers per hour.

You'd need to be housing dozens of people, all doing MASSIVE online banking, to hit the numbers I mentioned, and frankly, if you have that many computers on the one connection your ISP would probably care just as much, i.e. one cable connection being resold to everyone in your apartment building or similar.

None of that should legitimately be higher priority than my low-bandwidth, latency-dependent SSH session, no.

Yes... which this does not have. That was kinda my point. It doesn't matter what some other thing that hasn't been built could do; what matters is what the currently available stuff can do. And decrypting AES at 80 Gbps isn't on the list.

They could limit each encrypted bank or IM connection to 10-20 KB/s and you wouldn't even notice. You would notice your torrents slowing down, though. Many ISPs are already using deep packet inspection. Hell, Rogers in Canada is playing around with inserting messages into websites [thestar.com]! I can only hope that it pushes more of the web to HTTPS.

Maybe it's because they want to do more than just monitor traffic volumes. One of the potential evils of this thing is that, for non-encrypted traffic like web access, it can track your web visits, see what you like, and report your top ten keywords to their spammer partners.