angry tapir writes "Google and a group of partners have released a set of tools designed to help broadband customers and researchers measure performance of Internet connections. The set of tools, at MeasurementLab.net, includes a network diagnostic tool, a network path diagnostic tool and a tool to measure whether the user's broadband provider is slowing BitTorrent peer-to-peer (P-to-P) traffic. Coming soon to the M-Lab applications is a tool to determine whether a broadband provider is giving some traffic a lower priority than other traffic, and a tool to determine whether a provider is degrading certain users or applications. 'Transparency is our goal,' said Vint Cerf, chief Internet evangelist at Google and a co-developer of TCP/IP. 'Our intent is to make more [information] visible for all who are interested in the way the network is functioning at all layers.'"

After RTFS, my first thought is that all the major ISPs will reverse-engineer the tools so that their 'bandwidth shaping' methods actually prioritize these packets, and end users wind up getting lied to twice over: told that their traffic isn't being slowed down, and that they're getting a faster internet connection than they actually are.

Developing a system to fool the tools would cost money. Traffic shaping seems to be more of a problem with cable ISPs, and almost for free they can flood TV with the gripping and compelling story of "HOW CABLE IS FASTAR!!® THAN DSL BECAUSE OF THE TRAFFIC SHAPING!!!" (disclaimer: this movie may or may not be a work of fiction)

Way to preach to the choir, google.

There's actually no contradiction.

P2P is a problem for cable ISPs because, well, the upstream bandwidth of cable systems (DOCSIS) is extremely limited.

After RTFS, my first thought is that all the major ISPs will reverse-engineer the tools so that their 'bandwidth shaping' methods actually prioritize these packets, and end users wind up getting lied to twice over: told that their traffic isn't being slowed down, and that they're getting a faster internet connection than they actually are.

Yes, exactly. So the next step is for the users to start making their traffic look like these tools. The final solution for the user is for the test tool to be so much like file-transfer tools that the ISP can't tell the difference, and so must either play fair or be detected.

In other words, when you are caught downloading the latest movie releases, you can simply say it is for testing purposes.

those tools seem pretty useful, but i don't know how user-friendly some of them are. personally, i'm looking for a tool to see whether our ISP (at the office) is hijacking our DNS errors, or whether all of our computers are just infected with malware.

also, is anyone else seeing a bunch of "�" characters on the Network Diagnostic Tester [internet2.edu] homepage? is my browser/system screwed up, or are there really a bunch of little boxes with "FF FD" in them scattered all over the page?
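for what it's worth, a box with "FF FD" in it is how many fonts render U+FFFD, the Unicode replacement character. it usually means the page served bytes that aren't valid in the declared encoding, so it's probably their page, not your browser. a quick Python sketch (my own illustration, not their code):

```python
# Bytes that aren't valid UTF-8 decode to U+FFFD, the Unicode
# "replacement character", which many fonts draw as a little box
# containing the hex digits FF FD.
bad_bytes = b"M\x80easurement"          # \x80 is not valid UTF-8 here
text = bad_bytes.decode("utf-8", errors="replace")
print(text)                              # M�easurement
print(hex(ord(text[1])))                 # 0xfffd
```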

well, we do have an aging file server at the office that needs to be re-purposed. it used to house two 120GB hard drives in firmware RAID 1, but one of the drives died recently and the other is about to kick the bucket (they're both about 7 years old). and with external hard drives costing less and less these days, it seems more practical and cost-efficient to simply use a few pairs of external hard drives for back-ups. also, ever since we switched to wireless, working over the network (with 20~100MB hi-res images) has become a pain in the ass.

Get an external enclosure that does RAID and use that. Preferably a NAS device so you can leverage it a little. Not very expensive and that way your backup will actually be more reliable than the source. "Backing up" data to unreliable external storage like USB hard drives and flash drives is a bad idea.

ever since we switched to wireless, working over the network (with 20~100MB hi-res images) has become a pain in the ass

Wait for 802.11n or switch back. I recommend the latter. The practical limitations on wireless are serious and they're not changing anytime soon.

why would a USB external hard drive be any less reliable than an internal SATA drive?

we're a small indie label, so i'm not sure the cost of an external RAID enclosure is justified. we do have a lot of hi-res graphics to back up, such as album artwork, print layouts, poster/sticker/clothing designs, etc., as well as e-mails, invoices, and our retail & radio mailing/contact lists. but i think weekly backups onto one or two 750GB~1TB drives should be sufficient.

why would a USB external hard drive be any less reliable than an internal SATA drive?

Because you're probably moving it around more. External 3.5" USB hard drives rarely have anything close to decent shock protection, so one is fundamentally less safe than the drive buried in the guts of the PC. Even if it's just sitting on your desk 100% of the time, if it accidentally gets knocked off your desk it's much more likely to fail than the internal one. And because it's a lot smaller, it's more likely to get knocked off in the first place.

is anyone else seeing a bunch of "�" characters on the Network Diagnostic Tester homepage? is my browser/system screwed up, or are there a bunch of little boxes with "FF FD" in them scattered all over the page?

Actually, the entire content you're getting from what you think is "the web", comes from malware installed in all computers in your company.

This message, for example, was generated by worm4421__slashdot_replier, installed in the coffee machine.

I'm not sure you will be happy; the results of the test may lessen your opportunities to be snarky.
According to Glasnost [mpi-sws.org], Comcast is currently throttling 0% of torrent uploads and downloads.

What do they mean by slowing? You can "slow" BitTorrent by shaping or by giving it lower priority. Again, is this being confused on purpose? To what end? From my post on the Cox story:

One issue is oversubscription. Unless a company is large enough to have lots and lots of peer connections, your ISP probably oversubscribes its upstream connections. This is fine, because on average traffic comes in bursts. The problem is that everything starts to break down once you have a small pool of people running P2P 24/7. These people are just as greedy as the ISPs they complain about. They want a huge "dedicated" pipe, but have others subsidize it. I have no issue with someone like Cox de-prioritizing their traffic so that the people who just want their Vonage to work don't get squashed out. This is a temporary solution, because the ISP will eventually have to up their pipe speed.
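To put rough numbers on the oversubscription point (these figures are made up for illustration, not taken from any real ISP):

```python
# Illustrative oversubscription math: 200 customers each sold
# 10 Mbit/s, all sharing a single 100 Mbit/s uplink.
customers, sold_mbit, uplink_mbit = 200, 10, 100
ratio = customers * sold_mbit / uplink_mbit
print(ratio)          # 20.0 -> a 20:1 oversubscription ratio

# Bursty users make this work: if the average customer is only
# active 2% of the time, the expected load fits the uplink easily.
expected_load = customers * sold_mbit * 0.02
print(expected_load)  # 40.0 Mbit/s on a 100 Mbit/s uplink

# But a handful of 24/7 P2P users at full rate break the model.
p2p_load = 15 * sold_mbit
print(p2p_load)       # 150 Mbit/s demanded by 15 users alone
```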

The other issue is granting certain companies privileges on a network and penalizing other companies they don't like (e.g. penalize Vonage and prioritize a VoIP partner). This should be illegal. This is a clear case of violation of neutrality. At the same time, the company should be able to directly peer with a company (say a VoIP provider) without violating the law. This may seem unfair, but peering has been a perfectly valid way of reducing traffic on a transit connection.

The last issue is traffic caps. I don't think there should be a law against it as long as the company is upfront about it. Putting caps on traffic allows ISPs to maximize their oversubscription and cater to people that want low-cost Internet service. We *want* people to be able to afford Internet service. The market chooses. If you are a big user of P2P, then you will have to go with another ISP that does not have caps. You may have to pay more for this privilege... sorry, but that is how things go. The market must have a way to manage scarcity of resources. If you want more of a resource, you will have to pay for it even if it looks the same (e.g. 5mbit from Cox versus 5mbit from FiOS).

Don't confuse QoS with net neutrality. As long as the QoS is applied equally, then it should be perfectly fine.

So what? Yes, they want to manage costs. So does everyone. They have a business model, if you don't like it, go with someone else. If you think you have a better idea, then build out your own ISP and compete with them.

Data pipes are like realty: location, location, location. I'm in a rural area and a large part of the cost of a T1 is the local loop. The next factor is your type of upstream provider (Tier 1, etc.). I can get a Tier 2 T1 for $595. If I was in a city it'd be cheaper, but still expensive compared to cable or DSL. Also, 1.5Mbps isn't what it used to be.

I just came up for contract renewal on my Sprint T1. One year is about a grand a month; a 3-year term drops it to $895. Do an NxT1 deal and it gets a little cheaper.

After a month trying to solve the tiny little problem of having a packet loss between 5 and 25%, the ISP simply didn't know what was happening and they politely told me to find another ISP and retry in a year or two, when the technology was more stable.

I got FTTH at home some months ago and I also get television and telephone via that cable. Telephone and internet have worked _flawlessly_ the whole time. No packet loss, no speed loss, just perfect. Television seems to skip a frame or two every once in a while though.

Then again, this is the ISP that for a long while resisted FTTH and only adopted it after all its competitors did. So I guess they wanted to make a really good service so as not to lose any more customers.

The ISPs have large areas where they are the only high-speed Internet providers, besides expensive and high-ping satellite connections. You know just as well as I that there's no feasible way to build your own ISP. Caps are only possible because of the ISPs' anti-competitive behavior.
What you're saying is, "Hey, DeBeers has a business model of 'managing costs'. They can do what they want. If you don't like it, find another player (never mind that DeBeers controls 90%+ of the market) or make your own diamond mining corporation."

Do you mean immediately? I'm not sure you should expect to; most companies would set the cap at a level sufficient to avoid annoying 99% of their existing customers, i.e. it won't reduce their immediate costs much at all; rather, it gives them a control mechanism going forward.

That said, caps don't bring much to consumers if there is just a one-size-fits-all cap. For there to be any consumer benefit, a set of tiered caps is needed.

Maybe things are different in the US. When I first got ADSL, it cost me GBP25 per month for 512k down/256k up uncapped. When my ISP introduced 2M down/512k up plans with tiered capping, I initially stuck with my old 512k plan because it was going to cost GBP40 per month to get an uncapped 2M connection. Then one day I discovered the usage meter buried in their control panel and saw that I was well under the cap for a GBP20 2M connection. Today I get 6Mbps down/600k up at GBP18 per month and still only use half of it.

But now that Comcast has capped traffic, have they provided a new, inexpensive tier of service? Or have their prices gone up? Can you name any company that capped traffic and then lowered prices?

First, let me say that I do not know a heck of a lot about the business side of this. But let me play Devil's advocate: just because prices haven't changed and cheaper plans aren't being sold doesn't mean they aren't doing this to profit. What if, by capping traffic, they create a larger pool of available bandwidth?

> The last issue is traffic caps. I don't think there should be a law against it as long as the company is upfront about it. Putting caps on traffic allows ISPs to maximize their oversubscription and cater to people that want low-cost Internet service.

I don't think that caps should be illegal either but metered service would be much better.

Metered service makes sense, but only if there's a significant minimum charge. If an ISP charges, say, $1 per GB, that wouldn't make sense, because most users would end up paying $5 to $10 a month even though the ISP's cost just to maintain the physical plant and support system for a minimal-usage customer is at least $15 or $20 a month.

The best way would be something like $30 for the first 20 GB and then an additional $1 for each 1 GB over that. It could be pretty similar to how cell phone minutes are priced. For $50, that would give you 40 GB total. That way they can charge more for the heavy downloaders and less for someone who just checks their email and plays Punch the Monkey. It's great how I always win and I just have to sign up for all these other great deals to claim my prize...
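A quick sketch of the billing math described above (the $30 base / 20 GB included / $1-per-GB numbers are just the ones from this comment):

```python
def monthly_bill(gb_used, base=30, included_gb=20, per_gb=1):
    """Tiered metered billing: a base charge that includes some
    transfer, plus a per-GB overage fee beyond that."""
    overage = max(0, gb_used - included_gb)
    return base + overage * per_gb

print(monthly_bill(5))    # 30 -- light user pays just the base rate
print(monthly_bill(40))   # 50 -- matches the $50-for-40GB example
```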

> The last issue is traffic caps. I don't think there should be a law against it as long as the company is upfront about it. Putting caps on traffic allows ISPs to maximize their oversubscription and cater to people that want low-cost Internet service.

I don't think that caps should be illegal either but metered service would be much better.

You think they're going to do something that would cut the monthly bills of 80% of their subscribers? All the Joe Sixpacks and grandmas who check their email and visit CNN.com once a day? No; for them, metered service would mean $5/month (or perhaps $20). That's down from the $50/mo they already pay.

I have no issue with someone like Cox de-prioritizing their traffic so that the people that just want their Vonage to work don't get squashed out.

Why deprioritize at all? Give everyone using the pipe at a given moment an equal portion of the available bandwidth. Divide it up evenly by customer, not by application. One person doing p2p shouldn't affect another person's Vonage phone call or vice-versa.
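The "equal portion per customer" idea is basically max-min fairness. A toy sketch (illustrative only; real routers implement this with packet schedulers like fair queuing, not a loop like this):

```python
def fair_shares(capacity, demands):
    """Max-min fair allocation: split the pipe evenly per customer,
    and redistribute any share a light user doesn't need."""
    alloc = {c: 0.0 for c in demands}
    unsatisfied = set(demands)
    remaining = capacity
    while unsatisfied and remaining > 1e-9:
        share = remaining / len(unsatisfied)
        for c in list(unsatisfied):
            take = min(share, demands[c] - alloc[c])
            alloc[c] += take
            remaining -= take
            if alloc[c] >= demands[c] - 1e-9:
                unsatisfied.discard(c)
    return alloc

# 10 Mbit/s pipe: a P2P box wanting everything vs. a small VoIP call.
print(fair_shares(10, {"p2p": 10, "voip": 0.1}))
# voip gets its full 0.1; p2p gets the remaining ~9.9 -- neither
# starves the other.
```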

My understanding is that, currently, routers don't work this way. They pass each packet more or less equally (modulo some ToS bits) as it comes in. A single customer running P2P can monopolize the traffic on a router: the P2P customer gets 90 out of 100 incoming packets and the VoIP customer gets 10. ToS changes things up a bit, but not enough to balance out how much point-to-point traffic a P2P program can generate.

What you are talking about is a router that looks at all the IP's or customers (if a cu

QoS allows the router to drop packets coming in for a particular IP/port (your client) if they exceed a certain rate (normal and/or burst). TCP will automatically renegotiate the transfer speed to accommodate the loss of packets. In this way users hogging bandwidth can be contained. The real power of QoS is in prioritizing outgoing packets in the queue, allowing protocols that require low latency (VoIP) to go out before those that have no latency requirements (torrents), but the outgoing queue can also be
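The drop-if-over-rate behavior described above is typically implemented as a token bucket. A minimal sketch (my own illustration, not any vendor's actual implementation):

```python
class TokenBucket:
    """Rate limiter: packets pass at a sustained rate with a burst
    allowance; anything over is dropped, and TCP backs off on its
    own when it sees the loss."""
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8           # refill, in bytes per second
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = 0.0

    def allow(self, pkt_bytes, now):
        elapsed = now - self.last
        self.last = now
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        if pkt_bytes <= self.tokens:
            self.tokens -= pkt_bytes
            return True
        return False                       # drop: exceeds rate + burst

tb = TokenBucket(rate_bps=8000, burst_bytes=1500)  # ~1 KB/s, one-packet burst
print(tb.allow(1500, now=0.0))   # True  -- burst absorbs the first packet
print(tb.allow(1500, now=0.1))   # False -- only ~100 bytes refilled
print(tb.allow(1500, now=2.0))   # True  -- bucket refilled after ~2 s
```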

Don't confuse QoS with net neutrality. As long as the QoS is applied equally, then it should be perfectly fine.

I fully agree with your first sentence. QoS is a necessary part of any network management plan, and it deserves to be seen as a tool like any other.

But it doesn't follow that QoS is always good if applied without prejudice. For example: A network that doesn't give adequate priority to anyone's VOIP is no more desirable than a network that gives priority to one VOIP supplier. (If you want VOIP servi

These people are just as greedy as the ISP's they complain about. They want a huge "dedicated" pipe, but have others subsidize it.

No, they don't. They want the bandwidth that was advertised when they made their decision about whom to pay for internet provision. If ISPs are not prepared to provide that bandwidth at that price, let them be honest about it.

so it's greedy to expect an ISP to deliver the service they advertised and that you've paid for? don't confuse your own solipsism & selfishness with other people's greed. right now you're saying that VoIP should have priority over P2P because presumably "ordinary" people like you use VoIP but don't use P2P (a rather questionable assumption). so just because someone else's internet usage patterns are different from yours, your traffic should be given priority over theirs, even though you both pay the same monthly rate?

you also seem to be the one confusing the issue of file-sharing with so-called "bandwidth hogs." first of all, congratulations on buying into (or trying to perpetuate) the ISPs' scapegoating of power users and file-sharers for their poor service--i'm sure all those Asian countries with cheap, symmetric high-speed broadband connections don't have file sharers or power users. secondly, even if we assume that a broadband provider has to oversell in order to remain profitable (an unlikely case), why couldn't a simple bandwidth cap be implemented regardless of the type of traffic? protocol discrimination and deep packet inspection (which simply add more network overhead) are not necessary even if you're trying to perform damage control after having oversold by too much.

at our office i use BitTorrent maybe once a month to download 30-40 MB Photoshop brush sets, or an 18 MB Ad-Aware install file (the LavaSoft site requires you to sign up for Trialplay, and give out your personal information and CC# to get the Anniversary edition), and only very occasionally an up-to-date Windows XP disc image (700~800MB). on average, our monthly BitTorrent traffic totals less than 100MB on a 10Mbps connection.

on the other hand, we're a record label so we listen to band demos all day long, and these days most of it is done via MySpace, which is very convenient; we can see how many plays each artist has received that day, what shows they've played recently, and just gauge their popularity more easily. it also cuts down on the demo CDs being pressed/burnt/shipped, which is good for the environment. however, this means we're streaming music all day long (from 9 AM to 5 PM). assuming the average audio quality from MySpace is 96kbps, that's about 330MB of traffic from streaming audio alone, not to mention all the banners, photos, and other graphics on these bands' MySpace pages.
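quick sanity check on that 330MB figure (assuming a constant 96kbps stream over the whole 8-hour workday):

```python
# Back-of-the-envelope: bits/sec * seconds -> bytes -> MB (2^20)
bits_per_sec = 96_000            # 96 kbps MySpace stream
seconds = 8 * 60 * 60            # 9 AM to 5 PM
megabytes = bits_per_sec * seconds / 8 / 1024 / 1024
print(round(megabytes))          # 330 -- matches the estimate above
```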

so if 2 people each consume, say, 500~600MB of network bandwidth each day, but one person uses it solely for BitTorrent while the other uses it solely for sending large files via e-mail, why should the BitTorrent user's network packets have lower priority than the e-mail user's? how is he being greedy or asking others to subsidize his bandwidth?

ISPs have no business dictating how a broadband subscriber uses his internet connection. if they want to throttle people's connections after a bandwidth cap is exceeded, fine--don't advertise the service as unlimited, make the cap clear to your customers, and apply it equally to everyone regardless of whether they're an old grandma who's watching the Food Network in HD on her cable TV, or if it's a teenager downloading the latest Slackware ISO via BitTorrent.

lastly, if an ISP cannot meet the demands of their customers, they need to do one of two things: a.) upgrade their infrastructure to increase network capacity, or b.) don't oversell so much. the basic concept of overselling is sound. on average not everyone is going to use 100% of their pipe 100% of the time. but it's up to the ISP to calculate what their average network usage is going to be, and provide enough total network bandwidth so that the network doesn't become saturated during peak hours. what you don't do is try to scapegoat power users for your own miscalculations and continue to oversell while trying to dictate how the public uses the internet.

most countries are offering faster broadband at lower costs, following the usage trends that are shifting towards high bandwidth applications

so it's greedy to expect an ISP to deliver to you the service they advertised and that you've paid for?

They advertise residential Internet, and you get residential Internet. I don't see what the problem is. Oh, you wanted Dedicated Internet, but didn't want to pay 10 times the consumer DSL service? And you blame them for that? I think I see the problem now. The problem is that you are an idiot.

I have no issue with someone like Cox de-prioritizing their [P2P] traffic so that the people that just want their Vonage to work don't get squashed out.

I have a problem with my ISP giving my traffic lower priority based on the meaning of the bytes I'm communicating. All they should worry about is packet lengths and (maybe) QoS fields.

I'm fine with my bulk transfer being delayed during sparse bursts of interactive traffic from my neighbors: they get fast HTTP, I get all the pipe when they're reading the page (as opposed to downloading a new one).

That is, as long as we over time each get a fair share of the pipe up to the amount we're using individually: if

These people are just as greedy as the ISP's they complain about. They want a huge "dedicated" pipe, but have others subsidize it.

No, they want a dedicated pipe of the size they paid for. hell, most people wouldn't even care if they used QoS properly and just slowed down their torrents, but if my ISP detects that I'm torrenting, all my packets are slowed down; pings take 4s.

This would be fine if they stopped advertising their service as "unlimited". Netflix doesn't sell you "unlimited" DVD rentals and then cap you to a set amount per month with little to no notice. Spoofing RST packets to impede BitTorrent and other P2P protocols sounds like a pretty big limit to me. Throttling you after a certain amount of data transfer is another pretty big limit. Capping your total download with no way to check how close you are to your limit is yet another. Good thing those are made known up front...

The set of tools, at MeasurementLab.net, includes a network diagnostic tool, a network path diagnostic tool and a tool to measure whether the user's broadband provider is slowing BitTorrent peer-to-peer (P-to-P) traffic.

Will there be a tool to tell me if Digital Max [cox.com] is really my friend in the digital world, or if he's just bullshitting me?

These tools are no doubt going to be very useful to everyone that uses p2p software for _any_ purpose.

The flipside is that as an administrator of a workplace network i can also use these tools to ascertain whether or not the traffic management and qos i've put in place on the corporate network are working.

It doesn't really matter so much on this particular network, as p2p protocols are blocked (in fact every outgoing port is blocked from the internal lan, some https sites are whitelisted, and all non-ssl web access is proxied).

But it will allow me to ensure the qos for our voip trunks is effective.

It got Slashdotted: caused by: java.io.IOException: open HTTP connection failed.
at sun.applet.AppletClassLoader.getBytes(AppletClassLoader.java:265)
at sun.applet.AppletClassLoader.access$100(AppletClassLoader.java:43)
at sun.applet.AppletClassLoader$1.run(AppletClassLoader.java:152)
at java.security.AccessController.doPrivileged(Native Method)
at sun.applet.AppletClassLoader.findClass(AppletClassLoader.java:149)... 9 more

Yeah I just tried the Bittorrent test and got a message indicating the service was busy.

I wonder if the worst-offending ISPs would consider blocking these sites' IP addresses. I can imagine their response now: "Oh, we're not 'traffic shaping' or blocking those sites... we're just 'data molding' or 'idea shaping'."

"ComObjectCast and a group of partners have released a set of tools designed to help broadband providers and researchers determine the algorithms used by Net Neutrality Measuring Tools. The set of tools, at MeasurementLabSucks.net, includes an enduser diagnostic tool, an enduser pathfinding diagnostic tool, and a tool to determine is the enduser is measuring whether the user's broadband provider is slowing BitTorrent peer-to-peer (P-to-P) traffic. Coming soon to the M-Lab-Sucks applications is a tool to determine whether an enduser is using a tool to determine that a broadband provider is giving some traffic a lower priority than other traffic, and a tool to determine if an enduser is using a tool to determine whether a provider is degrading certain users or applications. 'Obfuscation is our goal,' said Argle-bargle GlypfpGlopf, Chief obfuscation evangelist at ComObjectCast and a co-developer of ROFL/MAO. 'Our intent is to make more [information] visible for all who are interested in keeping customers from using what they paid for.'"

Why is it difficult to believe that neutrality can be measured? Is the bias against p2p packets 5% stronger than other packets or 500% stronger. That is a measure of neutrality. One can argue whether neutrality is a good thing or not but I don't think it is reasonable to suggest it can't be measured.
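Right. One crude way to put a number on it (my own sketch; tools like Glasnost do something more careful, comparing a BitTorrent-looking flow against a control flow over the same path):

```python
def throttling_bias(control_mbps, p2p_mbps):
    """Relative slowdown of a P2P-looking flow vs. an identical-rate
    control flow on the same path. 0.0 = neutral; 0.5 = P2P runs at
    half the control flow's speed; and so on."""
    return 1 - p2p_mbps / control_mbps

print(throttling_bias(control_mbps=5.0, p2p_mbps=5.0))  # 0.0 -> neutral
print(throttling_bias(control_mbps=5.0, p2p_mbps=1.0))  # 0.8 -> heavy throttling
```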

How much does a bias against P2P packets count versus a bias against (or cap on) outgoing e-mail or streaming video? Does a bias against outgoing e-mail serve a useful social purpose by limiting the damage that botnets can do?

I'm reminded of the scene in "The Aviator" where Howard Hughes has his meteorologist try to mathematically prove to the censors that the exposure of Jane Russell's breasts in "The Outlaw" was no greater than other films t

Hmm. Never heard of it. When I go to the Measurement Labs website I get sent to something called Glasnost. Not sure if it's the same thing or not. You said yours was developed at the Max Planck Institute? Huh, so is this one. Man, I don't know what's up. Maybe you should email the M-Lab folks and let them know.

I'm surprised that Google put its name on this. First, all the servers are down. Second, the very first test on the main page is hosted on servers identified only by bare IP address. That is just tacky and makes me think of rogue servers hosted by a guy in Romania. Finally, all the tests point to separate pages on different servers with no consistent look and feel.

It is like this was put together by a high school computer club in 2002--not the premier web based company in 2009. WTF?

I think it's great to measure connection speeds and all, but isn't it a little redundant? We already know Internet providers are limiting and degrading certain types of traffic. The question isn't IF they are, it's how we get them to stop. Incidentally, that might just happen if Obama, a supporter [youtube.com], pushes legislation through holding ISPs accountable for non-neutral networks.

Any ISP that is interfering with traffic is going to exclude the IPs for bandwidth testing sites from its traffic management policies? At least they will in a market where customers care about this kind of thing, and have a choice of ISP.

And how are they going to differentiate tiny differences in network latency from the huge sluggishness that anything running in Java experiences?

Add to that the ultimate irony of Java: it was intended as something that would run on anything anywhere, yet in order to make it run on whatever you happen to have, you usually have to find, download, and install huge sets of libraries and compilers, find you've got the wrong version, download another huge set, and find it still doesn't work because of some bogus assump

You've just been scammed! Enjoy the rewards of boosting some clipclown's unique visitor count so he can sell his domain next week.
Eh? Wha? Orly? Yeah dude. sascha@ucimc.org owns 10 domains, all registered in the past 3 weeks, one of which is your beloved article header. Did you really think this product would do anything other than capitalize on your fear that your ISP is screwing you?
Money made, hand in pocket with massive funds. Site soon to vanish. Oh surprise surprise. Let's call this news.
Ya'll j