Julie188 writes "This is a brilliant little Linux trick from Windows fanboy Tyson Kopczynski. He wanted to test a new Windows 7 feature called Branch Cache, which caches remote data on the local machine to reduce traffic on a stressed out WAN connection. But how to fake a crappy WAN? Linux. 'The command that I executed (tc) made use of Linux Traffic Control (a kernel thing) which allows me to easily interject 100ms latency on eth1. Boff, Bonk, Pow, Plop, Kapow, swa-a-p, whamm, zzzzzwap, bam ... instant WAN crappiness,' he writes."
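For reference, the one-liner the article describes would look something like this (eth1 as in the quote; requires root and a kernel with the netem module, which mainline kernels have shipped for years):

```shell
# Inject 100 ms of latency on all traffic leaving eth1 (run as root).
tc qdisc add dev eth1 root netem delay 100ms

# Confirm the netem qdisc is active.
tc qdisc show dev eth1

# Restore normal behavior when you're done.
tc qdisc del dev eth1 root
```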

I really don't know, I only used it when I worked at Microsoft, and obviously when you work for them the licensing isn't that big a deal. I'm nearly 100% sure it was, at the very least, part of the Xbox 360 development kits, which means it could also be a standard component of Visual Studio. But I really don't know.

I've found when transferring files across my local network, if I have any audio applications open I can't get more than 28-30 Mbps out of my wireless. If I close the audio application, I can often get over 40 Mbps.

No, XP, Vista, Win7 all have the limit, but I'm not so sure about Server versions.

They don't consider it a "flaw"; they tout it as malware-limiting, and in most situations it's irrelevant, because 10 new connections a second is about five times more than most applications need, excluding P2P and a few games.

Funny. In 1996, my Windows NT 4.0 workstation running on a 166 MHz Pentium would never skip while playing an MP3, no matter what I threw at it. I could start 12 programs simultaneously and Winamp still didn't skip.
I didn't get skip-free Linux MP3 playback until about 2002, on a 1.5 GHz machine. Move a window, and playback skipped.

I believe the skipping is a limitation of the X Window System, not the Linux kernel. When you click to drag, the process that spawned the window is "paused" until you let go. This keeps X from going crazy trying to redraw windows while they're moving, which could cause problems. At least that's what I read somewhere.

The thing you read somewhere is wrong. There does exist the XGrabServer call, which some window managers use in some cases (mostly older WMs, I suspect), but the documentation strongly recommends using it as little as possible. In no case is anything like that inherent in X11.

Had the same experience. Winamp has an option for the size of the buffer used for decoded audio. I usually set it to 5000 ms, but I remember it can be set much higher. This allowed Winamp to keep playing for a limited time even during a blue screen, often until the song ended, depending on the cause of the BSoD.

Winamp runs in the "high" priority class, which means that few things can interrupt it. Despite this being something that many people would frown upon, it actually worked pretty well. You should be able to do basically the same thing in Linux on older hardware.
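For the curious, the rough Linux equivalents would be nice levels or real-time scheduling. A sketch only; mpg123 here is just a stand-in for whatever player you use, and the privileged variants need root or CAP_SYS_NICE:

```shell
# Give an audio player higher scheduling priority on Linux.

# Higher conventional priority (lower nice value; needs root):
nice -n -10 mpg123 song.mp3

# Real-time FIFO scheduling, the closest analogue to Winamp's
# "high" priority class (also needs root):
chrt -f 50 mpg123 song.mp3

# Or bump a player that's already running:
renice -n -10 -p "$(pgrep mpg123)"
```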

What is the point of doing this? Is this even of remote interest to anyone other than the author of the article? If there's a genuine reason for this to be important or at least intriguing, someone please speak...

This seems to be valuable in situations where you are developing an application that will be accessing a database behind a DSL firewall. It would be nice to be able to profile the performance on your local network, instead of having it run too slowly to be used in the field. This happened to me once; I fixed the problem by using a subselect instead of multiple SQL commands, but this wasn't readily obvious because the library was hiding the details of the process, and the speed of the local network compensated for the inefficiency of the code.

Actually, I was thinking of adding bandwidth throttling to certain parts of a subnet. This info is exactly what you need when you don't know where to start (for me, at least).

Shit wifi performance is a standard Linux 'feature'

Come on now, switch distros. How long did you research? Oh wait, your friend said "here, this is the distro for you..."? Try http://www.pclinuxos.com/ [pclinuxos.com], or you could just accept that the US government has outlawed the OEMs of wi-fi from open-sourcing the drivers. (some clueless dumbshit thought it would keep hackers from destroyi

Sure it is interesting. Lots of times you can't adequately simulate 'real world conditions' in an office LAN or even with consumer grade connectivity.

Example: At my job we operate a work-at-home business that transmits what is essentially a VoIP phone call from various locations of a certain restaurant chain to the worker's home over two DSL lines, but without the luxury of being able to 'redial'. The only DSL we can actually get in our office is too close to our datacenter (under 5 hops) to adequately simulate

My company has a Linux box (named "slow-router") that does exactly that, to simulate network latency talking to remote devices over the network. I think it might even simulate random packet loss and such as well. It's useful to be able to do, but it's also not all that difficult... or newsworthy... good blog post, poor Slashdot post.

I beg to differ, for me, this is geek news. I found TFA & the following discussion interesting. It touches subjects that interest me peripherally but that I never needed to research. Now, I've been able to discover some interesting tools it would have taken me a while to discover otherwise.

This really is something that has been around for a long time. One of the most interesting uses I have found for it is simulating the effects of satellite WAN connections. Most of these links have about 600 ms of end-to-end latency, and without something like this simple tc command it is difficult to simulate that without actually hooking up to a real satellite connection.

Other uses: I once bandwidth-limited one of my old roommates. Every week I would shave a little more bandwidth off of his connect
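The satellite scenario above is the same netem trick with bigger numbers. A sketch, assuming the traffic traverses a Linux box with two interfaces (eth1/eth2 are placeholders; run as root):

```shell
# Geostationary satellite feel: ~300 ms each way, ~600 ms round trip.
tc qdisc add dev eth1 root netem delay 300ms
tc qdisc add dev eth2 root netem delay 300ms
```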

I tried playing an online Linux game called Daimonin. It is kind of like a multiplayer version of the old Ultimas. AFAIK, it still suffers from a serious problem: it doesn't do any client-side prediction, so there is severe latency between every move and every action (about half a second, which makes the game too painful to play). I tried to fix it, and started by attempting to introduce some lag on my local connection, but didn't find a way to do it.
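Nowadays netem can add exactly that kind of lag on the client side. A sketch, with wlan0 standing in for whatever interface the game client uses (needs root):

```shell
# Add ~500 ms of delay with 50 ms of jitter to outgoing game traffic.
tc qdisc add dev wlan0 root netem delay 500ms 50ms

# Remove it when the experiment is over.
tc qdisc del dev wlan0 root
```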

What is the point of simulating a slow, lossy network? Why, figuring out how your setup would behave if it were in a real slow, lossy network, of course!

I use tricks like this quite frequently when developing network software and network protocols. Especially when I'm working on my forward error correction protocol, because that is _intended_ for slow, lossy networks. Alas, my Ethernet is very fast and very reliable. ;-)
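A fast, reliable Ethernet can be degraded with one netem line; the numbers below are arbitrary and eth0 is a placeholder (run as root):

```shell
# Make a clean link slow and lossy for FEC testing:
# 50 ms delay, 5% random packet loss, 1% packet corruption.
tc qdisc add dev eth0 root netem delay 50ms loss 5% corrupt 1%
```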

It is genuinely useful. One of the things I needed to do for my PhD was test a protocol I'd been designing in high-latency environments. For the early testing, I used the FreeBSD box under my desk as the server and my laptop as the client, and just told dummynet to add 100ms of latency into the connection. Later, I added some real world tests, but this was very convenient because the latency was entirely deterministic and so the results were reproducible. You can control latency, packet loss, and throug
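The dummynet setup described is roughly this on FreeBSD (10.0.0.2 stands in for the laptop's address; run as root):

```shell
# Load dummynet if it isn't compiled into the kernel.
kldload dummynet

# One pipe with 100 ms of one-way delay.
ipfw pipe 1 config delay 100ms

# Push traffic to and from the test client through the pipe.
ipfw add 100 pipe 1 ip from any to 10.0.0.2
ipfw add 200 pipe 1 ip from 10.0.0.2 to any
```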

At my wife's company, most employees have Windows XP laptops and can connect to the file server at work using OpenVPN. Even though latency is only 40 ms, Windows XP is incredibly slow at accessing the file server. Even simple operations such as getting a directory listing can take several seconds. Opening a small Word document takes over 30 seconds.

If Windows 7 has a feature that speeds up this access, it's going to be of great interest to many people. Of course, if Microsoft fixed the poor performance of C

Author of TFA said his original intent was to highlight using Linux to simulate network crapfulness, but enough folks have asked your question that he's planning a followup with the actual caching results.

Such capability is very useful to network folks to predict application behavior and best management approaches in various environments. We used FreeBSD for that purpose, but the effect was the same. We injected 350 ms of latency in each direction, and presto - satellite communication. That is enough to cripple TCP connectivity through a sizable pipe (latency will keep the flow from filling the entire pipe). By testing various acceleration methodologies, you can see first hand which one will allow you to fully utilize the bandwidth you are paying for, all in the comfort of your lab.
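That ceiling falls straight out of window/RTT arithmetic. A quick sketch, assuming the classic 64 KB receive window with no window scaling:

```shell
# TCP throughput ceiling ~= receive window / round-trip time.
WINDOW_BYTES=65535   # maximum window without RFC 1323 window scaling
RTT_MS=700           # 350 ms injected in each direction
BPS=$(( WINDOW_BYTES * 8 * 1000 / RTT_MS ))
echo "throughput ceiling: ${BPS} bit/s"
```

So no matter how big the pipe is, a single unscaled TCP flow tops out around 750 kbit/s at satellite latencies.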

http://wanem.sourceforge.net/ [sourceforge.net] is a great tool for this. We use it at work to test thin clients over simulated WAN links. It has a ton of options (latency, jitter, packet loss, bandwidth, etc).

When I was a lead tester at Accolade/Infogrames/Atari (same company, two owners, multiple identity crises), I had younger testers who didn't think video games existed before the PlayStation. They were amazed that I played Pong when it first came out. They were shocked when I introduced them to another tester who tested board games in the 1970s. You can't always assume that the youngsters know what you're talking about.

... and you might find out about NISTnet, which has been around for YEARS... NISTnet does the same thing as this, on Linux, and also includes a statistical latency delay model which simulates real world conditions.

Unfortunately, NISTnet can only delay IP traffic; netem works at the Ethernet layer and can delay everything. One of the great things about netem is that it can be set up to act as a bridge. Think of it as an Ethernet cable with a 100 ms delay. NISTnet is great, but it can't do that.

tc basically lets you activate netem (the network emulator in the Linux kernel). I don't know about now, but when I used it for a project a year ago, you had to compile your kernel with netem enabled. tc then allowed you to modify your link properties to emulate WAN links.
We used this with tcpprobe to analyze the performance of an Inverse Increase Additive Decrease congestion control algorithm that we had written for academic purposes (adapted from http://nms.lcs.mit.edu/papers/binomial-infocom01.pdf [mit.edu]) and com

and you wonder why we all choke with laughter when you expect to be considered journalists.

Have you ever looked at the quality of regular journalists? If kdawson/timothy make an error, it is quickly pointed out by the readers. Traditional journalists? Same or worse error rate, no corrections.

Though not free (there is a trial), I played around with an appliance running a program called Lanforge. It's pretty sophisticated. You can set up a number of different "errors" (packet loss, jitter, delay, etc.) and it can cycle between them, never staying constantly the same. It runs on Linux and Windows for sure, but I'm unsure about other OSes. It will also "learn" link statistics between two particular nodes and save that configuration for testing.

One of the great features of netem is that it isn't restricted to being used on a router. If you bridge two network interfaces together you can essentially use netem to make a device which looks like a faulty link. This can be plugged and unplugged [or routed using a VLAN infrastructure] into anywhere in your network without reconfiguration of any IP details on the machines under test.
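A sketch of that bridge setup using the iproute2 tools (eth0/eth1 are placeholders; run as root):

```shell
# Build a transparent bridge out of two NICs...
ip link add name br0 type bridge
ip link set dev eth0 master br0
ip link set dev eth1 master br0
ip link set dev eth0 up
ip link set dev eth1 up
ip link set dev br0 up

# ...then delay the egress of each side: 100 ms one-way in each
# direction, i.e. the "faulty cable" adds 200 ms to the round trip.
tc qdisc add dev eth0 root netem delay 100ms
tc qdisc add dev eth1 root netem delay 100ms
```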

I know this is a troll, but I remember hearing someone say their wireless card works better on a linux driver than a windows driver. Unfortunately, I can't remember where, so no link. Will post again if I remember.

Intel 8945J integrated wireless on my laptop. Dual boot, Zenwalk Linux and XP MCE 2005. Until the most recent driver from Intel, the wireless card was *significantly* more stable under Linux. It's now just as stable under Windows (though I also replaced my router with a new D-Link 802.11n router recently), but the throughput at long range is still better in Linux.

As an example of the latter: under Windows, the usable range on my WLAN caps out at about 25 m. That's enough to cover my house and much of the front lawn. Under Linux, I was able to connect to my network from the picnic table at the park across the street, about 100 m away. I was only getting 1 Mbit of throughput, probably less, but it was definitely getting better error correction and a more usable connection at that range than under Windows.

Where are your power settings in Windows vs. Linux? My ThinkPad's Intel wifi driver defaults to an energy-saving power mode, which results in lower performance at long distances (but is fine in my small apartment). This might not be a fair fight.
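Worth checking before comparing: whether the driver has power saving enabled. With the modern iw tool (wlan0 is a placeholder interface name):

```shell
# See whether 802.11 power saving is on.
iw dev wlan0 get power_save

# Turn it off for the duration of a throughput test (root required).
iw dev wlan0 set power_save off
```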

Intel 8945J integrated wireless on my laptop. Dual boot, Zenwalk Linux and XP MCE 2005. Until the most recent driver from Intel, the wireless card was *significantly* stabler under Linux.

Who needs wireless - I've got an Atheros L1 gigabit ethernet controller on the motherboard - despite it being years old, all Vista drivers for it are dogshit slow AND crash under any significant load. Under Linux it works just fine. For the one Vista system I must run I had to waste a slot on a PCIe gig-e card and use that instead.

You know, I have noticed this with my laptop in my house. Under Windows XP I get one bar of signal and it's flaky at best. Granted, this is on the other side of the house from the router and on a different floor, so I am not surprised by that. But in the same location, the same laptop running an Ubuntu LiveCD gets a better signal and a much more reliable connection.

My guess is that the Linux driver allows a higher power setting, though over the years I've come to think that the Linux TCP/IP stack