I'm setting up an environment with one Linux server, one OpenBSD router, and one Linux client, and I want to limit how much bandwidth the client can use.

I've been running tests with "netcat" and "time" (using time to measure how long each netcat transfer takes). With TCP, the queue limits are not applied accurately at all; with UDP, the queues for some reason don't work at all.
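For reference, this is roughly how such a measurement works (host, port, and transfer size below are made up, not from the post), with the time-to-throughput conversion spelled out:

```shell
# Measuring with netcat + time (a sketch; host, port, and sizes are
# placeholders).  On the server side you would run:
#   nc -l 12345 > /dev/null
# and on the client:
#   time dd if=/dev/zero bs=1M count=100 | nc 192.0.2.1 12345
# Then convert the measured wall-clock time to throughput:
#   Mbit/s = bytes * 8 / (seconds * 1e6)
# For example, 100 MB transferred in 160 s:
awk 'BEGIN { printf "%.1f Mbit/s\n", 100 * 1024 * 1024 * 8 / (160 * 1e6) }'
```

which is about 5.2 Mbit/s — i.e. roughly half of a 10Mbit queue limit, matching the symptom described below.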

For example: with a bandwidth limit of 10Mbit, the client cannot use more than about 5Mbit; with a limit of 100Mbit, no more than around 50Mbit.
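The pf.conf rules under test were not included in the post; for context, a minimal altq CBQ setup of the kind described might look like this (the interface, client address, queue names, and bandwidths are all made up for illustration):

```
# /etc/pf.conf on the OpenBSD router -- a sketch, not the poster's rules.
altq on em0 cbq bandwidth 100Mb queue { std_q, limit_q }
queue std_q   bandwidth 90Mb cbq(default)
queue limit_q bandwidth 10Mb

# Send the client's traffic (192.168.1.10 is a placeholder) through
# the limited queue; everything else falls into the default queue.
pass in on em0 from 192.168.1.10 to any queue limit_q
```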

1 Answer

In my experience, for altq to work well you really need to add the red, rio, or ecn option to your limiting queues; otherwise, as you approach saturation (of the physical link or of the queue's virtual bandwidth), you're in for some unpleasant behavior. Take a look at the section on RED (Random Early Detection) in the ALTQ howto for more info.
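Concretely, these options go in the scheduler parentheses on the queue definition. A sketch (queue names and numbers are invented, since the original rules aren't shown): red drops packets probabilistically as the queue fills, and ecn marks packets instead of dropping them when both TCP endpoints support ECN:

```
altq on em0 cbq bandwidth 100Mb queue { std_q, limit_q }
queue std_q   bandwidth 90Mb cbq(default red)
queue limit_q bandwidth 10Mb cbq(red ecn)
```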

Also, two problems with your snippet above:

You have two default queues: this isn't allowed (in fact, pf should complain loudly about it).

One of your queues has no bandwidth (0Mb) -- that is probably not what you wanted.
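Putting both fixes together, a corrected layout (again a sketch -- queue names and bandwidths are guesses, since the original snippet isn't in the post) has exactly one queue marked default and a nonzero bandwidth on every queue:

```
altq on em0 cbq bandwidth 100Mb queue { high, low }
queue high bandwidth 90Mb cbq(default)   # the single default queue
queue low  bandwidth 10Mb cbq(red)       # nonzero bandwidth, with RED
```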

There should indeed not be two default queues; I was testing several different rule setups, and when I changed my rules back to what I originally had, I forgot to remove the default parameter from the low queue. I'll look into RED to see if it solves this. By the way, any idea why the queues are only applied to TCP connections? The 0Mb limit was only there because I wanted to test with a single queue first; I do intend to try altq with multiple queues, but I wanted to get it working with one queue before that.
– user42511, May 8 '10 at 13:47