
Yet again: Download Throttling

Hello everyone, I know this has been covered a lot on this forum, and I have searched and read pretty much every post that contains useful information on download throttling, but I cannot seem to get it working.

First, I want to know if my idea of 'Download Throttling' is correct. Let's say I have 10 workstations behind my UTM, and I wish to throttle each workstation's internet download speed to 1 Mbit/s. I assume that the download throttling section of the UTM is what this is designed for?

This is how I have attempted to throttle my clients' download speeds. First, I turned on QoS on my WAN interface:

I then created a Traffic Selector, which triggers when it sees traffic from the internet on ports 80 and 443 heading to my internal network. *Edit:* I understand now that this would not work, because the client's end of the connection uses a random ephemeral port chosen during the initial handshake between client and server.

*Edit:* The rule has been reversed.

I then created a throttle rule named "Limit Web Traffic" set to 512 kbit/s (I put it really low so I could see whether it was working).

Things to note: I have tried this with Web Filtering disabled and with it enabled in transparent mode. I have also tried creating throttling rules from the flow monitor; while that creates rules just like mine, they still do not function.

The results with the throttle rule active.

Do I have something configured incorrectly, or am I using this feature in a way it was not intended?


I've never managed to get Download Throttling to work.
However, you can achieve the same result with Bandwidth Pools by setting the Bandwidth parameter to 1 kbit/s and the upper limit to the desired value. The pool should be enabled and bound to the LAN interface.

Download throttling DOES NOT work. Forget about upload throttling too. However, you are on the right track by creating Traffic Selectors. Use Bandwidth Pools and select the Traffic Selectors you created. Works for me. Message me if you like.

The good news is that it works really well and seamlessly in Copernicus.

Simple explanation: download throttling may not be working on its own. Turn it off so it doesn't cause trouble later, unless you need either distributed limits or ingress limiting. In that case, fix your rules based on the following link, and (possibly) create dummy bandwidth pools that do nothing on each interface to force a correct tc configuration on the backend:

Elaboration: the bandwidth pool you created must be forcing tc to create the download throttling queues correctly on the backend. (This might be fixed by now?) Only assign bandwidth pools with upper limits, and throttling rules, to internal interfaces. You never want to forcefully drop already-received traffic on WAN interfaces for traffic flow management purposes.

You should be using bandwidth pools with upper limits for egress limiting, and download throttling rules for ingress limiting, relative to the interface the rule is assigned to. The exception is when you need the per-rule bandwidth distributed per IP or per IP pair; those options only exist for download throttling rules, unless I'm missing something. Keep in mind that as long as you leave Limit Downlink and Upload optimizer enabled on the interface the limiting bandwidth pool(s) are assigned to, you are already getting a result more or less identical to what a download throttling rule in shared mode would give you. Don't bother migrating or creating throttling rules just to use shared mode; you're probably already getting shared-mode behavior without realizing it.

Revised 04/23/2017 - per the following discussion with Bob (thanks Bob!) and response from Bill (thanks Bill!)

You said, "You don't want to forcefully drop already received traffic on WAN interfaces, ever, for traffic flow management purposes." This is what the Download Throttling rule in the KB article does. What is your reasoning behind this comment?

My reasoning behind this comment is the following (off the top of my head):

Already-paid-for WAN traffic (bandwidth + PPS + processing overhead) should not be dropped to shape traffic flows, unless your WAN connection has more bandwidth available and lower latency than your LAN connections, which should never really be the case.

When throttling traffic, it should always be done internally, where the price for dropped / throttled / re-transmitted packets is far lower and far less disruptive to the network as a whole.

Download traffic should be throttled on its way out of LAN interfaces, and upload traffic on its way out of WAN interfaces, which is how bandwidth pools (and, I believe, download throttling) behave now, as long as you bind your pools to the correct interfaces. This lets the QoS device (the UTM in this case) buffer the packets internally, instead of dropping them on the WAN, which otherwise causes expensive WAN-based re-transmits to occur frequently.
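In raw tc terms, the LAN-egress approach looks roughly like the sketch below. The interface name and rate are hypothetical, and the UTM generates its own tc configuration on the backend; this is only an illustration of the mechanism, not the UTM's actual rules.

```shell
# Hypothetical sketch: shape *download* traffic as it egresses the internal
# LAN interface (eth0), instead of policing/dropping it on the WAN side.
# Requires root on a Linux box with the tc utility.
tc qdisc add dev eth0 root handle 1: htb default 10
tc class add dev eth0 parent 1: classid 1:10 htb rate 1mbit ceil 1mbit
# Excess packets queue here and are paced out at 1 Mbit/s, so the remote
# sender sees slower ACKs and backs off, rather than seeing WAN drops.
```

The key design point is that htb on egress gives you a queue to absorb bursts; an ingress policer on the WAN interface has no queue and can only drop.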

As the QoS device (UTM) sends the buffered packets out to LAN hosts at the throttled rate, each LAN host returns ACK packets to the sending host based on when it received the delayed packets. Seeing the added delay, the sending host slows the flow down (and TCP window scaling kicks in for TCP flows), without any WAN packets actually being dropped or re-transmitted. If the receive buffer on the WAN interface fills up during an initial flow burst, the UTM has no choice but to drop packets, which also causes the sending host to immediately slow the flow, and some expensive WAN re-transmits will happen; this is unavoidable in edge cases.

However, the overall effect is that, by limiting download flows only on internal LAN interfaces, we confine the worst-case scenario of dropping and re-transmitting expensive WAN packets to extreme edge cases. We also avoid making expensive WAN operations the best-case scenario, which is what happens when we limit inbound traffic on WAN interfaces.

Multiply this, for example, over the 32,000 traffic flows the Home license permits (say you set up a single rule to throttle all LAN hosts), and you start to get a glimpse of the bigger picture of how much WAN congestion this simple change can save.

Converting a worst-case scenario into an edge case, simply by changing which interface you throttle traffic on, with identical results as far as the download throttling feature is concerned, is a win-win-win in my book.

There are published papers, synthetic benchmarks, and real-world tests on this very concept, easily found on Google. They go a whole lot further down the rabbit hole than my post does, and everything backs up my overall understanding of the issue.

Thanks, Keith, for an excellent explanation of how it should make a difference...

I don't know how other solutions do QoS, but, overall, unless inbound packets are dropped so that the sender slows its stream because it's not getting ACKs back, flows clogging the pipe will continue at full throttle. Some senders support Explicit Congestion Notification (ECN), but I don't think it's reasonable to count on that unless you control both ends of the conversation, as might be possible in the IBM intranet.

My understanding is that there is little buffering in the UTM of traffic passing through, so whether you throttle traffic with Bandwidth Pools or Download Throttling rules on LAN or WAN interfaces, the result is the same - packets are dropped. That's the only way I know of to get a message consistently to the sender to slow down.

Another complication is traffic handled by the Web Proxy in the UTM. The Proxy will accept the entire downloaded file unless there's a Download Throttling rule in place on the External interface; in fact, this is often the primary issue we deal with. With a total download bandwidth of 20 Mbps and a 2 Mbps reservation for VoIP, my rules would include, in order, something like the following on the External interface:

Throttle 'Internet -> VoIP -> External (Address)' to 1Gbps [in other words, an Exception to the next rule]

Throttle 'Any -> Any -> Any' to 18Mbps

Now, I could be convinced to change my configurations if a developer who handles QoS confirms that there's a noticeable difference between slowing a few ACKs and dropping "excess" traffic. With either approach, it's difficult to prevent jitter in real-time streams like VoIP unless there's a good margin between available and reserved bandwidth.

Also agreed: ECN is simply not enterprise-ready, and never will be unless they find a way to fix the MITM issue. It is far too vulnerable to manipulation, even if it were more widely implemented.

Senders slow down or speed up flows based on the rate at which ACKs come in; this is TCP's sliding-window flow control, which everything with a TCP stack supports. If it didn't, your downloads would start at one constant rate and never change, faster or slower.
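The effect is easy to quantify with the classic window-limited throughput bound: a sender can keep at most one window of unacknowledged data in flight per round trip, so stretching the effective RTT (by pacing packets and therefore ACKs) directly caps throughput. A quick back-of-the-envelope with hypothetical numbers:

```shell
# Max TCP throughput = window size / round-trip time.
# Compare a 64 KiB window at a 20 ms RTT vs. the same window when internal
# queuing stretches the effective RTT to 200 ms (all values hypothetical).
awk 'BEGIN {
    win_bytes = 64 * 1024                          # unacked data in flight
    printf "20 ms RTT:  %d B/s\n", win_bytes * 1000 / 20    # 3276800 B/s
    printf "200 ms RTT: %d B/s\n", win_bytes * 1000 / 200   #  327680 B/s
}'
```

Tenfold more delay, tenfold less throughput, and not a single packet had to be dropped to get there.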

For throttling the web proxy, now you've got me here. Unless SUTM would do something like add a virtual interface to enable applying QoS rules directly to the web proxy, assigning specific web proxy related throttling rules to the WAN interface is probably the only way to make that work with the current architecture.

Also with you here: lots of inbound real-time flows, like VoIP and IPTV, depend on the sudden increases in bandwidth they need being available the moment they need it, not ~250 ms later. At 250 ms it's already too late; you get service degradation and/or outright interruption.

I'm just not seeing how shaping externally would help any of this though, apart from the internal UTM services. I'm seeing lots of situations where it could hurt, and have seen a fair few situations where it absolutely does hurt...

If you are relying on QoS to make intelligent decisions about which packets to delay and/or drop, the traffic has already made its way across your internet connection and into your WAN interface. Choosing where to drop that traffic at this point simply moves the re-transmission point, and I'd rather have my UTM re-transmitting dropped packets to LAN hosts than the sending host outside my network. This is the buffer I'm talking about; it is not something specific to the architecture of SUTM, but to the architecture of network drivers and stacks. Everything has a re-transmit buffer, however large or small; it always exists, unless we are talking about really, really cheap hardware, and we aren't.

This is an interesting conversation indeed ;-)

I may not be able to get back to you again until tomorrow, don't take it personally. I have fun with this stuff, even if it turns out I'm wrong :-)

Good technical discussion. I just wanted to chime in about the different terminology and tabs used in UTM 9. Long-time users like Bob probably remember that we used to have only bandwidth pools, which worked fine for most traffic; any kind of QoS had to be applied to traffic leaving the firewall, and whether that was the LAN or WAN interface didn't matter. So to throttle incoming WAN traffic, you had to apply a rule to the LAN interface and throttle it there.

Download throttling was added during the 9.1 beta because people couldn't grasp the simple fact that you had to throttle traffic leaving the firewall, and couldn't really drop it on the WAN interface without getting unexpected page hangs, etc. In any case, they added the feature due to requests for a download throttling you could implement without understanding how QoS functions or reading the manual.

Here is my discussion with one of the devs when it was first introduced, where I was confused about download throttling. They completely eliminated some of the fine-tuning after the beta.

Hi Bill, thanks for the reply. Definitely helpful as to the "why?" part of it all.

That leaves me with a few more questions, of course, and it relates to a good point Bob brought up...

When dealing with the issue of managing traffic of internal components like the web filter, that on their own pull in traffic, process it, then forward it on to the appropriate interface, what would be the "SUTM correct" way of implementing rules that accomplish managing these flows? Can we use either one of bandwidth pools or download throttling, as long as the rules are correct in terms of the final source / destination? Or do we have to use one or the other, or even specifically formatted rules, to effectively shape traffic flows "middle man'd" by internal SUTM services?

One more, just for clarification: are the download throttling rules also applied at the kernel's "tc" level like the bandwidth pools, with some extra logic or tc's ingress mode to make sure they are applied correctly on the backend? Or are the download throttling rules applied by a different layer or mechanism that allows ingress shaping outside of tc?
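For reference, the two tc mechanisms I'm asking about look roughly like this. Interface names and rates are hypothetical, and this is only a sketch of the two approaches, not the UTM's actual generated configuration:

```shell
# Egress shaping (what bandwidth pools would map to): excess packets are
# queued on the internal interface (eth0) and paced out at the limit.
tc qdisc add dev eth0 root handle 1: htb default 10
tc class add dev eth0 parent 1: classid 1:10 htb rate 512kbit ceil 512kbit

# Ingress policing (one way download throttling could be implemented):
# no queue exists on ingress to the WAN interface (eth1), so packets over
# the rate are simply dropped.
tc qdisc add dev eth1 handle ffff: ingress
tc filter add dev eth1 parent ffff: protocol ip u32 match u32 0 0 \
    police rate 512kbit burst 32k drop
```

The practical difference is exactly what was discussed above: the egress shaper buffers and delays, while the ingress policer can only drop.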

At present, there is no way other than Download Throttling to handle "internal flows" in the UTM. I don't know the answer to your "tc" question, but the difference is similar to the difference between a DNAT and an SNAT. Download Throttling rules are applied to arriving traffic as soon as the packet is accepted by conntrack or a firewall rule, or maybe before, and definitely before anything else happens. Bandwidth Pools are the last thing considered before the packet leaves the interface (just before SNAT/masq, is my guess).