> Can anyone explain to me how the % min-max works? If I add up the % of the minimums, it goes over 100%. How is that possible?


I'll address the "adding up" question here: Simple. It's called normalization. If it adds up to 123%, just divide each one by 1.23. That way the user can give rough values, and the computer scales it to make it add up.
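The scaling described above can be sketched in a few lines. This is only an illustration of the arithmetic being proposed, not code taken from Tomato; the class names and values are made up:

```python
# Hypothetical sketch of the normalization described above: if the
# user-supplied minimum percentages total more than 100%, divide each
# one by the total so they sum to exactly 100%.

def normalize(minimums):
    """Scale a dict of class -> min% so the values sum to 100."""
    total = sum(minimums.values())
    return {cls: pct * 100.0 / total for cls, pct in minimums.items()}

mins = {"Highest": 40, "High": 30, "Medium": 30, "Low": 23}  # totals 123%
print(normalize(mins))
# Each value is divided by 1.23, so the results sum to 100.
```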

> I'll address the "adding up" question here: Simple. It's called normalization. If it adds up to 123%, just divide each one by 1.23. That way the user can give rough values, and the computer scales it to make it add up.


Has anyone tested (or analyzed the code) to determine it really works that way? Are we sure the heuristics of the rankings don't mean anything? (I.e., that "Highest" with min-max 1-5%, and "E" with 90-100%, operates exactly the same as "Highest" with min-max 90-100% and "E" with 1-5%?)

To me it would be intuitive that Highest always has more priority than Lowest (if the percentages are the same). If that's true, it's not intuitively obvious how a total of percentages greater than 100% would be handled. Would it be an amplifier of the priority implied by the heuristics?

The question of normalization seems simple and unrelated to the questions about inherent ranking. Addressing the latter:

Some have expressed the opinion that there is ranking inherent in the sequence Highest ... E. Others have assumed that the labels are merely for convenience, and the only thing that matters with respect to ranking are the parameters set on the "Basic Settings" page.

If we regard the first ratio (what I have called Min%) as defining class rank, then everything is simple, well-defined, consistent, and complete.

But I am biased towards understanding, concept, design, and effectiveness. Not so much toward the details of other people's code. :smile:

If you study the working of Tomato QOS over time, it's obvious that there is some mechanism to cope with the overstatement of percentages; what we haven't got is authoritative confirmation from anyone that normalisation is actually taking place. I think it can safely be assumed that it is - but nobody has confirmed it.

As for prioritisation of classes, that is fundamental to the whole concept of QOS. If there was no priority between classes Highest through to E, then it wouldn't work.
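The role of class priority can be sketched with a toy model. This is NOT the actual tc/HTB algorithm, just an illustration of the principle: each class is guaranteed its minimum, and spare bandwidth is handed out in priority order, which is roughly how HTB's "prio" borrowing behaves. All numbers are invented:

```python
# Illustrative sketch only -- not Tomato's implementation. Assumes the
# guaranteed minimums fit within the link.

def allocate(link_kbit, classes):
    """classes: list of (name, min_kbit, demand_kbit), ordered by priority."""
    # First pass: every class gets its guaranteed minimum (capped by demand).
    alloc = {name: min(min_kbit, demand) for name, min_kbit, demand in classes}
    spare = link_kbit - sum(alloc.values())
    # Second pass: spare bandwidth goes to higher-priority classes first.
    for name, _min, demand in classes:
        extra = min(spare, demand - alloc[name])
        alloc[name] += extra
        spare -= extra
    return alloc

# "Highest" is listed first, so it soaks up spare bandwidth before "E".
print(allocate(1000, [("Highest", 100, 800), ("E", 100, 800)]))
```

Reversing the order of the two classes reverses the result, which matches the "turned my QOS on its head" experiment described below: with priorities inverted, the bulk traffic wins.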


Some time ago I tried turning my QOS on its head. I put rules from Highest into E and from E into Highest, etc. Immediately P2P took over and it was impossible to surf etc. You can do this with just a few rules to prove the point...

This gets back to what was discussed a couple months ago regarding how "minimum percent" > 100% is handled. Some have suggested the percentages are simply (equally) reduced so they total 100%. But, it's not clear how they know this without someone who understands the QoS code saying so.

I've suggested that the terms (heuristics) "Highest" and "E" imply priority by themselves. If that's true, then a > 100% condition may not be simply a matter of reducing each heuristic proportionally. If the heuristic contains priority by itself, it seems intuitive that that priority would play a factor in how *much* a minimum percentage would be reduced.

In an earlier post it sounded like you were saying there is an implicit priority in the heuristics when you said "As for prioritization of classes, that is fundamental to the whole concept of QOS. If there was no priority between classes Highest through to E, then it wouldn't work."

That's why I asked if we'd proven that inverse priorities *don't* operate inversely. Your answer seems to say they do (which contradicts what I thought you were saying in that quoted remark).

I think the bottom line is that we don't really know how it works without someone who understands the QoS algorithm documenting it. It's clear that you can't get a minimum 150% just by specifying minimums totaling that much. But, what isn't clear is *how* the different priorities are adjusted downward. (And, relatedly, if all heuristics have the same percentages, will they perform with the same priority?).

FWIW, apart from the 5 QoS-related web pages, all that Tomato QoS does is to translate the "Basic Settings" and the "Classification" into TC and IPTABLES rules, which you can see in /etc/qos and /etc/iptables. Any adjustments would be the result of the implementations of those two facilities.

Those who are interested in the algorithms may wish to consult the respective man pages -- iptables tc tc-htb tc-tbf -- perhaps starting at http://linux.die.net/man/8/tc
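For a rough picture of what that translation looks like, here is a hedged sketch of how min/max percentages could map onto HTB's "rate" (guaranteed bandwidth) and "ceil" (borrowing limit) parameters. The class IDs, device name, and 1000 kbit uplink are made-up examples, not Tomato's actual output; only the general tc-htb command shape is real:

```python
# Assumption: a 1000 kbit uplink; classids 1:10 onward are arbitrary.
UPLINK_KBIT = 1000

def to_htb_args(classid, min_pct, max_pct):
    rate = UPLINK_KBIT * min_pct // 100   # guaranteed bandwidth
    ceil = UPLINK_KBIT * max_pct // 100   # ceiling when borrowing spare
    return (f"tc class add dev eth0 parent 1:1 classid 1:{classid} "
            f"htb rate {rate}kbit ceil {ceil}kbit")

for classid, (lo, hi) in enumerate([(80, 100), (10, 100), (5, 50)], start=10):
    print(to_htb_args(classid, lo, hi))
```

Comparing output like this against the real /etc/qos on a router is one way to check what the firmware actually emits.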

It gets much worse than this. If you look up the different papers that deal with aspects of QOS systems, by Linux developers, research groups, and router manufacturers, they often contradict each other. Implementations of what is ostensibly the same QOS method also differ between manufacturers, depending on how well it worked for them and how much they modified it to get it to operate properly.

I agree with you; I wish there were more documentation. In its absence we have to figure out from observed results what is going on, how it applies to our own usage, and perhaps not get too worried about the details. In the case of Tomato QOS, the answers based on observed results indicate (1) that a priority system is in use (as described by the author of this and other firmwares), and (2) that it is a fact that much better results can be obtained by overstating allocations, which implies that normalization (or something akin to it) must also be implemented. If you use that as a base, then you can make QOS work.

I think the biggest problem in all of these discussions is that people generally expect an instant response to a rule without considering the effect it has on other classes and overall performance. They set a limit at 100K and expect traffic to instantly go there and stay at that level regardless of other traffic and varying conditions. But life ain't like that, is it? There are reasons why things don't work as anticipated, and with a little thought it's generally possible to identify the reason and do something to prevent it, or at least reduce its effect.

> FWIW, apart from the 5 QoS-related web pages, all that Tomato QoS does is to translate the "Basic Settings" and the "Classification" into TC and IPTABLES rules, which you can see in /etc/qos and /etc/iptables. Any adjustments would be the result of the implementations of those two facilities.


Thanks. I forgot that this seems to translate directly into those lower-level system tools. But I still wonder if there's not something Tomato-specific about this. For example, I've seen quite a few people say that, after using DD-WRT, they felt Tomato's QoS worked better (i.e., more effective, not just easier to use). If QoS is just a function of iptables and tc, I wonder why there would be a difference.

Or, maybe those observations were just subjective and there is no difference.

BTW: Looking at my /etc/qos, it represents total minimums > 100%. So, any normalization is happening after the translation into TC and IPTABLES rules?

You've opened a can of worms here... if I change a fundamentally important thing in the tomato source code, while DD-WRT changed a different parameter for a different reason, can both firmwares truly be said to be running the same module?

> That it is a fact that much better results can be obtained by overstating allocations, which implies that normalization (or something akin to it) must also be implemented. If you use that as a base, then you can make QOS work.


I think we agree. I thought someone earlier in this thread asserted that the normalization is equal across all the categories (that each category's minimum percentage is reduced at an equal *rate*). That's what I called into question. I don't think anyone knows that with certainty without examining the underlying code.

Like you said, overstating the minimums seems to have a positive effect. This implies those overstated percentages aren't being adjusted down at an equal rate. If they were, you could adjust them down yourself and get the same results.

That was the only point I was trying to make. I don't think anyone has determined that it's as simple as reducing each minimum value by 1% at a time until the total is 100. It seems like it may have something to do with the size of each value, and reducing each value by some percentage based upon its size (so each value is lowered at a different *rate*). Or, as I suggested before, the implied priority of the category's heuristic plays a role in how equally (or unequally) the values are lowered to get to 100%.
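The two hypotheses can be put side by side in a small sketch. The weights below are entirely made up, and neither scheme is confirmed to be what Tomato or tc actually does; the point is only that both yield totals of 100% while reducing each class at a different rate:

```python
# "Proportional" scales every class's minimum at the same rate;
# "priority_weighted" makes low-priority classes absorb more of the cut.
# Class names and weights are hypothetical.

def proportional(mins):
    total = sum(mins.values())
    return {c: v * 100.0 / total for c, v in mins.items()}

def priority_weighted(mins, weights):
    """Distribute the overshoot so low-weight (low-priority) classes
    absorb a larger share of the reduction."""
    overshoot = sum(mins.values()) - 100.0
    inv = {c: 1.0 / weights[c] for c in mins}   # lower weight -> bigger cut
    inv_total = sum(inv.values())
    return {c: v - overshoot * inv[c] / inv_total for c, v in mins.items()}

mins = {"Highest": 60, "Medium": 40, "E": 30}        # totals 130%
weights = {"Highest": 3.0, "Medium": 2.0, "E": 1.0}  # hypothetical priorities
print(proportional(mins))
print(priority_weighted(mins, weights))
# Both results sum to 100, but the per-class reductions differ.
```

Comparing observed throughput against both predictions would be one way to tell the schemes apart experimentally.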

Or, maybe they aren't normalized to 100% at all. Maybe iptables just uses the minimum bandwidth as it's stated, and juggles traffic in some kind of undefinable manner, staying below the max bandwidth. It just shakes out to certain rates without there being any intention to match the processing to actual specified minimums.

> You've opened a can of worms here... if I change a fundamentally important thing in the tomato source code, while DD-WRT changed a different parameter for a different reason, can both firmwares truly be said to be running the same module?


I guess that's the question. Is Tomato's (or DD-WRT's) tc and/or iptables plain vanilla? And/or, are there other factors that influence QoS?

> BTW: Looking at my /etc/qos, it represents total minimums > 100%. So, any normalization is happening after the translation into TC and IPTABLES rules?


(If there is any normalization.) Hence "all that Tomato QoS does ...". That's why I redirected attention from Tomato code (where others have purported the decisions to be) to TC and IPTABLES. And since I showed where to look, you can see exactly how the Settings and Classifications translate into /etc/qos and /etc/iptables.

The good news is that I have shown where to look for the algorithms, and that tomato/release/src/router/rc/qos.c is not it.

> (If there is any normalization.) Hence "all that Tomato QoS does ...". That's why I redirected attention from Tomato code (where others have purported the decisions to be) to TC and IPTABLES. And since I showed where to look, you can see exactly how the Settings and Classifications translate into /etc/qos and /etc/iptables.
>
> The good news is that I have shown where to look for the algorithms, and that tomato/release/src/router/rc/qos.c is not it.


Thanks. But, does anyone know whether Tomato modifies TC and IPTABLES? As I said earlier, I've gotten the impression Tomato's QoS is different because I've heard former DD-WRT users say they felt it worked better than DD-WRT (more effective results, not easier to use).

As I said earlier, it could be that's just a subjective impression. But I've heard it from others, and I got the same impression myself when I tried DD-WRT for a day or two. It seemed there was something peculiar to Tomato's QoS. But maybe it was also something (negatively) peculiar to DD-WRT's.