Whenever traffic through q10000 is saturated (here: on purpose), there is significant packet loss on q10003 (the catch-all for all traffic not assigned to q10000). Here, a simple ICMP ping going through q10003 experiences heavy packet loss (15%).

My understanding is that regardless of the traffic in q10000, 5/6 of the bandwidth (= 5 Mbit/s) should remain available to q10003 (plenty for a ping), whereas q10000 should receive 1/6 (= 1 Mbit/s). The packets are actually dropped, not delayed: adding a 3000 ms wait to the ping also makes it time out.
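For reference, a minimal sketch of the setup described above as raw ipfw/dummynet rules (the OPNsense GUI generates something similar). The pipe number, the 6 Mbit/s total, and the TCP/445 match for the bulk traffic are assumptions inferred from the 5/6 vs. 1/6 split and the DFSR traffic mentioned later:

```shell
# 6 Mbit/s pipe shared by both queues (assumed total, based on the 5/6 split)
ipfw pipe 1 config bw 6Mbit/s

# WF2Q+ weights: under saturation q10003 should get 5/6 of the pipe,
# q10000 1/6; an idle queue's share is borrowed by the other.
ipfw queue 10000 config pipe 1 weight 1
ipfw queue 10003 config pipe 1 weight 5

# Classification: bulk traffic (hypothetical DFSR match on TCP/445)
# into q10000, everything else into the catch-all q10003.
ipfw add 100 queue 10000 tcp from any to any 445
ipfw add 200 queue 10003 ip from any to any
```

With WF2Q+ the weights are work-conserving shares, not hard caps, which is why the reported ICMP loss on q10003 is surprising.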

We have also tried activating CoDel, with no noticeable effect.

What's the best way to ensure q10003 always has a guaranteed share of the bandwidth?

Even with weights of 1/99, voice is choppy and SNMP queries time out; in this test case, ICMP is lost.

"Enable CoDel" on pipe and/or queue(s) makes no difference.

DFSR is pretty aggressive (it will start 4-16 transfers simultaneously), but before, with CBQ, it was possible to keep it at bay nicely (lower priority with borrow). My understanding was that the weight distribution is WFQ's "equivalent" of guaranteed bandwidth with borrowing.

This is not exactly on topic but I was curious if you could try this and report back. I've been shocked at how well OPNsense's FQ_Codel implementation works with virtually no tuning or knobs required.

Edit the Pipe, change the scheduler type to "FlowQueue-CoDel" and leave all the other options unchecked. Set your desired bandwidth limit as per usual. Save/Apply these changes and re-run tests. I've been pleasantly surprised at how well FQ_Codel manages bandwidth and also results in minimal packet loss during periods of congestion.
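A rough CLI equivalent of the GUI steps above, assuming a recent FreeBSD/OPNsense dummynet with fq_codel support (pipe/queue numbers are illustrative):

```shell
# Pipe with the desired bandwidth limit, scheduler switched to fq_codel
ipfw pipe 1 config bw 6Mbit/s
ipfw sched 1 config pipe 1 type fq_codel

# Optional tuning; as noted above, the defaults usually need no adjustment:
# ipfw sched 1 config pipe 1 type fq_codel target 5ms interval 100ms ecn

# Attach a flowset to the scheduler and classify traffic into it
ipfw queue 1 config sched 1
ipfw add 100 queue 1 ip from any to any
```

fq_codel then isolates flows into sub-queues automatically, so a single ping is no longer competing head-on with a bulk transfer.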

My understanding is that using just CoDel will not reserve a certain bandwidth for one application, but rather divide bandwidth evenly. We have requirements where, for example, about 80% of the bandwidth should be available on demand for RDP, but when no RDP traffic needs it, it should go to lower-priority background traffic.

I may be wrong, but this appears to not be achievable with only CoDel!?
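That reading seems right: CoDel/FQ-CoDel manages queueing delay and flow fairness, not inter-queue bandwidth shares. The on-demand 80/20 requirement described above maps to WF2Q+ weights in plain dummynet. A hedged sketch with illustrative numbers (pipe, bandwidth, and the RDP port match are assumptions):

```shell
ipfw pipe 2 config bw 10Mbit/s

# Weights are work-conserving shares: RDP gets 80% under contention,
# but an idle RDP queue's share flows to the background queue, and
# vice versa. Neither queue is hard-capped.
ipfw queue 20001 config pipe 2 weight 80
ipfw queue 20002 config pipe 2 weight 20

ipfw add 300 queue 20001 tcp from any to any 3389   # RDP
ipfw add 310 queue 20002 ip from any to any         # background
```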

Yes, I think you are technically correct. The reason I mentioned fq_codel is because I was a longtime (10+ years) user of the other *sense product and I found their traffic shaping options to be frustrating and always with a bad compromise.

I actually switched to OPNsense primarily because FQ_codel was included early on, and it has greatly simplified bandwidth shaping on my networks. You can still create individual pipes and rules to push traffic around and use bandwidth limits to carve out bandwidth for your desired network traffic. But using fq_codel as the scheduler often results in much less headache with dropped traffic; at least that has been my experience.

This has kind of reset how I look at traffic shaping. The old *sense product had strict queue limits and bad schedulers, which forced me to be overly precise with traffic shaping, and I'd always lose something when I gained something. With fq_codel, I can take a flatter approach with more basic queuing and still get better QoS on my network. Sorry for the long reply, and I apologize if this isn't exactly on topic, but I thought I would mention it in case you can try it and it helps in your case too. I realize this won't be for everybody, but in my use cases it has helped a lot.

I meant it in the sense that, should the link be saturated, bandwidth is assigned according to the weights, whereas when some queues are idle, a low-priority queue can eat up the entire pipe...

Not sure where to go from here with this; with any imaginable combination of settings, high-priority traffic sees packet loss, even though ipfw queue show never shows any dropped packets. The pipe bandwidth is well below the actual link speed.
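Since ipfw queue show reports zero drops, one possibility (an assumption, not a diagnosis) is that the loss happens outside dummynet, e.g. in the NIC/driver queue or upstream. A few places to look:

```shell
ipfw queue show               # per-flowset counters: packets, bytes, drops
ipfw pipe show                # pipe/scheduler-level counters
netstat -i                    # interface-level errors and drops
sysctl net.inet.ip.dummynet   # dummynet tunables (queue/pipe slot limits, etc.)
```

If the interface counters climb while the dummynet counters stay at zero, the shaper isn't the component dropping the packets.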

As said, the actual problem occurred with SNMP and RTP; ICMP was incidental because it's easier to test with. Either way, that traffic should be caught in "real life" by the default queue (with a higher priority than DFSR), but that would add to the complexity of the minimalist example.