Comments

This is an alternative to the earlier RCU/seqcount_t version of the counters.
The counters operate as usual without locking; RCU is used when the counter
entries are rotated around the CPUs. RCU is used in two ways: first to handle
the counter rotation, second for table replacement.
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
---
 include/linux/netfilter/x_tables.h |   10 +++-
 net/ipv4/netfilter/arp_tables.c    |   73 +++++++++++++++++++++++++++---------
 net/ipv4/netfilter/ip_tables.c     |   68 ++++++++++++++++++++++++---------
 net/ipv6/netfilter/ip6_tables.c    |   75 ++++++++++++++++++++++++-----------
 net/netfilter/x_tables.c           |   43 +++++++++++++++------
 5 files changed, 197 insertions(+), 72 deletions(-)
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html

> >>> +	e = t->entries[curcpu];
> >>> 	i = 0;
> >>> -	IPT_ENTRY_ITERATE(t->entries[curcpu],
> >>> +	IPT_ENTRY_ITERATE(e,
> >>> 			  t->size,
> >>> 			  set_entry_to_counter,
> >>
> >> Hum, current cpu might be interrupted by NIC, since you only disabled preemption.
> >> set_entry_to_counter() might get garbage.
> >> I suppose I already mentioned it :)

Need to change preempt_disable() to local_bh_disable().
> >>> 			  counters,
> >>> 			  &i);
> >>>
> >>> 	for_each_possible_cpu(cpu) {
> >>> +		void *p;
> >>> +
> >>> 		if (cpu == curcpu)
> >>> 			continue;
> >>> +
> >>> +		/* Swizzle the values and wait */
> >>> +		e->counters = ((struct xt_counters) { 0, 0 });
> >>
> >> I dont see what you want to do here...
> >>
> >> e->counters is the counter associated with rule #0
> >>
> >>> +		p = t->entries[cpu];
> >>> +		rcu_assign_pointer(t->entries[cpu], e);
> >>> +		synchronize_net();
> >>
> >> Oh well, not this synchronize_net() :)
> >>
> >> This wont provide atomic sampling of counters for whole CPUS, and introduce large delays
> >> on big machines (NR_CPUS >= 64)
For a large number of CPUs it would be possible to allocate a full set of
ncpu-1 new counter tables, swap them all in, and then do a single
synchronize_net(). Or, instead of synchronize_net(), maybe a "wait for
CPU #n to go through a grace period" primitive.
> > Why would this not provide the moral equivalent of atomic sampling?
> > The code above switches to another counter set, and waits for a grace
> > period.  Shouldn't this mean that all CPUs that were incrementing the
> > old set of counters have finished doing so, so that the aggregate count
> > covers all CPUs that started their increments before the pointer switch?
> > Same as acquiring a write lock, which would wait for all CPUs that
> > started their increments before starting the write-lock acquisition.
> > CPUs that started their increments after starting the write acquisition
> > would not be accounted for in the total, same as the RCU approach.
> >
> > Steve's approach does delay reading out the counters, but it avoids
> > delaying any CPU trying to increment the counters.
>
> I see your point, but this is not what Stephen implemented.
>
> So.. CPU will increments which counters, if not delayed ?
The concept is that only the sum of all the counters matters, not which
CPU holds which share of the count.
> How counters will be synced again after our 'iptables -L' finished ?
>
> "iptable -L" is not supposed to miss some counters updates (only some packets
> might be droped at NIC level because we spend time in the collection).
> If packets matches some rules, we really want up2date counters.
>
> Maybe we need for this collection an extra "cpu", to collect
> all increments that were done when CPUs where directed to a
> "secondary table/counters"
>
> >
> > So what am I missing here?
> >
>
> Well, I saw one synchronize_net() inside the for_each_possible_cpu(cpu) loop.
> Say we have NR_CPUS=4096, how long will it takes to perform "iptables -L" ?
>
>
> General/intuitive idea would be :
>
> switch pointers to a newly allocated table (and zeroed counters)
> wait one RCU grace period
> collect/sum all counters of "old" table + (all cpus) into user provided table
> restore previous table
> wait one RCU grace period
> disable_bh()
> collect/sum all counters of "new and temporary" table (all cpus) and
> reinject them into local cpu table (this cpu should not be interrupted)
> enable_bh()
>
> This way, "iptables -L" is not too expensive and doesnt block packet processing at all.
Pretty much what I said.

Stephen Hemminger wrote:
>>>> General/intuitive idea would be :
>>
>> switch pointers to a newly allocated table (and zeroed counters)
>> wait one RCU grace period
>> collect/sum all counters of "old" table + (all cpus) into user provided table
>> restore previous table
>> wait one RCU grace period
>> disable_bh()
>> collect/sum all counters of "new and temporary" table (all cpus) and
>> reinject them into local cpu table (this cpu should not be interrupted)
>> enable_bh()
>>
>> This way, "iptables -L" is not too expensive and doesnt block packet processing at all.
>
> Pretty much what I said.
Cool then, sorry for misunderstanding your patch.

So, is this patch series at a point where I should try running the 32-core
netperf tests against it? And if so, against which tree should I try applying
the patches?
rick jones