> If you need other smp barriers at the lock, then what about the
> non-locked accesses _while_ the lock is held? You get no ordering
> guarantees there. The whole thing sounds highly dubious.

The issue is not about protecting data; it was all about ordering an update of a variable (mm_cpumask) with respect to scheduling. The lock was just a convenient place to add this protection. The memory barriers here would allow the syscall to use memory barriers instead of locks.

> And all of this for something that is a new system call that nobody
> actually uses? To optimize the new and experimental path with some
> insane lockfree model, while making the core kernel more complex? A
> _very_ strong NAK from me.

I totally agree with this. The updates here stemmed from the fear that grabbing all rq spinlocks (one at a time) from a syscall would open up a DoS (or, as Nick said, an RoS - Reduction of Service). If someone called this syscall within a while(1) loop on some large # CPU box, it could cause cache thrashing.

But this is all being paranoid, and not worth the complexity in the core scheduler. We don't even know if this fear is founded.