Howard Chu wrote:
> Kaz's post clearly interprets the POSIX spec differently from you. The
> policy can decide *which of the waiting threads* gets the mutex, but the
> releasing thread is totally out of the picture. For good or bad, the
> current pthread_mutex_unlock() is not POSIX-compliant. Now then, if
> we're forced to live with that, for efficiency's sake, that's OK,
> assuming that valid workarounds exist, such as inserting a sched_yield()
> after the unlock.
>
> http://groups.google.com/group/comp.programming.threads/msg/16c01eac398a1139?hl=en&

Did you read the rest of this post?

"In any event, all the mutex fairness in the world won't solve the problem. Consider if this lock/unlock cycle is inside a larger lock/unlock cycle. Yielding at the unlock or blocking at the lock will increase the deadlock over the larger mutex.

The fact is, the threads library can't read the programmer's mind. So it shouldn't try to, especially if that makes the common cases much worse for the benefit of excruciatingly rare cases."

And earlier in that thread ("old behavior" referring to an old LinuxThreads version that allowed "unfair" locking):

"Notice however that even the old "unfair" behavior is perfectly acceptable with respect to the POSIX standard: for the default scheduling policy, POSIX makes no guarantees of fairness, such as "the thread waiting for the mutex for the longest time always acquires it first". Properly written multithreaded code avoids that kind of heavy contention on mutexes, and does not run into fairness problems. If you need scheduling guarantees, you should consider using the real-time scheduling policies SCHED_RR and SCHED_FIFO, which have precisely defined scheduling behaviors."

If you indeed have some thread doing an essentially infinite amount of work, you really should not have that thread holding a mutex that other threads need to acquire for a large part of each cycle. Correctness aside, this is simply not efficient.