- Thread A acquires the rwsem for read
- Thread B tries to acquire the rwsem for write, notices there is already an active owner for the rwsem.
- Thread C tries to acquire the rwsem for read, notices that thread B already tried to acquire it.
- Thread C grabs the spinlock and queues itself on the wait queue.
- Thread B grabs the spinlock and queues itself behind C. At this point A is the only remaining active owner on the rwsem.

In this situation thread B could notice that it was the last queued writer on the rwsem, and decide to wake C to let it proceed in parallel with A, since they both only want the rwsem for read.

 	/* There's a writer at the front of the queue - try to grant it the
@@ -98,7 +101,7 @@ __rwsem_do_wake(struct rw_semaphore *sem, int wake_type)
  readers_only:
 	/* If we come here from up_xxxx(), another thread might have reached
 	 * rwsem_down_failed_common() before we acquired the spinlock and
-	 * woken up an active locker. We prefer to check for this first in
+	 * woken up an active writer. We prefer to check for this first in
 	 * order to not spend too much time with the spinlock held if we're
 	 * not going to be able to wake up readers in the end.
 	 *
@@ -111,8 +114,8 @@ __rwsem_do_wake(struct rw_semaphore *sem, int wake_type)
 	 * count adjustment pretty soon.
 	 */
 	if (wake_type == RWSEM_WAKE_ANY &&
-	    rwsem_atomic_update(0, sem) & RWSEM_ACTIVE_MASK)
-		/* Someone grabbed the sem already */
+	    rwsem_atomic_update(0, sem) < RWSEM_WAITING_BIAS)
+		/* Someone grabbed the sem for write already */
 		goto out;

 	/* Grant an infinite number of read locks to the readers at the front
@@ -187,9 +190,17 @@ rwsem_down_failed_common(struct rw_semaphore *sem,
 	/* we're now waiting on the lock, but no longer actively locking */
 	count = rwsem_atomic_update(adjustment, sem);

-	/* if there are no active locks, wake the front queued process(es) up */
-	if (!(count & RWSEM_ACTIVE_MASK))
+	/* If there are no active locks, wake the front queued process(es) up.
+	 *
+	 * Alternatively, if we're called from a failed down_write(), there
+	 * were already threads queued before us, and there are no active
+	 * writers, the lock must be read owned; so we try to wake any read
+	 * locks that were queued ahead of us. */
+	if (count == RWSEM_WAITING_BIAS)
 		sem = __rwsem_do_wake(sem, RWSEM_WAKE_NO_ACTIVE);
+	else if (count > RWSEM_WAITING_BIAS &&
+		 adjustment == -RWSEM_ACTIVE_WRITE_BIAS)
+		sem = __rwsem_do_wake(sem, RWSEM_WAKE_READ_OWNED);