This patch is required for the first half of requeue_pi to function. It basically splits rt_mutex_slowlock() right down the middle, just before the first call to schedule().
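To illustrate the shape of that split, here is a minimal, self-contained C sketch. It is not the kernel code: a plain flag stands in for struct rt_mutex, and the names start_lock()/finish_lock() are hypothetical stand-ins for the two halves (enqueue the waiter without blocking, then later wait/take over at the point where rt_mutex_slowlock() would have called schedule()).

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy stand-ins for the kernel structures; the real code operates on
 * struct rt_mutex_waiter and struct rt_mutex. */
struct waiter { bool queued; };
struct lock  { bool held; struct waiter *top_waiter; };

/* First half of the split (cf. rt_mutex_start_proxy_lock): try to take
 * the lock; if it is contended, enqueue the waiter and return to the
 * caller instead of blocking. */
static int start_lock(struct lock *l, struct waiter *w)
{
	if (!l->held) {
		l->held = true;
		return 1;		/* acquired immediately */
	}
	w->queued = true;
	l->top_waiter = w;
	return 0;			/* caller must finish the acquisition later */
}

/* Second half (cf. rt_mutex_finish_proxy_lock): the point where the
 * original slowpath would loop in schedule(); here the previous owner
 * has already released, so the queued waiter just takes over. */
static int finish_lock(struct lock *l, struct waiter *w)
{
	if (l->top_waiter == w && !l->held) {
		w->queued = false;
		l->top_waiter = NULL;
		l->held = true;
		return 0;
	}
	return -1;			/* still contended */
}
```

The key point the sketch captures is that the waiter must live somewhere that survives between the two calls, which is why the diff below switches from a stack-local `waiter` to a `waiter` pointer supplied by the caller.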

This patch uses a new futex_q field, rt_waiter, for now. I think I should be able to use task->pi_blocked_on in a future version of this patch.

NOTE: I believe this patch creates a race condition that the final patch hits when trying to do requeue_pi with nr_wake=1 and nr_requeue=0. See that patch header (6/7) for a complete description.

V5: -remove EXPORT_SYMBOL_GPL from the new routines
    -minor cleanups
V4: -made detect_deadlock a parameter to rt_mutex_enqueue_task
    -refactored rt_mutex_slowlock to share code with new functions
    -renamed rt_mutex_enqueue_task and rt_mutex_handle_wakeup to
     rt_mutex_start_proxy_lock and rt_mutex_finish_proxy_lock, respectively

 	/*
-	 * waiter.task is NULL the first time we come here and
+	 * waiter->task is NULL the first time we come here and
 	 * when we have been woken up by the previous owner
 	 * but the lock got stolen by a higher prio task.
 	 */
-	if (!waiter.task) {
-		ret = task_blocks_on_rt_mutex(lock, &waiter,
+	if (!waiter->task) {
+		ret = task_blocks_on_rt_mutex(lock, waiter,
 					      current, detect_deadlock);
 		/*
 		 * If we got woken up by the owner then start loop
 		 * all over without going into schedule to try
 		 * to get the lock now:
 		 */
-		if (unlikely(!waiter.task)) {
+		if (unlikely(!waiter->task)) {
 			/*
 			 * Reset the return value. We might
 			 * have returned with -EDEADLK and the
@@ -684,15 +673,52 @@ rt_mutex_slowlock(struct rt_mutex *lock, int state,