Bug 12875 - pthread_cond_timedwait can steal the wakeup of slower thread in pthread_cond_wait
Product: glibc  Component: nptl  Version: unspecified  Platform: All
Status: RESOLVED INVALID  Priority: P2  Severity: normal
Reported: 2011-06-10 17:56:56 +0000  Last modified: 2017-06-27 21:46:35 +0000
Reporter: martin  Assignee: unassigned  CC: bugdal, jakub, siddhesh, stephen.dolan, triegel

Comment 0, martin, 2011-06-10 17:56:56 +0000:

Created attachment 5787
Example source code
According to the definition of pthread_cond_signal, it should only unblock threads that are blocked at the time it is called.
The attached example demonstrates a bug in pthread_cond_timedwait that can allow it to "steal" the signal from a thread that was blocked in pthread_cond_wait when pthread_cond_signal was called, even though pthread_cond_timedwait was called after pthread_cond_signal.
This was tested on an Intel Core i7 970 CPU (6 cores, 12 threads) running Fedora 14 x86_64 with the master branch of glibc from 2011-06-10 and also with older releases.
There is no easy way to repeat this, because it depends very much on the timing of the threads, so I've had to cheat by using pthread_kill to artificially delay the thread that called pthread_cond_wait (cs_ptA).
The expected output of the program is
A waits
cs_timewaster starts
B signals
cs_delaywaster starts
C waits
D waits
B signals
C wakes
D wakes with ETIMEDOUT
cs_delaywaster ends
A wakes
Note that D wakes with ETIMEDOUT and A wakes afterwards.
Often the program hangs after outputting
A waits
cs_timewaster starts
B signals
cs_delaywaster starts
C waits
D waits
B signals
C wakes
D wakes with code 0
cs_delaywaster ends
Note that D wakes with code 0 and A never wakes.
The calls to usleep in the example are such that the first signal from thread B should cause thread A to wake and second signal should cause thread C to wake. Thread D should wake on the timeout.
I think the problem is that if thread A wakes from the futex but doesn't execute the rest of pthread_cond_wait quickly enough (e.g. because the signal handler cs_delaywaster runs), then thread D can steal the wakeup when its futex call returns on the timeout.

Comment 1, drepper.fsp, 2011-06-11 13:35:58 +0000:

The current implementation is correct. There is no guarantee of fairness.

Comment 2, martin, 2011-06-13 12:39:04 +0000:

Can you take another look at this, please? It isn't about fairness, because the man page (and spec, AFAIK) for pthread_cond_signal says:
"The pthread_cond_signal() function shall unblock at least one of the threads that are blocked on the specified condition variable cond (if any threads are blocked on cond)."
In the example, only thread A is blocked at the time thread B signals for the first time, so I think it should wake regardless of what happens later. The blocking of threads C and D occurs after thread B signals for the first time, so it shouldn't be affected by that signal.
The problem is that if pthread_cond_timedwait in thread D reaches its timeout before thread A has had a chance to increment __woken_seq, then thread D will claim the signal even though its true reason for waking is the timeout.

Comment 3, bugdal, 2011-09-28 21:55:17 +0000:

This is definitely a bug. I hope this won't be another case where Mr. Drepper has got too much time invested in the over-engineered code for minimizing spurious wakes to admit that it needs to be thrown out and replaced with a correct implementation.
Actually I have a possible alternate explanation for his position: it's possible that he's viewing condition signals as events that always correspond to a new item in a queue that will be removed by any waiter who wakes. If this is all you use cond vars for, there's no need to prevent a new waiter (arriving after the signal) from stealing an existing waiter's wake, because the new waiter could just as easily have experienced a spurious wake, and upon waking up and checking the queue, found and removed the next queued item - at which point, even if the existing waiter woke, it would find an empty queue again and wait again.
Of course condition variables have plenty of other legitimate uses, and the requirements on the implementation are governed by the specification, not by one narrow-minded idea of what they should be used for...

Comment 4, bugdal, 2012-03-17 20:40:32 +0000:

Ping. There was a lot of analysis on this bug but, as far as I know, no work towards fixing it...

Comment 5, triegel, 2012-09-18 14:18:13 +0000:

(In reply to comment #2)
> Can you take another look at this please? It isn't about fairness because the
> man page (and spec AFAIK) for pthread_cond_signal says:
>
> "The pthread_cond_signal() function shall unblock at least one of the threads
> that are blocked on the specified condition variable cond (if any threads are
> blocked on cond)."
>
> In the example, only thread A is blocked at the time thread B signals for the
> first time, so I think it should wake whatever happens later The blocking of
> threads C and D occurs after thread B signals for the first time so shouldn't
> be affected by that signal.
Please revise your test case so that it is properly synchronized for what you intend to test. usleep() is not a reliable way to enforce a happens-before order between operations in different threads. Cond vars, locks, or something similar can enforce happens-before.
> The problem is that if pthread_cond_timedwait in thread D reaches its timeout
> before thread A has had a chance to increment __woken_seq, then thread D will
> claim the signal even though its true reason for waking is the timeout.
If A didn't consume a signal before D did, why shouldn't D consume an available signal? The manpage bit you quote says that _at least_ one of the blocked threads should be unblocked. This seems to be what's happening.
If you find a statement in the spec that disallows the behavior you see, please quote that statement instead. I guess what you might find surprising is that a signal operation might not have finished waking a thread even though it has already returned to the caller.

Comment 6, triegel, 2012-09-18 14:28:05 +0000:

(In reply to comment #3)
> This is definitely a bug.
If so, please quote the specification or manpage that this violates. At the very least, we need this for documentation purposes.
> I hope this won't be another case where Mr. Drepper
> has got too much time invested in the over-engineered code for minimizing
> spurious wakes to admit that it needs to be thrown out and replaced with a
> correct implementation.
This is a bug report. Please focus on the technical matter.
> Actually I have a possible alternate explanation for his position: it's
> possible that he's viewing condition signals as events that always correspond
> to a new item in a queue that will be removed by any waiter who wakes. If this
> is all you use cond vars for, there's no need to avoid having a new waiter
> (arriving after the signal) avoid stealing an existing waiter's wake, because
> the new waiter could just as easily have experienced a spurious wake, and upon
> waking up and checking the queue, found and removed the next queued item - at
> which point, even if the existing waiter woke, it would find an empty queue
> again and wait again.
What you describe is a fairness issue. I'm not aware of any guarantee of fairness or absence of starvation. The cond var should try to be fair if possible, but the test case is not even for a PI cond var, so threads aren't even guaranteed to get to run on the CPU.
> Of course condition variables have plenty of other legitimate uses, and the
> requirements on the implementation are governed by the specification, not by
> one narrow-minded idea of what they should be used for...
So, please cite the bits of the specification that disallow the behavior. What Martin cites from the manpage does not seem to conflict with the current behavior.

Comment 7, bugdal, 2012-09-19 03:21:17 +0000:

> If so, please quote the specification or manpage that this violates. At the
> very least, we need this for documentation purposes.
"The pthread_cond_signal() function shall unblock at least one of the threads that are blocked on the specified condition variable cond (if any threads are blocked on cond)."
Source: http://pubs.opengroup.org/onlinepubs/9699919799/functions/pthread_cond_signal.html
If it's provable that threads 1 through N are waiting on a condition variable C when thread X calls pthread_cond_signal on C N times, then in order to satisfy the above, all N must be unblocked. If another waiter arriving after the N signals consumes any of the signals and prevents those N from being blocked, then the above cited requirement has been violated; at least one call to pthread_cond_signal did not "unblock at least one of the threads that are blocked".
I admit the test case is poorly written and rather obfuscated, but I believe that it is showing a bug of this nature. Contrary to your and Mr. Drepper's characterizations, this is NOT A FAIRNESS ISSUE. No complaint about fairness has been made by myself or the original poster of the bug. If the bug really exists in the described form -- and it seems to me that it does, although I'd like to find a simpler test case -- then it's an issue of the interface violating its contract.

Comment 8, jakub, 2012-09-19 06:23:24 +0000:

(In reply to comment #7)
> I admit the test case is poorly written and rather obfuscated, but I believe
> that it is showing a bug of this nature. Contrary to yours and Mr. Dreppers
> characterizations, this is NOT A FAIRNESS ISSUE. No complaint about fairness
> has been made by myself or the original poster of the bug. If the bug really
> exists in the described form -- and it seems to me that it does, although I'd
> like to find a simpler test case -- then it's an issue of the interface
> violating its contract.
The testcase doesn't show anything like that; there are absolutely no guarantees that the usleeps result in any particular ordering of events in the threaded program.

Comment 9, triegel, 2012-09-19 08:21:49 +0000:

(In reply to comment #7)
> > If so, please quote the specification or manpage that this violates. At the
> > very least, we need this for documentation purposes.
>
> "The pthread_cond_signal() function shall unblock at least one of the threads
> that are blocked on the specified condition variable cond (if any threads are
> blocked on cond)."
>
> Source:
> http://pubs.opengroup.org/onlinepubs/9699919799/functions/pthread_cond_signal.html
>
> If it's provable that threads 1 through N are waiting on a condition variable C
> when thread X calls pthread_cond_signal on C N times, then in order to satisfy
> the above, all N must be unblocked.
First, I don't think you can prove that they are waiting in this test case -- usleep doesn't give you any happens-before or other ordering guarantees.
Second, the sentence from the spec that you quote is vague on when those other threads would block (e.g., is there any guarantee of linearizability or such?). Specifically, it doesn't state which blocked threads should be wakened, or which threads blocked at which time.
> If another waiter arriving after the N
> signals consumes any of the signals and prevents those N from being blocked,
> then the above cited requirement has been violated;
I don't see how this is violated. You say it unblocked the waiter and N-1 other threads, so N overall for N signals. That's what the guarantee is, isn't it?
> at least one call to
> pthread_cond_signal did not "unblock at least one of the threads that are
> blocked".
It did unblock one of the threads that are blocked. Even at the end right before their timeout they are still blocked, right?
> I admit the test case is poorly written and rather obfuscated but I believe
> that it is showing a bug of this nature.
> Contrary to yours and Mr. Dreppers
> characterizations, this is NOT A FAIRNESS ISSUE.
Please look again at the paragraph that you wrote that I replied to in comment #6. You describe a fairness issue there (a newer waiter "stealing" an older waiter's signal). I did not say that this bug report is just a fairness issue. In some executions it might show that there's no fairness, but the core problems with this bug report are the two items I point out above.

Comment 10, bugdal, 2012-09-19 08:23:42 +0000:

Use of usleep does not automatically invalidate the test. However, looking at the test source again, I see this is a different test case (and different bug report) from the one I was thinking of, which claims the same issue (stolen wakes). I had a conversation with the author of that other bug report and was convinced it's valid; I'll have to go back and find it so that this report can be considered as a possible duplicate...

Comment 11, bugdal, 2012-09-19 08:49:36 +0000:

Here's the related bug report, whose thread concluded with the claim that this bug (12875) is probably a manifestation of it: http://sourceware.org/bugzilla/show_bug.cgi?id=13165
With that said, some comments on Torvald's last reply:
> Second, the sentence from the spec that you quote is vague on when those other
> threads would block (e.g., is there any guarantee of linearizability or such?).
> Specifically, it doesn't state which blocked threads should be wakened, or
> which threads blocked at which time.
The relevant text is in the specification for pthread_cond_wait regarding atomically unlocking the mutex and blocking:
"These functions atomically release mutex and cause the calling thread to block on the condition variable cond; atomically here means "atomically with respect to access by another thread to the mutex and then the condition variable". That is, if another thread is able to acquire the mutex after the about-to-block thread has released it, then a subsequent call to pthread_cond_broadcast() or pthread_cond_signal() in that thread shall behave as if it were issued after the about-to-block thread has blocked."
That is, if the mutex was held by a thread entering pthread_cond_wait and another thread successfully acquires the mutex, the first thread "has blocked".
> > If another waiter arriving after the N
> > signals consumes any of the signals and prevents those N from being blocked,
> > then the above cited requirement has been violated;
>
> I don't see how this is violated. You say it unblocked the waiter and N-1
> other threads, so N overall for N signals. That's what the guarantee is,
> isn't it?
Perhaps my typo in the above-quoted text is the source of confusion. It should have read "prevents those N from being _unblocked_".
With the above reading of the standard in mind, at the moment pthread_cond_signal is called, it's provable that exactly those N, and not the new waiter that's about to arrive, "have blocked" on the condition variable. So if the N signals don't wake all N of them, it's a bug - each signal is required to unblock at least one thread that "has blocked" in the above sense.

Comment 12, triegel, 2012-09-19 15:34:48 +0000:

(In reply to comment #11)
> Here's the related bug report, whose thread concluded with the claim that this
> bug (12875) is probably a manifestation of it:
> http://sourceware.org/bugzilla/show_bug.cgi?id=13165
>
> With that said, some comments on Torvald's last reply:
>
> > Second, the sentence from the spec that you quote is vague on when those other
> > threads would block (e.g., is there any guarantee of linearizability or such?).
> > Specifically, it doesn't state which blocked threads should be wakened, or
> > which threads blocked at which time.
>
> The relevant text is in the specification for pthread_cond_wait regarding
> atomically unlocking the mutex and blocking:
>
> "These functions atomically release mutex and cause the calling thread to block
> on the condition variable cond; atomically here means "atomically with respect
> to access by another thread to the mutex and then the condition variable". That
> is, if another thread is able to acquire the mutex after the about-to-block
> thread has released it, then a subsequent call to pthread_cond_broadcast() or
> pthread_cond_signal() in that thread shall behave as if it were issued after
> the about-to-block thread has blocked."
That states that the signaler needs to respect a happens-before relation established by the mutex. Thus, the prior cond_wait will be considered as a wake-up target. There is no guarantee there that it is the first to be wakened. There is no statement there about the relationship to other writers, nor when the signal is actually delivered.
> That is, if the mutex was held by a thread entering pthread_cond_wait and
> another thread successfully acquires the mutex, the first thread "has blocked".
Indeed. However, that doesn't mean that it will be the first to be unblocked. If there are no other waiters, it is the only candidate though, and will be unblocked (i.e., there are no lost wake-ups).
> > > If another waiter arriving after the N
> > > signals consumes any of the signals and prevents those N from being blocked,
> > > then the above cited requirement has been violated;
> >
> > I don't see how this is violated. You say it unblocked the waiter and N-1
> > other threads, so N overall for N signals. That's what the guarantee is,
> > isn't it?
>
> Perhaps my typo in the above-quoted text is the source of confusion. It should
> have read "prevents those N from being _unblocked_".
I read this as unblocked.
> With the above reading of the standard in mind, at the moment
> pthread_cond_signal is called, it's provable that exactly those N, and not the
> new waiter that's about to arrive, "have blocked" on the condition variable.

It says that those have blocked, not that those are exactly the ones that can be considered to have blocked (i.e., the first "if" in what you quoted is an "if", not an "iff" (if and only if)).

> So if the N signals don't wake all N of them, it's a bug - each signal is required
> to unblock at least one thread that "has blocked" in the above sense.
No. See above.

Comment 13, bugdal, 2012-09-19 17:30:17 +0000:

> It says that those have blocked, not that those are exactly the ones that can
> be considered to have blocked (i.e., the first "if" in what you quoted is an
> "if", not an "iff" (if and only if)).
Short of an implementation-defined extension that must be documented, there is no way a waiter can block on a condition variable other than by calling pthread_cond_wait or pthread_cond_timedwait. Under your proposed interpretation, pthread_cond_signal would be useless; in order to be useful, at least one thread needs to unblock, and the application has to be able to know (based on its knowledge of which threads have blocked) that the signal will unblock a thread that will allow the program to make forward progress. This of course requires that every thread which has blocked on the condition at the time of pthread_cond_signal be a thread whose unblocking would allow forward progress; if other threads have blocked on the same condition, then it's an application bug.
As I stated before, I'm not sure this bug report is valid and I was thinking of the other one. But there is a real issue here that the implementation needs to take care to satisfy.

Comment 14, triegel, 2012-09-20 12:28:36 +0000:

(In reply to comment #13)
> > It says that those have blocked, not that those are exactly the ones that can
> > be considered to have blocked (i.e., the first "if" in what you quoted is an
> > "if", not an "iff" (if and only if)).
>
> Short of an implementation-defined extension that must be documented, there is
> no way a waiter can block on a condition variable short of calling
> pthread_cond_wait or pthread_cond_timedwait. Under your proposed
> interpretation, pthread_cond_signal would be useless;
It's not useless. It just doesn't give all the guarantees you thought it would give.
> in order to be useful, at
> least one thread needs to unblock, and the application has to be able to know
> (based on its knowledge of which threads have blocked) that the signal will
> unblock a thread that will allow the program to make forward progress.
If the application needs to associate certain waiters with signalers, it can just use a separate cond var for this, for example.

Comment 15, martin, 2012-09-20 14:01:28 +0000:

Created attachment 6639
Test case with explicit happens-before logic rather than usleep
As requested, I've attached a version of the test case that uses the lock, barriers and atomic instructions to enforce happens-before. There are 2 remaining calls to usleep, which are needed to ensure ordering between the calling thread and the internals of pthread_cond_wait that can't be controlled.
This bug may be different from Bug 13165, because it is caused by waking from a timeout rather than an extra signal.
Why I think this is a bug: my reading of the sentence "The pthread_cond_signal() function shall unblock at least one of the threads that are blocked on the specified condition variable cond (if any threads are blocked on cond)." is that it only affects threads that "are blocked" at the time pthread_cond_signal() is called, not those that call pthread_cond_wait afterwards. The wording for pthread_cond_broadcast() says "currently blocked", but is that an intentional difference?

Comment 16, triegel, 2012-09-20 16:59:10 +0000:

(In reply to comment #15)
> Created attachment 6639 [details]
> Test case with explicit happens-before logic rather than usleep
>
> As requested, I've attached a version of the test case that uses the lock,
> barriers and atomic instructions to enforce happens-before.
Thanks.
> This bug may be different from Bug 13165, because it is caused by waking from a
> timeout rather than an extra signal.
Even though this happens with cond_timedwait, it seems to me that it is conceptually the same issue.
> Why I think this is a bug: my reading of the sentence "The
> pthread_cond_signal() function shall unblock at least one of the threads that
> are blocked on the specified condition variable cond (if any threads are
> blocked on cond)." is that it only affects threads that "are blocked" at the
> time pthread_cond_signal() is called, not those that call pthread_cond_wait
> afterwards.
I don't read it that way. Please see the discussion in bug #13165 about this. To summarize, the spec only requires that threads whose waiting happens before the signal be considered as blocked; it does not require that threads that start waiting after the signal NOT be considered blocked. In your example, the waiter in thread D is not disallowed from being considered a blocked thread.
> The wording for pthread_cond_broadcast() says "currently blocked",
> but is that an intentional difference?
I can't speak about the intent of the authors of the spec. A more detailed specification would certainly be easier to understand. But unless there is a change, I would stick to what's allowed/required by the current wording.
OK to classify as not a bug?

Comment 17, triegel, 2017-01-11 14:52:13 +0000:

(In reply to Torvald Riegel from comment #16)
> (In reply to comment #15)
> > Created attachment 6639 [details]
> > Test case with explicit happens-before logic rather than usleep
> >
> > As requested, I've attached a version of the test case that uses the lock,
> > barriers and atomic instructions to enforce happens-before.
The use of pthread_cond_wait is still wrong, because you expect its wake-ups to reveal an ordering -- but spurious wake-ups are allowed. In the general case, you should always put pthread_cond_wait in a loop and check an actual flag that is set before pthread_cond_signal is called. Otherwise, you are just using the condvar to optimize how you wait.
> > Why I think this is a bug: my reading of the sentence "The
> > pthread_cond_signal() function shall unblock at least one of the threads that
> > are blocked on the specified condition variable cond (if any threads are
> > blocked on cond)." is that it only affects threads that "are blocked" at the
> > time pthread_cond_signal() is called, not those that call pthread_cond_wait
> > afterwards.
In your test case (and ignoring spurious wake-ups), both C and D start to wait on the condvar before the signal is issued. Thus, they are both eligible to consume the signal, as is A (C and D enter the barrier while having acquired cs_lock, and B acquires the cs_lock after it exits from both barriers; C and D release cs_lock atomically with starting to wait on cs_cond; B signals after having acquired cs_lock, so C and D waiting happens before B's signal).
Therefore, I'm closing this as invalid (because the test complains about correct behavior).
The new condvar that is now committed upstream fixes all the bugs we are aware of (see Bug 13165).

Comment 18, stephen.dolan, 2017-06-14 16:51:48 +0000:

Created attachment 10140
Simplified test case

Comment 19, stephen.dolan, 2017-06-14 17:07:10 +0000:

(In reply to Torvald Riegel from comment #17)
> The use of pthread_cond_wait is still wrong, because you expect it wake-ups
> to reveal an ordering -- but spurious wake-ups are allowed. In the general
> case, you should always put pthread_cond_wait in a loop and check an actual
> flag that is set before pthread_cond_signal is called. Otherwise, you are
> just using the condvar to optimize how you wait.
I have just added a simplified version of martin's test case, which uses pthread_cond_wait in this textbook style but still exhibits the strange behaviour.
On my machine (Ubuntu 16.04, glibc 2.23), it produces this output:
A waiting
signal #1 sent to waiters: a=1, b=0, c=0
B waiting
C waiting
signal #2 sent to waiters: a=1, b=1, c=1
B woke
C: timedwait returned [Success]
The program hangs at this point, and no other output is produced. Control never returns from pthread_cond_wait in thread A.
I have not yet tested this on more recent glibc, so it's possible that this has been fixed. Before trying other versions, though, I'd like to know whether you think this output is correct.
The messages are printed only when holding the lock, so the happens-before relation totally orders the lines of output. As you've mentioned, signal #2 can go to any of the waiters (A, B or C) with no guarantee of fairness, and spurious wakeups can occur at any moment.
However, when signal #1 is sent, only A is waiting. Threads B and C have not started. The fact that this signal does not cause A to wake seems like a bug.

Comment 20, triegel, 2017-06-22 09:05:44 +0000:

(In reply to Stephen Dolan from comment #19)
> (In reply to Torvald Riegel from comment #17)
> > The use of pthread_cond_wait is still wrong, because you expect it wake-ups
> > to reveal an ordering -- but spurious wake-ups are allowed. In the general
> > case, you should always put pthread_cond_wait in a loop and check an actual
> > flag that is set before pthread_cond_signal is called. Otherwise, you are
> > just using the condvar to optimize how you wait.
>
> I have just added a simplified version of martin's test case, which uses
> pthread_cond_wait in this textbook style but still exhibits the strange
> behaviour.
That test is better. However, the test is not guaranteed to terminate because C would consume the signal that both B and C are allowed to consume. Furthermore, I know what you're trying to do with the sleep in the signal handler, but is sleep() actually allowed in a handler?
> On my machine (Ubuntu 16.04, glibc 2.23), it produces this output:
>
> A waiting
> signal #1 sent to waiters: a=1, b=0, c=0
> B waiting
> C waiting
> signal #2 sent to waiters: a=1, b=1, c=1
> B woke
> C: timedwait returned [Success]
This execution should not happen. B and C consume the two signals (C doesn't time out), but A should be woken in any case.
> I have not yet tested this on more recent glibc, so it's possible that this
> has been fixed.
Please do test it with a current version of glibc. This behavior (i.e., that more recent waiting threads could "steal" a signal from earlier waiting threads) is exactly why we needed a new condvar algorithm.

Comment 21, stephen.dolan, 2017-06-27 21:46:35 +0000:

> However, the test is not guaranteed to terminate because C would
> consume the signal that both B and C are allowed to consume.
You're correct. The test has multiple valid behaviours, including one where it hangs after printing "A woke". However, none of these valid behaviours fails to print "A woke", and a run that never prints it is exactly what I observed.
> Furthermore, I know what you're trying to do with the sleep in the
> signal handler, but is sleep() actually allowed in a handler?
POSIX specifies that sleep is async-signal-safe. (Oddly, it makes no
such guarantee for usleep or nanosleep).
> This execution should not happen. B and C consume the two signals
> (C doesn't time out), but A should be woken in any case.
I'm glad we agree! I was worried (by some of the comments in this
thread, and by the fact that this bug is marked INVALID) that you
thought this behaviour was correct.
> Please do test it with a current version of glibc.
I've now tested it with glibc 2.24 and glibc 2.25. 2.24 has the bug,
but 2.25 seems to work.

Attachment 5787: cond-ordering-timeout.c (text/x-csrc), "Example source code", attached by martin on 2011-06-10 17:56:56 +0000. [base64-encoded file contents omitted]
Attachment 6639, created 2012-09-20 14:01:28 +0000 by martin: cond-ordering-timeout-unsleepy.c (text/plain). Test case with explicit happens-before logic rather than usleep.

#include <sys/time.h>
#include <unistd.h>
#include <signal.h>
#include <errno.h>
#include <pthread.h>
#include <stdio.h>
#include <string.h>

pthread_mutex_t cs_lock;
pthread_cond_t cs_cond;
pthread_barrier_t cs_barrier;
pthread_t cs_ptA, cs_ptB, cs_ptC, cs_ptD;
volatile int cs_delay_done;

void *cs_A(void *);
void *cs_B(void *);
void *cs_C(void *);
void *cs_D(void *);
void *cs_timewaster(void *);
void cs_delaywaster(int sig);

int main(int argc, char **argv)
{
  pthread_mutex_init(&cs_lock, NULL);
  pthread_cond_init(&cs_cond, NULL);
  signal(SIGUSR1, cs_delaywaster);
  cs_delay_done = 0;
  pthread_create(&cs_ptA, NULL, cs_A, NULL);
  pthread_join(cs_ptA, NULL);
  pthread_join(cs_ptB, NULL);
  pthread_join(cs_ptD, NULL);
  return 0;
}

void dbgout(char *message)
{
  write(1, message, strlen(message));
}

/* This function runs in a signal handler in thread A to make it pause. */
void cs_delaywaster(int sig)
{
  dbgout("cs_delaywaster starts\n");
  /* hang until D has timed out. */
  while(!__sync_bool_compare_and_swap(&cs_delay_done, 1, 0));
  dbgout("cs_delaywaster ends\n");
}

void *cs_A(void *ignore)
{
  /* 1. */
  pthread_mutex_lock(&cs_lock);
  pthread_create(&cs_ptB, NULL, cs_B, NULL);
  dbgout("A waits\n");
  pthread_cond_wait(&cs_cond, &cs_lock);
  pthread_mutex_unlock(&cs_lock);

  dbgout("A wakes\n");

  return NULL;
}

void *cs_B(void *ignore)
{
  /* 2. */
  pthread_mutex_lock(&cs_lock);
  /* Heuristic 0.2s sleep to wait for A to finish blocking on cs_cond. */
  usleep(200000);
  dbgout("B signals\n");
  pthread_cond_signal(&cs_cond);
  /* Start the delaying tactic to make A consume the signal too slowly. */
  pthread_kill(cs_ptA, SIGUSR1);
  pthread_mutex_unlock(&cs_lock);

  pthread_barrier_init(&cs_barrier, NULL, 2);
  pthread_create(&cs_ptC, NULL, cs_C, NULL);
  pthread_barrier_wait(&cs_barrier);  /* Wait for C to claim the lock. */
  pthread_barrier_destroy(&cs_barrier);

  pthread_barrier_init(&cs_barrier, NULL, 2);
  pthread_create(&cs_ptD, NULL, cs_D, NULL);
  pthread_barrier_wait(&cs_barrier);  /* Wait for D to claim the lock. */
  pthread_barrier_destroy(&cs_barrier);

  /* 6. */
  pthread_mutex_lock(&cs_lock);
  /* Heuristic 0.2s sleep to wait for C & D to finish blocking on cs_cond. */
  /* Must be less than the 1 second that D uses for pthread_cond_timedwait. */
  usleep(200000);
  dbgout("B signals\n");
  pthread_cond_signal(&cs_cond);
  pthread_mutex_unlock(&cs_lock);

  return NULL;
}

void *cs_C(void *ignore)
{
  /* 4. */
  pthread_mutex_lock(&cs_lock);
  pthread_barrier_wait(&cs_barrier);  /* Tell B we have claimed the lock. */
  dbgout("C waits\n");
  pthread_cond_wait(&cs_cond, &cs_lock);
  pthread_mutex_unlock(&cs_lock);

  /* 8. */
  dbgout("C wakes\n");

  return NULL;
}

void *cs_D(void *ignore)
{
  struct timespec waketime;
  struct timeval now;
  int res;

  /* 5. */
  pthread_mutex_lock(&cs_lock);
  pthread_barrier_wait(&cs_barrier);  /* Tell B we have claimed the lock. */
  dbgout("D waits\n");
  /* 1s timeout, must be more than step 6 in B. */
  gettimeofday(&now, NULL);
  waketime.tv_sec = now.tv_sec + 1;
  waketime.tv_nsec = now.tv_usec * 1000;
  res = pthread_cond_timedwait(&cs_cond, &cs_lock, &waketime);
  pthread_mutex_unlock(&cs_lock);

  /* 10. */
  if (res == ETIMEDOUT) {
    dbgout("D wakes with ETIMEDOUT (working as expected)\n");
  } else {
    char buf[256];
    sprintf(buf, "D wakes with code %d (unexpectedly used A's signal)\n", res);
    dbgout(buf);
  }

  (void)__sync_fetch_and_add(&cs_delay_done, 1);

  return NULL;
}
Attachment 10140, created 2017-06-14 16:51:48 +0000 by stephen.dolan: cond-hang.c (text/x-csrc). Simplified test case.

#include <sys/time.h>
#include <unistd.h>
#include <signal.h>
#include <errno.h>
#include <pthread.h>
#include <stdio.h>
#include <string.h>

int a_wait, a_wake, b_wait, b_wake, c_wait, c_wake;
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cond = PTHREAD_COND_INITIALIZER;

void* th_a(void* unused) {
  pthread_mutex_lock(&lock);
  a_wait = 1;
  printf("A waiting\n");
  while (!a_wake) pthread_cond_wait(&cond, &lock);
  printf("A woke\n");
  pthread_mutex_unlock(&lock);
  return NULL;
}

void* th_b(void* unused) {
  pthread_mutex_lock(&lock);
  b_wait = 1;
  printf("B waiting\n");
  while (!b_wake) pthread_cond_wait(&cond, &lock);
  printf("B woke\n");
  pthread_mutex_unlock(&lock);
  return NULL;
}

void* th_c(void* unused) {
  struct timespec waketime;
  struct timeval now;
  int err;

  pthread_mutex_lock(&lock);
  c_wait = 1;
  printf("C waiting\n");
  gettimeofday(&now, NULL);
  waketime.tv_sec = now.tv_sec + 2;
  waketime.tv_nsec = now.tv_usec * 1000;
  err = pthread_cond_timedwait(&cond, &lock, &waketime);
  printf("C: timedwait returned [%s]\n", strerror(err));
  pthread_mutex_unlock(&lock);
  return NULL;
}

void delay(int sig) {
  sleep(5);
}

int main() {
  signal(SIGUSR1, delay);
  pthread_t a, b, c;
  pthread_create(&a, NULL, &th_a, NULL);

  usleep(200000);

  pthread_mutex_lock(&lock);
  a_wake = 1;
  pthread_cond_signal(&cond);
  pthread_kill(a, SIGUSR1);
  printf("signal #1 sent to waiters: a=%d, b=%d, c=%d\n",
         a_wait, b_wait, c_wait);
  pthread_mutex_unlock(&lock);

  pthread_create(&b, NULL, &th_b, NULL);
  pthread_create(&c, NULL, &th_c, NULL);

  usleep(200000);
  pthread_mutex_lock(&lock);
  b_wake = 1;
  pthread_cond_signal(&cond);
  printf("signal #2 sent to waiters: a=%d, b=%d, c=%d\n",
         a_wait, b_wait, c_wait);
  pthread_mutex_unlock(&lock);

  pthread_join(a, NULL);
  pthread_join(b, NULL);
  pthread_join(c, NULL);
  return 0;
}