Let's say I have two variables, protected_var1 and protected_var2. Let's further assume that these variables are updated via multiple threads, and are fairly independent in that usually one or the other but not both is worked on - so they both have their own mutex guard for efficiency.

Assuming:

-I always lock mutexes in order (mutex1 then mutex2) in my code in regions where both locks are required.

-Both mutexes are used in many other places by themselves (e.g. just lock mutex1, or just lock mutex2).

Does the order in which I unlock the mutexes at the end of a function using both make a difference in this situation?

I was asked a question a long time ago in an interview regarding this situation, and I came out feeling that the answer was yes - the order of those two unlocks does matter. I cannot for the life of me figure out how a deadlock could result from this though if the locks are always obtained in the same order wherever both are used.

The order shouldn't matter, as long as you don't attempt to acquire
another lock between the releases. The important thing is to always
acquire the locks in the same order; otherwise, you risk a deadlock.

EDIT:

To expand on the constraint: you must establish a strict ordering among
the mutexes, e.g. mutex1 precedes mutex2 (the rule generalizes to any
number of mutexes). You may only request a lock on a mutex if you don't
hold a mutex which comes after it in the order; e.g. you may not request
a lock on mutex1 while you hold a lock on mutex2. As long as these rules
are respected, you should be safe. With regard to releasing: if you
release mutex1, then try to reacquire it before releasing mutex2, you've
violated the rule. In this respect, there may be some advantage in
respecting a stack-like order, where the last mutex acquired is always
the first released. But that's an indirect effect: the actual rule is
that you cannot request a lock on mutex1 while holding one on mutex2,
regardless of whether you held a lock on mutex1 at the time you acquired
the lock on mutex2.

Very very important point. I knew that there was a reason for unlock ordering. Please expand this a little so people see it. Consider unlocking mutex1, doing something, and then grabbing mutex1 again. I remember having a deadlock that was resolved by reverse ordering the mutex release and that was the cause.
– D.Shawley, Feb 23 '12 at 15:10

@D.Shawley That would cause a deadlock. (I seem to recall analyzing this in more detail in a recent answer, to a question concerning using a mutex on a container and on the individual elements in the container.) But that violates the ordering constraint: if you acquire both mutexes, you must acquire mutex1 first.
– James Kanze, Feb 23 '12 at 15:18

It doesn't matter for correctness of locking. The reason is that, even supposing some other thread is waiting to lock mutex1 and then mutex2, the worst case is that it gets immediately scheduled as soon as you release mutex1 (and acquires mutex1). It then blocks waiting for mutex2, which the thread you're asking about will release as soon as it gets scheduled again, and there's no reason that shouldn't happen soon (immediately, if these are the only two threads in play).

So there might be a small cost in performance in that exact situation, compared with if you released mutex2 first and so there was only one rescheduling operation. Nothing you'd normally expect to predict or worry about, though, it's all within the boundaries of "scheduling often isn't deterministic".

The order you release the locks could certainly affect scheduling in general, though. Suppose that there are two threads waiting for your thread, and one of them is blocked on mutex1 while the other is blocked on mutex2. It might turn out that whichever lock you release first, that thread gets to run first, simply because your thread has outlived its welcome (consumed more than an entire time-slice), and hence gets descheduled as soon as anything else is runnable. But that can't cause a fault in an otherwise-correct program: you aren't allowed to rely on your thread being descheduled as soon as it releases the first lock. So either order of those two waiting threads running, both running simultaneously if you have multiple cores, or the two alternating on one core, must all be equally safe whichever order you release the locks.

The order of unlocking cannot cause deadlocks. However, if given the opportunity, I recommend unlocking in the reverse of the locking order. This has a negligible effect on the running of the code, but developers are used to thinking in terms of scopes, and scopes "close" in reverse order; releasing in reverse order lets you think of the locks as scoped. This brings me to the second point: in most cases, it is safest to replace direct calls to lock and unlock with a stack-based guard that calls them for you. Doing so yields the greatest degree of correctness for the least mental effort, and it is also safe in the presence of exceptions (which can really muck up a manual unlock)!

A simple guard (there are many out there; this is just a quick roll-your-own):

This can lead to a deadlock, since foo might have locked mutex1 and now waits for mutex2, while bar has locked mutex2 and now waits for mutex1. So it's a good idea to somehow ensure that nested mutex locks always get locked in the same order.

What I can see is that if another operation takes mutex2 and holds it for a long time, your foo() function will block after pthread_mutex_lock(&mutex1); and you will probably see a performance hit.

So long as whenever var1 and var2 are locked they are locked in the same order, you are safe regardless of the freeing order. In fact, freeing in the reverse of the locking order is the RAII behavior of the scoped locks found in the STL and Boost, since destructors run in reverse order of construction.