Given a program consisting of variables and instructions that modify those variables, and a synchronization primitive (a monitor, a mutex, Java's synchronized or C#'s lock), is it possible to prove that such a program is thread safe?

Is there even a formal model for describing things like thread safety or race conditions?

Yes, but real-world languages can be a pain in the ass since their concurrent semantics are not always well-defined/fixed. Also, not everything is decidable in every model. It's a wide field; google "Concurrency Theory" to get an impression. In particular, there is a rich theory involving Petri nets.

– Raphael♦ Sep 17 '13 at 8:03

Given a trace of an execution, we first define a happens-before partial order between events in the trace. If two events $a$ and $b$ occur on the same thread, then either $a < b$ or $b < a$. (The events on the same thread form a total order given by the sequential semantics of the programming language.) Synchronization events (these could be mutex acquires and releases, for example) give an additional inter-thread happens-before order. (If thread $S$ releases a mutex and thread $T$ subsequently acquires that mutex, we say that the release happens-before the acquire.)

Then, given two data accesses $a$ and $b$ (reads or writes to variables that are not synchronization variables) that touch the same memory location, are made by different threads, and of which at least one is a write, we say that there is a data race between $a$ and $b$ if neither $a < b$ nor $b < a$.

The C++11 standard is a good example. (The relevant section is 1.10 in the draft specs that are available online.) C++11 distinguishes between synchronization objects (mutexes, and variables declared with an atomic<> type) and all other data. The C++11 spec says that the programmer can reason about the data accesses on a trace of a multithreaded program as if it were sequentially consistent if the data accesses are all data-race free.

On the practical side, there is a verification system, VCC, which can be used to formally prove thread safety of C programs.

This is a citation from the web site:

VCC supports concurrency -- you can use VCC to verify programs that use both coarse-grained and fine-grained concurrency. You can even use it to verify your concurrency control primitives. Verifying a function implicitly guarantees its thread safety in any concurrent environment that respects the contracts on its functions and data structures.

This is a very difficult area of program correctness as far as ruling out race conditions goes, a sort of "Achilles heel" of parallel processing. The best approach is generally to avoid low-level primitives and work with higher-level design patterns (e.g. from libraries) that ensure thread synchronization. One model, CSP (Communicating Sequential Processes, by Hoare), has some proofs of correctness, provided developers limit themselves to the "framework". It has some conceptual similarity to, and chronological overlap with, Unix "pipes and filters", although I haven't (yet?) found a direct link between the two.

Two other frameworks attempt to improve parallelization correctness through design patterns, and collect most of the standard/known algorithms and design patterns for this purpose: