
Deadlock-Proof Your Code: Part 1

By Kirk J. Krauss, June 04, 2010

An alternative approach to avoiding deadlock

Listing 2 is functionally identical to the deadlocking code of Listing 1, except that it adds the thread and lock tracking described above and provides for surrogate locks. Surrogate locks are created as needed, and deadlocks are averted until the run completes. In this code, self-aware lock (*SelfAwareLock) API wrapper functions are substituted for several of the standard Windows API calls in Listing 1. You can achieve the same wrapper effect in production code by dynamically intercepting the necessary API functions, for example via object code runtime patching. Your intercepts can be set up at or near the start of the run, or when a relevant component is loaded into the process.

Listing 2 is intended only as a proof of concept and does not exhaustively cover all available Windows synchronization API functions. You'll almost certainly have to modify this code to meet your needs, rather than simply deploy it. The watchdog methods described here, or something similar to them, can fit virtually any platform that supports multithreaded applications. Though the code in Listing 2 is oriented toward native-code applications, the same techniques can be applied to Java or managed applications too. The locks used in the listings are critical sections, but other types of synchronization objects may benefit from similar deadlock protection.

You may be wondering about the overhead of the thread and lock tracking needed to prevent deadlocks. For most programs, the biggest slowdown occurs when locks are acquired and released, because a number of memory operations must be performed at those times to enable the deadlock protection described here. The performance impact depends on the number of threads your application creates, how frequently it acquires and releases locks, and the number of processors available to it. The memory overhead amounts to a fistful of rather small heap blocks, unless your program creates large numbers of threads. This modest overhead may be worthwhile to ensure that your application won't deadlock in the field.

