Tips to avoid deadlocks?

A database server should be able to service requests from a large number of concurrent users. When a database server is servicing requests from many clients, there is a strong possibility that conflicts will occur because different processes request access to the same resources at the same time. A conflict in which one process is waiting for another to release a resource is called a block. A blocked process usually resolves itself when the first process releases the resource.

A more serious condition, called a deadlock or fatal embrace, occurs when the first process is also waiting on a resource held by the second process (see below). Each process is stuck waiting for the other to release a resource. Unless one process times out, the lock won’t clear itself.
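The classic cycle can be sketched in T-SQL. The table names, columns, and key values here are hypothetical, chosen only to illustrate the two-session pattern:

```sql
-- Step 1 (Session 1): lock a row in Department
BEGIN TRAN;
UPDATE Department SET Budget = 0 WHERE DeptId = 1;

-- Step 2 (Session 2): lock a row in Sales
BEGIN TRAN;
UPDATE Sales SET Amount = 0 WHERE SaleId = 9;

-- Step 3 (Session 1): blocks, waiting for Session 2's lock on Sales
UPDATE Sales SET Amount = 0 WHERE SaleId = 9;

-- Step 4 (Session 2): blocks, waiting for Session 1's lock on Department.
-- Neither session can proceed; SQL Server detects the cycle, chooses one
-- session as the victim, and kills it with error 1205, rolling back its
-- transaction so the other can continue.
UPDATE Department SET Budget = 0 WHERE DeptId = 1;
```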

Although deadlocks cannot be completely avoided, they can be minimised by following the tips below:

Ensure the database is properly normalized, because poor database design can increase the number of deadlocks that occur inside the database.

Deadlocks can also occur when resources are not acquired in a well-defined order: if all concurrent transactions access objects in the same order, deadlocks are much less likely. For example, if two concurrent transactions each obtain a lock on the Department table and then on the Sales table, one transaction is simply blocked on the Department table until the other completes. After the first transaction commits or rolls back, the second continues, and no deadlock occurs. Using stored procedures for all data modifications helps standardize the order in which objects are accessed. It also helps to define a programming policy that fixes the order in which database objects may be accessed; as long as every programmer on the project knows and applies the policy of acquiring locks in that well-defined order, you will avoid deadlocks. For hopefully obvious reasons, locks should be released in the opposite order to that in which they were acquired, and (in application code) released in the finally clause of a try/catch block. Analyse the situation properly and ensure that all resources in your code are acquired in that well-defined order.
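A minimal sketch of this policy, reusing the Department and Sales tables from the example above (the columns, parameters, and procedure name are hypothetical):

```sql
-- Every procedure that modifies both tables takes its locks in the same
-- order: Department first, then Sales. Two concurrent calls can block
-- each other briefly, but they cannot deadlock on these two tables.
CREATE PROCEDURE dbo.AdjustDeptAndSale
    @DeptId INT, @SaleId INT, @Delta MONEY
AS
BEGIN
    SET XACT_ABORT ON;  -- roll back automatically on any run-time error
    BEGIN TRAN;
        UPDATE Department SET Budget = Budget - @Delta
        WHERE DeptId = @DeptId;                          -- always first
        UPDATE Sales SET Amount = Amount + @Delta
        WHERE SaleId = @SaleId;                          -- always second
    COMMIT TRAN;
END;
```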

Do not allow users to input data during transactions. Collect all the data you need before the transaction begins.

Avoid cursors if possible, because the same transaction locking rules that apply to any other SELECT statement also apply to the SELECT statement in a cursor definition. You can control the transaction locks for a cursor's SELECT statement by choosing the correct isolation level and/or using locking hints in the FROM clause. For both cursors and independent SELECT statements, the locks are held until the current transaction ends. When SQL Server is running in autocommit mode, each individual SQL statement is a transaction and the locks are freed when the statement finishes. If SQL Server is running in explicit or implicit transaction mode, the locks are held until the transaction is either committed or rolled back.
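If a cursor is unavoidable, its locking behaviour can be influenced through a hint in the FROM clause of its SELECT, as described above. A sketch under an assumed Sales schema:

```sql
-- Hypothetical read-only cursor whose SELECT takes no shared locks
-- while scanning (dirty reads are possible, as with any NOLOCK read).
DECLARE sales_cur CURSOR FAST_FORWARD FOR
    SELECT SaleId, Amount
    FROM Sales WITH (NOLOCK)
    WHERE SaleYear = 2013;
```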

Deadlocks typically occur when several long-running transactions execute concurrently. Keep transactions as short as possible: the longer a transaction runs, the longer its exclusive or update locks are held, which blocks other activity and makes deadlocks more likely. Keeping the transaction in one batch also minimises network round trips during the transaction, reducing possible delays in completing it and releasing its locks.

Reduce the time a transaction takes to complete by making sure you are not performing the same reads over and over again. If your application does need to read the same data more than once, cache it in a variable or an array and re-read it from there, not from SQL Server. Also reduce lock time: develop your application so that it acquires locks at the latest possible point and releases them at the earliest.

If appropriate, control lock escalation by using the ROWLOCK or PAGLOCK hints. Transaction locks in SQL Server consume memory, and as the number of locks increases, available memory decreases. If the percentage of memory used for transaction locks exceeds a certain threshold, SQL Server converts the fine-grained locks (page or row) into coarse-grained locks (table locks); this is known as lock escalation. Lock escalation reduces the overall number of locks held on the SQL Server instance, reducing lock memory usage. While finer-grained locks consume more memory, they can also improve concurrency, because multiple queries can access the unlocked rows.
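A sketch of pinning the lock granularity with a hint (the table, column, and predicate are assumptions):

```sql
-- Ask SQL Server to keep row-level locks for this UPDATE instead of
-- escalating to a table lock during a large modification. This trades
-- more lock memory for better concurrency on the rest of the table.
UPDATE Sales WITH (ROWLOCK)
SET Amount = Amount * 1.1
WHERE SaleYear = 2013;
```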

Consider using the NOLOCK hint where appropriate. When you run a query against a table at SQL Server's default isolation level, it takes shared locks, and other queries that need conflicting locks on the same data must wait for them to be released. This is fine if your table is small, but things slow down if your table is big. One way around this is to add a NOLOCK hint to the query, which reads the data without taking shared locks, so the query neither blocks nor is blocked by other queries. You have to be careful with the NOLOCK hint, however, because it allows dirty reads. For example, suppose you execute a query with NOLOCK and it runs for 30 seconds. Because no locks are taken, during those 30 seconds other queries may have added new rows that the query will not return, or modified or deleted rows that were already read. The query may also read uncommitted data from other transactions that are later rolled back. Keep that in mind when using the NOLOCK hint.
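The hint goes directly on the table reference; a minimal example against the assumed Sales schema used above:

```sql
-- Read without taking shared locks. Rows added, changed, or rolled back
-- by concurrent transactions may be missed or read dirty.
SELECT SaleId, Amount
FROM Sales WITH (NOLOCK)
WHERE SaleYear = 2013;
```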

Determine whether a transaction can run at a lower isolation level. READ COMMITTED allows a transaction to read data previously read (not modified) by another transaction without waiting for that transaction to complete. A lower isolation level, such as READ COMMITTED, holds shared locks for a shorter duration than a higher isolation level, such as SERIALIZABLE, which reduces locking contention.
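For example, a read-only report that does not need serializable guarantees can be run like this (table and predicate are illustrative):

```sql
-- At READ COMMITTED, shared locks are released as soon as each row has
-- been read, rather than being held to the end of the transaction as
-- they would be under SERIALIZABLE.
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
BEGIN TRAN;
    SELECT COUNT(*) FROM Sales WHERE SaleYear = 2013;
COMMIT TRAN;
```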

Use bound connections, so that two or more connections opened by the same application can cooperate with each other. Any locks acquired by the secondary connections are held as if they were acquired by the primary connection, and vice versa; therefore they do not block each other.
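A rough sketch of how connections are bound, using the system procedures SQL Server provides for this (the variable name is illustrative):

```sql
-- On the primary connection: obtain a bind token.
DECLARE @Token VARCHAR(255);
EXEC sp_getbindtoken @Token OUTPUT;
-- The application passes @Token to the second connection out of band.

-- On the secondary connection: bind to the primary's transaction space.
-- From here on, the two connections share locks instead of blocking
-- each other.
EXEC sp_bindsession @Token;
```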

If your database has a large number of deadlocks caused by writers blocking readers and/or readers blocking writers, and you cannot keep a separate copy of the data for reporting purposes, row versioning (available in SQL Server 2005 and later) lets you read and write at the same time without the side effects of NOLOCK or READ UNCOMMITTED.
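Row versioning is enabled at the database level; the database name below is a placeholder:

```sql
-- Readers see the last committed version of each row instead of
-- blocking behind writers.
ALTER DATABASE SalesDb SET READ_COMMITTED_SNAPSHOT ON;

-- Alternatively, allow sessions to opt in to full snapshot isolation.
ALTER DATABASE SalesDb SET ALLOW_SNAPSHOT_ISOLATION ON;
```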

I agree that we should not overuse NOLOCK, but there are situations where it is appropriate. For example, suppose you have a set of large tables that only store historical data for clients. If these tables are only updated as part of an overnight batch process but are queried frequently during the day, then in this situation NOLOCK can be a better option than row versioning. My post clearly suggests carefully considering your situation before using the NOLOCK hint.

Dumb question… why don't SQL servers have an option to delay the response for the deadlock victim and restart the transaction when the resource becomes available? Do this as many times as needed until success or timeout.
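SQL Server does not retry the victim automatically — the victim's transaction is rolled back and error 1205 is raised — but the calling batch or application can catch that error and retry. A minimal T-SQL sketch (the table, values, and retry count are hypothetical):

```sql
-- Retry a transaction when it is chosen as the deadlock victim
-- (error 1205), up to a fixed number of attempts.
DECLARE @Retries INT = 3;
WHILE @Retries > 0
BEGIN
    BEGIN TRY
        BEGIN TRAN;
            UPDATE Sales SET Amount = Amount + 1 WHERE SaleId = 9;
        COMMIT TRAN;
        BREAK;  -- success: leave the retry loop
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK TRAN;
        IF ERROR_NUMBER() = 1205 AND @Retries > 1
        BEGIN
            SET @Retries = @Retries - 1;
            WAITFOR DELAY '00:00:01';  -- back off before retrying
        END
        ELSE
            THROW;  -- not a deadlock, or out of retries
    END CATCH
END;
```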

My Book

SQL Server 2014 Development Essentials (ISBN: 978-1782172550) is an easy-to-follow yet comprehensive guide that is full of hands-on examples. This book will provide you with all the skills you need to successfully design, build, and deploy databases using SQL Server 2014. Starting from the beginning, this book gives you an insight into the key stages of the SQL Server database process, provides you with an in-depth knowledge of the SQL Server database architecture, and shares tips to help you design the new database.

By sequentially working through the steps in each chapter, you will gain hands-on experience in designing, creating, and deploying SQL Server databases and objects. You will learn how to use SQL Server 2014 Management Studio and the advanced Transact-SQL queries to retrieve data from the SQL Server database. You will also learn how to add, modify, and delete data stored within a database. You will use Transact-SQL statements to create and manage advanced database objects that include scalar and table-valued functions, views, stored procedures, and triggers. Finally, you will learn about how the SQL Server 2014 relation engine works, how indexes and statistics improve query performance, and the new SQL Server 2014 in-memory technologies.


Copyright

All of the entries on this blog are copyright by Basit Farooq. The full content of any post (a post is an entry on the blog) may not be published elsewhere without prior permission from the copyright holder. Excerpts from blog posts may be posted elsewhere providing they are short (around 10% of the article) and are attributed to me by name and with a link back to this blog, currently located at the following address: http://basitaalishan.com.

The following sites currently have full permission to publish complete copies of posts that are published on this web blog:

Disclaimer

This is a personal blog, and the opinions expressed here represent my own thoughts and not those of my employer. For accuracy and official references, refer to MSDN, Microsoft TechNet, and Books Online. Neither I nor my employer endorse any of the tools / applications / books / concepts mentioned here on my blog. I have simply documented my personal experiences on this blog.