We have a table that is accessed (update, delete, insert, and select) by multiple programs. In fact they are the same program, just instantiated by multiple users. This table never grows beyond about 1,000 rows because the program deletes data after use and inserts new data again. It's like a producer/consumer situation.

This is an industrial production scenario and I must guarantee certain operations: when a user confirms an action, the program updates that table with data coming from other tables in the system.

So we wrapped a lot of commands in transactions, and the result was a lot of deadlock situations.

I'd like some tips on how to avoid those locks. We don't actually need the transaction as such; we just need to guarantee that a command will run and, if it fails for any reason, the whole operation gets rolled back. I don't know if there's a way to do that without using transactions.
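For reference, "run it all or roll it all back" is exactly what a transaction wrapped in TRY/CATCH gives you. A minimal sketch (table and column names here are made up for illustration):

```sql
-- Hypothetical sketch: dbo.WorkQueue, @OrderId etc. are invented names.
BEGIN TRY
    BEGIN TRANSACTION;

    UPDATE dbo.WorkQueue
    SET Status = 'Confirmed'
    WHERE OrderId = @OrderId;

    DELETE FROM dbo.WorkQueue
    WHERE OrderId = @OrderId AND Status = 'Done';

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    -- Any error in the TRY block lands here; undo everything.
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;

    -- Re-raise so the caller sees the failure (SQL Server 2008 R2
    -- has no THROW, so RAISERROR is used instead).
    DECLARE @msg NVARCHAR(2048) = ERROR_MESSAGE();
    RAISERROR(@msg, 16, 1);
END CATCH;
```

The point is that the transaction itself isn't the problem; how long it stays open and what it locks is.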

PS: We're using SQL Server 2008 R2.

PS2: I discovered that some shared system tables I used in the FROM clause of the UPDATE were the big problem. Those tables are used by the whole system and get tons of inserts/updates/selects. So I was locking things I shouldn't have, because this program never changes data in those tables.
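If the lookup tables in the FROM clause are strictly read-only for this program, one way to stop the UPDATE from contending on them is a NOLOCK hint on just those tables (accepting dirty reads on the lookup side). A sketch with invented names:

```sql
-- Hypothetical names. WITH (NOLOCK) applies only to the read-only
-- lookup table, so the UPDATE takes no shared locks on it; the target
-- table dbo.WorkQueue is still locked and updated normally.
UPDATE q
SET q.Quantity = s.Quantity
FROM dbo.WorkQueue AS q
JOIN dbo.SystemStock AS s WITH (NOLOCK)
    ON s.ItemId = q.ItemId
WHERE q.OrderId = @OrderId;
```

If dirty reads are not acceptable, turning on READ_COMMITTED_SNAPSHOT for the database (available in 2008 R2) lets readers see a consistent snapshot without blocking writers.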

4 Answers

First: yes, you need transactions to ensure success or failure (rollback). Only 1,000 records? That table must be getting slammed with inserts/updates/deletes! So this sounds like a heavy transaction table; be careful about adding indexes, as they will only make your inserts/updates/deletes slower. And to confirm: there are no triggers on this heavy transaction table, right?

What about your reads? Have you thought about separating out a reporting table or something similar? Replication might be overkill. How accurate and up-to-the-minute does the data need to be?

Using READUNCOMMITTED is a possible solution, though it has knock-on effects. I'd try ROWLOCK first, though. SQL Server escalates to page locks to reduce the number of locks it has to hold; since you only have a thousand records, unless the rows are very wide, a single page lock will cover a good few of them.
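Both hints go in a WITH (...) clause on the table reference. A sketch, again with invented table and column names:

```sql
-- ROWLOCK asks SQL Server to take row-level locks instead of a page
-- lock, so two sessions touching different rows don't collide:
SELECT OrderId, Status
FROM dbo.WorkQueue WITH (ROWLOCK)
WHERE OrderId = @OrderId;

-- READUNCOMMITTED (equivalent to NOLOCK) takes no shared locks at all,
-- at the cost of possibly reading uncommitted ("dirty") data:
SELECT OrderId, Status
FROM dbo.WorkQueue WITH (READUNCOMMITTED)
WHERE OrderId = @OrderId;
```

Note that ROWLOCK is only a hint; the engine can still escalate to a table lock if the statement touches enough rows.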

Start by performance tuning all of the queries inside the transaction. Sometimes speeding up a query inside the transaction by adding an index can make a big difference in the number of deadlocks you see.
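The idea is that a statement that can seek on an index locks only the rows it needs, while one that scans takes locks across the whole table and holds the transaction open longer. A hypothetical example (index and table names invented):

```sql
-- If the deadlocking UPDATE/DELETE filters on OrderId, an index lets
-- it seek directly to those rows instead of scanning (and locking)
-- the entire table:
CREATE NONCLUSTERED INDEX IX_WorkQueue_OrderId
    ON dbo.WorkQueue (OrderId);
```

Weigh this against the earlier warning: on a write-heavy table, every extra index also slows down the inserts/updates/deletes.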

Also, keep the transactions as small as possible while still getting the rollback you need when something fails. For instance, if you have 5 queries that look up data but only 3 that change data, you might be able to shrink the transaction down to just the 3 queries that change data.
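That pattern can be sketched like this (all names invented): the read-only lookups run before the transaction starts, so their locks are released immediately, and only the data-changing statements hold the transaction open.

```sql
-- Hypothetical sketch: lookups outside the transaction, writes inside.
DECLARE @Qty INT, @Price MONEY;

-- Lookups: no transaction needed, shared locks released right away.
SELECT @Qty   = Quantity FROM dbo.SystemStock WHERE ItemId = @ItemId;
SELECT @Price = Price    FROM dbo.PriceList   WHERE ItemId = @ItemId;

-- Only the statements that modify data run inside the transaction,
-- so it is held open for as short a time as possible.
BEGIN TRANSACTION;

UPDATE dbo.WorkQueue
SET Quantity = @Qty, Price = @Price
WHERE OrderId = @OrderId;

DELETE FROM dbo.WorkQueue
WHERE OrderId = @OrderId AND Quantity = 0;

COMMIT TRANSACTION;
```

One caveat: if the looked-up values must be consistent with the write (no one may change them in between), they belong inside the transaction after all, possibly with an UPDLOCK hint.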