This article talks about six ways of handling locking in .NET. It starts with
concurrency problems and then discusses three ways of implementing optimistic
locking. As optimistic locking does not solve concurrency issues at the root,
it then introduces pessimistic locking, and moves ahead to explain how isolation
levels can help us implement it. Each isolation level is explained with a sample
demonstration to make the concepts clearer.


There are four major kinds of problems caused by concurrency; the details of
each are below.

Dirty reads

Short description: A "dirty read" occurs when one transaction reads a record
that is part of the half-finished work of another transaction.

• User A and user B both see the value "5".
• User B changes the value "5" to "2" but has not yet committed.
• User A reads the value as "2" before user B commits... a dirty read has
happened. If user B now rolls back, user A has read a value that never
officially existed.

Unrepeatable read

Short description: If you get different values every time you read the same
data within one transaction, you have an "unrepeatable read" problem.

• User A reads the value as "5".
• User B changes the value from "5" to "2" and commits.
• User A reads the value again within the same transaction and now sees "2";
he is surprised... an unrepeatable read has happened.

Phantom rows

Short description: If rows that your "UPDATE" or "DELETE" statements never
touched start appearing in your result sets, you have a "phantom rows" problem.

• User A updates all rows with value "5" to "2".
• User B inserts a new record with value "2".
• User A selects all records with value "2", expecting only the ones he
changed, and is surprised to find an extra record... a phantom row has been
inserted.

Lost updates

Short description: A "lost update" is a scenario where an update that was
successfully written to the database is overwritten by another transaction's
update.

• User A updates all values from "5" to "2".
• User B comes along and updates all "2" values back to "5".
• User A has lost all his updates.

As the name "optimistic" suggests, this approach assumes that multiple
transactions will work without affecting each other. In other words, no locks
are enforced while doing optimistic locking; the transaction just verifies that
no other transaction has modified the data. If the data has been modified, the
transaction is rolled back.

There are three primary ways by which we can implement optimistic locking in
.NET:

• Datasets: datasets implement optimistic locking by default. They check the
old values against the new values before updating.

• Timestamp data type: create a timestamp column in your table, and while
updating check whether the old timestamp is equal to the current timestamp.

• Check old and new values: fetch the values, make the changes, and while
doing the final update check whether the old values and the current values in
the database are equal. If they are not equal, roll back; otherwise commit.

As said in the previous section, datasets handle optimistic concurrency by
themselves. Below is a simple snapshot where we held a breakpoint on the
adapter's Update function and then changed the value directly in SQL Server.
When we removed the breakpoint and ran the Update function, it threw a
concurrency exception as shown below.

If you run the profiler at the back end, you can see that it fires an update
statement which checks whether the current values and the old values are the same.

In this scenario we were trying to change the field "AuthorName" to
"This is new", but while updating, the adapter checks against the
old value "This is old author". Below is a trimmed-down snippet of the
above SQL which shows the comparison with the old value.
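For reference, here is a minimal sketch of the same flow in code. The
connection string, the table name tbl_authors, and the column names are
assumptions for illustration; the key point is that Update throws
DBConcurrencyException when the generated WHERE clause no longer matches.

using System;
using System.Data;
using System.Data.SqlClient;

class DatasetConcurrencyDemo
{
    static void Main()
    {
        string connString = "Data Source=.;Initial Catalog=TestDb;Integrated Security=true";
        var adapter = new SqlDataAdapter("SELECT Id, AuthorName FROM tbl_authors", connString);

        // The command builder generates an UPDATE whose WHERE clause compares
        // the originally fetched values; that comparison is the optimistic check.
        var builder = new SqlCommandBuilder(adapter);

        var table = new DataTable();
        adapter.Fill(table);

        // Simulate our edit. If another user changes the row between Fill and
        // Update (e.g. directly from SQL Server while we sit on a breakpoint),
        // the generated UPDATE affects zero rows and the adapter raises
        // DBConcurrencyException.
        table.Rows[0]["AuthorName"] = "This is new";
        try
        {
            adapter.Update(table);
        }
        catch (DBConcurrencyException ex)
        {
            Console.WriteLine("Concurrency violation: " + ex.Message);
        }
    }
}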

The other way of doing optimistic locking is by using the "timestamp" data type
of SQL Server. A timestamp column automatically generates a unique binary
number every time you update the row. Timestamp data types are meant for
versioning your record updates.

To implement optimistic locking, we first fetch the old "TimeStamp" value, and
when we try to update we check whether the old timestamp is still equal to the
current timestamp, as shown in the code snippet below (the statement should
also filter on the row's primary key; itemid here is an assumed key column).

update tbl_items set itemname=@itemname
where itemid=@itemid and CurrentTimestamp=@OldTimeStamp

We then check whether any row was actually updated; if not, we raise an error
of severity 16 using SQL Server's "raiserror", as shown in the code snippet
below.

-- @@rowcount must be checked immediately after the update,
-- before any other statement resets it
if (@@rowcount = 0)
begin
    raiserror('Hello, someone else changed the value', 16, 10)
end

If a concurrency violation takes place, you should see the error propagated to
the client side when you call "ExecuteNonQuery", as shown in the figure below.
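Putting the pieces together, a minimal client-side sketch of this flow might
look like the following; the table tbl_items, the key column itemid, and the
parameter names are assumptions carried over from the snippets above.

using System;
using System.Data.SqlClient;

class TimestampConcurrencyDemo
{
    // Assumes tbl_items(itemid int, itemname varchar, CurrentTimestamp timestamp).
    static void UpdateItem(string connString, int itemId, string newName, byte[] oldTimeStamp)
    {
        const string sql =
            "UPDATE tbl_items SET itemname = @itemname " +
            "WHERE itemid = @itemid AND CurrentTimestamp = @OldTimeStamp; " +
            "IF @@ROWCOUNT = 0 RAISERROR('Hello, someone else changed the value', 16, 10)";

        using (var conn = new SqlConnection(connString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@itemname", newName);
            cmd.Parameters.AddWithValue("@itemid", itemId);
            cmd.Parameters.AddWithValue("@OldTimeStamp", oldTimeStamp);
            conn.Open();
            try
            {
                cmd.ExecuteNonQuery();
            }
            catch (SqlException ex)
            {
                // RAISERROR with severity 16 surfaces here on the client.
                Console.WriteLine("Concurrency violation: " + ex.Message);
            }
        }
    }
}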

Many times we would like to check concurrency on only certain fields and omit
fields like identity columns. For those kinds of scenarios we can check the old
value and the new value of the updated fields, as shown in the code snippet
below.

update tbl_items set itemname=@itemname where itemname=@OldItemNameValue
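A minimal sketch of this pattern from ADO.NET follows; the itemid key filter is
an assumption added so the statement targets a single row, and the return value
of ExecuteNonQuery tells us whether the old value still matched.

using System.Data.SqlClient;

class OldValueCheckDemo
{
    static bool TryUpdate(string connString, int itemId, string newName, string oldName)
    {
        const string sql =
            "UPDATE tbl_items SET itemname = @itemname " +
            "WHERE itemid = @itemid AND itemname = @OldItemNameValue";

        using (var conn = new SqlConnection(connString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@itemname", newName);
            cmd.Parameters.AddWithValue("@itemid", itemId);
            cmd.Parameters.AddWithValue("@OldItemNameValue", oldName);
            conn.Open();
            // Zero affected rows means some other transaction changed the value first.
            return cmd.ExecuteNonQuery() > 0;
        }
    }
}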

Yes, you guessed right: with optimistic locking you only detect the concurrency
problem. To solve concurrency issues at the root we need to use pessimistic
locking. Optimistic locking is merely detection, while pessimistic locking is
the actual cure.

There are four kinds of locks you can acquire: shared, exclusive, update, and
intent. The first two are actual locks, while the other two are, respectively,
a hybrid lock and a marker.

Shared lock

When to use: When you only want to read, and you do not want any other
transaction to update the data.
Reads allowed: Yes. Writes allowed: No.

Exclusive lock

When to use: When you want to modify data and you do not want anyone else to
read it, nor anyone to update it.
Reads allowed: No. Writes allowed: No.

Update lock

When to use: This is a hybrid lock, used when an update operation passes
through multiple phases before the actual update happens. It starts with a
shared lock in the read phase and then acquires an exclusive lock on the
actual update.
Read phase: reads allowed, writes not allowed.
Manipulating phase: reads allowed, writes not allowed.
Update phase: reads not allowed, writes not allowed.

Intent lock (demand lock)

When to use: Intent locks are for lock hierarchies. This lock is used when you
want to lock resources lower down in the hierarchy. For example, a shared
intent lock on a table means shared locks are placed on the pages and rows
within the table.

The other locks are pretty straightforward; the update lock is the confusing
one because of its hybrid nature. Many times, before we update a record, we
read it. During the read the lock is shared, and while actually updating we
would like an exclusive lock. Update locks are thus more of a transient lock,
as the sketch below illustrates.
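To make the read-then-write sequence concrete, here is a hedged sketch using
SQL Server's UPDLOCK table hint (the table and column names are assumptions).
The hint takes an update lock during the read phase; SQL Server converts it to
an exclusive lock when the UPDATE executes.

using System.Data.SqlClient;

class UpdateLockDemo
{
    static void ReadThenUpdate(string connString, int itemId)
    {
        using (var conn = new SqlConnection(connString))
        {
            conn.Open();
            using (var tran = conn.BeginTransaction())
            {
                // Read phase: UPDLOCK blocks other update/exclusive locks on
                // the row, but plain shared readers can still read.
                var read = new SqlCommand(
                    "SELECT itemname FROM tbl_items WITH (UPDLOCK) WHERE itemid = @id",
                    conn, tran);
                read.Parameters.AddWithValue("@id", itemId);
                var name = (string)read.ExecuteScalar();

                // Update phase: the update lock is converted to an exclusive lock.
                var write = new SqlCommand(
                    "UPDATE tbl_items SET itemname = @name WHERE itemid = @id",
                    conn, tran);
                write.Parameters.AddWithValue("@name", name + " (modified)");
                write.Parameters.AddWithValue("@id", itemId);
                write.ExecuteNonQuery();

                tran.Commit();
            }
        }
    }
}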

Isolation levels are a feature of the RDBMS software; in other words, they
fundamentally belong to SQL Server and not to ADO.NET, EF, or LINQ. That said,
you can always set the transaction isolation level from any of these
components.

Middle tier

In the middle tier you can specify the isolation level using the
TransactionScope object, as shown in the sketch below.
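A minimal sketch, assuming the System.Transactions assembly is referenced:

using System;
using System.Transactions;

class MiddleTierIsolationDemo
{
    static void Run()
    {
        var options = new TransactionOptions
        {
            IsolationLevel = IsolationLevel.Serializable,
            Timeout = TimeSpan.FromSeconds(30)
        };

        using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
        {
            // Any ADO.NET connection opened inside the scope enlists in this
            // transaction and runs at the isolation level specified above.
            // ... data access code here ...
            scope.Complete();
        }
    }
}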

Yes, read uncommitted is the opposite of read committed. When you set the
transaction isolation level to read uncommitted, uncommitted data is also read.
Some important key points for read uncommitted:

• Uncommitted data is seen, so dirty reads are possible.

• No locks are held.

• Useful when locking is not important and concurrency and throughput matter
more.

If you want to test this, run an update followed by a rollback, with the
rollback happening after a 20-second delay. If you fire a select query within
that window you will get the uncommitted data; after 20 seconds you will see
the old data again, because the uncommitted change has been rolled back.
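The original test snippet is shown only as a figure, so below is a hedged
reconstruction of the experiment in code (the connection string, table, and
column names are assumptions). One session updates a row, waits 20 seconds,
and rolls back; a second session dirty-reads the uncommitted value during the
delay.

using System;
using System.Data.SqlClient;
using System.Threading.Tasks;

class DirtyReadDemo
{
    const string ConnString = "Data Source=.;Initial Catalog=TestDb;Integrated Security=true";

    static void Main()
    {
        // Session 1: update, hold the uncommitted change for 20 seconds, roll back.
        var writer = Task.Run(() => Execute(
            "BEGIN TRAN " +
            "UPDATE tbl_items SET itemname = 'dirty value' WHERE itemid = 1 " +
            "WAITFOR DELAY '00:00:20' " +
            "ROLLBACK TRAN"));

        Task.Delay(2000).Wait(); // give the writer time to take its lock

        // Session 2: read the uncommitted value while session 1 is waiting.
        using (var conn = new SqlConnection(ConnString))
        using (var cmd = new SqlCommand(
            "SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED; " +
            "SELECT itemname FROM tbl_items WHERE itemid = 1", conn))
        {
            conn.Open();
            Console.WriteLine("Dirty read: " + cmd.ExecuteScalar());
        }

        writer.Wait();
    }

    static void Execute(string sql)
    {
        using (var conn = new SqlConnection(ConnString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            conn.Open();
            cmd.CommandTimeout = 60; // the batch itself waits 20 seconds
            cmd.ExecuteNonQuery();
        }
    }
}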

Comments and Discussions

Just a quick note to say the different tables within this article would benefit from borders. Look at the concurrency problems table, for example: it's hard to tell which step belongs to which kind of problem. Borders would properly divide the sections and make the whole thing a lot easier to read.

The term "optimistic locking" is misleading - as the author himself points out, there is no locking involved. The correct terminology is optimistic concurrency.

I think the author dismisses optimistic concurrency far too easily. As the article points out, only the serializable isolation level solves all four concurrency issues. But there is a reason why read committed is the default isolation level - serializable transactions lead to unacceptable performance in many scenarios, especially when there are a lot of concurrent users (i.e. when you most need to deal with concurrency).

Optimistic concurrency is often the superior approach. In systems where reads occur very frequently compared to updates, but the integrity of the data is of high importance, it makes sense to *detect* concurrency conflict rather than prevent it by means of locking, because when data is not volatile conflicts do not happen frequently. And in cases where more conflicts occur, it isn't given that you must just give up and start over when there is a conflict (as hinted by "rollback" being the end of the story). You could opt to fetch the changed data and let the user choose between discarding her own changes, overwriting the other user's changes, or merge the two, for example. Such a sophisticated solution requires considerable extra work though, since it increases complexity in both DAL and UI, and may require some user training in order to have real value. (Most users of most systems haven't ever given a thought to concurrency issues, after all.)

A deeper discussion of optimistic concurrency would have improved the article. And then it ought to also mention SQL Server's "timestamp" datatype, which is intended for precisely this purpose. Adding a column of type timestamp to any table causes SQL Server to automatically update the timestamp whenever the row is modified. Thus the concurrency conflict can be detected without having to evaluate a WHERE clause that filters on every single column, rather filtering only on primary key and timestamp.

Since 2005 we've also had the option of using a transaction isolation level that isn't mentioned at all in the article: snapshot. I have no personal experience with it, but snapshot isolation is completely lock-free and kills most of the anomalies that otherwise only serializable isolation can take care of. It works by versioning rows (in tempdb) rather than locking, so every transaction can see the database as it appeared at the time the transaction started - conceptually it's as if you'd taken a photo of the database, then work against that photo. It works best when there are relatively few updates and short transactions; otherwise tempdb can grow large (makes things slow) and conflicting updates will always have both transactions running to the very end - with all but the first to commit failing when they attempt to do so!

Finally, I wish there was some discussion of how to "mix-and-match" transaction isolation levels. Some databases are used by multiple client applications, and it can be important to understand how things behave when you have concurrent transactions that use different isolation levels. Unfortunately this is a topic that is largely neglected by those who write articles like this.

Respected Sir,
First of all, thanks for the great, big, and valuable input. I understand how much pain it takes to write good constructive criticism. I will take all your points into account over the coming 2 weeks and update the article accordingly. I agree with most of the points you have mentioned.

In case I have issues with some point, I would be glad to discuss it here and then take the updates to the article. Thanks again for helping me improve.

- Have you checked if optimistic concurrency is supported with MS Access?

I've never really used it, but the adapter is dependent on the server reporting the number of rows affected, since it implements optimistic concurrency by using an UPDATE like

UPDATE myTable SET col1='newValue' WHERE id=342 AND col1='oldValue' AND col2='oldValue2'

so that the update doesn't affect any rows if col1 or col2 now have different values from what they had when the row was read.

- Is your adapter configured to use optimistic concurrency?

- Does the datatable hold the column that has changed?

- Is the UpdateCommand correctly configured to include all columns in the WHERE clause? (Imagine the above UPDATE; if it only filtered on id the row would always be updated unless it had been deleted, and concurrency violation would never be detected.)

Hi,
This is a nice article. I was searching for an article like this because I am struggling with one problem, and I think someone can guide me on it here. I have a stored procedure which returns some data after performing some operations. I use this stored procedure in my web application to bind a grid, and also in a Windows service application which shows the returned data on an output device (LED). The stored procedure actually inserts the calculated values into a table and returns that table's values. Sometimes the values are duplicated. I suspect the stored procedure is executed from the web application and the Windows service at the same time, causing this issue. Can anyone give me an idea about this, or a solution like locking?

@p3 is a variable. When the dataset fires the update it creates/defines its own variables to store values, which are later compared in the SQL statement with the old and new values. As it's autogenerated, I have extracted the comparison of old values into a separate code snippet in the article.

AmazingFactory wrote:

PS I like the way your "friends" write obviously fake glowing comments on your articles

You seem to have caught a disease of ANSI NULL, and this has no cure at all. It is not the subject of my article either. You can move on to some new diseases like QUOTED_IDENTIFIER, NOCOUNT, etc.

The main purpose of those variables, in my article's context, is comparing old values and new values. There are 10 different ways by which you can handle NULLs.

AmazingFactory wrote:

so *now* you know

OK, good quiz. But that's not my article's context.

AmazingFactory wrote:

"I know hardly any one of them"
"hardly" is the key word in that sentence.

I have better things to do. You can contact Chris Maunder to check whether it's fishy.

I am happy you have read my article many times, but you seem to have gone pale over that comment about offline pessimistic locking, haha. That phrase I will never forget: "offline pessimistic locking".

Finally! It is the first time I have found a few words about pessimistic locking in .NET. I used it with ADO in VB6 & SQL Server 7-2000, but since I started to learn C# & ADO.NET I didn't find the subject "pessimistic lock" anymore. Why? They say to use optimistic locking because it doesn't lock the data and the performance is good. OK, but I feel I still need it in .NET, for example to get a unique and progressive code from a registry (a field in a table). I feel more secure with a pessimistic lock. Of course, it must be held for just a few milliseconds. The article is very good and well written. I'm sorry it is not possible to give more than 5 points, otherwise I would have given 5000 points!
Thanks and God bless you.

This is a great topic and a good start to your article. You may also want to mention the concept of lock escalation -- a gotcha for many applications. Maybe you can include it in your "What are lock hints?" section.

I think that many developers working with a DBMS for the first time may think that the database will "handle concurrent users" with its locking mechanisms.

It is very important to discuss the various methods of locking in .NET as you have done. But the next piece of the puzzle will be to explain the pros and cons of each method.

For example, if you use pessimistic locking you will not have to "worry about" (code for) collisions. However, this lower degree of hassle also comes with a lower degree of concurrency -- other users will experience timeouts more frequently because one user has the resources "locked". On the other hand, if you are not careful with optimistic locking, how will your users feel if they spend an hour filling out your screen only to be told "Hello, someone else changed the value" and that they must refresh the screen (and the timestamp / values)?

Be careful not to call either one of those "prevention" or "the cure" for concurrency issues.

You could discuss a few methods to handle collisions, including anything that ADO.NET objects do to help with this (for example, showing original vs. changed values and allowing the user to "merge").


You should be an expert before you write articles. You are not an expert.

You are showing one example of disconnected locking in your own article. So it must exist.

Disconnected locking is when there is no single database connection held (which you show in your optimistic example)
Connected locking is when there is a single database connection held (which you show in your pessimistic and optimistic examples)

Your article will only serve to confuse people, because you are confused.

In your article you said that for read committed transactions, the transaction will block until the data is committed. This is not true for all databases and database configurations. The version and configuration of Oracle we use will read the previously committed data and not block waiting for the new data to be committed. And from SQL Server 2005 onwards you can specify READ_COMMITTED_SNAPSHOT (see http://msdn.microsoft.com/en-us/library/tcbchxcb%28VS.80%29.aspx) on the database to get this same behavior. This is especially important if you never want reads to block your application, but only ever want to read committed data.

"Facts are meaningless. You could use facts to prove anything that's even remotely true!" Homer Simpsons3ware, Family Business