This article walks through six ways of implementing locking in .NET. It starts
with the concurrency problems and then discusses three ways of implementing
optimistic locking. As optimistic locking does not solve concurrency issues at
the root, it then introduces pessimistic locking and explains how isolation
levels can help us implement it. Each isolation level is explained with a
sample demonstration to make the concepts clearer.


There are four major kinds of problems caused by concurrency; the details of
each are described below.

Dirty reads

A "dirty read" occurs when one transaction reads a record that is part of the
half-finished work of another transaction.

• User A and user B both see the value as "5".
• User B changes the value "5" to "2" but has not committed yet.
• User A reads the value and sees "2". User B then rolls back, so user A has
read data that never officially existed….a dirty read has happened.

Unrepeatable read

If every read of the same data gives you different values, you have an
"unrepeatable read" problem.

• User A reads the value as "5".
• User B changes the value "5" to "2" and commits.
• User A refreshes within the same transaction and now sees "2"; he is
surprised….an unrepeatable read has happened.

Phantom rows

If rows turn up that your "UPDATE" and "DELETE" SQL statements did not affect,
you have a "phantom rows" problem.

• User A updates all values "5" to "2".
• User B inserts a new record with value "5".
• User A selects all records with value "5" expecting none, but is surprised
to still find a "5" record….a phantom row has been inserted.

Lost updates

A "lost update" is a scenario where an update which was successfully written
to the database is overwritten by another transaction's update.

• User A updates all values from "5" to "2".
• User B comes and updates all "2" values to "5".
• User A has lost all his updates.

As the name "optimistic" suggests, it assumes that multiple transactions will
work without affecting each other. In other words, no locks are enforced while
doing optimistic locking. The transaction just verifies that no other
transaction has modified the data; if it has, the transaction is rolled back.

There are three primary ways by which we can implement optimistic locking in
.NET:-

• Datasets: - Datasets implement optimistic locking by default. They check the
old values against the new values before updating.

• Timestamp data type: - Create a timestamp column in your table and, while
updating, check whether the old timestamp equals the current timestamp.

• Check old and new values: - Fetch the values, make the changes, and while
doing the final update check whether the old values and the current values in
the database are equal. If they are not equal, roll back; otherwise commit.

As said in the previous section, datasets handle optimistic concurrency by
themselves. Below is a simple snapshot where we held a debug point on the
adapter's update function and then changed the value from SQL Server. When we
ran the "update" function after removing the break point, it threw a
"Concurrency" exception as shown below.

If you run the profiler at the back end, you can see that the update statement
it fires checks whether the current values and the old values are the same.

In this scenario we were trying to change the field value "AuthorName" to
"This is new", but while updating it makes a check against the old value
"This is old author". Below is a trimmed-down snippet of the above SQL which
shows the comparison with the old value.

The other way of doing optimistic locking is by using the 'timestamp' data
type of SQL Server. A timestamp column automatically generates a unique binary
number every time the row is updated. Timestamp data types are meant for
versioning your record updates.

To implement optimistic locking, we first fetch the old 'timestamp' value, and
when we try to update we check whether the old timestamp is equal to the
current timestamp, as shown in the below code snippet.

update tbl_items set itemname=@itemname where CurrentTimestamp=@OldTimeStamp

We then check whether any update has happened; if it has not, we raise an
error of severity '16' using SQL Server's 'raiserror', as shown in the below
code snippet.

if (@@rowcount = 0)
begin
    raiserror('Hello, someone else changed the value', 16, 10)
end
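Putting the two snippets together, the whole flow looks roughly like the
sketch below. This is a minimal T-SQL illustration, assuming the tbl_items
table and CurrentTimestamp column from the earlier snippets; the itemid key
column and the hard-coded values are placeholders for illustration.

```sql
-- Fetch the current timestamp of the row we are about to edit.
declare @OldTimeStamp binary(8)
declare @itemname varchar(50)
set @itemname = 'This is new'

select @OldTimeStamp = CurrentTimestamp
from tbl_items where itemid = 1

-- ...the user edits the record in the application here...

-- Update only if nobody else has touched the row in the meantime.
update tbl_items set itemname = @itemname
where itemid = 1 and CurrentTimestamp = @OldTimeStamp

-- @@rowcount must be checked immediately after the update.
if (@@rowcount = 0)
begin
    raiserror('Hello, someone else changed the value', 16, 10)
end
```

Note that @@rowcount is reset by almost every statement, so the check must
follow the update directly.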

If any concurrency violation takes place, you should see the error propagated
to the client side when you call 'ExecuteNonQuery', as shown in the below
figure.

Many times we would like to check concurrency only on certain fields and omit
fields like identity columns. For those kinds of scenarios we can check the
old value and the new value of the updated fields, as shown in the below code
snippet.

update tbl_items set itemname=@itemname where itemname=@OldItemNameValue

Yes, you said it right. By using optimistic locking you only detect the
concurrency problem. To solve concurrency issues at the root we need to use
pessimistic locking. Optimistic locking merely detects the disease, while
pessimistic locking is actually the cure.

There are four kinds of locks you can acquire: shared, exclusive, update and
intent. The first two are actual locks, while the other two are a hybrid lock
and a marker respectively.

Shared lock

When to use: when you only want to read and you do not want any other
transaction to update.

Reads allowed: Yes. Writes allowed: No.

Exclusive lock

When to use: when you want to modify data and you do not want anyone to read
it, nor do you want anyone else to update it.

Reads allowed: No. Writes allowed: No.

Update lock

This is a hybrid lock. It is used when an update operation passes through
multiple phases before the actual update happens. It starts with a shared
lock in the read phase and then acquires an exclusive lock for the actual
update.

Read phase: reads allowed Yes, writes allowed No.
Manipulating phase: reads allowed Yes, writes allowed No.
Update phase: reads allowed No, writes allowed No.

Intent lock (demand lock)

Intent locks are for lock hierarchies. They are used when you want to lock
resources down the hierarchy. For example, a shared intent lock on a table
means shared locks are placed on the pages and rows within the table.

The other locks are pretty straightforward; the update lock is confusing
because of its hybrid nature. Many times we read a record before we update it.
During the read the lock is shared, and while actually updating we would like
to have an exclusive lock. Update locks are thus more of transient locks.
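As a side note, SQL Server lets you request an update lock explicitly with the
UPDLOCK table hint. A minimal sketch against the tbl_items table used in the
earlier snippets (the itemid column and values are assumed for illustration):

```sql
begin tran

-- Read the row with an update lock: other transactions can still read,
-- but none can acquire an update or exclusive lock until we finish.
select itemname from tbl_items with (updlock) where itemid = 1

-- On the actual write the lock is converted to an exclusive lock.
update tbl_items set itemname = 'new value' where itemid = 1

commit tran
```

Because only one transaction at a time can hold the update lock on the row,
two transactions following this read-then-write pattern cannot deadlock each
other on that row.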

Isolation levels are a feature of the RDBMS software; in other words they
fundamentally belong to SQL Server and not to ADO.NET, EF or LINQ. That said,
you can always set the transaction isolation level from any of these
components.

Middle tier

In the middle tier you can specify the isolation level using the transaction
scope object.
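A minimal C# sketch of this using System.Transactions; the connection string
is a placeholder, and the tbl_items table and itemid column are assumed from
the earlier SQL snippets:

```csharp
using System;
using System.Data.SqlClient;
using System.Transactions;

// Specify the isolation level through TransactionOptions.
var options = new TransactionOptions
{
    IsolationLevel = IsolationLevel.ReadCommitted,
    Timeout = TimeSpan.FromSeconds(30)
};

using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
using (var conn = new SqlConnection("your connection string here"))
{
    conn.Open(); // the connection enlists in the ambient transaction

    var cmd = new SqlCommand(
        "update tbl_items set itemname = @itemname where itemid = 1", conn);
    cmd.Parameters.AddWithValue("@itemname", "This is new");
    cmd.ExecuteNonQuery();

    scope.Complete(); // commit; without this call the transaction rolls back
}
```

Any command executed on a connection opened inside the scope runs under the
isolation level given in TransactionOptions.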

Yes, read uncommitted is the opposite of read committed. When you set the
transaction isolation level to read uncommitted, uncommitted data is also
read. Some important key points for read uncommitted:-

• Uncommitted data is read, so dirty reads are possible.

• No shared locks are held while reading.

• Useful when locking is not important and concurrency and throughput matter
more.

If you want to test the same, fire the below SQL statement, which does an
update and then rolls back. The roll back happens after a 20-second halt.
Within that time, if you fire a select query you will get the uncommitted
data; after 20 seconds you will see the old data again because the
uncommitted data is rolled back.
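A sketch of such a test, again using the tbl_items table from the earlier
snippets (the itemid column and values are assumed for illustration). In the
first connection, update and hold the transaction open for 20 seconds before
rolling back:

```sql
begin tran
update tbl_items set itemname = 'Uncommitted value' where itemid = 1
waitfor delay '00:00:20'  -- hold the uncommitted change for 20 seconds
rollback tran
```

Within those 20 seconds, run a select from a second connection under read
uncommitted:

```sql
set transaction isolation level read uncommitted
select itemname from tbl_items where itemid = 1
```

The second connection sees 'Uncommitted value' while the first transaction is
still open; after the rollback, the same select returns the old value.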