Locking and Blocking and Immutable Data

Locking and blocking feel like a normal part of working with databases, right? Locking and blocking inevitably lead to deadlocks in our databases, too. A significant amount of both human and computer time is spent resolving these problems in the database. After all, if it weren’t for locking and blocking, database professionals would spend their time solving more important problems (like where to go to lunch). What if we didn’t have to deal with locking and blocking?

What’s in a Lock?

First off, what’s a lock?

A lock is a way for a database system to say “This record is mine, nobody touch it”. In addition to giving one process some kind of ownership over a record, the lock also tells the entire database system how that ownership will work out. Some locks are shared locks – they’re for reads. Other locks are exclusive locks – these are for writes.
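The shared/exclusive distinction can be sketched as a tiny compatibility matrix. This is an illustrative model of the rule, not any particular engine's implementation:

```python
# A minimal sketch of shared/exclusive lock compatibility.
# "S" = shared (read) lock, "X" = exclusive (write) lock.
COMPATIBLE = {
    ("S", "S"): True,   # many readers can share a row
    ("S", "X"): False,  # a reader blocks a writer
    ("X", "S"): False,  # a writer blocks readers
    ("X", "X"): False,  # writers block each other
}

def can_grant(requested, held):
    """A lock is granted only if it is compatible with every lock already held."""
    return all(COMPATIBLE[(h, requested)] for h in held)

print(can_grant("S", ["S", "S"]))  # True: reads coexist
print(can_grant("X", ["S"]))       # False: the write must wait for the reader
```

Everything interesting about blocking falls out of those three False entries: any exclusive lock in the mix forces someone to wait.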

Once upon a time, databases were developed using locking behavior for both reads and writes. Each read places a lock on a row or series of rows. Each write does the same thing. It’s up to the database engine to figure out how to deal with locks in the most efficient way; this makes life easier for the developer.

Rusty locks make for rusty reads

Reading, Locking, and Blocking

Using locks for reading data can lead to problems – if I’m reading a row for a long time, due to a slow network or crappy program, my read lock can get in the way of other processes and block their activity. DBAs and other database professionals have been able to solve these types of problems through a few different tools.

One way to improve database read performance is to use indexes. Indexes minimize the volume of data that an application has to read. By reducing the read footprint we’re hoping that we can decrease the time spent locking and our reads can get out of the way of writes.
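You can watch the read footprint shrink with SQLite's EXPLAIN QUERY PLAN, which switches from a full table scan to an index search once an index exists. A quick sketch (table and index names are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(i, i % 100, i * 1.5) for i in range(1000)])

def plan(sql):
    # EXPLAIN QUERY PLAN describes how SQLite will read the data
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT total FROM orders WHERE customer_id = 42"
before = plan(query)   # without an index: a full scan touches every row
conn.execute("CREATE INDEX ix_orders_customer ON orders (customer_id)")
after = plan(query)    # with the index: only the matching rows are read
print(before)
print(after)
```

Fewer rows read means fewer rows locked, and for less time.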

Using indexes doesn’t always solve the problem, though. When our database can use locks for reads, those locks can always get in the way of write activity. It’s easy enough to open up a transaction, read a few rows, and then go to lunch. This is, coincidentally, a hilarious trick to play on your development database servers. Indexes can’t fix problems like this – long held locks eventually block activity and can bring a database to a screeching halt.

One of the ways to solve these blocking situations is through multi-version concurrency control (MVCC). SQL Server people will know this as some kind of snapshot isolation level. In short, a data modification creates a new copy of the row. Transactions that started before the modification completed see the old copy of the row; transactions that began after the modification completed see the new copy.
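That visibility rule can be modeled in a few lines. This is a toy version store, not how SQL Server's snapshot isolation is actually implemented, but it enforces the same rule:

```python
# Toy MVCC store: every write appends a new version stamped with a commit time.
# A reader sees the newest version committed at or before the reader's snapshot.
class VersionedStore:
    def __init__(self):
        self.versions = {}  # key -> list of (commit_ts, value)
        self.clock = 0

    def write(self, key, value):
        self.clock += 1  # each write commits at a new timestamp
        self.versions.setdefault(key, []).append((self.clock, value))

    def read(self, key, as_of):
        # Scan newest-first for the last version visible to this snapshot
        for ts, value in reversed(self.versions.get(key, [])):
            if ts <= as_of:
                return value
        return None

store = VersionedStore()
store.write("row1", "v1")
snapshot = store.clock           # an old transaction takes its snapshot here
store.write("row1", "v2")        # a later write creates a new version
print(store.read("row1", snapshot))      # v1 — the old transaction sees the old copy
print(store.read("row1", store.clock))   # v2 — new transactions see the new copy
```

The reader never waits on the writer; it just reads an older version.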

Using snapshots works well to avoid readers blocking writers, but writers can still cause problems – writes require locks. Depending on how our database has chosen to take out those locks, they may be at a coarser granularity than we’d like. SQL Server can choose to escalate locks in order to conserve memory, and lock escalation can lead to writers blocking each other’s activity.

Even worse, we never get to see those old row versions – they’re typically removed by the database system once the transaction is complete. If a later write ever clobbers an old version of a row, there’s no undo.

Why Change Anything?

One of the easiest ways to resolve these problems is to never change data in the database. Or, at least, to change data so infrequently that it might as well be never.

If you’ve worked with a data warehouse, you’re already familiar with the idea of slowly changing dimensions. In a type 2 slowly changing dimension, additional columns are used to mark the start and end of a record’s validity. This lets report writers go back in time and compare different aspects of facts in the system.
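A type 2 dimension row set might look like this – column names are illustrative, and the "go back in time" query is just a filter on the validity window:

```python
from datetime import date

# Type 2 slowly changing dimension: each change adds a row, and
# valid_from/valid_to bracket when each version of the record was current.
customer_dim = [
    {"customer": "Acme", "city": "Portland",
     "valid_from": date(2010, 1, 1), "valid_to": date(2014, 6, 30)},
    {"customer": "Acme", "city": "Seattle",
     "valid_from": date(2014, 7, 1), "valid_to": date.max},  # still current
]

def as_of(rows, when):
    # Find the version(s) that were valid on a given date
    return [r for r in rows if r["valid_from"] <= when <= r["valid_to"]]

print(as_of(customer_dim, date(2012, 3, 1))[0]["city"])  # Portland
print(as_of(customer_dim, date(2020, 1, 1))[0]["city"])  # Seattle
```

Nothing was ever updated in place; the move to Seattle just added a row.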

We can use this kind of system in an OLTP platform as well. Developers have, for some time, been using event sourcing patterns to create events that record changes in the application. Over time the events are synchronized with the main records. If you think about it, your bank operates in the same way.

When you go to the store and purchase food, your bank isn’t going to open a distributed transaction with the grocery store’s bank. Instead, they’ll record that money left your account and went somewhere else. To keep the system running quickly, the debit won’t immediately change your account balance. Instead you’ll have a pending charge placed on your account. Your effective balance can be found by summing up your pending charges and subtracting that total from your posted balance.
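That pending-charge arithmetic is simple enough to sketch in a few lines (the numbers and names are made up for the example):

```python
# Pending charges are recorded as they happen; the posted balance
# is only updated later, in batch, when the charges settle.
posted_balance = 500.00
pending_charges = [42.17, 12.50]  # debits recorded but not yet applied

def effective_balance(posted, pending):
    # Effective balance = posted balance minus the sum of pending charges
    return round(posted - sum(pending), 2)

print(effective_balance(posted_balance, pending_charges))  # 445.33
```

No row holding the balance ever had to be locked for the charge to be recorded – the charge is just a new fact appended to the list.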

Never Changing Dimensions

We can avoid almost all locking and blocking in a database by never modifying a row – no more UPDATE!

In an immutable database system, we avoid changing rows. Instead, we add new facts to an existing table. If there’s ever a conflict between two changes made to a row, we can look back at the history of our data and determine which change is correct, or even cherry-pick portions of each change.

Meanwhile, In The Real World…

Unfortunately, the real world rarely agrees with what we want to build. To make data appear immutable, we’re going to have to jump through some hoops. Astute readers will notice that this system isn’t entirely append-only – we need some mechanism to track row validity. If you’re implementing this yourself, a “modification” becomes:

Start a transaction.

Mark the old row as no longer valid.

Insert the new version of the row.

Commit the transaction.

Depending on the data platform, it may be necessary to implement this functionality ourselves and, let’s face it, we’ll probably mess that up. It’s still worth giving this a shot, though – nothing says that our immutable storage has to be handled live. Enterprising developers could implement this with a combination of event sourcing, a read cache, and a batch processing framework that updates the source system. Other teams can take advantage of features in existing RDBMSes – SQL Server 2016 temporal tables and Oracle Flashback both fit the bill.

Using an immutable data format lets us avoid many of the problems associated with concurrency. When data never changes, we sidestep locking and blocking and gain many additional benefits.

As soon as you mark the previous row, you’re back to the same issues. You can make it immutable, though, but it’s tricky. Columnstore indexes use separate bitmaps to track state, but then you’re back to a smaller version of the same problem. You could always just append and always look for the latest row – you’d have to have a way to represent deletes via new rows. As soon as you have any sort of index or structure, though, you’re back to needing locks or at least periodic rearrangement. Yep, DBs are hard.

You’ve got it! Bw-trees use tombstone deletes, too. Most of the newer databases (Cassandra, Hadoop, Riak, et al) do this, too. There are a lot of things that it makes easier, but the implementation is a bear.